Orcs & Elves
May 2nd, 2006
John Carmack's Blog
I'm not managing to make regular updates here, but I'll keep this around
just in case. I have a bunch of things that I want to talk about -- some
thoughts on programming style and reliability, OpenGL, Xbox 360, etc, but we
have a timely topic with the release of our second mobile game, Orcs &
Elves, that has spurred me into making this update.
DoomRPG, our (Id Software's and Fountainhead Entertainment's) first mobile
title, has been very successful, both in sales and in awards. I predict
that the interpolated turn based style of 3D gaming will be widely adopted on
the mobile platform, because it plays very naturally on a conventional cell
phone. Gaming will be a lot better when there is a mass market of phones
that can be played more like a gamepad, but you need to make do with what
you actually have.
One of the interesting things about mobile games is that the sales curve is
not at all like the drastically front loaded curve of a PC or console game.
DoomRPG is selling better now than when it was initially released, and the
numbers are promising for supporting additional development work. However,
unless I am pleasantly surprised, the hardware capabilities are going to
advance much faster than the market in the next couple years, leading to an
unusual situation where you can only afford to develop fairly crude games on
incredibly powerful hardware. Perhaps "elegantly simple" would be the
better way of looking at it, but it will still wind up being like developing
an Xbox title for $500,000. That will wind up being great for many small
game companies that just want to explore an idea, but having resources far in
excess of your demands does minimize the value
of being a hot shot programmer. :-)
To some degree this is already the case on high end BREW phones today. I
have a pretty clear idea what a maxed out software renderer would look like
for that class of phones, and it wouldn't be the PlayStation-esque 3D graphics
that seems to be the standard direction. When I was doing the graphics
engine upgrades for BREW, I started along those lines, but after putting in
a couple days at it I realized that I just couldn't afford to spend the time
to finish the work. "A clear vision" doesn't mean I can necessarily
implement it in a very small integral number of days. I wound up going with
a less efficient and less flexible approach that was simple and robust
enough to not likely need any more support from me after I handed it over.
During the development of DoomRPG, I had commented that it seemed obvious
that it should be followed up with a "traditional, Orcs&Elves sort of
fantasy game". A couple people independently commented that "Orcs&Elves"
wasn't a bad name for a game, so since we didn't run into any obstacles,
Orcs&Elves it was. Naming new projects is a lot harder than most people
think, because of trademark issues.
In hindsight, we made a strategic mistake at the start of O&E development.
We were fresh off the high end BREW version of DoomRPG, and we all liked
developing on BREW a lot better than Java. It isn't that BREW is inherently
brilliant; it just avoids the deep sucking nature of Java for resource
constrained platforms (however, note the above about many mobile games not
being resource constrained in the future), and allows you to work inside
visual studio. O&E development was started high-end first with the low-end
versions done afterwards. I should have known better (Anna was certainly
suspicious), because it is always easier to add flashy features without
introducing any negatives than it is to chop things out without damaging the
core value of a game. The high end version is really wonderful, with all
the graphics, sound, and gameplay we aimed for, but when we went to do the
low end versions, we found that even after cutting the media as we planned,
we were still a long way over the 280k java application limit. Rather than
just butchering it, we went for pain, suffering, and schedule slippage,
eventually delivering a game that still maintained high quality after the
de-scoping (the low end platforms still represent the majority of the
market). It would have been much easier to go the other way, but the high
end phone users will be happy with our mistake.
DoomRPG had three base platforms that were customized for different
phones -- Java, low end BREW, and high end BREW. O&E added a high end java
version that kept most of the quality of the high end BREW version on phones
fast enough to support it from carriers willing to allow the larger
download. The download size limits are probably the most significant
restriction for gaming on the high end phones. I don't really understand
why the carriers encourage streaming video traffic, but balk at a couple
megs of game media.
I am really looking forward to the response to Orcs&Elves, because I think
it is one of the best product evolutions I have been involved in. The core
game play mechanics that were laid out in DoomRPG have proven strong and
versatile (again, I bet we have a stable genre here), but now we have a big
bag of tricks and a year of polishing the experience behind us, along with a
world of some depth. I found it a very good indicator that play testers
almost always lost track of time while playing.
This project was doubly nostalgic for me -- the technology was over a decade
old for me, but the content took me back twenty years. All the computer
games I wrote in high school were adventure games, and my first two
commercial sales were Ultima style games for the Apple II, but Id Software
never got around to doing one. Old timers may recall that we were going to
do a fantasy game called "The Fight For Justice" (starring a hero called
Quake...) after Commander Keen, but Wolfenstein 3D and the birth of the FPS
sort of got in the way. :-)
Cell phone adventures
March 27th, 2005
John Carmack's Blog
I'm not a cell phone guy.
I resisted getting one at all for years, and even now I rarely carry
it. To a first approximation, I don't
really like talking to most people, so I don't go out of my way to enable
people to call me. However, a little
while ago I misplaced the old phone I usually take to Armadillo, and my wife
picked up a more modern one for me. It
had a nice color screen and a bunch of bad java game demos on it. The bad java games did it.
I am a big proponent of temporarily changing programming
scope every once in a while to reset some assumptions and habits. After Quake 3, I spent some time writing
driver code for the Utah-GLX project to give myself more empathy for the
various hardware vendors and get back to some low-level register
programming. This time, I decided I was
going to work on a cell phone game.
I wrote a couple java programs several years ago, and I was
left with a generally favorable impression of the language. I dug out my old Java in a Nutshell and
started browsing around on the web for information on programming for cell
phones. After working my way through
the alphabet soup of J2ME, CLDC, and MIDP, I've found that writing for the
platform is pretty easy.
In fact, I think it would be an interesting environment for
beginning programmers to learn on. I
started programming on an Apple II a long time ago, when you could just do an
hgr and start drawing to the screen, which was rewarding. For years, I've had misgivings about people
learning programming on Win32 (unix / X would be even worse), where it takes a
lot of arcane crap just to get to the point of drawing something on the screen
and responding to input. I assume most
beginners wind up with a lot of block copied code that they don't really understand.
All the documentation and tools needed are free off the web,
and there is an inherent neatness to being able to put the program on your
phone and walk away from the computer.
I wound up using the latest release of NetBeans with the mobility module,
which works pretty well. It certainly
isn't MSDev, but for a free IDE it seems very capable. On the downside, MIDP debugging sessions are
very flaky, and there is something deeply wrong when text editing on a 3.6 GHz
processor is anything but instantaneous.
I spent a while thinking about what would actually make a
good game for the platform, which is a very different design space than PCs or
consoles. The program and data sizes
are tiny, under 200k for java jar files.
A single texture is larger than that in our mainstream games. The data sizes to screen ratios are also far
out of the range we are used to. A
128x128x16+ bit color screen can display some very nice graphics, but you could
only store a half dozen uncompressed screens in your entire size budget. Contrast with PCs, which may be up to a few
megabytes of display data, but the total game data may be five hundred times that.
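The arithmetic behind that budget is worth spelling out. This is a rough sketch; the exact jar ceiling varied by carrier, so the 200k figure is taken from the text above, not a universal limit:

```java
public class ScreenBudget {
    // A 128x128 screen at 16 bits (2 bytes) per pixel.
    static final int SCREEN_BYTES = 128 * 128 * 2;   // 32 KB per uncompressed screen

    // A jar size ceiling in the 200 KB range, as described above.
    static final int JAR_LIMIT = 200 * 1024;

    // How many raw screens of data fit in the entire application budget.
    static int screensInBudget() {
        return JAR_LIMIT / SCREEN_BYTES;             // about a half dozen
    }
}
```

Six full screens of raw pixels and the budget is gone, which is why everything has to be tiles, palettes, and procedural reuse.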
You aren't going to be able to make an immersive experience
on a 2" screen, no matter what the graphics look like. Moody and atmospheric are pretty much
out. Stylish and fun is about the best
you can do.
The standard cell phone style discrete button direction pad
with a center action button is a good interface for one handed navigation and
selection, but it sucks for games, where you really want a game boy style
rocking direction pad for one thumb, and a couple separate action buttons for
the other thumb. These styles of input
are in conflict with each other, so it may never get any better. The majority of traditional action games
just don't work well with cell phone style input.
Network packet latency is bad, and not expected to be
improving in the foreseeable future, so multiplayer action games are pretty
much out (but see below).
I have a small list of games that I think would work out
well, but what I decided to work on is DoomRPG: sort of Bard's Tale meets
Doom. Step based smooth sliding/turning
tile movement and combat works out well for the phone input buttons, and
exploring a 3D world through the cell phone window is pretty neat. We talked to Jamdat about the business side
of things, and hired Fountainhead Entertainment to turn my proof-of-concept
demo and game plans into a full-featured game.
So, for the past month or so I have been spending about a day
a week on cell phone development.
Somewhat to my surprise, there is very little internal conflict
switching off from the high end work during the day with gigs of data and
multi-hundred instruction fragment shaders down to texture mapping in java at
night with one table lookup per pixel and 100k of graphics. It's all just programming and design work.
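For readers who haven't written a software rasterizer, "one table lookup per pixel" can be made concrete. The sketch below is a hypothetical fixed-point affine span loop over a palettized 64x64 texture; the names, the 16.16 fixed-point format, and the texture size are my illustrative assumptions, not the actual DoomRPG renderer:

```java
public class SpanRenderer {
    // Draw one horizontal span of an affine-textured surface.
    // u, v, du, dv are 16.16 fixed point; texture is 64x64 palettized.
    static void drawSpan(int[] frame, int dest, int count,
                         int u, int v, int du, int dv,
                         byte[] texture, int[] palette) {
        for (int i = 0; i < count; i++) {
            // Fold the v row offset and u column into one index:
            // (v>>16 & 63) * 64  ==  (v >> 10) & (63 << 6)
            int texel = texture[((v >> 10) & (63 << 6)) | ((u >> 16) & 63)] & 0xff;
            // The single table lookup per pixel: palette index -> screen color.
            frame[dest + i] = palette[texel];
            u += du;
            v += dv;
        }
    }
}
```

Everything expensive (perspective, lighting) is baked into the palette and the per-span deltas, so the inner loop is just adds, shifts, and the lookup.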
It turns out that I'm a lot less fond of Java for
resource-constrained work. I remember
all the little gripes I had with the Java language, like no unsigned bytes, and
the consequences of strong typing, like no memset, and the inability to read
resources into anything but a char array, but the frustrating issues are
details down close to the hardware.
The biggest problem is that Java is really slow. On a pure cpu / memory / display /
communications level, most modern cell phones should be considerably better
gaming platforms than a Game Boy Advance.
With Java, on most phones you are left with about the CPU power of an
original 4.77 MHz IBM PC, and lousy control over everything.
I spent a fair amount of time looking at java byte code
disassembly while optimizing my little rendering engine. This is interesting fun like any other
optimization problem, but it alternates with a bleak knowledge that even the
most inspired java code is going to be a fraction the performance of pedestrian
native C code.
Even compiled to completely native code, Java semantic
requirements like range checking on every array access hobble it. One of the phones (Motorola i730) has an
option that does some load time compiling to improve performance, which does
help a lot, but you have no idea what it is doing, and innocuous code changes
can cause the compilable heuristic to fail.
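The fragility described above often comes down to code shape. The sketch below shows the kind of innocuous-looking difference involved: hoisting the array into a local and bounding the loop by its length gives a VM's bounds-check eliminator something it can reason about, while the naive form re-reads the field every iteration. Whether any particular phone VM actually elides the checks is exactly the unknowable heuristic in question:

```java
public class Hoist {
    private final int[] data;

    public Hoist(int[] data) { this.data = data; }

    // Naive form: a getfield and a range check on every iteration,
    // and the VM can't easily prove the field doesn't change.
    int sumNaive() {
        int total = 0;
        for (int i = 0; i < data.length; i++) {
            total += data[i];
        }
        return total;
    }

    // Hoisted form: the array is a local, the bound is a local,
    // so the loop is a shape bounds-check elimination can recognize.
    int sumHoisted() {
        int[] d = data;
        int total = 0;
        for (int i = 0, n = d.length; i < n; i++) {
            total += d[i];
        }
        return total;
    }
}
```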
Write once, run anywhere. Ha. Hahahahaha. We are only testing on four platforms right
now, and not a single pair has the exact same quirks. All the commercial games are tweaked and compiled individually
for each (often 100+) platform.
Portability is not a justification for the awful performance.
Security on a cell phone is justification for doing
something, but an interpreter isn't a requirement; memory management units can
do just as well. I suspect this did
have something to do with Java's adoption early on. A simple embedded processor with no MMU could run arbitrary
programs securely with java, which might make it the only practical
option. However, once you start using
blazingly fast processors to improve the awful performance, an MMU with a
classic OS model looks a whole lot better.
Even saddled with very low computing performance, tighter
implementation of the platform interface could help out a lot. I'm not seeing very conscientious work on
the platforms so far. For instance,
there is just no excuse for having 10+ millisecond granularity in timing. Given that the java paradigm is sort of
thread-happy anyway, having a real scheduler that Does The Right Thing with
priorities and hardware interfacing would be an obvious thing. Pressing a key should generate a hardware
interrupt, which should immediately activate the key listening thread, which
should be able to immediately kill an in-process rendering and restart another
one if desired. The attitude seems to
be "15 msec here, 20 there, stick it on a queue, finish up a timeslice, who cares?"
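The "immediately kill an in-process rendering" idea can be sketched in plain Java threads (not MIDP; the class and method names here are hypothetical). The renderer checks a volatile flag between scanlines, and the input thread sets it the instant a key arrives, so a stale frame is abandoned rather than finished:

```java
public class AbortableRenderer {
    // Set by the input thread the moment a key arrives.
    private volatile boolean abortRequested;

    public void requestAbort() { abortRequested = true; }

    // Render scanline by scanline; bail out as soon as input
    // invalidates the frame. Returns true only if the frame completed.
    public boolean renderFrame(int height) {
        abortRequested = false;
        for (int y = 0; y < height; y++) {
            if (abortRequested) {
                return false;   // restart with the new game state instead
            }
            renderScanline(y);
        }
        return true;
    }

    protected void renderScanline(int y) { /* rasterize one row */ }
}
```

The point of the complaint above is that without a scheduler that wakes the key listener promptly, the flag gets set tens of milliseconds late and the responsiveness is lost no matter how the application is structured.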
I suspect I will enjoy working with BREW, the competing
standard for cell phone games. It lets
you use raw C/C++ code, or even, I suppose, assembly language, which completely
changes the design options.
Unfortunately, they only have a quarter the market share that the J2ME
phones have. Also, the relatively open
java platform development strategy is what got me into this in the first place:
one night I just tried writing a program for my cell phone, which isn't
possible for the more proprietary BREW platform.
I have a serious suggestion for the handset designers to go
with my idle bitching. I have been told
that fixing data packet latency is apparently not in the cards, and it isn't
even expected to improve much with the change to 3G infrastructure. Packet data communication seems more modern,
and has the luster of the web, but it is worth realizing that for network games
and many other flashy Internet technologies like streaming audio and video, we
use packets to rather inefficiently simulate a switched circuit.
Cell phones already have a very low latency digital data
path: the circuit switched channel used for voice. Some phones have included cellular modems that use either the CSD
standard (circuit switched data) at 9.8Kbits or 14.4Kbits or the HSCSD standard
(high speed circuit switched data) at 38.4Kbits or 57.6Kbits. Even the 9.8Kbit speed would be great for
networked games. A wide variety of two
player peer-to-peer games and multiplayer packet server based games could be
implemented over this with excellent performance. Gamers generally have poor memories of playing over even the
highest speed analog modems, but most of the problems are due to having far too
many buffers and abstractions between the data producers/consumers and the
actual wire interface. If you wrote
eight bytes to the device and it went in the next damned frame (instead of the
OS buffer, which feeds into a serial FIFO, which goes into another serial FIFO,
which goes into a data compressor, which goes into an error corrector, and
probably a few other things before getting into a wire frame), life would be
quite good. If you had a real time
scheduler, a single frame buffer would be sufficient, but since that isn't
likely to happen, having an OS buffer with accurate queries of the FIFO
positions is probably best. The worst
gaming experiences with modems weren't due to bandwidth or latency, but to
excess buffering.
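The "OS buffer with accurate queries of the FIFO positions" suggestion amounts to something like the sketch below: a transmit queue the game can interrogate before deciding whether to stuff another update into this frame. The class and method names are hypothetical, not any real handset API:

```java
public class TxFifo {
    private final byte[] buf;
    private int head, tail;   // monotonic counters: head = written, tail = drained

    public TxFifo(int capacity) { buf = new byte[capacity]; }

    // Bytes queued but not yet on the wire: the accurate query the game
    // needs to avoid piling updates behind a stalled buffer.
    public int pending() { return head - tail; }

    // Queue a packet only if it fits; refusing is better than
    // silently adding another frame of latency.
    public boolean write(byte[] data) {
        if (buf.length - pending() < data.length) return false;
        for (byte b : data) buf[head++ % buf.length] = b;
        return true;
    }

    // Called as the hardware actually drains bytes onto the wire.
    public void drained(int n) { tail += n; }
}
```

With `pending()` visible, the game can send a fresh state snapshot only when the previous one has cleared, which is the behavior all those stacked opaque buffers make impossible.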
Welcome, Q3 source, Graphics
December 31st, 2004
John Carmack's Blog
I get a pretty steady trickle of emails from people hoping
for .plan file updates. There were two
main factors involved in my not doing updates for a long time: a good chunk of
my time and interest was sucked into Armadillo Aerospace, and the fact that the
work I had been doing at Id for the last half of Doom 3 development was
basically pretty damn boring.
The Armadillo work has been very rewarding from a
learning-lots-of-new-stuff perspective, and I'm still committed to the vehicle
development, even post X-Prize, but the work at Id is back to a high level of
interest now that we are working on a new game with new technology. I keep running across topics that are
interesting to talk about, and the Armadillo updates have been a pretty good
way for me to organize my thoughts, so I'm going to give it a more general try
here. .plan files were appropriate ten
years ago, and sort of retro-cute several years ago, but I'll be sensible and
use the web.
I'm not quite sure what the tone is going to be; there will
probably be some general interest stuff, but a bunch of things will only be of
interest to hardcore graphics geeks.
I have had some hesitation about doing this because there
are a hundred times as many people interested in listening to me talk about
games / graphics / computers as there are people interested in rocket
fabrication, and my mailbox is already rather time consuming to get through.
If you really, really want to email me, add a [JC] in the
subject header so the mail gets filtered to a mailbox that isn't clogged with
spam. I can't respond to most of the
email I get, but I do read everything that doesn't immediately scan as
spam. Unfortunately, the probability of
getting an answer from me doesn't have a lot of correlation with the quality of
the question, because what I am doing at the instant I read it is more
dominant, and there is even a negative correlation for deep questions that I
don't want to make an off-the-cuff response to.
Quake 3 Source
I intended to release the Q3 source under the GPL by the end
of 2004, but we had another large technology licensing deal go through, and it
would be poor form to make the source public a few months after a company paid
hundreds of thousands of dollars for full rights to it. True, being public under the GPL isn't the
same as having a royalty free license without the need to disclose the source,
but I'm pretty sure there would be some hard feelings.
Previous source code releases were held up until the last
commercial license of the technology shipped, but with the evolving nature of
game engines today, it is a lot less clear.
There are still bits of early Quake code in Half Life 2, and the
remaining licensees of Q3 technology intend to continue their internal developments
along similar lines, so there probably won't be nearly as sharp a cutoff as
before. I am still committed to making
as much source public as I can, and I won't wait until the titles from the
latest deal have actually shipped, but it is still going to be a little while
before I feel comfortable doing the release.
Random Graphics Thoughts
Years ago, when I first heard about the inclusion of
derivative instructions in fragment programs, I couldn't think of anything off
hand that I wanted them for. As I start
working on a new generation of rendering code, uses for them come up a lot more
often than I expected.
I can't actually use them in our production code because it
is an Nvidia-only feature at the moment, but it is convenient to do experimental
code with the nv_fragment_program extension before figuring out various ways to
build funny texture mip maps so that the built in texture filtering hardware
calculates a value somewhat like the derivative I wanted.
If you are basically just looking for plane information, as
you would for modifying things with texture magnification or stretching shadow
buffer filter kernels, the derivatives work out pretty well. However, if you are looking at a derived
value, like a normal read from a texture, the results are almost useless
because of the way they are calculated.
In an ideal world, all of the samples to be differenced would be
calculated at once, then the derivatives calculated from there, but the
hardware only calculates 2x2 blocks at a time.
Each of the four pixels in the block is given the same derivative, and
there is no influence from neighboring pixels.
This gives derivative information that is basically half the resolution
of the screen and sort of point sampled.
You can often see this effect with bump mapped environment mapping into
a mip-mapped cube map, where the texture LOD changes discretely along the 2x2
blocks. Explicitly coloring based on
the derivatives of a normal map really shows how nasty the calculated value is.
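The half-resolution behavior is easy to reproduce on the CPU. The sketch below applies a simplified quad-uniform rule (difference the two columns of each 2x2 block and give all four pixels the same value, as described above); real hardware additionally distinguishes coarse and fine derivatives, which this ignores:

```java
public class QuadDerivative {
    // Hardware-style ddx over a scalar image: one derivative per 2x2 quad,
    // broadcast to all four pixels, no influence from neighboring quads.
    static float[][] ddxPerQuad(float[][] img) {
        int h = img.length, w = img[0].length;
        float[][] out = new float[h][w];
        for (int y = 0; y < h; y += 2) {
            for (int x = 0; x < w; x += 2) {
                float d = img[y][x + 1] - img[y][x];   // difference within the quad
                out[y][x] = out[y][x + 1] = d;
                out[y + 1][x] = out[y + 1][x + 1] = d;
            }
        }
        return out;
    }
}
```

Run this on anything nonlinear and the output is visibly blocky at quad granularity: the pixel at an odd column reports its quad's left-edge difference, not its own local one, which is the "half the resolution and sort of point sampled" effect.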
Speaking of bump mapped environment sampling
I spent a little while tracking down a
highlight that I thought was misplaced.
In retrospect it is obvious, but I never considered the artifact before: With a bump mapped surface, some of the
on-screen normals will actually be facing away from the viewer. This causes minor problems with lighting,
but when you are making a reflection vector from it, the vector starts
reflecting into the opposite hemisphere, resulting in some sky-looking pixels
near bottom edges on the model. Clamping
the surface normal to not face away isn't a good solution, because you get
areas that see right through to the environment map, because a reflection
past a clamped perpendicular vector doesn't change the viewing vector. I could probably ramp things based on the
geometric normal somewhat, and possibly pre-calculate some data into the normal
maps, but I decided it wasn't a significant enough issue to be worth any more
development effort or speed hit.
Speaking of cube maps
The edge filtering on cube maps is
showing up as an issue for some algorithms.
The hardware basically picks a face, then treats it just like a 2D
texture. This is fine in the middle of
the texture, but at the edges (which are a larger and larger fraction as size
decreases) the filter kernel just clamps instead of being able to sample the
neighbors in an adjacent cube face.
This is generally a non-issue for classic environment mapping, but when
you start using cube map lookups with explicit LOD bias inputs (say, to
simulate variable specular powers into an environment map) you can wind up with
a surface covered with six constant color patches instead of the smoothly
filtered coloration you want. The
classic solution would be to implement border texels, but that is pretty nasty for
the hardware and API, and would require either the application or the driver to
actually copy the border texels from all the other faces. Last I heard, upcoming hardware was going to
start actually fetching from the other side textures directly. A second-tier chip company claimed to do
this correctly a while ago, but I never actually tested it.
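A one-dimensional analogy shows why clamping at a face edge produces the constant patches. Treat two adjacent faces as 1D arrays of texels; a linear filter that refuses to cross the seam clamps both taps to the last texel, while correct filtering borrows the neighbor's border texel. This is a sketch of the filtering rule only, with hypothetical names, not any real cube map addressing code:

```java
public class SeamFilter {
    // Linear filter within one face, clamping at the edge the way
    // cube map hardware does when it won't cross into the next face.
    static float sampleClamped(float[] face, float u) {
        int i0 = (int) Math.floor(u - 0.5f);
        float frac = (u - 0.5f) - i0;
        int c0 = Math.max(0, Math.min(face.length - 1, i0));
        int c1 = Math.max(0, Math.min(face.length - 1, i0 + 1));
        return face[c0] * (1 - frac) + face[c1] * frac;
    }

    // The filtering you actually want: the tap past the edge
    // reads the border texel of the adjacent face.
    static float sampleAcrossSeam(float[] face, float[] neighbor, float u) {
        int i0 = (int) Math.floor(u - 0.5f);
        float frac = (u - 0.5f) - i0;
        float t0 = i0 < face.length ? face[Math.max(0, i0)] : neighbor[0];
        float t1 = (i0 + 1) < face.length ? face[Math.max(0, i0 + 1)] : neighbor[0];
        return t0 * (1 - frac) + t1 * frac;
    }
}
```

At a 4-texel mip level the clamped region is a quarter of the face; shrink the mip further and the seams dominate, which is why the artifact appears exactly when LOD bias pushes lookups into small mips.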
Topics continue to chain together; I'll probably write some
more next week.