This is part three of my game development posts, following on from my post on the artwork in SWAT.
One of the most novel aspects of Okre was its treatment of lighting. We wanted to take full advantage of all the cool pixel and vertex shader technology at our disposal on Xbox, so per-pixel lighting was a given. Additionally, from a game point of view we wanted to be able to let the player shoot out the lights to plunge the enemies into darkness, so that was another consideration. Finally, we wanted a proper shadowing solution that didn’t rely on the texture-based solution of the time — lightmaps, as used in Quake and so on. We didn’t think we could store all the lightmaps for a single level in memory, as our levels were outdoor and rather sprawling.
With that in mind, we considered generating the static scenery shadows geometrically. Nik worked his magic and came up with a solution. In a lengthy, offline process:
- A triangle casts a shadow onto two other triangles.
- The shadow region is cut out, leaving only the areas in light.
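To give a flavour of the geometry work involved, here's a minimal sketch of the core operation: clipping a polygon against one plane of a shadow volume, Sutherland-Hodgman style. Applying it once per plane of the volume cuts the shadowed region out of a scene triangle. This illustrates the general technique only; it isn't Nik's actual code, and all the names are mine.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };  // dot(n, p) + d >= 0 means "lit side"

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
float signedDistance(const Plane& p, const Vec3& v) { return dot(p.n, v) + p.d; }
Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return {a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t};
}

// Clip a convex polygon, keeping only the part on the positive side of the
// plane (Sutherland-Hodgman). One such clip per plane of a shadow volume
// removes the shadowed region from a scene triangle.
std::vector<Vec3> clipToPlane(const std::vector<Vec3>& poly, const Plane& plane) {
    std::vector<Vec3> out;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        const Vec3& cur = poly[i];
        const Vec3& next = poly[(i + 1) % poly.size()];
        float dCur = signedDistance(plane, cur);
        float dNext = signedDistance(plane, next);
        if (dCur >= 0) out.push_back(cur);  // keep vertices on the lit side
        if ((dCur >= 0) != (dNext >= 0))    // edge crosses the plane:
            out.push_back(lerp(cur, next, dCur / (dCur - dNext)));
    }
    return out;
}
```

Even this tiny routine hints at the robustness issues: the interpolation divides by a very small number whenever an edge lies almost in the plane, exactly the sort of degenerate case that broken source artwork kept producing.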
So we ended up with a lot more geometry: one piece per light per chunk of original scene geometry. This sounds simple enough, but in practice it was hugely problematic.
Nik came up with some great solutions to these problems.
There were still a few cases where the algorithm didn’t work, usually because of broken source artwork. It was left to the artists to fix up the geometry to get the level converting correctly: removing coincident triangles, welding nearly-identical vertices, and fixing non-manifold¹ edges.
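To illustrate one of those fix-ups, welding nearly-identical vertices can be approximated by snapping positions onto a grid and merging everything that lands in the same cell. This is a rough sketch of the general idea, not the tool we actually used:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

struct Vec3 { float x, y, z; };

// Weld nearly-identical vertices by quantising positions onto a grid of
// cell size `tolerance` and merging everything that lands in the same cell.
// Returns a remap from old vertex indices to indices into `welded`.
std::vector<std::uint32_t> weldVertices(const std::vector<Vec3>& verts,
                                        float tolerance,
                                        std::vector<Vec3>& welded) {
    std::map<std::tuple<std::int64_t, std::int64_t, std::int64_t>,
             std::uint32_t> cells;
    std::vector<std::uint32_t> remap(verts.size());
    for (std::size_t i = 0; i < verts.size(); ++i) {
        auto key = std::make_tuple(
            static_cast<std::int64_t>(std::floor(verts[i].x / tolerance)),
            static_cast<std::int64_t>(std::floor(verts[i].y / tolerance)),
            static_cast<std::int64_t>(std::floor(verts[i].z / tolerance)));
        auto [it, inserted] =
            cells.try_emplace(key, static_cast<std::uint32_t>(welded.size()));
        if (inserted) welded.push_back(verts[i]);  // first vertex in this cell
        remap[i] = it->second;                     // later ones collapse onto it
    }
    return remap;
}
```

(A real tool would also check neighbouring cells, since two vertices within tolerance of each other can still straddle a cell boundary.)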
These shadows were great, being relatively cheap at runtime, but they were limited to static scenery and non-moving lights. We could vary the intensity and colour of lights at runtime, but not their position. They also lent themselves well to the PlayStation 2 engine, which — if you recall from my earlier post — was a bit of an afterthought.
However, Okre also supported realtime shadows on the Xbox, using a stencil-based approach. In the final cut of SWAT — much to my and Nik’s annoyance — the character shadows were dropped due to a perceived speed problem. They were expensive, particularly on the skinned, animating characters, but in my opinion not so expensive as to justify dropping them entirely (as far as I recall, anyway)!
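The details of how we drew ours are for the next post, but for the curious, the classic depth-pass stencil technique goes roughly like this. The sketch below uses OpenGL calls purely for illustration (the Xbox used a Direct3D-derived API), and the scene-drawing helpers are hypothetical:

```cpp
#include <GL/gl.h>

// Hypothetical helpers standing in for the engine's actual draw calls:
void drawSceneAmbient();   // whole scene, ambient light only
void drawShadowVolumes();  // extruded shadow volumes for one light
void drawSceneLit();       // whole scene, this light's contribution

// One light's shadow, depth-pass style. Assumes the depth test is enabled
// and the stencil buffer has been cleared to zero.
void drawStencilShadowedLight() {
    // Pass 1: lay down depth (and ambient colour) for the whole scene.
    drawSceneAmbient();

    // Pass 2: rasterise the shadow volumes into the stencil buffer only.
    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // no colour writes
    glDepthMask(GL_FALSE);                                // no depth writes
    glStencilFunc(GL_ALWAYS, 0, ~0u);

    glCullFace(GL_BACK);                     // front faces increment...
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    drawShadowVolumes();
    glCullFace(GL_FRONT);                    // ...back faces decrement
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
    drawShadowVolumes();

    // Anything left with a non-zero stencil count is inside a shadow volume.
    // Pass 3: additively draw the light only where the stencil is zero.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glCullFace(GL_BACK);
    glStencilFunc(GL_EQUAL, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDepthFunc(GL_EQUAL);                   // re-draw exactly the same surfaces
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);             // add this light's contribution
    drawSceneLit();
    glDisable(GL_BLEND);
    glDepthFunc(GL_LESS);
    glDisable(GL_STENCIL_TEST);
}
```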
I’ll go into how the light polygons and stencils were actually drawn in the next post, where I’ll cover the rest of the rendering engine too.
Another cool feature of the lighting system was its simulation of a real film camera. After rendering, the entire screen was post-processed to simulate film’s non-linear response to light and the “aperture” of our virtual camera. A further post-process would bleed out very bright areas. We sampled back a set of pixels near the centre of the screen every frame, and used this to adjust the aperture for the next frame, simulating auto-exposure.
(Image: looking out from a relatively dark area into the bright outdoors.)
When the player went from a dark room to the bright outside, this would momentarily dazzle them until the aperture closed a little. Glancing back at the dark room, the player would then only see a pitch black area, just as in real life. A similar effect was also used to simulate the temporary dazzling caused by the bright light of a flashbang grenade.
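Here's a minimal sketch of that feedback loop. The exponential response curve, the constants, and all the names are illustrative assumptions, not what we actually shipped:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Film-like response: real film saturates smoothly towards white rather
// than clipping hard. A simple exponential curve is one way to model that.
float filmicResponse(float linearLight, float aperture) {
    return 1.0f - std::exp(-aperture * linearLight);
}

// Auto-exposure feedback: average the luminance of pixels sampled near the
// screen centre, then ease the aperture towards the value that would expose
// that average to mid-grey on the *next* frame.
float adaptAperture(const std::vector<float>& centreLuminance, float aperture) {
    if (centreLuminance.empty()) return aperture;
    float sum = 0.0f;
    for (float l : centreLuminance) sum += l;
    float average = sum / static_cast<float>(centreLuminance.size());

    const float targetExposed = 0.5f;  // aim for mid-grey after the tonemap
    // Invert 1 - exp(-aperture * average) == targetExposed for the aperture...
    float ideal = -std::log(1.0f - targetExposed) / std::max(average, 1e-4f);
    // ...but only move part of the way each frame, so stepping out of a dark
    // room into daylight dazzles briefly before the aperture closes down.
    const float adaptRate = 0.05f;
    return aperture + (ideal - aperture) * adaptRate;
}
```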
Next time I’ll talk about how the renderer worked, at least on Xbox.
¹ Non-manifold edges are where more than two polygons share an edge. A little like the pages of a book, where the spine would be non-manifold.