Tuesday, June 23, 2015

Deferred irradiance volumes

I already have some sort of deferred-irradiance-volumes-based global illumination up and running, but there are several caveats I don't really like about it. Let me first show you some screens from my current implementation so you can see what difference it makes (or doesn't) for indoor scenes. In fact, there are probably much more suitable scenes to demonstrate the strengths of GI, but then again - this is what I have and this is what is going to be in the final game (or something very similar).

[screenshots: side-by-side GI comparison]
To be honest, I'm not really sure such a small difference is going to be noticed and appreciated by end users. A side observer could exclaim: "Man, no one will ever notice that global illumination you are trying your best to achieve, especially if you do not provide a direct, side-by-side comparison as in the pictures above. People just want a game to be fun, engaging and running at a decent speed on their computers. No one will appreciate a GI solution that makes almost no difference to the final picture, but stalls their computers like hell."
Anyway, GI is cool and is the next big thing in (realtime) computer graphics.

The technique works roughly like this:

Preprocess:
  • Spread light probes all over the place. A regular grid will do.
  • Gather the incoming light at each probe by rendering a cube map and storing it as SH coefficients
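The gather step boils down to projecting the cube map's radiance onto the SH basis. Here is a minimal CPU-side sketch of that projection (my own illustration, not the engine code) - it uses quasi-uniform Fibonacci-sphere directions in place of actual cube map texels, and all function names are made up:

```cpp
#include <cmath>

// Evaluate the 9 real SH basis functions (third order, bands l = 0..2)
// for a unit direction (x, y, z). The constants are the standard ones.
void shBasis(float x, float y, float z, float out[9]) {
    out[0] = 0.282095f;                         // Y(0, 0)
    out[1] = 0.488603f * y;                     // Y(1,-1)
    out[2] = 0.488603f * z;                     // Y(1, 0)
    out[3] = 0.488603f * x;                     // Y(1, 1)
    out[4] = 1.092548f * x * y;                 // Y(2,-2)
    out[5] = 1.092548f * y * z;                 // Y(2,-1)
    out[6] = 0.315392f * (3.0f * z * z - 1.0f); // Y(2, 0)
    out[7] = 1.092548f * x * z;                 // Y(2, 1)
    out[8] = 0.546274f * (x * x - y * y);       // Y(2, 2)
}

// Project a radiance function over the sphere into 9 SH coefficients.
// One channel shown; a real probe stores 3 * 9 for RGB, and radiance()
// would normally fetch from the rendered cube map.
template <typename F>
void projectSH(F radiance, int numSamples, float coeffs[9]) {
    for (int k = 0; k < 9; ++k) coeffs[k] = 0.0f;
    const float goldenAngle = 2.39996323f; // Fibonacci-sphere spiral step
    for (int i = 0; i < numSamples; ++i) {
        float z   = 1.0f - 2.0f * (i + 0.5f) / numSamples;
        float r   = std::sqrt(1.0f - z * z);
        float phi = goldenAngle * i;
        float x = r * std::cos(phi), y = r * std::sin(phi);
        float basis[9];
        shBasis(x, y, z, basis);
        float L = radiance(x, y, z);
        for (int k = 0; k < 9; ++k) coeffs[k] += L * basis[k];
    }
    // Monte Carlo normalization: each sample covers 4*pi / N steradians.
    const float w = 4.0f * 3.14159265f / numSamples;
    for (int k = 0; k < 9; ++k) coeffs[k] *= w;
}
```

A quick sanity check: projecting a constant radiance of 1.0 yields coeffs[0] = 0.282095 * 4*pi (about 3.545) and near-zero for the higher bands.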

At render time:
  • Render the probes as deferred lights and, for every pixel they cover, sample the SH with normalize(pixelPos - probePos) as the normal.
  • Add the sampled value to the pixel color as indirect light.
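The render-time sample is just a dot product between the stored coefficients and the SH basis evaluated in the probe-to-pixel direction. A self-contained sketch of that lookup (my own names and code, not the shader itself):

```cpp
#include <cmath>

// The 9 real SH basis functions (third order) for a unit direction.
void shBasis9(float x, float y, float z, float out[9]) {
    out[0] = 0.282095f;
    out[1] = 0.488603f * y;
    out[2] = 0.488603f * z;
    out[3] = 0.488603f * x;
    out[4] = 1.092548f * x * y;
    out[5] = 1.092548f * y * z;
    out[6] = 0.315392f * (3.0f * z * z - 1.0f);
    out[7] = 1.092548f * x * z;
    out[8] = 0.546274f * (x * x - y * y);
}

// Indirect light one probe contributes to a pixel: evaluate the stored
// coefficients in the direction normalize(pixelPos - probePos).
float sampleProbe(const float coeffs[9],
                  float px, float py, float pz,   // pixel world position
                  float ox, float oy, float oz) { // probe world position
    float dx = px - ox, dy = py - oy, dz = pz - oz;
    float len = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (len < 1e-6f) return coeffs[0] * 0.282095f; // pixel on top of probe
    dx /= len; dy /= len; dz /= len;
    float b[9];
    shBasis9(dx, dy, dz, b);
    float result = 0.0f;
    for (int k = 0; k < 9; ++k) result += coeffs[k] * b[k];
    return result;
}
```

With only the DC coefficient set (a probe lit uniformly from all sides), the lookup returns the same value in every direction, as it should.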
Well, this obviously works, but:
  • It is semi-static. Dynamic objects can sample the probes just fine and receive correct lighting, but the environment and lighting cannot change without recomputing the nearby probes.
  • It is kinda slow. To get decent results, many probes must be present at a location. Still, a probe is just 3 * 9 floats (third order), and probes exist only where they can contribute. Empty space (if no object can enter it, including dynamic ones) or space outside an L-shaped level doesn't need any probes at all. A 3D texture, on the other hand, covers the entire box with data, no matter if there is something there or not.
  • Unfortunately, many probes stuffed together to make a dense grid mean lots of overdraw -> slow.
  • Probes aren't geometry aware, so expect lots of light bleeding.
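To put back-of-the-envelope numbers on the storage argument (my own figures, not measured from the engine - the 32^3 RGBA16F volume is a hypothetical example):

```cpp
#include <cstddef>

// One RGB probe: 3 color channels * 9 SH coefficients * 4-byte floats.
constexpr std::size_t probeBytes() { return 3 * 9 * sizeof(float); } // 108 B

// A dim^3 RGBA16F volume texture: every texel is paid for, empty or not.
// 4 channels * 2 bytes per half-float texel.
constexpr std::size_t volumeBytes(std::size_t dim) {
    return dim * dim * dim * 4 * 2;
}
```

Even a modest 32^3 volume comes to 262144 bytes - the memory of roughly 2400 probes, which the sparse scheme is free to place only where they matter.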

I think I'm going to try some kind of light propagation volumes approach with volume textures that move along with the camera. Unfortunately, Direct3D 9 cannot render directly to a volume texture, so I will most probably try the geometry injection pass by locking the slices and using some kind of depth peeling.
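I can't speak for the final D3D9 path yet, so here is only a CPU-side sketch of the slice-by-slice idea: treat the volume as a stack of 2D slices (which is what locking a volume texture level hands you anyway), inject light into individual slices, then run a naive 6-neighbor propagation step. All names, and the propagation weight of 0.1, are arbitrary illustrative choices - a real LPV propagates SH coefficients, not a single intensity.

```cpp
#include <vector>

// Single-channel intensity volume; a real LPV cell stores SH coefficients.
struct Volume {
    int dim;
    std::vector<float> cells;
    explicit Volume(int d) : dim(d), cells(d * d * d, 0.0f) {}
    float& at(int x, int y, int z) { return cells[(z * dim + y) * dim + x]; }
    // Pointer to one z-slice - analogous to the pointer you get back
    // after locking that slice of a volume texture.
    float* slice(int z) { return &cells[z * dim * dim]; }
};

// Injection: write light into one slice at a time.
void injectIntoSlice(Volume& v, int z, int x, int y, float intensity) {
    v.slice(z)[y * v.dim + x] += intensity;
}

// One propagation step: each cell keeps its own light and gathers a
// small fraction from its 6 face neighbors.
Volume propagate(Volume& v) {
    Volume out(v.dim);
    const int dirs[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    for (int z = 0; z < v.dim; ++z)
        for (int y = 0; y < v.dim; ++y)
            for (int x = 0; x < v.dim; ++x) {
                float sum = v.at(x, y, z);
                for (const auto& d : dirs) {
                    int nx = x + d[0], ny = y + d[1], nz = z + d[2];
                    if (nx >= 0 && nx < v.dim && ny >= 0 && ny < v.dim &&
                        nz >= 0 && nz < v.dim)
                        sum += 0.1f * v.at(nx, ny, nz);
                }
                out.at(x, y, z) = sum;
            }
    return out;
}
```

After one step, a point light injected into the center cell spills a fraction of its intensity into the six adjacent cells, which is the basic behavior the GPU version has to reproduce slice by slice.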