Tuesday, June 23, 2015

Deferred irradiance volumes

I have some sort of deferred irradiance volumes based global illumination already up and running, but there are several caveats I don't really like about it. Let me first show you some screens from my current implementation, so you can eventually see what difference it makes (or doesn't) for indoor scenes. In fact, there are probably much more suitable scenes to demonstrate the strength of GI, but then again - this is what I have and this is what is going to be in the final game (or something very similar).

[Screenshots: side-by-side comparison of the indoor scene with and without the GI contribution]

To be honest, I'm not really sure if such a small difference is going to be noticed and appreciated by the end users. A side observer could exclaim: "Man, no one will ever notice that global illumination you are trying your best to achieve, especially if you do not provide a direct, side-by-side comparison as in the pictures above. People just want a game to be fun, engaging and running at decent speeds on their computers. No one will appreciate a GI solution that makes almost no difference to the final picture, but stalls their computers as hell."
Anyway, GI is cool and it is the next big thing in (real-time) computer graphics.

The above technique works roughly like this:

Preprocess:
  • Spread light probes all over the place. A regular grid will do.
  • Gather the incoming light at each probe by rendering a cube map and projecting it into SH coefficients (see the sketch below).
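
Just to make the projection step concrete, here is a minimal CPU-side sketch (not the actual engine code): it assumes the six cube faces have already been read back to system memory as RGB float data, and it uses uniform texel weighting instead of proper per-texel solid angles.

// Minimal sketch: project a cube map (already read back to CPU memory) into
// third-order SH - 9 coefficients per color channel, the "3 * 9 floats" per probe.
// Uniform texel weighting is used for brevity; a proper implementation would
// weight each texel by its solid angle.

#include <cmath>

struct SHProbe { float c[9][3]; };   // 9 SH coefficients x RGB

// The 9 real SH basis functions evaluated for a unit direction (x, y, z).
static void EvalSHBasis(float x, float y, float z, float sh[9])
{
    sh[0] = 0.282095f;
    sh[1] = 0.488603f * y;
    sh[2] = 0.488603f * z;
    sh[3] = 0.488603f * x;
    sh[4] = 1.092548f * x * y;
    sh[5] = 1.092548f * y * z;
    sh[6] = 0.315392f * (3.0f * z * z - 1.0f);
    sh[7] = 1.092548f * x * z;
    sh[8] = 0.546274f * (x * x - y * y);
}

// faces[6] point to size*size RGB float texels each (+X, -X, +Y, -Y, +Z, -Z).
SHProbe ProjectCubeMapToSH(const float* faces[6], int size)
{
    SHProbe probe = {};
    int samples = 0;
    for (int f = 0; f < 6; ++f)
    for (int v = 0; v < size; ++v)
    for (int u = 0; u < size; ++u, ++samples)
    {
        // Map the texel center to a point on the cube, then normalize to a direction.
        float s = 2.0f * (u + 0.5f) / size - 1.0f;
        float t = 2.0f * (v + 0.5f) / size - 1.0f;
        float d[3];
        switch (f) {
        case 0: d[0] =  1; d[1] = -t; d[2] = -s; break; // +X
        case 1: d[0] = -1; d[1] = -t; d[2] =  s; break; // -X
        case 2: d[0] =  s; d[1] =  1; d[2] =  t; break; // +Y
        case 3: d[0] =  s; d[1] = -1; d[2] = -t; break; // -Y
        case 4: d[0] =  s; d[1] = -t; d[2] =  1; break; // +Z
        case 5: d[0] = -s; d[1] = -t; d[2] = -1; break; // -Z
        }
        float len = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
        float sh[9];
        EvalSHBasis(d[0]/len, d[1]/len, d[2]/len, sh);

        const float* texel = &faces[f][(v * size + u) * 3];
        for (int i = 0; i < 9; ++i)
            for (int ch = 0; ch < 3; ++ch)
                probe.c[i][ch] += texel[ch] * sh[i];
    }
    // Normalize so the sum approximates an integral over the sphere (4*pi sr).
    const float norm = 4.0f * 3.1415926f / samples;
    for (int i = 0; i < 9; ++i)
        for (int ch = 0; ch < 3; ++ch)
            probe.c[i][ch] *= norm;
    return probe;
}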

At render time:
  • Render the probes like deferred lights and, for every affected pixel they cover, sample the SH with a normalize(pixelPos - probePos) direction (see the evaluation sketch below).
  • Add the value as indirect light to the pixel color value.
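
And a minimal sketch of what the probe evaluation boils down to at render time, written as plain C++ for readability (in practice this lives in the light-volume pixel shader); it reuses the SHProbe struct and EvalSHBasis from the sketch above.

// Minimal sketch of the per-pixel evaluation a probe's light volume performs.
// Variable names are illustrative only.

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 EvaluateProbe(const SHProbe& probe, Vec3 pixelPos, Vec3 probePos)
{
    // Direction used to look up the probe, as described above.
    float dx = pixelPos.x - probePos.x;
    float dy = pixelPos.y - probePos.y;
    float dz = pixelPos.z - probePos.z;
    float len = std::sqrt(dx*dx + dy*dy + dz*dz);
    dx /= len; dy /= len; dz /= len;

    float sh[9];
    EvalSHBasis(dx, dy, dz, sh);   // same basis as in the projection step

    // Reconstruct the stored lighting in that direction; this is the value
    // added to the pixel color as indirect light.
    Vec3 indirect = { 0, 0, 0 };
    for (int i = 0; i < 9; ++i) {
        indirect.x += probe.c[i][0] * sh[i];
        indirect.y += probe.c[i][1] * sh[i];
        indirect.z += probe.c[i][2] * sh[i];
    }
    return indirect;
}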
Well, this obviously works, but:
  • It is semi-static. Dynamic objects can sample the probes just fine and receive correct lighting, but the environment and the lighting cannot change without recomputing the nearby probes.
  • It is kinda slow. To get decent results, many probes must be present at a given location. Still, a probe is just 3 * 9 floats (third-order SH), and probes exist only where they are needed to contribute. Empty space (if no object can enter it, including dynamic ones) or space outside an L-shaped level does not need probes at all. A 3D texture, on the other hand, covers the entire box with data, no matter if there is something there or not.
  • Unfortunately, many probes stuffed together to make a dense grid means lots of overdraw -> slow.
  • Also, probes aren't geometry-aware, so expect lots of light bleeding.

I think I'm going to try some kind of Light Propagation Volumes approach, with volume textures that move along with the camera. Unfortunately, Direct3D 9.0 cannot render directly into a volume texture, so I will most probably try to do the geometry injection pass by locking the slices and using some kind of depth peeling.
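
I haven't written that pass yet, so here is only a minimal sketch of the "locking the slices" part in Direct3D 9 - creating a lockable volume texture in the managed pool and filling it slice by slice on the CPU; the injected values below are placeholders for whatever the real injection pass would produce.

// Minimal sketch: a lockable volume texture filled slice by slice via LockBox.
// The zero values written here are placeholders for the injected lighting data.

#include <d3d9.h>

IDirect3DVolumeTexture9* CreateAndFillVolume(IDirect3DDevice9* device, UINT dim)
{
    IDirect3DVolumeTexture9* volume = 0;
    if (FAILED(device->CreateVolumeTexture(dim, dim, dim, 1, 0,
               D3DFMT_A32B32G32R32F, D3DPOOL_MANAGED, &volume, 0)))
        return 0;

    D3DLOCKED_BOX box;
    if (SUCCEEDED(volume->LockBox(0, &box, 0, 0)))
    {
        for (UINT z = 0; z < dim; ++z)          // one slice per iteration
        for (UINT y = 0; y < dim; ++y)
        {
            float* row = (float*)((char*)box.pBits + z * box.SlicePitch
                                                   + y * box.RowPitch);
            for (UINT x = 0; x < dim; ++x)
            {
                row[x * 4 + 0] = 0.0f;  // placeholder: injected R
                row[x * 4 + 1] = 0.0f;  // placeholder: injected G
                row[x * 4 + 2] = 0.0f;  // placeholder: injected B
                row[x * 4 + 3] = 0.0f;  // placeholder: injected A
            }
        }
        volume->UnlockBox(0);
    }
    return volume;
}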



Monday, April 27, 2015

Tray Racer ( aka Ray Tracer )

No, the title and the article aren't aimed at a whiskey drinking competition with crystal glasses full of ice cubes taken from ice trays. It's a silly attempt at wordplay, meant to make a reference to ray tracing.
I was asked to write a thesis in computer graphics, so I had to refresh my basic knowledge not only of interactive graphics and rasterization, but also of other methods of non-real-time graphics like ray tracing.
I find it quite useful to investigate a completely different approach to computer graphics. And to be honest, I like it very much. It is a much more elegant and natural way to render things, not to mention it often needs just a small fraction of the hacks needed to render proper graphics on screen compared to rasterization methods. You just have your eye properties, like position, direction and field of view, and for every pixel on screen you shoot rays through the scene to test which objects those rays hit (and their properties - color, etc.) and where they hit them (for the lighting interaction calculation). In a very simple test case, you can use spheres to test against your eye rays. A sphere can be represented parametrically by a position in space and a scalar value for the radius.
You can find the intersection point completely algebraically by substituting the ray equation into the sphere equation and solving the resulting quadratic equation. The idea is: if you have your ray equation and your sphere equation, you can find the solutions (points) that satisfy both equations at the same time. That's where the ray pierces the sphere. For a ray tracer you are most likely interested in the closest point to the origin of the ray.
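
A minimal sketch of that intersection test (variable names are just for illustration):

// Ray: p(t) = origin + t * dir (dir normalized).  Sphere: |p - center|^2 = r^2.
// Substituting gives a quadratic a*t^2 + b*t + c = 0; the smallest positive
// root is the closest hit in front of the ray origin.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and the ray parameter t of the nearest hit, or false on a miss.
bool IntersectRaySphere(Vec3 origin, Vec3 dir, Vec3 center, float radius, float& t)
{
    Vec3 oc = sub(origin, center);
    float a = dot(dir, dir);                  // == 1 if dir is normalized
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;

    float discriminant = b * b - 4.0f * a * c;
    if (discriminant < 0.0f)
        return false;                         // ray misses the sphere

    float sq = std::sqrt(discriminant);
    float t0 = (-b - sq) / (2.0f * a);        // nearer root
    float t1 = (-b + sq) / (2.0f * a);        // farther root
    t = (t0 > 0.0f) ? t0 : t1;                // prefer the closest hit in front
    return t > 0.0f;
}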
I'm playing with a simple ray tracer that supports reflection, shadows, texture mapping (sphere coordinates turned into UV coordinates) and even global illumination by calculating several bounces of light. It uses recursion, as that provides a more elegant way of solving such problems (a rough sketch of the recursive part is below). It's also single-threaded.
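
Here is a rough sketch of that recursive part, reusing Vec3, sub, dot and IntersectRaySphere from the sketch above; the sphere material, the single point light and the shading are deliberately minimal placeholders for the real texture-mapping and material code.

// Rough outline of the recursive trace: direct light + shadow ray at the hit
// point, then a mirrored secondary ray recursed with one bounce less.

#include <cmath>
#include <vector>

struct Sphere { Vec3 center; float radius; Vec3 color; float reflectivity; };

static Vec3 add(Vec3 a, Vec3 b)    { Vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z }; return r; }
static Vec3 scale(Vec3 a, float s) { Vec3 r = { a.x * s, a.y * s, a.z * s }; return r; }
static Vec3 normalize(Vec3 a)      { return scale(a, 1.0f / std::sqrt(dot(a, a))); }

// Find the closest sphere hit along the ray, if any.
static bool Closest(const std::vector<Sphere>& scene, Vec3 o, Vec3 d,
                    float& tBest, int& iBest)
{
    tBest = 1e30f; iBest = -1;
    for (size_t i = 0; i < scene.size(); ++i) {
        float t;
        if (IntersectRaySphere(o, d, scene[i].center, scene[i].radius, t) && t < tBest)
        { tBest = t; iBest = (int)i; }
    }
    return iBest >= 0;
}

Vec3 Trace(const std::vector<Sphere>& scene, Vec3 lightPos, Vec3 o, Vec3 d, int depth)
{
    float t; int i;
    if (!Closest(scene, o, d, t, i)) {
        Vec3 background = { 0, 0, 0 };
        return background;
    }

    Vec3 p = add(o, scale(d, t));                     // hit point
    Vec3 n = normalize(sub(p, scene[i].center));      // sphere normal
    Vec3 l = normalize(sub(lightPos, p));             // direction to the light

    // Shadow ray: if anything blocks the light, drop the direct term.
    // (For brevity the distance to the light is ignored.)
    float ts; int is;
    bool shadowed = Closest(scene, add(p, scale(n, 1e-3f)), l, ts, is);
    float diffuse = (!shadowed && dot(n, l) > 0.0f) ? dot(n, l) : 0.0f;
    Vec3 color = scale(scene[i].color, diffuse);

    // Reflection / light bounce: mirror the ray around the normal and recurse.
    if (depth > 0) {
        Vec3 r = sub(d, scale(n, 2.0f * dot(d, n)));
        Vec3 bounced = Trace(scene, lightPos, add(p, scale(n, 1e-3f)), r, depth - 1);
        color = add(color, scale(bounced, scene[i].reflectivity));
    }
    return color;
}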

Tuesday, January 13, 2015

Mapping the Light

Lightmaps are old tech: they eat up a lot of memory, they are static, they need pre-processing, they are expensive to compute and cheap to run (except for the memory cost), but they can look very natural on static geometry, especially if they are made with radiosity in mind, i.e. if they contain the diffuse light interreflections between surfaces. Radiosity is a global illumination algorithm in the sense that the illumination arriving on a surface comes not just directly from the light sources, but also from other surfaces reflecting light. Radiosity calculations are viewpoint-independent, which increases the computations involved but makes them usable from all viewpoints.
In the times of modern GPUs running modern lighting approaches like deferred lighting, where everything is fully dynamic and per-pixel, talking about old tech like lightmaps seems a bit odd at first sight. But if you think about the increasing mobility of computing in general and about modern hand-held devices with limited battery power and physical dimensions, you will see why the old stuff is gaining momentum on new devices.
Recently I started implementing a simple lightmapper (without radiosity). I will outline the process and provide some code in a next post. Here are a few screenshots of how it looks so far.