To be honest, I'm not really sure such a small difference is going to be noticed and appreciated by end users. A side observer could exclaim: "Man, no one will ever notice that global illumination you are trying your best to achieve, especially if you do not provide a direct, side-by-side comparison as in the pictures above. People just want a game to be fun, engaging, and running at decent speeds on their computers. No one will appreciate a GI solution that makes almost no difference to the final picture but stalls their computers like hell."
Anyway, GI is cool and is the next big thing in (realtime) computer graphics.
The above technique looks roughly like this:
Preprocess:
- Spread light probes all over the place. A regular grid will do.
- Gather incoming light at each probe by rendering a cube map and store it as SH coefficients.
At render time:
- Render the probes as deferred lights and, for every affected pixel they cover, sample the SH with the direction normalize(pixelPos - probePos) (see the sketch after this list).
- Add the result as indirect light to the pixel color.
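To make that render-time step concrete, here is a minimal C++ sketch of evaluating a third-order SH probe along normalize(pixelPos - probePos). It is my own illustration, not code from this project; the Probe struct and evalProbe name are made up, and in practice this would live in a pixel shader.

```cpp
// Minimal sketch (not the author's exact code): evaluating a third-order
// SH probe. `Probe` and `evalProbe` are hypothetical names.
struct Vec3 { float x, y, z; };

struct Probe {
    // 9 coefficients per color channel = 3 * 9 floats, as mentioned above.
    float r[9], g[9], b[9];
};

// Real SH basis, bands 0..2, evaluated for a unit direction d.
static void shBasis(const Vec3& d, float sh[9]) {
    sh[0] = 0.282095f;
    sh[1] = 0.488603f * d.y;
    sh[2] = 0.488603f * d.z;
    sh[3] = 0.488603f * d.x;
    sh[4] = 1.092548f * d.x * d.y;
    sh[5] = 1.092548f * d.y * d.z;
    sh[6] = 0.315392f * (3.0f * d.z * d.z - 1.0f);
    sh[7] = 1.092548f * d.x * d.z;
    sh[8] = 0.546274f * (d.x * d.x - d.y * d.y);
}

// Reconstruct the probe's stored light along dir = normalize(pixelPos - probePos).
Vec3 evalProbe(const Probe& p, const Vec3& dir) {
    float sh[9];
    shBasis(dir, sh);
    Vec3 c = { 0, 0, 0 };
    for (int i = 0; i < 9; ++i) {
        c.x += p.r[i] * sh[i];
        c.y += p.g[i] * sh[i];
        c.z += p.b[i] * sh[i];
    }
    return c; // added as indirect light to the pixel color
}
```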
It is semi-static. Dynamic objects can sample the probes just fine and receive correct lighting, but the environment and lighting cannot change without recomputing the nearby probes.
It is kinda slow. To get decent results, many probes must be present in a location. Still, a probe is just 3 * 9 floats (third order), and probes exist only where they are needed to contribute. Empty space (if no object, including a dynamic one, can enter it) or space outside an L-shaped level does not need probes. A 3D texture, on the other hand, covers the entire box with data whether there is something there or not.
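To put rough numbers on that (my own back-of-the-envelope estimate, not figures from the post): a third-order probe is 27 floats, about 108 bytes, so a few thousand probes placed only where they matter total a few hundred KB, while a 64x64x64 volume texture at 4 bytes per texel occupies 64^3 * 4 bytes = 1 MB no matter how much of the box is empty space.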
Unfortunately, stuffing many probes together to make a dense grid means lots of overdraw -> slow.
Also, probes aren't geometry-aware, so expect lots of light bleeding.
I think I'm going to try some kind of Light Propagation Volumes approach with volume textures that move along with the camera. Unfortunately, Direct3D 9.0 cannot render directly to a volume texture, so I will most probably do the geometry injection pass by locking the slices and using some kind of depth peeling.
Have you thought about rendering the indirect light into lightmaps for static geometry? And, for dynamic objects, sampling the lobes only per vertex? Maybe the speedup is good enough to update some lobes per frame, and the whole thing becomes dynamic GI.
Hello,
Joe, your name sounds familiar to me... By any chance, does the name "Bajul" ring any bells for you? :) If not, sorry and never mind.
Well, yes - I thought about rendering a static lightmap for static geometry, but I don't like the idea very much because... well, it's static. I need dynamic lighting on static geometry, because there will be doors to open, light bleeding through doorways, other dynamic (maybe emissive) objects messing with the lighting, etc.
Lately I'm thinking about implementing some kind of Light Propagation Volumes / voxel cone tracing: render a volume texture with a depth-peeling approach (render slice by slice and copy the results from a 2D render target) that stores the geometry properties around the camera, inject lighting into this texture (maybe VPLs generated from a reflective shadow map), and propagate the lighting along 6 directions (on the CPU), taking into account the geometry properties rendered in step 1, so light will reflect off surfaces and not bleed through geometry. No preprocessing, no huge volume textures, and probably fast enough to be used in an actual game, not just a graphics demo.
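For illustration, here is a minimal CPU sketch of the 6-direction propagation idea, under my own simplifying assumptions: a single intensity channel per cell and a per-cell occlusion value coming from the geometry pass. Real LPV stores SH per cell, and all names here are hypothetical.

```cpp
// Minimal sketch of 6-direction CPU light propagation (not the author's
// code). `occlusion` holds per-cell geometry occupancy in [0,1].
#include <vector>

const int N = 32; // volume is N x N x N cells
inline int idx(int x, int y, int z) { return (z * N + y) * N + x; }

// One propagation iteration: every cell gathers light from its 6 neighbors,
// attenuated by how much geometry occupies the neighbor cell, so light
// does not bleed through solid walls.
void propagate(const std::vector<float>& src,
               const std::vector<float>& occlusion,
               std::vector<float>& dst) {
    const int dx[6] = { 1,-1, 0, 0, 0, 0 };
    const int dy[6] = { 0, 0, 1,-1, 0, 0 };
    const int dz[6] = { 0, 0, 0, 0, 1,-1 };
    for (int z = 0; z < N; ++z)
    for (int y = 0; y < N; ++y)
    for (int x = 0; x < N; ++x) {
        float sum = 0.0f;
        for (int d = 0; d < 6; ++d) {
            int nx = x + dx[d], ny = y + dy[d], nz = z + dz[d];
            if (nx < 0 || nx >= N || ny < 0 || ny >= N || nz < 0 || nz >= N)
                continue;
            int n = idx(nx, ny, nz);
            sum += src[n] * (1.0f - occlusion[n]); // blocked by geometry
        }
        dst[idx(x, y, z)] = sum / 6.0f;
    }
}

// Usage: inject lighting into a volume, then ping-pong a few iterations:
// for (int i = 0; i < 8; ++i) { propagate(volume, occlusion, tmp); volume.swap(tmp); }
```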
Yep, it's me :)
I was talking about dynamic lightmaps. If you update a few lobes per frame, then for each lobe you subtract its old contribution from each affected lightmap and add its new one. I don't know how many lobes you can update per frame, but updating the lightmaps should be fast.
If the lightmap resolution is low, dynamic lightmaps could be faster than rendering a volume of lobes in screen space.
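A minimal sketch of that incremental update, as I read it (the Lobe struct and updateLobe are hypothetical names): each lobe caches its last contribution per affected texel, so a change costs one subtract and one add per texel instead of a full lightmap rebuild.

```cpp
// Minimal sketch of the incremental lightmap update described above.
#include <vector>

struct Color { float r, g, b; };

struct Lobe {
    std::vector<int>   texels;  // lightmap texel indices this lobe affects
    std::vector<Color> contrib; // cached contribution per affected texel
};

// `newContrib` lines up with lobe.texels and comes from re-evaluating the
// lobe after lighting or geometry changed.
void updateLobe(std::vector<Color>& lightmap, Lobe& lobe,
                const std::vector<Color>& newContrib) {
    for (size_t i = 0; i < lobe.texels.size(); ++i) {
        Color& t = lightmap[lobe.texels[i]];
        t.r += newContrib[i].r - lobe.contrib[i].r; // add new, remove old
        t.g += newContrib[i].g - lobe.contrib[i].g;
        t.b += newContrib[i].b - lobe.contrib[i].b;
        lobe.contrib[i] = newContrib[i];            // cache for next update
    }
}
```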
I would not give up on the technique you already have because all others that you mention have their own serious problems and are complex to implement.
The GI solution you show adds no impressive detail that is noticeable to gamers (like all 'realtime GI' games), but it will work well with physically based shading, which is the source of all of today's eye candy.
Nice to hear from you again, and sorry for the delay.
I will reconsider lightmap-based solutions as you suggest; moreover, I already have a variation of a simple lightmapper built in.
By the way, I tried to generate a volume texture that contains the irradiance of the scene. Basically, I create a volume texture and, for every texel of it, render a 6 x (1x1) cubemap and average the six colors to get a single color (no directional information - a single RGB volume texture), then render the scene using the pixel's world position to calculate volume texture coordinates and sample the volume to add the color. With a small 64x64x64 texture, generating a cubemap at all those positions took forever (not that complex a scene) and the end result looked awful :)
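The lookup side of that experiment would look something like this (my own sketch of the idea, with hypothetical names and an assumed world-space bounding box): map the pixel's world position into the volume's [0,1]^3 texture space and let the hardware's trilinear filter blend the stored colors.

```cpp
// Minimal sketch of mapping a world position to volume texture coordinates
// (not the author's code; the box extents are assumptions for illustration).
struct Vec3 { float x, y, z; };

// Axis-aligned box the 64^3 volume covers in world space.
const Vec3 volMin = { -32.0f, -32.0f, -32.0f };
const Vec3 volMax = {  32.0f,  32.0f,  32.0f };

Vec3 worldToVolumeUVW(const Vec3& worldPos) {
    Vec3 uvw;
    uvw.x = (worldPos.x - volMin.x) / (volMax.x - volMin.x);
    uvw.y = (worldPos.y - volMin.y) / (volMax.y - volMin.y);
    uvw.z = (worldPos.z - volMin.z) / (volMax.z - volMin.z);
    // No clamping shown; feed to tex3D() and the hardware's trilinear
    // filter blends the eight surrounding texels.
    return uvw;
}
```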
Oops, so I got you totally wrong. I thought you applied the lobes like virtual point lights, so each would eat fill rate. But you just need to fetch from a 3D texture - so there is no need to move it to a lightmap.
Do you render the cubemaps at coarse distances (1-2 meters)? And why not at a resolution high enough to use them for reflections?
It might be fast enough to update some of them per frame - if rasterization is too slow, you could use some approximation (voxels, a sphere tree...) and compute shaders.
Why stick with DX9, especially when you won't release your game soon?
Personally, I moved from spherical harmonics volume data to small environment maps over the surfaces. The volume turned out too slow and inaccurate.
I can light a small Quake level with dynamic objects in real time on a single CPU core, sampling light at 10x10 cm resolution.
The algorithm is very fast but also very complex - it took years. I still need months to do the preprocessing tools and port it to OpenCL or... maybe Vulkan by then. It would never work with DX9.
Sorry for not replying to you for such a long time (a year and a half :o)
Can you post some screens of that Quake level you were talking about, the one you are testing your GI solutions on?
Personally, I'm impressed with the off-the-shelf engines that utilize nVidia VXGI. For example, the UE4 apartment demos, etc.