I came up with an idea: to compensate for the geometry information lost during rasterization (since we only use the G-buffer for ray tracing), we just need to add extra sample points where the geometry information is lost most severely, e.g. where the depth map's variation exceeds a threshold. The idea seemed to work at first glance, so I ran some experiments to show where geometry information is actually lost. Is it really around the depth map's edge areas (where the variation exceeds a threshold)? Let's see the pictures below:
Unfortunately, my idea breaks down for simple geometry like the floor and planes in Sponza, even though some of the red areas do overlap with the edges of the depth buffer. Still, I wonder whether image space can be used for some interesting tricks.
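For concreteness, the "variation exceeds a threshold" test above is essentially a depth-gradient edge detector. Here is a minimal sketch of the fragment-shader test (the uniform names and the threshold value are my own, for illustration):

```glsl
// Flag pixels where the depth buffer varies sharply, i.e. where
// rasterization most likely lost geometry behind the visible surface.
// uDepthTex and uEdgeThreshold are illustrative names.
uniform sampler2D uDepthTex;
uniform float uEdgeThreshold;   // e.g. 0.01 in linear depth

bool isDepthEdge(vec2 uv) {
    vec2 texel = 1.0 / vec2(textureSize(uDepthTex, 0));
    float d  = texture(uDepthTex, uv).r;
    float dx = max(abs(texture(uDepthTex, uv + vec2(texel.x, 0.0)).r - d),
                   abs(texture(uDepthTex, uv - vec2(texel.x, 0.0)).r - d));
    float dy = max(abs(texture(uDepthTex, uv + vec2(0.0, texel.y)).r - d),
                   abs(texture(uDepthTex, uv - vec2(0.0, texel.y)).r - d));
    return max(dx, dy) > uEdgeThreshold;
}
```

Extra sample points would then be spawned at the flagged pixels. As the Sponza result shows, though, a large depth gradient is neither necessary nor sufficient for information loss on simple geometry like the floor.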
The environment-map bug in image-space glossy is now fixed. However, I ran into a problem when transforming the reflection vector and traversing it in NDC space: it just doesn't give a satisfying result. Comparison against the brute-force search suggests that the transformation of the vector is incorrect. I guess I need to write a debug function that allows analysis of the image-space information: it will take direction and position textures etc. as input and display them in the 3D world.
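For reference, here is roughly what the transform-then-traverse step should look like, written as a minimal sketch rather than my actual (buggy) code; uViewProj, uDepthTex, and NUM_STEPS are illustrative names. The useful property is that a 3D line projects to a line in NDC, and NDC depth interpolates linearly along it, so the whole march can happen on the projected segment:

```glsl
// Minimal sketch: march a reflection ray on its NDC-space segment.
uniform mat4 uViewProj;
uniform sampler2D uDepthTex;
const int NUM_STEPS = 64;

vec3 toNDC(vec3 posWorld) {
    vec4 clip = uViewProj * vec4(posWorld, 1.0);
    return clip.xyz / clip.w;              // perspective divide
}

// Returns the UV of the first hit, or (-1,-1) on a miss.
// Assumes both endpoints lie in front of the near plane (w > 0).
vec2 marchNDC(vec3 originWorld, vec3 dirWorld, float rayLen) {
    vec3 p0 = toNDC(originWorld);
    vec3 p1 = toNDC(originWorld + dirWorld * rayLen);
    for (int i = 1; i <= NUM_STEPS; ++i) {
        vec3 p  = mix(p0, p1, float(i) / float(NUM_STEPS));
        vec2 uv = p.xy * 0.5 + 0.5;        // NDC -> texture coords
        if (any(lessThan(uv, vec2(0.0))) || any(greaterThan(uv, vec2(1.0))))
            return vec2(-1.0);             // ray left the screen
        float sceneZ = texture(uDepthTex, uv).r * 2.0 - 1.0;  // [0,1] -> NDC z
        if (p.z > sceneZ)                  // ray passed behind the depth buffer
            return uv;
    }
    return vec2(-1.0);
}
```

A debug view that writes p0, p1, and the per-step samples back into textures would make it easy to spot whether the endpoints project to the wrong place.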
However, the brute-force method shows how vulnerable image-space techniques are to temporal changes in the image, even ignoring other artifacts like halos.
I spent some time thinking about image-space techniques. Precise intersection might not be the right route to precise reflection: reflection needs high-frequency information about the scene, so it is sensitive to any change in image space. And as a cone traces further, the geometry information it sees gets blurrier, so compensating for the information lost in image space is essential. The next step is to try cone tracing in place of the multiple samples, like we did in rough refraction.
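As a starting point, here is a minimal cone-trace sketch over a mipmapped color buffer, analogous to how rough refraction blurred its lookups; uColorMips, the step rule, and the iteration cap are illustrative assumptions, not a final design:

```glsl
// Minimal image-space cone trace: as the cone widens, sample ever
// coarser mip levels instead of firing many individual rays.
uniform sampler2D uColorMips;   // color buffer with a full mip chain

vec4 coneTrace(vec2 uvStart, vec2 uvDir, float coneAngle) {
    vec4  accum = vec4(0.0);
    float dist  = 0.002;                        // distance along the cone axis (UV units)
    for (int i = 0; i < 64 && accum.a < 1.0 && dist < 1.0; ++i) {
        float radius = dist * tan(coneAngle);   // cone footprint at this step
        float lod    = log2(radius * float(textureSize(uColorMips, 0).x));
        vec4  s      = textureLod(uColorMips, uvStart + uvDir * dist, max(lod, 0.0));
        accum += (1.0 - accum.a) * s;           // front-to-back accumulation
        dist  += max(radius, 0.002);            // step size grows with the footprint
    }
    return accum;
}
```

This also makes the trade-off above explicit: the blur comes precisely from the higher mip levels, so sharp reflections need the fine levels, which are the ones most sensitive to image-space changes.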
Papers read these weeks:
Novák et al. [2011]
Tevs et al. [2008]
Lehtinen et al. [2012]
and some technical articles in GPU Gems etc. about ray–height-field intersection.
I added a ground-truth mode to the ISM demo for better comparison. For areas lit by direct light, I didn't use the reflective shadow map information for better sampling; I simply sample based on BRDF importance alone. This is convenient and works just like a non-explicit ray tracer, but it makes the indirect illumination darker and noisier. With enough samples, the noise disappears. Multiple bounces of light (traverse depth in the UI) increase the intensity of the whole scene, as I expected.
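Concretely, "sample based only on BRDF importance" for the diffuse case means cosine-weighted hemisphere sampling; a minimal sketch (helper names are mine):

```glsl
// Cosine-weighted hemisphere sampling around normal N.
// rnd is a pair of uniform random numbers in [0,1).
vec3 sampleCosineHemisphere(vec2 rnd, vec3 N) {
    float phi      = 2.0 * 3.14159265 * rnd.x;
    float cosTheta = sqrt(1.0 - rnd.y);     // pdf = cos(theta) / pi
    float sinTheta = sqrt(rnd.y);
    vec3 local = vec3(cos(phi) * sinTheta, sin(phi) * sinTheta, cosTheta);
    // Build an orthonormal basis around N.
    vec3 up = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 T  = normalize(cross(up, N));
    vec3 B  = cross(N, T);
    return T * local.x + B * local.y + N * local.z;
}
```

With this pdf the cosine term cancels, so the estimator is just the albedo times the averaged incoming radiance; ignoring the RSM means many samples land on unlit geometry and contribute nothing, which is exactly why the result looks darker and noisier until the sample count grows.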
Here are some comments:
For image-space glossy, the multiple-sample approach just doesn't work right now.
I made a slight modification to my ISM demo to support glossy GI. The result looks as horrible as the image-space technique: due to the discrete nature of VPLs, it produces a lot of light blotches, and the ordinary clamp doesn't work in this case (Fig. 5).
Increasing the number of VPLs does improve things a little, as in Fig. 6.
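For reference, the per-VPL term being clamped looks roughly like this (a sketch with illustrative names, using a Phong-style lobe for the glossy part):

```glsl
// Contribution of one VPL with the usual bounded 1/r^2 clamp.
// P/N: shaded point and normal; V: direction toward the eye.
vec3 shadeVPL(vec3 P, vec3 N, vec3 V,
              vec3 vplPos, vec3 vplNormal, vec3 vplFlux,
              float shininess, float minDist2) {
    vec3  L     = vplPos - P;
    float dist2 = max(dot(L, L), minDist2);  // clamp the geometry-term singularity
    L = normalize(L);
    float cosP   = max(dot(N, L), 0.0);
    float cosVPL = max(dot(vplNormal, -L), 0.0);
    // Glossy lobe: this spiky factor is what the ordinary clamp fails to tame,
    // since a single VPL falling inside the lobe still dominates the sum.
    float lobe = pow(max(dot(reflect(-V, N), L), 0.0), shininess);
    return vplFlux * lobe * cosP * cosVPL / dist2;
}
```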
Perhaps importance warping of the shadow map could increase the precision a little. A more sophisticated gathering scheme might work better for glossy GI, as in Ritschel et al. [2009]. Crassin et al. [2011] and Tokuyoshi and Ogaki [2012] also address glossy GI for real-time applications.
While still trying to find a "better" way to do ray intersection in image space for glossy, I had a look at voxelization on the GPU. It has nothing to do with my research on glossy right now; I did it purely out of interest in learning imageLoad/imageStore in shader model 5. Inspired by chapter 22 of OpenGL Insights, I implemented a similar but simpler voxel renderer that takes an obj file as the input mesh and outputs a 3D texture of voxels (it doesn't fill the voxels inside the mesh yet). After voxelization, I displayed the result using ray marching (sample points along every eye ray at equal steps, then add them up) and instanced cubes with very naive ambient occlusion (draw one cube for each occupied voxel, then for each cube vertex compute the occlusion from the occupancy of nearby voxels).
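The ray-marched display is just equal-step accumulation through the 3D texture; a minimal sketch (uVoxels and the constants are illustrative):

```glsl
// Accumulate voxel occupancy along each eye ray with equal steps.
uniform sampler3D uVoxels;                  // occupancy from the voxelization pass
const int   NUM_STEPS = 128;
const float STEP_SIZE = 1.0 / float(NUM_STEPS);

vec4 rayMarch(vec3 rayOrigin, vec3 rayDir) {
    // rayOrigin/rayDir are assumed to be in the grid's [0,1]^3 space.
    vec4 accum = vec4(0.0);
    for (int i = 0; i < NUM_STEPS; ++i) {
        vec3 p = rayOrigin + rayDir * (float(i) * STEP_SIZE);
        if (any(lessThan(p, vec3(0.0))) || any(greaterThan(p, vec3(1.0))))
            break;                          // left the voxel volume
        accum += vec4(texture(uVoxels, p).r) * STEP_SIZE;  // equal-step add-up
    }
    return accum;
}
```

The naive ambient occlusion on the instanced cubes is the same idea at a single point: count the occupied neighbors around each cube vertex and darken it proportionally.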
Here are also some random fun images of the voxels displayed using ray marching:
Dragon mesh using ray marching and instanced cubes:
Click to get this simple demo & source.
Here are some papers I read recently for glossy:
Estalella et al. [2006]
Ofek and Rappoport [1998]
Chen and Arvo [2000]
Kautz and McCool [2000]
So my first idea is very simple. We did image-space refraction and rough refraction, so why don't we do image-space reflection and rough reflection? GPUs have been powerful enough for simple ray marching for quite a few years now. We could use what we did for refraction (intersecting the ray with the back surface) for reflection! The biggest problem is that image space after rasterization may lose geometry information, e.g. the wall behind the camera, or geometry completely occluded by other geometry.
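A minimal sketch of the intersection test borrowed from image-space refraction, assuming the back-facing depth has been rendered into a second layer (uFrontDepth/uBackDepth are illustrative names): a marched sample counts as a hit when its depth falls between the two layers.

```glsl
// Two-layer depth test: a ray sample "hits" the geometry at this
// pixel if it lies between the front and back surfaces.
uniform sampler2D uFrontDepth;  // ordinary depth buffer
uniform sampler2D uBackDepth;   // depth of back-facing surfaces

bool hitsGeometry(vec2 uv, float sampleDepth) {
    float front = texture(uFrontDepth, uv).r;
    float back  = texture(uBackDepth, uv).r;
    return sampleDepth >= front && sampleDepth <= back;
}
```

This gives each pixel a finite thickness instead of an infinitely thin depth shell, but it still cannot recover the cases listed above, e.g. surfaces behind the camera.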
Here’re some test results:
The biggest problem indeed turns out to be a severe one:
Again, I get horrible flickering in the Sponza scene:
Here are some papers I read recently for glossy:
Robison and Shirley [2009]
Duvenhage et al. [2010]
Yu et al. [2008]
Laurijssen and Dutré [2009]
Granier and Drettakis [2001]
Aupperle and Hanrahan [1993]
L. Aupperle and P. Hanrahan. Importance and discrete three point transport. In Proceedings of the Fourth Eurographics Workshop on Rendering, pages 85–94, 1993.
M. Chen and J. Arvo. Perturbation methods for interactive specular reflections. IEEE Transactions on Visualization and Computer Graphics, 6(3):253–264, 2000.
C. Crassin, F. Neyret, M. Sainz, S. Green, and E. Eisemann. Interactive indirect illumination using voxel cone tracing. In Computer Graphics Forum, volume 30, pages 1921–1930. Wiley Online Library, 2011.
B. Duvenhage, K. Bouatouch, and D. Kourie. Exploring the use of glossy light volumes for interactive global illumination. In Proceedings of the 7th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, pages 139–148. ACM, 2010.
P. Estalella, I. Martin, G. Drettakis, and D. Tost. A GPU-driven algorithm for accurate interactive reflections on curved objects. In Rendering Techniques (Proceedings of the Eurographics Symposium on Rendering), 2006.
X. Granier and G. Drettakis. Incremental updates for rapid glossy global illumination. In Computer Graphics Forum, volume 20, pages 268–277. Wiley Online Library, 2001.
J. Novák, T. Engelhardt, and C. Dachsbacher. Screen-space bias compensation for interactive high-quality global illumination with virtual point lights. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2011.
J. Kautz and M. D. McCool. Approximation of glossy reflection with prefiltered environment maps. In Proceedings of Graphics Interface, pages 119–126, 2000.
J. Laurijssen and P. Dutré. Adaptive precomputation of glossy interreflections. CW Reports, 2009.
J. Lehtinen, T. Aila, S. Laine, and F. Durand. Reconstructing the indirect light field for global illumination. ACM Transactions on Graphics (TOG), 31(4):51, 2012.
E. Ofek and A. Rappoport. Interactive reflections on curved objects. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pages 333–342. ACM, 1998.
T. Ritschel, T. Engelhardt, T. Grosch, H.P. Seidel, J. Kautz, and C. Dachsbacher. Micro-rendering for scalable, parallel final gathering. In ACM Transactions on Graphics (TOG), volume 28, page 132. ACM, 2009.
A. Robison and P. Shirley. Image space gathering. In Proceedings of the Conference on High Performance Graphics 2009, pages 91–98. ACM, 2009.
A. Tevs, I. Ihrke, and H.P. Seidel. Maximum mipmaps for fast, accurate, and scalable dynamic height field rendering. In Proceedings of the 2008 symposium on Interactive 3D graphics and games, pages 183–190. ACM, 2008.
Y. Tokuyoshi and S. Ogaki. Real-time bidirectional path tracing via rasterization. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, pages 183–190. ACM, 2012.
X. Yu, R. Wang, and J. Yu. Interactive glossy reflections using GPU-based ray tracing with adaptive LOD. In Computer Graphics Forum, volume 27, pages 1987–1996. Wiley Online Library, 2008.