Glossy Effects In Global Illumination

Dai, Zeng

2012-11-14

I came up with an idea: to compensate for the geometry information lost during rasterization (since we only use the G-buffer for ray tracing), we just need to add extra sample points where geometry information is lost most severely, e.g. where the depth map's variation exceeds a threshold. The idea seemed to work at first glance, so I ran some experiments to show where geometry information is actually lost. Is it really around the depth map's edge areas (where the variation exceeds a threshold)? Let's see the pictures below:
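As a sanity check on that criterion, here is a minimal NumPy sketch of the depth-discontinuity test (the demo itself runs in GLSL; the function name and the threshold value are my own, not from the demo):

```python
import numpy as np

def depth_discontinuity_mask(depth, threshold):
    """Flag pixels whose local depth variation exceeds a threshold.

    depth: 2D array of linearized depth values.
    Returns a boolean mask; True marks candidate pixels that would
    receive the extra sample points."""
    # Forward differences in x and y (replicating the last row/column
    # so the mask keeps the same shape as the depth map).
    dx = np.abs(np.diff(depth, axis=1, append=depth[:, -1:]))
    dy = np.abs(np.diff(depth, axis=0, append=depth[-1:, :]))
    return np.maximum(dx, dy) > threshold

# A tiny depth map with a step edge between a near plane and a far plane.
d = np.array([[1.0, 1.0, 5.0, 5.0],
              [1.0, 1.0, 5.0, 5.0]])
mask = depth_discontinuity_mask(d, threshold=0.5)  # True only along the edge
```

In the shader this would be the same forward difference on the linearized depth buffer, evaluated per fragment.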


PIC PIC
PIC PIC PIC


Figure 1: These pictures show where geometry information is lost. Green fragments only need geometry that can be seen by the viewer (i.e. the rasterized G-buffer); red fragments need geometry information hidden after rasterization. The images show two different scenes with different glossiness, which is not hard to identify. Disappointingly, some red fragments appear on smoothly varying planes. For example, the floor of the sponza image on the right has a large red area.


Unfortunately, my idea breaks for simple geometry like the floor and planes in sponza, even though some red areas do overlap the edges of the depth buffer. Still, I wonder if we could use image space to do some interesting things.

2012-10-29:

Now the bug caused by the environment map in image-space glossy is solved. However, I have a problem when transforming the reflection vector and traversing it in NDC space: it just doesn't give a satisfying result. The brute-force searching method suggests that the transformation of the vector is not correct. I guess I need to write a debug function that allows analysis of the image information; it will take textures of direction, position, etc. as input and display this information in the 3D world.
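For the record, the approach that usually works is to project the ray's two endpoints rather than the direction vector itself, since a direction does not survive the perspective divide. A small NumPy sketch of that idea (function and variable names are mine, not from the demo):

```python
import numpy as np

def world_ray_to_ndc(p0_world, p1_world, view_proj):
    """Project two points of a world-space ray through the combined
    view-projection matrix and return their NDC positions.

    Screen-space marching then interpolates between the two NDC
    endpoints (with perspective-correct depth) instead of trying to
    transform the direction vector directly."""
    def project(p):
        clip = view_proj @ np.append(p, 1.0)
        return clip[:3] / clip[3]  # perspective divide -> NDC
    return project(p0_world), project(p1_world)

# With an identity matrix, NDC equals the input points (w stays 1).
p0, p1 = world_ray_to_ndc(np.array([0.2, -0.3, 0.5]),
                          np.array([0.2, -0.3, 0.9]),
                          np.eye(4))
```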

However, the brute-force method shows how vulnerable image-space techniques are to temporal changes in image space, even ignoring other artifacts like halos.


PIC PIC

Figure 2: I simply moved the camera a little, producing two consecutive frames (left and right). Notice the yellow box's color changes significantly between these frames. It gets noisier when the glossiness is increased from the current 16 to higher values. NO, NO, NO... Don't look at the FPS :-P!


I spent some time thinking about image-space techniques. Precise intersection might not be a good way to get precise reflection. Reflection needs high-frequency information about the scene, so it is sensitive to image-space changes. As a cone traces further, the geometry information gets blurrier. How to compensate for the information lost in image space is essential. The next step is to try cone tracing to replace the multiple samples, like we did for rough refraction.
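The appeal of cone tracing here is that a single pre-filtered lookup replaces many point samples: the cone radius grows linearly with distance, so the mip level grows logarithmically. A hedged sketch of the level selection (my own naming, not the eventual implementation):

```python
import math

def cone_mip_level(distance, cone_half_angle, base_texel_size):
    """Pick a mip level so one texel roughly covers the cone footprint.

    As the cone travels further, its radius grows linearly, so we read
    from a coarser (pre-blurred) mip instead of taking many samples."""
    radius = distance * math.tan(cone_half_angle)
    # Level 0 while the footprint fits in one base texel; +1 per doubling.
    return max(0.0, math.log2(max(radius / base_texel_size, 1.0)))
```

A wider Phong lobe (lower glossiness) maps to a larger half-angle, so rougher surfaces automatically read blurrier data.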

Papers read these weeks:
Novák et al. [2011]
Tevs et al. [2008]
Lehtinen et al. [2012]
and some technical articles in GPU Gems etc. about ray–height-field intersection.

2012-10-15:

I added a ground-truth mode to the ISM demo for better comparison. For areas lit by direct light, I didn't use the reflective shadow map information for better sampling; I simply sample based only on BRDF importance. This is convenient, just like a non-explicit ray tracer; however, it makes the indirect illumination darker and noisier. With enough samples, the noise disappears. Multiple bounces (traverse depth in the UI) increase the intensity of the whole scene, as I expected.
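Sampling based only on BRDF importance for a Phong lobe has a standard closed-form inverse CDF. A minimal Python sketch of that part (the demo does this on the GPU; the function name is mine):

```python
import math, random

def sample_phong_lobe(n, rng=random.random):
    """Importance-sample a direction with pdf proportional to
    cos^n(alpha) around the lobe axis (Phong exponent n).

    Returns a direction in lobe-local coordinates, where z is the
    reflection axis; rotating into world space around the actual
    reflected ray is left out for brevity."""
    u1, u2 = rng(), rng()
    cos_a = u1 ** (1.0 / (n + 1.0))  # inverse CDF in cos(alpha)
    sin_a = math.sqrt(max(0.0, 1.0 - cos_a * cos_a))
    phi = 2.0 * math.pi * u2
    return (sin_a * math.cos(phi), sin_a * math.sin(phi), cos_a)
```

With n = 16 (the glossiness used below) the sampled directions cluster fairly tightly around the reflection axis; with n = 0 this degenerates to cosine-weighted hemisphere sampling.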


PIC PIC PIC

Figure 3: The first image is the real-time imperfect shadow map; the second is ground truth with one bounce; the third is ground truth with two bounces. I didn't count the samples for the last two images; it takes about a minute or two to converge. The Phong glossiness coefficient is 16. There are 1024 VPLs.



PIC PIC PIC

Figure 4: The first image is the real-time imperfect shadow map; the second is ground truth with one bounce; the third is ground truth with two bounces. It takes about a minute or two to converge. The Phong glossiness coefficient is 0 (diffuse). There are only 64 VPLs.


Here are some comments:

For image-space glossy, the multiple-sample approach just doesn't work right now.

2012-10-01:

I made a slight modification to my ISM demo to support glossy GI. The result looks as horrible as the image-space technique. Due to the discrete nature of VPLs, it produces a lot of light blotches. Ordinary clamping doesn't work in this case (Fig. 5).


PIC PIC

Figure 5: The notorious light-blotch artifact becomes noisier for glossy materials. The roughness (specular power) is simply 16.


Increasing the number of VPLs does improve things a little, as in Fig. 6.


PIC PIC PIC

Figure 6: From left to right, the VPL counts are 64, 256, and 1024 respectively, with specular power 16.


Perhaps an importance warp for the shadow map can increase the precision a little. A more sophisticated gathering might be better for glossy GI, as in Ritschel et al. [2009]. The papers Crassin et al. [2011] and Tokuyoshi and Ogaki [2012] also address glossy GI for real-time applications.

2012-09-03:

While still trying to find a "better" way to do ray intersection in image space for glossy reflection, I had a look at voxelization on the GPU. It isn't related to my research on glossy right now; I did it just out of interest in learning imageLoad/imageStore in GPU shader model 5. Inspired by chapter 22 of OpenGL Insights, I implemented a similar but simpler voxel renderer that takes an obj file as the input mesh and outputs a 3D texture of voxels (it doesn't fill the voxels inside the mesh yet). After voxelization, I displayed the result using ray marching (sample points along every eye ray with equal steps between them, then add them up) and instanced cubes with very naive ambient occlusion (draw one cube for each occupied voxel, then for each cube vertex compute the occlusion based on the nearby voxels' occupancy).
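The ray-marching display described above is essentially the following accumulation loop, sketched here in NumPy for clarity (the real version is a fragment shader sampling the 3D texture; names are mine):

```python
import numpy as np

def ray_march_accumulate(grid, origin, direction, step, n_steps):
    """Sample a 3D occupancy grid at equal steps along a ray and sum
    the samples -- the 'add them up' display described above.

    grid: 3D array of densities; origin and direction are in voxel
    coordinates (direction should be unit length)."""
    total = 0.0
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    for _ in range(n_steps):
        i = np.floor(p).astype(int)
        if all(0 <= i[k] < grid.shape[k] for k in range(3)):
            total += grid[tuple(i)]  # nearest-voxel lookup; no filtering
        p = p + d * step
    return total
```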


PIC PIC PIC

Figure 7: The left image is the cow with 64x64x64 voxels displayed using instanced cubes; the middle image is the cow with 128x128x128 voxels with the same settings; the right image is the cow with 256x256x256 voxels with the same settings.


Here are also some random funny images of displaying voxels using ray marching:


PIC PIC

Figure 8: BTW, these 3D textures are constructed voxel by voxel on the CPU.


Dragon mesh using ray marching and instanced cubes:


PIC PIC

Figure 9: Dragon mesh.


Click to get this simple demo & source.

Here are some papers I read recently for glossy:
Estalella et al. [2006]
Ofek and Rappoport [1998]
Chen and Arvo [2000]
Kautz and McCool [2000]

2012-08-27:

So my first idea is very simple. We did image-space refraction and rough refraction. Why don't we do image-space reflection or rough reflection? GPUs have been powerful enough to do simple ray marching for quite a few years. We could use what we did for refraction (intersecting the ray with the back surface) for reflection! The biggest problem is that image space after rasterization might lose geometry information, e.g. the wall behind the camera, or geometry totally blocked by other geometry.
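The core loop of such an image-space reflection is a depth-buffer march: step the reflected ray across the screen and report a hit once the ray's depth passes behind the stored depth. A simplified CPU sketch of that loop (all names are mine; the demo's GLSL version also handles perspective-correct depth interpolation):

```python
import numpy as np

def screen_space_march(depth_buf, start_uv, dir_uv, start_z, dz, n_steps):
    """March a reflected ray across the screen, comparing the ray's
    depth with the depth buffer at each step.

    Returns the first hit pixel (x, y), or None if the ray leaves the
    screen or never crosses a surface (fall back to the environment map).
    depth_buf: 2D array of depths; uv coordinates are in pixels."""
    uv = np.asarray(start_uv, dtype=float)
    z = start_z
    for _ in range(n_steps):
        uv = uv + dir_uv
        z += dz
        x, y = int(uv[0]), int(uv[1])
        if not (0 <= y < depth_buf.shape[0] and 0 <= x < depth_buf.shape[1]):
            return None              # left the screen: no image-space hit
        if z >= depth_buf[y, x]:     # ray passed behind the surface: hit
            return (x, y)
    return None
```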

Here’re some test results:


PIC PIC

Figure 10: All images are test scenes with only albedo light plus the light reflected from nearby geometry's albedo in image space. If a reflected ray cannot intersect any geometry in image space, we simply shoot it at an environment map to get a color. The left image has a glossiness of 64 while the right one has 2938.


The biggest problem does turn out to be severe:


PIC PIC

Figure 11: As we presumed, when the camera changes a little, as shown by these two consecutive frames, the reflection on the left side of the yellow box flickers, even with a very rough Phong lobe (glossiness 32 in this case).


Again I get horrible flickering in sponza scene:


PIC

Figure 12: An image of the sponza scene. The image quality is not very good.


Here are some papers I read recently for glossy:
Robison and Shirley [2009]
Duvenhage et al. [2010]
Yu et al. [2008]
Laurijssen and Dutré [2009]
Granier and Drettakis [2001]
Aupperle and Hanrahan [1993]

References

   L. Aupperle and P. Hanrahan. Importance and discrete three point transport. In Proceedings of the Fourth Eurographics Workshop on Rendering, pages 85–94. Citeseer, 1993.

   M. Chen and J. Arvo. Perturbation methods for interactive specular reflections. Visualization and Computer Graphics, IEEE Transactions on, 6(3):253–264, 2000.

   C. Crassin, F. Neyret, M. Sainz, S. Green, and E. Eisemann. Interactive indirect illumination using voxel cone tracing. In Computer Graphics Forum, volume 30, pages 1921–1930. Wiley Online Library, 2011.

   B. Duvenhage, K. Bouatouch, and D. Kourie. Exploring the use of glossy light volumes for interactive global illumination. In Proceedings of the 7th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, pages 139–148. ACM, 2010.

P. Estalella, I. Martin, G. Drettakis, and D. Tost. A GPU-driven algorithm for accurate interactive reflections on curved objects. Rendering Techniques, 6, 2006.

   X. Granier and G. Drettakis. Incremental updates for rapid glossy global illumination. In Computer Graphics Forum, volume 20, pages 268–277. Wiley Online Library, 2001.

J. Novák, T. Engelhardt, and C. Dachsbacher. Screen-space bias compensation for interactive high-quality global illumination with virtual point lights. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2011.

J. Kautz and M. D. McCool. Approximation of glossy reflection with prefiltered environment maps. Pages 119–126. Citeseer, 2000.

   J. Laurijssen and P. Dutré. Adaptive precomputation of glossy interreflections. CW Reports, 2009.

   J. Lehtinen, T. Aila, S. Laine, and F. Durand. Reconstructing the indirect light field for global illumination. ACM Transactions on Graphics (TOG), 31(4):51, 2012.

   E. Ofek and A. Rappoport. Interactive reflections on curved objects. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pages 333–342. ACM, 1998.

   T. Ritschel, T. Engelhardt, T. Grosch, H.P. Seidel, J. Kautz, and C. Dachsbacher. Micro-rendering for scalable, parallel final gathering. In ACM Transactions on Graphics (TOG), volume 28, page 132. ACM, 2009.

   A. Robison and P. Shirley. Image space gathering. In Proceedings of the Conference on High Performance Graphics 2009, pages 91–98. ACM, 2009.

   A. Tevs, I. Ihrke, and H.P. Seidel. Maximum mipmaps for fast, accurate, and scalable dynamic height field rendering. In Proceedings of the 2008 symposium on Interactive 3D graphics and games, pages 183–190. ACM, 2008.

   Y. Tokuyoshi and S. Ogaki. Real-time bidirectional path tracing via rasterization. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, pages 183–190. ACM, 2012.

   X. Yu, R. Wang, and J. Yu. Interactive glossy reflections using gpu-based ray tracing with adaptive lod. In Computer Graphics Forum, volume 27, pages 1987–1996. Wiley Online Library, 2008.