Global illumination rendering using path tracing and bidirectional path tracing

This year I took the course Advanced Computer Graphics at Aalto University, taught by Jaakko Lehtinen. The focus was on methods for global illumination (GI). In a nutshell, global illumination means more realistic lighting than “traditional” direct lighting: light reflected by all surfaces (including diffuse ones) is taken into account, so the shading includes indirect light, colour bleeding, caustics etc. Think of a room with a single window and no additional light: the wall containing the window will also be illuminated, but only by light reflected from the floor and the other walls.

There are many methods of implementing GI, for instance photon mapping, instant radiosity, path tracing, bidirectional path tracing, or the quite involved Metropolis light transport (MLT). Implementing path tracing was obligatory; I implemented bidirectional path tracing in addition. As a bonus, I implemented both methods on the graphics card using the OptiX framework from NVIDIA, but please don’t repeat my mistake of using OptiX.

Path tracing

Path tracing means tracing a path from the camera, bouncing off surfaces in random directions, until the ray hits a light. At every bounce the reflectance function is used to calculate the amount of reflected light: a red surface reflects only red light, a white surface all of it, and a black one nothing. Naturally one sample is not enough; several samples are taken per pixel, and their average gives the final colour. This yields an unbiased estimate of the pixel colour; the technical term for this is Monte Carlo integration.
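
To make this concrete, here is a minimal CPU-side sketch of such a sampling loop. This is not my OptiX code; trace(), brdf(), sampleDirection(), cameraRay() and the small vector/colour types are hypothetical placeholders, and the pdf and cosine factors are folded into brdf() for brevity.

// One path: bounce through the scene until a light is hit or the
// path is cut off, accumulating the reflectance along the way.
Color radiance(Ray ray, Rng& rng)
{
    Color throughput(1.f, 1.f, 1.f);  // light the path still transports
    for (int bounce = 0; bounce < MAX_BOUNCES; ++bounce) {
        Hit hit = trace(ray);                  // next surface hit
        if (hit.isLight)
            return throughput * hit.emission;  // path reached a light
        Vec3 dir = sampleDirection(hit, rng);  // random bounce direction
        throughput = throughput * brdf(hit, ray.dir, dir);
        ray = Ray(hit.position, dir);
    }
    return Color(0.f, 0.f, 0.f);  // cut off before reaching a light
}

// One pixel: the average of many such samples is the Monte Carlo
// estimate of the pixel colour.
Color pixel(int x, int y, Rng& rng)
{
    Color sum(0.f, 0.f, 0.f);
    for (int s = 0; s < SAMPLES_PER_PIXEL; ++s)
        sum = sum + radiance(cameraRay(x, y, rng), rng);
    return sum / float(SAMPLES_PER_PIXEL);
}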

There are many ways to speed up convergence; I implemented the following (some of them reused for the bidirectional path tracer):

  • At every bounce one additional ray is shot towards a light source (next event estimation). This is especially important when the light source is small and hitting it randomly is improbable. In order not to add the energy from the light twice, a random hit of the light must then be ignored.
  • The randomly reflected ray should not be sampled from a uniform distribution, but from a distribution matching the reflectance function and a geometric term (importance sampling). This becomes even more important with a micro-facet model, which I used to simulate partly reflective surfaces (the same model was used for both the normal and the bidirectional path tracer).
  • Numbers from a pseudo-random number generator tend to cluster, which is not optimal for Monte Carlo integration. A low-discrepancy sequence is better, because it covers the area of integration more evenly; the Wikipedia article about the Halton sequence shows the difference. A sketch of the Halton sequence and of importance sampling follows this list.
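
As an illustration of the last two points, here is a small sketch: the standard radical-inverse construction of the Halton sequence, and a cosine-weighted hemisphere sample (importance sampling for a diffuse surface) built from two such numbers. The Vec3 type and the rotation from the local frame around the normal into world space are assumed.

#include <cmath>

// i-th Halton number for a given (prime) base: the digits of i in
// that base, mirrored around the decimal point.
float halton(int i, int base)
{
    float f = 1.0f, result = 0.0f;
    while (i > 0) {
        f /= base;
        result += f * (i % base);
        i /= base;
    }
    return result;
}

// Cosine-weighted direction in the local frame around the normal;
// u1 and u2 come e.g. from halton(i, 2) and halton(i, 3).
Vec3 sampleCosineHemisphere(float u1, float u2)
{
    float r = std::sqrt(u1);
    float phi = 2.0f * 3.14159265f * u2;
    // z = cos(theta), so directions are distributed proportionally
    // to the cosine term of the rendering equation.
    return Vec3(r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0f - u1));
}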

Bidirectional path tracing

Difference between bidirectional and normal path tracing (from Veach’s PhD thesis).

The idea of this method is quite simple, but the implementation is involved. Normal path tracing sometimes has difficulty sampling the light, especially when there are caustics or the light is a bit hidden, as it is in the scene above. The reason in both cases is that the light can’t be sampled directly (by shooting a ray towards it), and hitting it randomly is improbable. More generally, normal path tracing has difficulty sampling important paths in certain situations, resulting in a noisy image.

Different bidirectional sampling techniques, taken from the Mitsuba documentation.

Bidirectional path tracing shoots rays from both directions: from the light and from the camera. This results in two paths consisting of vertices, the light path and the eye/camera path. Each camera vertex is then connected to each of the light vertices, forming different paths for transporting light. Eric Veach calls them bidirectional sampling techniques. These techniques have different “fields of expertise”, meaning that certain techniques are efficient only for certain effects. The figure on the right shows a collection of such sampling techniques.

In order to combine them into an image, they are weighted and added up. Computing the weight is a bit involved, but it is essentially the probability of generating the specific path divided by the sum of the probabilities of generating any path with vertices at the same positions. This effectively switches off techniques for effects they can’t sample efficiently, and therefore results in an image with less noise; this can be observed in the bottom part of the figure at the top right. The technical term is multiple importance sampling, which is also covered in depth in Veach’s thesis.
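
In code, the weighting boils down to something like the following sketch (the balance heuristic; pdf() is a hypothetical function returning the probability density with which technique i would have generated a path with the same vertices):

// Balance heuristic: the weight of the path generated by technique t
// is its pdf divided by the sum of the pdfs of all techniques that
// could have generated a path with the same vertices.
float misWeight(const Path& path, int t, int numTechniques)
{
    float sum = 0.0f;
    for (int i = 0; i < numTechniques; ++i)
        sum += pdf(path, i);
    return pdf(path, t) / sum;
}

// The weighted contributions are then simply added up per pixel:
// pixel += misWeight(path, t, n) * contribution(path, t);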

After implementing the base of the algorithm with perfectly diffuse surfaces, I added perfectly refractive and reflective surfaces as well as micro-facet surfaces (math taken from here).

Subsurface scattering

Finally I also implemented subsurface scattering (SSS). This book contains an excellent explanation of it (the papers (1, 2) by Jensen et al. aren’t as good, in my opinion). I chose the accelerated version of the algorithm, which stores cached irradiance at surface points in an octree. It turned out not to be as good as expected: the cache needs a lot of memory, and the objects lack the Monte Carlo noise of the rest of the image, which makes them look a bit odd. Blender’s renderer, Cycles, also doesn’t use the acceleration structure; instead it samples the surface directly. If I ever have to implement SSS again, I’ll do it the Blender way. For this project, though, I chose a hack: to fix the unusual look, I mixed in the normal diffuse reflection, so an SSS surface is now 50% SSS and 50% perfectly diffuse.
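
The hack itself is essentially a one-liner (names hypothetical):

// 50/50 hack: blend the cached-irradiance SSS result with a plain
// diffuse evaluation to bring back some of the Monte Carlo noise.
Color shadeSSS(const Hit& hit)
{
    return 0.5f * subsurface(hit) + 0.5f * diffuse(hit);
}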

Results

Because I implemented so many features, I didn’t have enough time to choose nice scenes.

There is a pangolin, based on a model made by my sister:

Rendering of a Pangolin
The body contains a term for SSS, which filled up the video memory almost completely: 96%.

then there is the famous Sponza:

Sponza, featuring a simple path tracer and diffuse surfaces

and finally a demo scene for the bidirectional tracer:

Bidirectional Test Scene
The glass was modelled by me; the rest are primitives from Blender. The scene features caustics, reflective and refractive surfaces, micro-facet surfaces, several lights and diffuse surfaces.

Code

I was asked to provide the code. Since the project is based on the OptiX samples framework, which has a proprietary license, I can’t give you the full project. Instead, I made a diff against the sample directory shipped with OptiX 3.0 (the current version when I started coding).

To make it work you need to:
1. download OptiX 3.0
2. apply the patch; under Linux this should be possible with the patch tool, and something similar should exist on Windows
3. if you want to compile and execute my code, download and compile Assimp and FreeImage (the DLLs and header files are assumed to be in folders next to the project folder)
4. follow the instructions in the readme file.

If you have any questions or problems with compiling, feel free to contact me, preferably via the comment function, so others can benefit as well.

Here is the patch.

Why you should never use NVIDIA OptiX: bugs

In my most recent project, I implemented a GPU path tracer and bidirectional path tracer. I had already used OptiX for my bachelor thesis, and I thought it would be a good idea to use it for this project as well. After all, it was created for ray tracing, so it should be predestined for this kind of GPU application.

One of the biggest advantages for me was the reliable and fast acceleration structures, which are crucial for ray tracing. But OptiX proved to cause more pain than benefit, due to several bugs. And the overall speed was not that great in the end, probably due to insufficient optimisation in my code, which in turn was at least partly caused by the lack of profilers (which would be available for plain CUDA).

But the big issue really was the bugs, which actively prevented me from implementing things. Some of them are totally ridiculous and don’t get fixed.

I encountered the first bug while coding for my bachelor thesis two years ago. Unfortunately I ignored it and decided on OptiX again. There is a special printf function, rtPrintf, which takes the massively parallel architecture of the graphics card into account; for example, it is possible to limit the printing to just one pixel. But, ehm:

// the following line works:
rtPrintf("asdf\n");

// on the contrary the following two don't
rtPrintf("asdf\n");
rtPrintf("asdf\n");

The latter crashes with

OptiX Error: Invalid value (Details: Function “RTresult _rtContextLaunch2D(RTcontext, unsigned int, RTsize, RTsize)” caught exception: Error in rtPrintf format string: “”, [7995632])

The difference really is only calling rtPrintf once or twice with the same text. Sometimes it would even crash with different texts, and sometimes it worked with the same text. I reduced the code to a minimal example in order to eliminate possible stack corruption, but to no avail. I reported this problem on the NVIDIA forum in June 2013. It’s possible to use stdio’s printf in conjunction with some ‘if’ guards as a workaround, as proposed in the forum.
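
The workaround looks roughly like this in device code (launch_index is the usual semantic variable from the OptiX samples; the pixel coordinates are arbitrary, and device-side printf requires a Fermi-class GPU or newer):

#include <cstdio>

rtDeclareVariable(uint2, launch_index, rtLaunchIndex, );

// Plain CUDA printf, guarded to a single pixel by hand; this is what
// rtPrintf together with rtContextSetPrintLaunchIndex should do for us.
__device__ void debugPrint(float value)
{
    if (launch_index.x == 320 && launch_index.y == 240)
        printf("value = %f\n", value);
}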

The second one is also connected to rtPrintf:

// Minimal repro; BiDirSubPathVertex, sampleLightPath(), sampleEye(),
// output_buffer and launch_index are defined in the full two-file example.
RT_PROGRAM void pathtrace_camera() {
   BiDirSubPathVertex lightVertices[2];
   lightVertices[0].existing = false;
   lightVertices[1].existing = false;

   for(unsigned int i=0; i<2; i++) {
      sampleLightPath();
      if(!(lightVertices[i].existing)) break;
   }
   // rtPrintf("something\n");   // uncommented, the kernel runs fine
   sampleEye();

   output_buffer[launch_index] = make_float4(1.f, 1.f, 1.f, 1.f);
}

This code crashed on two tested operating systems (Windows 7 and Linux) and on two different computers (my own workstation and one from the university) with

OptiX Error: Unknown error (Details: Function “RTresult _rtContextLaunch2D(RTcontext, unsigned int, RTsize, RTsize)” caught exception: Encountered a CUDA error: Kernel launch returned (700): Launch failed, [6619200])

It runs fine, though, when the rtPrintf is not commented out. I reported it here. Again I made a minimal example; in the end the source was only two files of ~30 and ~60 lines, but it crashed constantly and reliably depending on whether the rtPrintf was commented out or not.

So later on, whenever the program crashed, the first attempt to resolve the issue was to spread rtPrintfs randomly through the code. This was also one of the solutions to another problem, presented in the next paragraphs.

But before I come to it, I quickly have to explain the compilation process. There are two steps: first, at “compile time”, the C++ code is turned into binaries and the OptiX source into intermediate .ptx files. These contain a sort of GPU assembly, which is then compiled at runtime by the NVIDIA GPU driver into the actual binaries executed on the device. This is triggered by a C++ function call, usually context->compile().
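
On the host side the whole thing looks roughly like this (a sketch using the OptiX 3 C++ wrapper; the file and program names are placeholders):

#include <optixu/optixpp_namespace.h>

// Minimal host-side setup: load a .ptx file produced at compile time,
// hook up the entry point, then compile and launch.
void setupAndLaunch(unsigned width, unsigned height)
{
    optix::Context context = optix::Context::create();
    context->setRayTypeCount(1);
    context->setEntryPointCount(1);
    context->setRayGenerationProgram(0,
        context->createProgramFromPTXFile("pathtracer.ptx", "pathtrace_camera"));
    context->compile();                 // the call that sometimes never returns
    context->launch(0, width, height);
}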

Now, the problem is that these context->compile() calls don’t always return. Sometimes they run until the host memory is full. Once it helped to spread calls to the mentioned rtPrintf through the code; another time the fix was an extra RT_CALLABLE_PROGRAM construct (report with additional information here).

Generally those problems are painful and demotivating, especially considering that NVIDIA doesn’t seem to care; at least they didn’t answer any of my reports. The reason could also be that they are already phasing out OptiX due to its limited success. Anyway, it was certainly the last time I used OptiX, and I don’t recommend that anybody start with it.

Real-time 3D Mandelbulb

Render of a 3D Mandelbulb

It’s actually been almost a year since I finished work on an Erasmus project at the Universitat Politècnica de València, Spain. The goal was to speed up the computation of distance-estimated 3D fractals so that they could be rendered in real time, making it possible to explore them in an interactive fly-through program.

Fractals are graphics produced by recursive mathematical formulas. Since the graphics are purely based on math, one can zoom into them infinitely without losing quality, well, as long as the float precision suffices. One classical 2D fractal is the Mandelbrot set. Some years ago a formula for a 3D fractal resembling the Mandelbrot set was found, and the structure was called the Mandelbulb.

Although there already exist programs that do the fractal computation on the graphics card, none of the ones I found could do it in acceptable quality and in real time. Some produced nicer images, but at the cost of long rendering times; others were sort of interactive, but of bad quality.

I first implemented a quick proof of concept using NVIDIA OptiX, based on their Julia example, but OptiX quickly showed its limitations, especially when I started working on a method to speed things up. So I switched to NVIDIA CUDA and was satisfied with that decision (and anyway, as I will blog soon, I recommend staying as far away from OptiX as possible : ).

I’ll explain in a few sentences how the rendering works; details of the traditional method can be found on the page of Mikael Hvidtfeldt Christensen, and details of both the traditional and the improved method in my report, linked below.

Figure explaining ray marching

There is a function which returns a conservative lower bound on the distance to the fractal, called the distance estimator (DE). When walking along a ray, for instance one shot from the camera, it is safe to advance by this distance, because we have the guarantee that the fractal won’t intersect the ray within that interval. After each step the distance is estimated again, and we march as long as the estimated distance stays above a certain threshold.
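
In code the whole march is only a few lines (a sketch; distanceEstimate() stands for the Mandelbulb DE described on the page linked above, and the Vec3 type is assumed):

// Sphere tracing: step along the ray by the estimated distance until
// we are closer than a threshold (hit) or the ray leaves the scene.
bool march(Vec3 origin, Vec3 direction, float& t)
{
    t = 0.0f;
    for (int i = 0; i < MAX_STEPS; ++i) {
        float d = distanceEstimate(origin + t * direction);
        if (d < EPSILON)
            return true;   // close enough: count it as a hit
        t += d;            // safe: the fractal is at least d away
        if (t > T_MAX)
            break;
    }
    return false;          // ray left the scene
}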

My idea to speed up the rendering was to decrease the amount of computation by letting neighbouring rays “ride” on previously traced rays.

Principle of the new algorithm

This works in two passes: first, “primary” rays are shot several pixels apart from each other, recording the DE values along the way. Second, their neighbours, called secondary rays in the figure, are shot; these can use the information from the first pass to jump almost straight to the end of the stepping. That’s it, in short. Don’t confuse the terms primary and secondary rays with their usual ray tracing meaning here.
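
In code, one plausible reading of the second pass is the sketch below. To be clear, this is my simplified interpretation, not the code from the project; the actual bookkeeping, and the conditions under which the jump is valid, are in the report.

// Secondary ray: instead of marching from t = 0, start near the end
// distances recorded by the neighbouring primary rays in pass one,
// scaled by a safety factor, and only finish the march from there.
float startT = SAFETY * fminf(primaryEndT[left], primaryEndT[right]);
float t = marchFrom(origin, direction, startT);  // like march(), but
shade(origin, direction, t);                     // starting at startT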

Benchmarks of the new algorithm compared to the old one from the report

Comparing the traditional ray marching algorithm with the new, faster one shows a speed-up of up to 100%, with interactive exploration in almost all views as long as shadows are disabled. It would probably be possible to implement this speed-up for shadow rays as well, but I had no time to do it. Unfortunately there are some artefacts caused by the ray “riding”; details can be found in the report. If you are interested in the source code, please write me a message.