Distributed Ray Tracing

In a rather involved assignment, I implemented a BVH as an acceleration structure. When constructing the BVH, instead of putting the same object into two of the leaf nodes, I preferred storing a null pointer and a boolean isLeaf flag. I would have instantly regretted that decision had I been able to read this paragraph earlier: the null checks and boolean flag checks not only clutter the codebase, they also hurt performance. However, the results are always eye-pleasing.
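For reference, the node layout and the traversal checks I am talking about look roughly like this. This is only a minimal sketch, not the actual assignment code; the placeholder types and member names are my illustrations:

```cpp
#include <memory>
#include <vector>

// Placeholder types so the sketch stands on its own; the real renderer's
// Ray, Hit, Triangle and AABB are of course richer than this.
struct Vec3 { float x = 0, y = 0, z = 0; };
struct Ray  { Vec3 origin, dir; };
struct Hit  { float t = 1e30f; };
struct Triangle {};

struct AABB {
    Vec3 min, max;
    bool hit(const Ray&) const { return true; }  // slab test in the real code
};

// Leaf nodes own their primitives and set isLeaf; inner nodes may carry a
// null child, which is exactly what forces the extra checks in traversal.
struct BVHNode {
    AABB bounds;
    std::unique_ptr<BVHNode> left, right;        // either may be nullptr
    std::vector<const Triangle*> primitives;     // non-empty only at leaves
    bool isLeaf = false;
};

bool intersectLeaf(const std::vector<const Triangle*>&, const Ray&, Hit&) {
    return false;  // stub; the real version tests every triangle in the leaf
}

bool intersect(const BVHNode* node, const Ray& ray, Hit& hit) {
    if (node == nullptr) return false;           // the null check...
    if (!node->bounds.hit(ray)) return false;
    if (node->isLeaf)                            // ...and the flag check
        return intersectLeaf(node->primitives, ray, hit);
    bool hitLeft  = intersect(node->left.get(),  ray, hit);
    bool hitRight = intersect(node->right.get(), ray, hit);
    return hitLeft || hitRight;
}
```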

With the help of multisampling, effects such as area lights, motion blur, and depth of field become possible. For multisampling I used jittered sampling. The lesson to be learned is that one should not create multiple rays and store them in a container; instead, each ray should be created on the fly for its image sample and contribute 1/sampleSize of the illumination in the final image. Storing multisampled rays in std::vector-like structures can result in 20GB+ of RAM usage when dealing with 900 samples for an 800x800 image.
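A rough sketch of what I mean, assuming the same small Vec3/Ray placeholder types as in the BVH sketch (plus the usual arithmetic operators) and hypothetical generateCameraRay and trace helpers:

```cpp
#include <random>

// One pixel, sampled with an N x N jittered grid. Each ray is generated,
// traced, and weighted by 1/sampleCount immediately; nothing is ever stored
// in a container, so memory stays flat regardless of the sample count.
Vec3 renderPixel(int px, int py, int samplesPerAxis, std::mt19937& rng) {
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
    const int   sampleCount = samplesPerAxis * samplesPerAxis;
    const float weight      = 1.0f / sampleCount;

    Vec3 color{};
    for (int sy = 0; sy < samplesPerAxis; ++sy) {
        for (int sx = 0; sx < samplesPerAxis; ++sx) {
            // Jittered sampling: one random point inside each sub-cell.
            float u = px + (sx + jitter(rng)) / samplesPerAxis;
            float v = py + (sy + jitter(rng)) / samplesPerAxis;
            Ray ray = generateCameraRay(u, v);   // built on the fly
            color += trace(ray) * weight;        // contributes 1/sampleCount
        }
    }
    return color;
}
```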

A look-at camera is also implemented. It is a rather straightforward implementation, using simple trigonometry involving tangent functions.
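A minimal sketch of that camera, with assumed field names and a Vec3 type providing cross/normalize helpers and the usual operators:

```cpp
#include <cmath>

// The vertical field of view is turned into the half-height of the image
// plane with a tangent; the rest is just building an orthonormal camera
// basis from the gaze and up vectors.
struct LookAtCamera {
    Vec3  position;
    Vec3  gaze;           // normalize(lookAt - position)
    Vec3  up;
    float fovYDegrees;    // vertical field of view
    float planeDistance;  // distance from the camera to the image plane

    Ray rayThrough(float u, float v, float aspect) const {
        // u, v in [0,1] across the image plane.
        float halfH = std::tan(fovYDegrees * 0.5f * float(M_PI) / 180.0f) * planeDistance;
        float halfW = halfH * aspect;

        Vec3 w      = normalize(gaze);
        Vec3 right  = normalize(cross(w, up));
        Vec3 trueUp = cross(right, w);

        Vec3 onPlane = position + w * planeDistance
                     + right  * ((2.0f * u - 1.0f) * halfW)
                     + trueUp * ((2.0f * v - 1.0f) * halfH);
        return Ray{ position, normalize(onPlane - position) };
    }
};
```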

It is possible to render depth of field by modeling a lens. I chose to treat the camera position as the lens and sent the ray toward the focal plane. At first I had no depth of field at all, even though I had thoroughly followed the implementation details. Later, I painfully figured out that the random number generator was producing the same numbers every time.
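The thin-lens idea, sketched with illustrative names (and with the random generator passed in by reference, since reseeding or copying it per sample is exactly how every sample ends up identical):

```cpp
#include <random>

// Thin-lens sketch: the camera position acts as the lens, a point is picked
// on the aperture, and the new ray is aimed at where the undisturbed ray
// meets the focal plane.
Ray depthOfFieldRay(const Ray& pinholeRay, const Vec3& right, const Vec3& up,
                    float apertureSize, float focalDistance, std::mt19937& rng) {
    std::uniform_real_distribution<float> offset(-0.5f, 0.5f);

    // Point on the focal plane along the undisturbed (pinhole) ray.
    Vec3 focalPoint = pinholeRay.origin + pinholeRay.dir * focalDistance;

    // Random point on a square aperture centered at the camera/lens position.
    Vec3 lensPoint = pinholeRay.origin
                   + right * (offset(rng) * apertureSize)
                   + up    * (offset(rng) * apertureSize);

    return Ray{ lensPoint, normalize(focalPoint - lensPoint) };
}
```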



The area lights suffered from the same random number problem, but once I figured out how they work, the area light was doing just fine.


It is also possible to simulate glossy surfaces, which was rather straightforward. One just perturbs the surface normal at the intersection point in a clever and uniform way, resulting in scenes like these:
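A sketch of one way to do that perturbation, assuming a Vec3 with cross/normalize helpers; the roughness parameter and the function name are mine, not the assignment's:

```cpp
#include <cmath>
#include <random>

// Build an orthonormal basis around the shading normal and nudge it with two
// uniform random offsets scaled by a roughness parameter; the perturbed
// normal then produces a slightly different reflection per sample.
Vec3 perturbNormal(const Vec3& normal, float roughness, std::mt19937& rng) {
    std::uniform_real_distribution<float> offset(-0.5f, 0.5f);

    Vec3 n = normalize(normal);
    // Pick any vector not parallel to n to start the basis.
    Vec3 helper = std::fabs(n.x) < 0.9f ? Vec3{1.0f, 0.0f, 0.0f}
                                        : Vec3{0.0f, 1.0f, 0.0f};
    Vec3 tangent   = normalize(cross(helper, n));
    Vec3 bitangent = cross(n, tangent);

    // Uniform jitter in the small square perpendicular to the normal.
    return normalize(n + tangent   * (roughness * offset(rng))
                       + bitangent * (roughness * offset(rng)));
}
```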


Then it is possible to render almost-marble-like surfaces, with the help of BVH:


The tap scene was about smoothing the normals. I calculated vertex normals by keeping track of which triangles each vertex belongs to while parsing the triangle data. Then, for each vertex, the vertex normal is the average of all of its neighboring triangles' normals.
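The smoothing step, sketched with assumed types (a Vec3 with cross/normalize helpers and the usual operators):

```cpp
#include <array>
#include <vector>

// While parsing, every triangle adds its unit face normal to the three
// vertices it touches; a final pass normalizes the accumulated sums, which
// is equivalent to averaging the neighbouring face normals. Skipping that
// last normalize produces the broken render shown at the end of the post.
std::vector<Vec3> smoothVertexNormals(const std::vector<Vec3>& vertices,
                                      const std::vector<std::array<int, 3>>& faces) {
    std::vector<Vec3> normals(vertices.size(), Vec3{});
    for (const auto& f : faces) {
        Vec3 e1 = vertices[f[1]] - vertices[f[0]];
        Vec3 e2 = vertices[f[2]] - vertices[f[0]];
        Vec3 faceNormal = normalize(cross(e1, e2));
        for (int idx : f) normals[idx] += faceNormal;
    }
    for (Vec3& n : normals) n = normalize(n);  // the step that must not be skipped
    return normals;
}
```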





The velocity vector allows us to simulate motion blur. In this case, each ray is assigned a time value between 0 and 1 when it is first generated. Then, depending on the time of the ray and the velocity of the given object, intersections are calculated accordingly. One should also note that when moving objects need to exhibit motion blur, the bounding box of the object should be updated as well, so that intersections can happen anywhere within the normalized time range from 0 to 1. Otherwise, the initial bounding box is just the bounding box at time 0.
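A sketch of the idea, with assumed Mesh, translate and surround helpers (none of these names are from the original code):

```cpp
// Each ray carries a time in [0,1]; a moving object is intersected by shifting
// the ray back by velocity * time, i.e. testing against the geometry at its
// t = 0 position. (Equivalently one could move the object forward instead.)
struct TimedRay { Vec3 origin, dir; float time; };

bool intersectMoving(const Mesh& mesh, const Vec3& velocity,
                     const TimedRay& ray, Hit& hit) {
    TimedRay local = ray;
    local.origin = ray.origin - velocity * ray.time;  // undo the motion
    return mesh.intersect(local, hit);
}

// The bounding box has to cover the whole motion: the union of the box at
// t = 0 and the box translated to its t = 1 position. Otherwise rays with a
// larger time value simply miss the object.
AABB motionBounds(const AABB& boundsAtTimeZero, const Vec3& velocity) {
    AABB boundsAtTimeOne = translate(boundsAtTimeZero, velocity);
    return surround(boundsAtTimeZero, boundsAtTimeOne);
}
```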



Here are some incorrect renders that I ran into during this process.

This happens when you do not respect the inverse-square attenuation.
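For the record, the attenuation in question is just the inverse-square falloff, sketched here with illustrative names (assuming a Vec3 with scalar division):

```cpp
// Irradiance from a point light falls off with the square of the distance:
// E = I / d^2. Dropping the division gives the over-bright render above.
Vec3 receivedIrradiance(const Vec3& intensity, float distanceToLight) {
    return intensity / (distanceToLight * distanceToLight);
}
```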


This happens when you accidentally use radiance in place of intensity.


This happens when you forget to multiply your light intensity by the surface area of your area light.


This is a problem related to random number generation; the glossy surface acts like a noisy mirror.


This happens when you forget to normalize the final vector obtained when averaging the neighbouring normals.
