This iteration involves many subtle details that have to be handled carefully. In addition to the advanced light types, HDR imaging and tonemapping support are added to the raytracer.
Directional lights are the most straightforward type of light to implement. They have only a radiance and a direction, and the radiance is applied along that direction. During the illumination of an object, one would normally need the magnitudes of the normal and light direction vectors; in my case both are normalized, so their magnitudes are 1 anyway and I skip that step. The absolute value of the cosine of the angle between the surface normal and the inverted light direction is then multiplied by the radiance. One can clearly tell where the light is coming from in this scene:
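As a rough sketch, the cosine factor described above boils down to a few lines; the Vec3 type and function names here are placeholders rather than my actual classes:

#include <cmath>

struct Vec3 { double x, y, z; };
double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Cosine factor of a directional light; 'normal' and 'lightDirection' are unit vectors.
double directionalCosineFactor(const Vec3& normal, const Vec3& lightDirection)
{
    // Invert the light direction so it points from the surface toward the light,
    // then take the absolute cosine as described above.
    Vec3 toLight{ -lightDirection.x, -lightDirection.y, -lightDirection.z };
    return std::abs(dot(normal, toLight));
}

The returned factor is then multiplied by the light's radiance and the material's diffuse coefficient.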
The next type of light is the spotlight. It provides excellent eye candy at little cost. It is defined by a position, a direction, and coverage and falloff angles. One has to define a vector from the light's position to the ray's hit position. The angle between the light direction and this vector determines in which region the point resides. There are three possibilities:
1. The vector makes an angle of less than half of the falloff angle: the point is illuminated with inverse-square attenuation, just like a point light.
2. The vector makes an angle between half of the falloff angle and half of the coverage angle: let theta be the angle between the vector and the light direction. Then (cos(theta) - cos(coverage)) / (cos(falloff) - cos(coverage)) gives an illumination factor, which is raised to some power that controls how smooth the transition between the falloff and coverage angles is.
3. The vector makes an angle greater than half of the coverage angle: the point lies outside the cone and receives no contribution from the spotlight.
In the image below, the transition factor is raised to the power of 4:
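A minimal sketch of this three-way test, assuming the coverage and falloff values are the full cone angles in radians (so their halves appear in the cosines); the function and parameter names are illustrative:

#include <cmath>

// Spotlight intensity factor for a point, where 'theta' is the angle (in radians)
// between the spot direction and the vector from the light to the point.
// 'falloff' and 'coverage' are the full cone angles; 'exponent' controls how smooth
// the transition region is.
double spotFactor(double theta, double falloff, double coverage, double exponent)
{
    double halfFalloff  = falloff  * 0.5;
    double halfCoverage = coverage * 0.5;

    if (theta <= halfFalloff)   // case 1: inside the inner cone, full contribution
        return 1.0;
    if (theta >= halfCoverage)  // case 3: outside the cone, no contribution
        return 0.0;

    // case 2: smooth transition between the falloff and coverage boundaries
    double t = (std::cos(theta) - std::cos(halfCoverage)) /
               (std::cos(halfFalloff) - std::cos(halfCoverage));
    return std::pow(t, exponent);
}

The result multiplies the inverse-square-attenuated radiance of the light.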
For area lights, one has to define an orthonormal basis around the position of the area light. This basis is then fed with uniform random numbers to generate a light position. By the nature of the process, the light position is not stationary; thus it looks good when coupled with multisampling. Here are the results with 100 samples:
Now, this one looks noisier than the reference image given in the homework. I applied a slight epsilon of 1e-6 to the area light position, which did not help, so I fiddled around with the random number generation. In the picture above, I used std::mt19937_64 and std::uniform_real_distribution<double>(-0.5, 0.5). Later, I learned about the tr1 implementations of those, namely std::tr1::mt19937_64 and std::tr1::uniform_real_distribution<double>(-0.5, 0.5). tr1 is not part of the C++ standard, so I had to add _SILENCE_TR1_NAMESPACE_DEPRECATION_WARNING to the preprocessor definitions. The result was like this:
I see no difference. In a final attempt, I cranked up the number of samples from 100 to 484, and this is how it looks:
It looks less noisy, and the ripple around the area light is more apparent. My conclusion at that point was that the difference could be due to box filtering. Then I wondered whether the g++ and MSVC compilers yield different random number distributions under Linux and Windows, so I ran this mini-experiment:
For both std::mt19937 and std::mt19937_64, I ran a uniform_real_distribution over (-0.5, 0.5) and wrote 10000 (x, y) random number pairs to a comma-separated file. Here are the plots for each of them, running on different platforms:
std::mt19937 on Ubuntu, g++
std::mt19937 on Windows, MSVC
std::mt19937_64 on Ubuntu, g++
std::mt19937_64 on Windows, MSVC
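For completeness, the experiment itself boils down to a few lines; the output file name is illustrative:

#include <fstream>
#include <random>

int main()
{
    std::mt19937_64 mt(std::random_device{}());     // swap in std::mt19937 for the 32-bit variant
    std::uniform_real_distribution<double> dist(-0.5, 0.5);

    std::ofstream out("pairs.csv");                 // 10000 comma-separated (x, y) pairs
    for (int i = 0; i < 10000; ++i)
        out << dist(mt) << "," << dist(mt) << "\n";
}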
Rather than being evenly spread, the points show significant gaps between them. It is quite pseudorandom, just like the shape of my area light. Then I learned that there are low-discrepancy sequences, such as the Van der Corput sequence, which would yield more balanced numbers. A comparison is provided here, which demonstrates a dramatic difference. Unfortunately, time constraints did not allow me to test this sequence, yet it looks like a handy tool for some other purpose someday.
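Although I did not get to test it in this iteration, the base-2 Van der Corput sequence is essentially a bit reversal; a minimal sketch:

#include <cstdint>

// Base-2 radical inverse: mirrors the bits of 'index' around the binary point,
// producing a low-discrepancy sequence of values in [0, 1).
double vanDerCorput(uint32_t index)
{
    index = (index << 16) | (index >> 16);
    index = ((index & 0x00ff00ffu) << 8) | ((index & 0xff00ff00u) >> 8);
    index = ((index & 0x0f0f0f0fu) << 4) | ((index & 0xf0f0f0f0u) >> 4);
    index = ((index & 0x33333333u) << 2) | ((index & 0xccccccccu) >> 2);
    index = ((index & 0x55555555u) << 1) | ((index & 0xaaaaaaaau) >> 1);
    return index * 2.3283064365386963e-10; // 1 / 2^32
}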
Later, I figured out that the random number facilities in the C++ standard library are not thread-safe. Thus, I declared the random device, distribution, and Mersenne twister as thread_local statics in the scope of the method. Before that, I stored them as fields in classes, which is not the best practice in this context: the generator state shared across multiple threads resulted in different threads consuming the same sequence. Here are some discussions around the issue: Discussion1 Discussion2 Discussion3
Now, I use the following:
thread_local static std::random_device dev;
thread_local static std::uniform_real_distribution<double> dist(-0.5, 0.5);
thread_local static std::mt19937_64 mt(dev());
before creating the U and V vectors for the area light. The result is noticeably better with 100 samples:
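Putting the pieces together, sampling a position on a square area light might look roughly like this; Vec3 and its helpers are stand-ins for my actual vector class, and the square shape and 'size' parameter are assumptions:

#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };
Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 operator*(const Vec3& v, double s)      { return { v.x * s, v.y * s, v.z * s }; }
Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
Vec3 normalized(const Vec3& v)
{
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Picks a random point on a square area light of edge length 'size',
// centered at 'position' and oriented by its 'normal'.
Vec3 sampleAreaLightPosition(const Vec3& position, const Vec3& normal, double size)
{
    thread_local static std::random_device dev;
    thread_local static std::uniform_real_distribution<double> dist(-0.5, 0.5);
    thread_local static std::mt19937_64 mt(dev());

    // Orthonormal basis (u, v) spanning the light's plane.
    Vec3 helper = (std::abs(normal.x) < 0.9) ? Vec3{ 1.0, 0.0, 0.0 } : Vec3{ 0.0, 1.0, 0.0 };
    Vec3 u = normalized(cross(normal, helper));
    Vec3 v = normalized(cross(normal, u));

    // Jitter the sample within the square; a fresh position is drawn for every shadow ray.
    return position + u * (dist(mt) * size) + v * (dist(mt) * size);
}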
No hard feelings, though: this was a great life lesson about random number generation, so every minute was well spent.
Environment lights turn out to be the trickiest of all. EXR images are used as radiance maps, providing the illumination arriving at a given point. Environment lights, just like area lights, require many samples to work correctly. Before the shadow ray test, one creates a random L vector, which is later converted to longitude and latitude coordinates on the EXR image. Despite being slow due to the many rejected samples along the way, rejection sampling is a sure way of producing random direction vectors that lie in the same hemisphere as the normal. One generates candidate vectors and checks that the length is less than or equal to one, which ensures the candidate is inside the unit sphere, and that its dot product with the normal is greater than zero, which ensures it points into the upper hemisphere. Then that vector is normalized and an orthonormal basis is created around the normal vector. The spherical angles theta and phi of the resulting direction then translate to UV coordinates, just as in spherical texture mapping. Since we select the L vector uniformly from the upper hemisphere, the sampled radiance has to be divided by the probability density of that sample, which is 1/(2*pi). Here's how it looks without any tonemapping:
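A minimal sketch of the rejection-sampling step and the latitude/longitude lookup; the Vec3 type and the y-up equirectangular convention in directionToUV are assumptions and may differ from the exact mapping in my code:

#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };
double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Rejection-sample a unit direction in the hemisphere around 'normal'.
Vec3 sampleHemisphere(const Vec3& normal)
{
    thread_local static std::mt19937_64 mt(std::random_device{}());
    thread_local static std::uniform_real_distribution<double> dist(-1.0, 1.0);

    while (true) {
        Vec3 candidate{ dist(mt), dist(mt), dist(mt) };
        double lenSq = dot(candidate, candidate);
        if (lenSq > 1.0 || lenSq == 0.0) continue;    // outside the unit sphere: reject
        if (dot(candidate, normal) <= 0.0) continue;  // wrong side of the hemisphere: reject
        double len = std::sqrt(lenSq);
        return { candidate.x / len, candidate.y / len, candidate.z / len };
    }
}

// Convert a unit world-space direction to (u, v) on an equirectangular radiance map.
void directionToUV(const Vec3& L, double& u, double& v)
{
    const double pi = 3.14159265358979323846;
    double theta = std::acos(L.y);         // latitude, 0..pi
    double phi   = std::atan2(L.x, -L.z);  // longitude, -pi..pi
    u = (pi + phi) / (2.0 * pi);
    v = theta / pi;
}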
In this scene, the camera has a handedness parameter. Until now, the camera was assumed to be in a right-handed coordinate space. To convert it to a left-handed space, one has to take the cross product of the gaze vector and the up vector, in that order. Also, if it is a look-at camera, the up vector should be recalculated as the cross product of the u vector and the gaze vector.
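As a tiny sketch of that conversion (Vec3, cross, and the function name are placeholders; normalization is omitted):

struct Vec3 { double x, y, z; };
Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// Builds the left-handed u axis and, for look-at cameras, re-derives the up vector.
void makeLeftHandedBasis(const Vec3& gaze, Vec3& up, Vec3& u, bool isLookAt)
{
    u = cross(gaze, up);       // gaze x up gives the left-handed u axis
    if (isLookAt)
        up = cross(u, gaze);   // u x gaze re-derives the up vector
}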
HDR Images and Tonemapping:
With the help of tonemapping, we can map the high dynamic range of a scene into a displayable range and create images that highlight certain features. In this assignment, I used the tinyexr library to read and write .exr images. Reading the textures and writing the output is quite straightforward, and I mostly followed the examples provided by tinyexr. For the tonemapping, I implemented Reinhard's global tonemapping operator. Here is an image without any tonemapping applied:
Now, the same image with tonemapping:
While implementing the tonemapping operator, one has to be careful when computing the log-average luminance: the 1/N in the formula belongs inside the exp, i.e. log_average = exp((1/N) * sum(log(eps + luminance))). Other than that, here is the high-level idea of how I implemented it, as stated in the forum (a rough sketch of these steps follows the list):
1. Collect the colors and divide them by 255. (this step could be optional)
2. Compute the luminances; I used the sRGB coefficients R = 0.2126, G = 0.7152, B = 0.0722.
3. Compute the log-average luminance, with a 1e-5 epsilon (I also tried 1e-6).
4. Scale the luminances by multiplying them by (key_value / log average luminance).
5. Keep a sorted array of the scaled luminances, pick the element at the (100.0 - burn_percent) percentile as Lwhite, and take its square.
6. Put the scaled luminance values and Lwhite-squared into Eq. 4 in the paper to obtain lum_D.
7.1. If the RGB values of a pixel are zero and the luminance of that pixel is zero, set R'G'B' to zero as well. (This avoids weird NaN and INF issues with pow.)
7.2. If the RGB values of a pixel are nonzero and its luminance is nonzero, R' = pow(R / luminance, Saturation) * lum_D. Do the same for G and B as well.
8. Clamp R'G'B' to 0,1 range.
9. To apply gamma, R' = pow(R', 1/Gamma), same for G' and B' as well.
10. Now, all values should be clamped between 0, 1. Multiply them by 255 to get the final tonemapped image.
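Stitching the steps together, a rough sketch of the whole pipeline could look like the following; the function name, parameter names, and the in-memory layout are assumptions, and step 1 is omitted since the input here is already linear radiance:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct RGB { double r, g, b; };

// Reinhard's global operator following the steps above. 'hdr' holds linear radiance;
// key, burnPercent, saturation and gamma mirror the TMO parameters of the scene file.
std::vector<uint8_t> tonemapReinhardGlobal(const std::vector<RGB>& hdr,
                                           double key, double burnPercent,
                                           double saturation, double gamma)
{
    const size_t n = hdr.size();
    const double eps = 1e-5;

    // Steps 2-3: luminances and their log average (note the 1/N inside the exp).
    std::vector<double> lum(n);
    double logSum = 0.0;
    for (size_t i = 0; i < n; ++i) {
        lum[i] = 0.2126 * hdr[i].r + 0.7152 * hdr[i].g + 0.0722 * hdr[i].b;
        logSum += std::log(eps + lum[i]);
    }
    const double logAvg = std::exp(logSum / static_cast<double>(n));

    // Step 4: scale the luminances by the key value.
    std::vector<double> scaled(n);
    for (size_t i = 0; i < n; ++i)
        scaled[i] = (key / logAvg) * lum[i];

    // Step 5: Lwhite from the (100 - burnPercent) percentile of the scaled luminances.
    std::vector<double> sorted = scaled;
    std::sort(sorted.begin(), sorted.end());
    size_t idx = static_cast<size_t>((100.0 - burnPercent) / 100.0 * (n - 1));
    const double lwhite2 = std::max(sorted[idx] * sorted[idx], eps);

    // Steps 6-10: Eq. 4, saturation, clamp, gamma, and 8-bit quantization.
    std::vector<uint8_t> out(3 * n);
    for (size_t i = 0; i < n; ++i) {
        double ld = scaled[i] * (1.0 + scaled[i] / lwhite2) / (1.0 + scaled[i]);
        double channels[3] = { hdr[i].r, hdr[i].g, hdr[i].b };
        for (int c = 0; c < 3; ++c) {
            double v = 0.0;
            if (lum[i] > 0.0 && channels[c] > 0.0)
                v = std::pow(channels[c] / lum[i], saturation) * ld; // step 7.2
            v = std::clamp(v, 0.0, 1.0);                             // step 8
            v = std::pow(v, 1.0 / gamma);                            // step 9
            out[3 * i + c] = static_cast<uint8_t>(v * 255.0);        // step 10
        }
    }
    return out;
}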
The result for the spherical_point_hdr_texture scene is as follows:
The improvement in the image is apparent at first sight.
One thing to note is that the tonemapping operator is global. I believe this causes some issues with the head_env_light scene. When I run it through the global tonemapping operator, the result is as follows:
In the reference scene, the image has a background texture, but our input did not have one. I suspect this affected the tonemapping results, as the many black pixels would dominate the global luminance average. However, when the same image is fed to bracket, it is tonemapped without any flares:
Same HDR image, tonemapped by bracket
So, I checked the histograms of both images. The results were as follows:
Histogram for the reference image
Histogram for the render
The histograms show that the dynamic range of the render sits between two extremes. I suspect that a global operator might not capture that range correctly. Upon Bahadir's suggestion, I was able to produce a render that uses the reference image as its background. Here is how it looks:
The head is similar to the bracket's output in this one, and this image has no black pixels.
The bloopers for this iteration are:
These are outputs caused by transformations that do not update the tmin values of the intersections correctly. I happened to catch the bug in this set of transformations, and luckily there are only two objects in the scene, which makes debugging easier.
Because of the tmin bug mentioned above, I wanted to make sure that intersections worked correctly before moving on to implement the spotlight. I set the ambient component of the dragon to solid red, the plane to solid green, and the background to white. Then I noticed, by accident, that the image looks like the flag of Wales. Interestingly, the dragon on the flag is not standardized either.