Saturday, July 14, 2012

Random Point Sampling On Surfaces

Just a heads up, this post is admittedly more of a brain dump for myself than it is anything else.

A while back I implemented a couple of fast methods to generate random points on geometry surfaces, which will be useful for a number of applications, such as direct lighting calculations involving area lights.


The way I'm sampling random points varies by geometry type, but all methods are pretty simple. Right now the system is implemented such that I can give the renderer a global point density to follow, and points will be generated according to that density value. This means the number of points generated on each piece of geometry is directly linked to the geometry's surface area.
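Just to make that linkage concrete, here's a tiny sketch of the idea (the names are mine for illustration, not Takua's actual internals): the per-geometry point budget falls straight out of multiplying the global density by each piece of geometry's surface area.

```cpp
#include <cmath>

// Hypothetical sketch: pointDensity is a global value in points per
// unit of surface area, so each geometry's point budget scales
// directly with its surface area.
int pointCountForGeometry(float surfaceArea, float pointDensity) {
    return static_cast<int>(std::ceil(surfaceArea * pointDensity));
}
```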

For spheres, the method I use is super simple: get the surface area of the sphere, generate random UV coordinates, and map those coordinates back to the surface of the sphere. This method is pulled directly from this Wolfram MathWorld page, which also describes why the most naive approach to point picking on a sphere (uniformly sampling the two spherical angles directly) is actually wrong: it bunches points up at the poles.
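For the curious, here's a minimal sketch of that mapping as described on the MathWorld page; the Vec3 type and function names are my own placeholders:

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// MathWorld-style sphere point picking: u and v are uniform random
// numbers, and the 2v-1 warp on the cosine of the polar angle is what
// keeps samples from bunching up at the poles.
Vec3 randomPointOnSphere(float radius, std::mt19937& rng) {
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    const float pi = 3.14159265358979f;
    float u = uniform(rng);
    float v = uniform(rng);
    float theta = 2.0f * pi * u;     // longitude, uniform over [0, 2pi)
    float cosPhi = 2.0f * v - 1.0f;  // cos of polar angle, uniform over [-1, 1]
    float sinPhi = std::sqrt(1.0f - cosPhi * cosPhi);
    return { radius * sinPhi * std::cos(theta),
             radius * sinPhi * std::sin(theta),
             radius * cosPhi };
}
```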

My approach for ellipsoids is unfortunately a bit brute force. Since computing the exact surface area of an ellipsoid is mathematically tricky (the general case requires elliptic integrals), I just approximate it and then use plain old rejection sampling to get a point.
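As a sketch of one way such a rejection test can work (not necessarily exactly what's in the renderer; this reuses the Vec3 and randomPointOnSphere helpers from the sphere sketch above): map a uniform unit-sphere sample onto the ellipsoid, then accept it with probability proportional to the local area stretch of that mapping, so the accepted points end up uniform in surface area.

```cpp
#include <algorithm>
#include <cmath>
#include <random>

// Rejection sampling on an ellipsoid with semi-axes a, b, c. The area
// stretch of the sphere-to-ellipsoid map at unit-sphere point (x, y, z)
// is sqrt((bcx)^2 + (acy)^2 + (abz)^2), which is maximized along one of
// the principal axes; dividing by that maximum gives a valid acceptance
// probability.
Vec3 randomPointOnEllipsoid(float a, float b, float c, std::mt19937& rng) {
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    float gMax = std::max(a * b, std::max(b * c, a * c));
    while (true) {
        Vec3 p = randomPointOnSphere(1.0f, rng);
        float gx = b * c * p.x;
        float gy = a * c * p.y;
        float gz = a * b * p.z;
        float g = std::sqrt(gx * gx + gy * gy + gz * gz); // local area stretch
        if (uniform(rng) * gMax <= g) {
            return { a * p.x, b * p.y, c * p.z };
        }
    }
}
```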

Boxes are the easiest of the bunch: find the surface area of each face, randomly select a face weighted by the proportion of the total surface area that face comprises, and then pick a random x and y coordinate on that face. The method I use for meshes is similar, just at a potentially larger scale: find the surface area of every face in the mesh and select a face randomly, weighted by that face's proportion of the total surface area. Then, instead of generating random Cartesian coordinates, I generate random barycentric coordinates, and I'm done.
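Here's a sketch of the mesh version (the Triangle struct and the prefix-sum area table are my own scaffolding for illustration, reusing Vec3 from the sphere sketch above):

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct Triangle { Vec3 v0, v1, v2; };

// Area-weighted face selection plus uniform barycentric sampling.
// cumulativeArea holds running prefix sums of the triangle areas, so a
// face can be picked in O(log n) with a binary search.
Vec3 randomPointOnMesh(const std::vector<Triangle>& tris,
                       const std::vector<float>& cumulativeArea,
                       std::mt19937& rng) {
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    float totalArea = cumulativeArea.back();
    // Select a face with probability proportional to its share of the
    // total surface area.
    float pick = uniform(rng) * totalArea;
    size_t i = std::lower_bound(cumulativeArea.begin(), cumulativeArea.end(),
                                pick) - cumulativeArea.begin();
    i = std::min(i, tris.size() - 1); // guard against float round-off
    const Triangle& t = tris[i];
    // Uniform barycentric coordinates; the sqrt warp prevents samples
    // from clustering toward one vertex.
    float r1 = std::sqrt(uniform(rng));
    float r2 = uniform(rng);
    float u = 1.0f - r1;
    float v = r2 * r1;
    float w = 1.0f - u - v;
    return { u * t.v0.x + v * t.v1.x + w * t.v2.x,
             u * t.v0.y + v * t.v1.y + w * t.v2.y,
             u * t.v0.z + v * t.v1.z + w * t.v2.z };
}
```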


The method I'm using right now is purely random, so there's no guarantee of even spacing between points. Of course, as one picks more and more points, the spacing across any given set of points converges toward something like uniform, but that takes a lot of random points. I've been looking at this Dart Throwing On Surfaces paper for ideas, but at least for now, this solution should work well enough for what I want it for (direct lighting). But we shall see!


Also, as I am sure you can guess from the window chrome on that last screenshot, I've successfully tested Takua Render on Linux! Specifically, on Fedora!

Thursday, July 5, 2012

Thoughts on Ray Bounce Depth

I finally got around to doing a long overdue piece of analysis on Takua Render: looking at the impact of ray bounce depth on performance and on the final image.

Of course, in real life, light can bounce around (almost) indefinitely before it is either totally absorbed or enters our eyeballs. Unfortunately, simulating this behavior completely is extremely difficult in any raytracing solution, since letting a ray bounce around indefinitely until it does something interesting can lead to extremely long render times. Thus, one of the first shortcuts most raytracing (and therefore pathtracing) systems take is cutting off rays after they bounce a certain number of times. This strategy should not have much of an impact on the final visual quality of a rendered image, since the more a light ray bounces around, the less each successive bounce contributes to the final image anyway.
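To put a rough number on that intuition (the 0.5 albedo below is an assumed value for illustration, not something measured from these scenes): each bounce multiplies a path's throughput by the surface's reflectance, so the energy a path can still carry falls off geometrically with bounce count. A tiny standalone example:

```cpp
#include <cmath>
#include <cstdio>

// Toy illustration: if surfaces reflect roughly half of the incoming
// energy on average (assumed albedo of 0.5), the maximum fraction of
// energy a path can still deliver after k bounces shrinks geometrically,
// which is why deep bounces contribute so little to the final image.
int main() {
    const double albedo = 0.5;
    for (int k = 2; k <= 20; k += 2) {
        std::printf("after %2d bounces: remaining throughput <= %.6f\n",
                    k, std::pow(albedo, k));
    }
    return 0;
}
```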

With that in mind, I did some tests with Takua Render in hopes of finding a good balance between ray bounce depth and quality/speed. The following images of a glossy white sphere in a Cornell Box were rendered on a quad-core 2.5 GHz Core i5 machine.

For reference, I started with a render with a maximum ray bounce depth of 50 and 200 samples per pixel:

Max Bounce Depth of 50, 200 iterations, took 1325 seconds to render.

Then I ran a test render with a maximum of just 2 bounces; essentially, this represents only the direct lighting part of the solution, albeit generated in a Monte Carlo fashion. Since the global limit was 2 bounces, no reflections of the walls show up on the sphere, just the light overhead. Note the total lack of color bleeding and the dark shadow under the ball.

Max Bounce Depth of 2, 200 iterations, took 480 seconds to render.

The next test was with a maximum of 5 bounces. In this test, nice effects like color bleeding and indirect illumination are back! However, compared to the reference render, the area under the sphere still has a bit of dark shadowing, much like what one would expect if an ambient occlusion pass had been added to the image. While not totally accurate to the reference render, this image might actually be acceptable under certain artistic guidelines, and it renders considerably faster.

Max Bounce Depth of 5, 200 iterations, took 811 seconds to render.

Differencing the 5 bounce render from the reference 50 bounce render shows that the 5 bounce one is ever so slightly dimmer and that most of the difference between the two images is in the shadow area under the sphere. Ignore the random fireflying pixels, which are just a result of standard pathtracing variance in the renders:

5 bounce test differenced with the 50 bounce reference.
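As an aside, the differencing operation itself is just a per-pixel absolute difference; something like this quick sketch (my own formulation, not necessarily how these particular images were generated):

```cpp
#include <cmath>
#include <vector>

// Per-pixel absolute difference of two equal-sized images stored as
// flat float arrays; brighter output pixels mean more disagreement
// between the two renders.
std::vector<float> differenceImages(const std::vector<float>& a,
                                    const std::vector<float>& b) {
    std::vector<float> diff(a.size());
    for (size_t i = 0; i < a.size(); ++i) {
        diff[i] = std::fabs(a[i] - b[i]);
    }
    return diff;
}
```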

The next test was 10 bounces. At 10 bounces, the resultant image is essentially visually indistinguishable from the 50 bounce reference, as shown by the included differenced image. This result implies that beyond 10 bounces, the contributions of successive bounces to the final image are more or less negligible.

Max Bounce Depth of 10, 200 iterations, took 995 seconds to render.

10 bounce test differenced with the 50 bounce reference. Note that there is essentially no difference.

Finally, a test with a maximum of 20 bounces is still essentially indistinguishable from both the 10 bounce test and the 50 bounce reference:

Max Bounce Depth of 20, 200 iterations, took 1277 seconds to render.

Interestingly, render times do not scale linearly with maximum bounce depth! The reason for this relationship (or lack thereof) is that the longer a ray bounces around, the more likely it is to find a light source and terminate. At 20 bounces, the odds of a ray having found a light source are very close to the odds at 50 bounces, which explains the small gap in render time between 20 and 50 bounces (especially compared to the difference in render time between, say, 2 and 5 bounces).
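A quick back-of-the-envelope way to see this (the 0.3 termination probability here is an assumed number for illustration, not something measured from these renders): if each bounce independently gives a ray probability $p$ of terminating by finding a light source, then the expected number of ray segments traced per path under a depth cap of $N$ is

$$E[S_N] = \sum_{k=1}^{N} (1-p)^{k-1} = \frac{1-(1-p)^N}{p},$$

which saturates at $1/p$ as $N$ grows. With $p = 0.3$, for example, $E[S_5] \approx 2.77$, $E[S_{20}] \approx 3.33$, and $E[S_{50}] \approx 3.33$; the 20 and 50 bounce caps trace almost exactly the same number of rays, while the jump from 2 to 5 bounces is proportionally much larger.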