
Monday, March 12, 2012

Pathtracer with KD-Tree

I have finished my KD-Tree rewrite! My new KD-Tree implements the Surface-Area Heuristic for finding optimal splitting planes, and stops splitting once a node has either reached a sufficiently small surface area or contains a sufficiently small number of elements. Basically, very standard KD-Tree stuff, but this time, properly implemented. As a result, I can now render meshes much more quickly than before.
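For reference, here's a rough sketch of the kind of SAH split cost I'm talking about. This is the textbook formulation; the AABB type, the constants, and the structure are illustrative rather than pulled from my actual code:

    // Sketch of a Surface-Area Heuristic split cost, assuming a simple
    // axis-aligned bounding box type. Constants are typical textbook values.
    struct AABB {
        float min[3], max[3];
        float surfaceArea() const {
            float dx = max[0] - min[0];
            float dy = max[1] - min[1];
            float dz = max[2] - min[2];
            return 2.0f * (dx * dy + dy * dz + dz * dx);
        }
    };

    const float traversalCost = 1.0f;      // cost of one tree traversal step
    const float intersectionCost = 80.0f;  // cost of one primitive intersection

    // Expected cost of splitting a node: each child's cost is weighted by the
    // probability that a ray passing through the parent also passes through
    // that child, which under the SAH is the ratio of surface areas.
    float sahCost(const AABB& parent, const AABB& left, int numLeft,
                  const AABB& right, int numRight) {
        float invParentArea = 1.0f / parent.surfaceArea();
        float pLeft = left.surfaceArea() * invParentArea;
        float pRight = right.surfaceArea() * invParentArea;
        return traversalCost +
               intersectionCost * (pLeft * numLeft + pRight * numRight);
    }

A split is only worth taking if this cost beats the cost of just leaving the node as a leaf (roughly intersectionCost times the node's primitive count).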

Here's a cow in a Cornell Box. Each iteration of the cow render took about 3 minutes, which is a huge improvement over my old raytracer, but still leaves a lot of room for improvement:

 
...and of course, the obligatory Stanford Dragon test. Each iteration took about 4 minutes for both of these images (the second one I let converge for a bit longer than the first one), and I made these renders a bit larger than the cow one:



So! Of course the KD-Tree could still use even more work, but for now it works well enough that I think I'm going to start focusing on other things, such as more interesting BSDFs and other performance enhancements.

Sunday, March 11, 2012

First Pathtraced Image!

Behold, the very first image produced using my pathtracer!


Granted, the actual image is not terribly interesting (just a cube inside of a standard Cornell box type setup), but it was rendered entirely using my own pathtracer! Aside from being converted from a BMP file to a PNG, this render has not been modified in any way whatsoever outside of my renderer (which I have yet to name). This render is the result of a thousand iterations. Here are some comparisons of the variance in the render at various iteration levels (click through to the full size versions to get an actual sense of the variance levels):

Upper Left: 1 iteration. Upper Right: 5 iterations. Lower Left: 10 iterations. Lower Right: 15 iterations.

Upper Left: 1 iteration. Upper Right: 250 iterations. Lower Left: 500 iterations. Lower Right: 750 iterations.

Each iteration took about 15 seconds to finish.

Unfortunately, I have not been able to move as quickly with this project as I would like, due to other schoolwork and TAing for CIS277. Nonetheless, here's where I am right now:

Currently the renderer is in a very primitive state. Instead of extending my raytracer, I've opted for a completely from-scratch start. The only piece of code brought over from the raytracer is the OBJ mesh system I wrote, since that was written to be fairly modular anyway. Right now my pathtracer works entirely through indirect lighting and only supports diffuse surfaces... like I said, very basic! Adding direct lighting should speed up render convergence, especially for scenes with small light sources. Also, right now the pathtracer only traces paths in one direction, from the camera into the scene... adding bidirectional pathtracing should lead to another performance boost.
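To give a sense of what "indirect lighting only, diffuse surfaces" means in practice, here's a minimal sketch of that kind of path tracing loop. The Scene, Ray, Hit, Vec3, and cosineSampleHemisphere names are hypothetical stand-ins, not my actual classes:

    // Minimal diffuse-only path tracing sketch. Vec3 is assumed to support
    // componentwise arithmetic. Light is only picked up when a path happens
    // to hit an emissive surface (no direct light sampling yet).
    Vec3 tracePath(const Scene& scene, Ray ray, int maxDepth) {
        Vec3 throughput(1.0f, 1.0f, 1.0f);
        Vec3 radiance(0.0f, 0.0f, 0.0f);
        for (int depth = 0; depth < maxDepth; ++depth) {
            Hit hit;
            if (!scene.intersect(ray, hit))
                break;                               // path escaped the scene
            radiance += throughput * hit.emission;   // hit a light source
            throughput *= hit.diffuseColor;          // Lambertian reflectance
            // Continue the path in a cosine-weighted direction about the
            // normal; with cosine-weighted sampling the cosine and pdf terms
            // cancel, leaving just the albedo in the throughput. In practice
            // the new origin needs a small epsilon offset along the normal
            // to avoid self-intersection.
            ray = Ray(hit.position, cosineSampleHemisphere(hit.normal));
        }
        return radiance;
    }

Because there's no direct light sampling, a pixel only picks up energy when its path stumbles into an emissive surface, which is exactly why scenes with small lights converge so slowly.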

I'm still working on rewriting my KD-tree system; that should be finished within the next few days.

Something that is fairly high on my list of things to do right now is redesigning the architecture of my renderer. Right now, for each iteration, the renderer traces a path through a pixel all the way to its recursion depth before moving on to the next pixel. As soon as possible I want to move the renderer to an iterative (as opposed to recursive) accumulated approach for each iteration (slightly confusing terminology; here I mean iteration as in each render pass), which, oddly enough, is something my old raytracer already does. I've already started moving toward the accumulated approach; right now, I store the first set of raycasts from the camera and reuse those rays in each iteration.
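The payoff of the accumulated approach is that each render pass just folds into a running average per pixel, so the image keeps refining for as long as the renderer runs. Roughly, with made-up names again:

    #include <vector>

    // Running-average accumulation: after each full render pass, blend the
    // new pass into the stored image. After N passes each pixel holds the
    // mean of N samples, so the render keeps converging until stopped.
    void accumulatePass(std::vector<Vec3>& accum, const std::vector<Vec3>& pass,
                        int passNumber) {  // passNumber starts at 1
        float weight = 1.0f / static_cast<float>(passNumber);
        for (size_t i = 0; i < accum.size(); ++i)
            accum[i] = accum[i] * (1.0f - weight) + pass[i] * weight;
    }

After N passes each pixel holds the mean of N samples, which is why the variance comparisons above get smoother as the iteration count climbs.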

One cool thing that storing the initial ray cast allows me to do is generate a z-depth version of the render for "free" (shown below).
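Since the first-bounce hits are already sitting in memory, the depth pass is just a remap of their hit distances to grayscale. A hypothetical sketch:

    #include <algorithm>

    // Turn a stored primary-hit distance t into a grayscale z-depth value.
    // Misses stay black; near/far bounds are whatever suits the scene.
    float depthToGray(float t, bool hitSomething, float nearT, float farT) {
        if (!hitSomething) return 0.0f;
        float clamped = std::min(std::max(t, nearT), farT);
        return 1.0f - (clamped - nearT) / (farT - nearT);  // near = bright
    }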

 
Okay, hopefully by my next post I'll have the KD-tree rewrite done!

Thursday, December 22, 2011

Basic Raytracer and Fun with KD-Trees

The last assignment of the year for CIS460/560 (I'm still not sure what I'm supposed to call that class) is the dreaded RAYTRACER ASSIGNMENT.

The assignment is actually pretty straightforward: implement a recursive, direct-lighting-only raytracer with support for Blinn-Phong shading and basic primitive shapes (spheres, boxes, and polygon extrusions). In other words, pretty much a barebones implementation of Turner Whitted's original raytracing paper.
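For the curious, the Blinn-Phong part boils down to a Lambertian diffuse term plus a half-vector specular term. A sketch for a single light, with hypothetical Vec3 helpers (normalize, dot) and componentwise Vec3 arithmetic:

    #include <algorithm>
    #include <cmath>

    // Blinn-Phong shading for one light. n, l, v are unit normal, light,
    // and view directions.
    Vec3 blinnPhong(const Vec3& n, const Vec3& l, const Vec3& v,
                    const Vec3& diffuseColor, const Vec3& specColor,
                    float shininess, const Vec3& lightColor) {
        Vec3 h = normalize(l + v);                       // half vector
        float diff = std::max(dot(n, l), 0.0f);          // Lambertian term
        float spec = std::pow(std::max(dot(n, h), 0.0f), shininess);
        return lightColor * (diffuseColor * diff + specColor * spec);
    }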

I've been planning on writing a global-illumination renderer (perhaps based on pathtracing or photon mapping?) for a while now, so my own personal goal with the raytracer project was to use it as a testbed for some things that I know I will need for my GI renderer project. With that in mind, I decided from the start that my raytracer should support rendering OBJ meshes and include some sort of acceleration system for OBJ meshes.

The idea behind the acceleration system goes like this: in a raytracer, one obviously needs to cast rays into the scene and track how they bounce around to get a final image. That means every ray needs to be intersection-tested against objects in the scene in order to determine which ray is hitting which object. Intersection testing against mathematically defined primitives is simple, but OBJ meshes present more of a problem; since an OBJ mesh is composed of a bunch of triangles or polygons, the naive way to intersection test against it is to check for ray intersections with every single polygon inside the mesh. This naive approach gets extremely expensive extremely quickly, so a better approach is to use some sort of spatial data structure to quickly figure out which polygons are in the vicinity of the ray and therefore actually need intersection testing.
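For scale, each individual polygon test is cheap; the problem is doing it hundreds of thousands of times per ray. A standard ray-triangle test (this is the well-known Möller-Trumbore algorithm, not necessarily exactly what my code does; Vec3 and its cross/dot helpers are assumed) looks like:

    #include <cmath>

    // Möller-Trumbore ray-triangle intersection. Returns true and the ray
    // parameter t on a hit in front of the ray origin.
    bool intersectTriangle(const Vec3& orig, const Vec3& dir,
                           const Vec3& v0, const Vec3& v1, const Vec3& v2,
                           float& t) {
        const float EPS = 1e-7f;
        Vec3 e1 = v1 - v0, e2 = v2 - v0;
        Vec3 p = cross(dir, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < EPS) return false;   // ray parallel to triangle
        float invDet = 1.0f / det;
        Vec3 s = orig - v0;
        float u = dot(s, p) * invDet;
        if (u < 0.0f || u > 1.0f) return false;   // outside barycentric range
        Vec3 q = cross(s, e1);
        float v = dot(dir, q) * invDet;
        if (v < 0.0f || u + v > 1.0f) return false;
        t = dot(e2, q) * invDet;
        return t > EPS;                            // hit in front of the ray
    }

The KD-Tree's whole job is to make sure this function gets called a handful of times per ray instead of half a million.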

After talking with Joe and trawling around on Wikipedia for a while, I picked a KD-Tree as my spatial data structure for accelerated mesh intersection testing. I won't go into the details of how KD-Trees work, as the Wikipedia article does a better job of it than I ever could. I will note, however, that the main resources I ended up pulling information from while looking up KD-Tree stuff were Wikipedia, Jon McCaffrey's old CIS565 slides on spatial data structures, and the fantastic PBRT book that Joe pointed me towards.

Implementing the KD-Tree for the first time took me the better part of two weeks, mainly because I was misunderstanding how the surface-area splitting heuristic works. Unfortunately, I probably can't post actual code for my raytracer, since this is a class assignment that will be repeated in future incarnations of the class. However, I can show images!

The KD-Tree meant I could render meshes in a reasonable amount of time, so I rendered an airplane:


The airplane took about a minute or so to render, which got me wondering how well my raytracer would work if I threw the full 500,000+ poly Stanford Dragon at it. This render took about five or six minutes to finish (without the KD-Tree in place, the same image takes about 30 minutes to render):


Of course, the natural place to go after one dragon is three dragons. Three dragons took about 15 minutes to render, which is pretty much exactly a three-fold increase over one dragon. That means my renderer's performance scales more or less linearly, which is good.


For fun, and because I like space shuttles, here is a space shuttle. Because the space shuttle has a really low poly count, this image took under a minute to render:


For reflections, I took a slightly different approach from the typical recursive method. The normal recursive approach to a raytracer is to begin with one ray and trace it completely through recursion to its depth limit before moving on to the next pixel and ray. However, such an approach might not actually be ideal in a GI renderer. For example, from what I understand, in pathtracing a better approach is to trace everything iteratively; that is, trace the first bounce for all rays and store where the rays are, then trace the second bounce for all rays, then the third, and so on. Basically, such an approach allows one to set an unlimited trace depth and just let the renderer trace and trace until one stops it, but the corresponding cost is slightly higher memory usage, since ray positions from the previous bounce need to be stored.
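Concretely, the iterative scheme amounts to keeping a work list of live rays and processing one bounce of the whole list at a time. Here's a rough sketch with entirely hypothetical Scene, Ray, Hit, Vec3, and reflectRay names:

    #include <vector>

    struct PathState {
        Ray ray;       // current ray for this pixel's path
        Vec3 weight;   // accumulated attenuation along the path
        int pixel;     // which pixel this path belongs to
    };

    // One bounce for the whole list at a time; rays that miss simply drop
    // out of the work list. maxBounces can be made arbitrarily large (or
    // replaced with "run until stopped"), which is the appeal of the scheme.
    void traceIteratively(const Scene& scene, std::vector<PathState> paths,
                          std::vector<Vec3>& image, int maxBounces) {
        for (int bounce = 0; bounce < maxBounces && !paths.empty(); ++bounce) {
            std::vector<PathState> next;
            for (const PathState& p : paths) {
                Hit hit;
                if (!scene.intersect(p.ray, hit)) continue;  // path terminated
                image[p.pixel] += p.weight * hit.emission;
                PathState bounced = p;
                bounced.ray = reflectRay(p.ray, hit);    // e.g. mirror bounce
                bounced.weight = p.weight * hit.reflectance;
                next.push_back(bounced);
            }
            paths.swap(next);
        }
    }

The memory cost shows up in the PathState list, which has to stay resident between bounces.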

Adding reflections did impact my render times pretty dramatically. I have a suspicion that both my intersection code and my KD-Tree are actually far from ideal, but I'll have to look at that later. Here's a test with reflections with the airplane:


...and here is a test with three reflective dragons. This image took foooorrreeevvveeeerrrr to render.... I actually do not know how long, as I let it run overnight:


I also added support for multiple lights with varying color support:


Here are some more images rendered with my raytracer:




In conclusion, the raytracer was a fun final project. I don't think my raytracer is even remotely suitable for actual production use, and I don't plan on using it for any future projects (unlike my volumetric renderer, which I think I will definitely be using in the future). However, I will definitely be using stuff I learned from the raytracer in my future GI renderer project, such as the KD-tree stuff and the iterative raytracing method. I will probably have to give my KD-tree a total rewrite, since it is really really far from optimal here, so that is something I'll be starting over winter break! Next stop, GI renderer, CIS563, and CIS565!

As an amusing parting note, here is the first proper image I ever got out of my raytracer. Awww yeeeaaahhhhhh: