Friday, October 14, 2011

A Volumetric Renderer for Rendering Volumes

The first assignment of the semester for CIS460 was to write, from scratch in C++, a volumetric renderer. Quite simply, a volumetric renderer is a program that can create a 2D image from a discretized 3D data set. Such a data set is more often referred to as a voxel grid. In other words, a volumetric renderer makes pictures from voxels. Such renderers are useful for visualizing medical imaging data and some forms of 3D scans and blah blah blah...

...or you can make pretty clouds.


One of the first things I ever tried to make when I was first introduced to Maya was a cloud. I quickly learned that there is simply no way to get a nice fluffy cloud using polygonal modeling techniques. Ever since then I've kept the idea of making clouds parked in the back of my head, so when we were assigned the task of writing a volumetric renderer that could produce clouds, I was obviously pretty excited.

The coolest part of studying computer graphics from the computer science side of things has got to be the whole idea of "well, I want to make X, but I can't seem to find any tool that can do X, so I guess.... I'LL JUST WRITE MY OWN PROGRAM TO MAKE X." 

I won't go into the specifics of implementing the volumetric renderer, as that topic is well covered by many papers written by authors much smarter than me. Also, future CIS460 students may stumble across this blog, and half the fun of the assignment is figuring out the detailed implementation for oneself. I don't want to ruin that for them ;) Instead, I'll give a general run-through of how this works.

The way the volumetric renderer works is pretty simple. You start with a big ol' grid of voxels, called... the voxel grid, or voxel buffer. From the camera, you shoot an imaginary ray through each pixel of what will be the final picture and trace that ray to see if it enters the voxel buffer. If the ray does indeed hit the voxel buffer, you sample along the ray a teeny step at a time and accumulate the color of the pixel based on the densities of the voxels traveled through. Lighting information is easy too: for each voxel reached, figure out how much stuff there is between that voxel and any light sources, and use a fancy equation to weight the amount of shadow the voxel receives. "But where does that voxel grid come from?" you may wonder. In the case of my renderer, the voxel grid can either be loaded from text files containing voxel data in a custom format, or generated by sampling a Perlin noise function for each voxel in the grid.
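To make that concrete, here is a minimal sketch of what a raymarch loop like this might look like, assuming a simple emission-absorption model with Beer-Lambert attenuation. This is not my actual assignment code; Vec3, VoxelGrid, densityAt, and the extinction constant kappa are all illustrative stand-ins, and only a single distant light is shown:

#include <cmath>

// A hedged sketch, not the real renderer: Vec3, VoxelGrid, and all
// helper names here are illustrative stand-ins.
struct Vec3 { float x, y, z; };

Vec3 along(const Vec3& o, const Vec3& d, float t) {        // o + t*d
    return { o.x + t * d.x, o.y + t * d.y, o.z + t * d.z };
}

struct VoxelGrid {
    // Density stored at (or interpolated around) point p.
    float densityAt(const Vec3& p) const;
};

// How much light survives the trip from the light source to point p:
// accumulate optical depth along a second, shorter march toward the light.
float lightTransmittance(const VoxelGrid& g, Vec3 p, Vec3 toLight,
                         float lightDist, float step, float kappa) {
    float opticalDepth = 0.0f;
    for (float t = 0.0f; t < lightDist; t += step)
        opticalDepth += g.densityAt(along(p, toLight, t)) * step;
    return std::exp(-kappa * opticalDepth);
}

// March the primary ray through the grid, accumulating radiance
// (a single scalar here for brevity; real code would accumulate RGB).
float raymarch(const VoxelGrid& g, Vec3 origin, Vec3 dir, Vec3 toLight,
               float tNear, float tFar, float lightDist,
               float step, float kappa) {
    float transmittance = 1.0f;  // fraction of light still reaching the eye
    float radiance = 0.0f;
    for (float t = tNear; t < tFar; t += step) {
        Vec3 p = along(origin, dir, t);
        float density = g.densityAt(p);
        if (density <= 0.0f) continue;                     // empty space
        float stepTrans = std::exp(-kappa * density * step);
        float lit = lightTransmittance(g, p, toLight, lightDist, step, kappa);
        radiance += transmittance * (1.0f - stepTrans) * lit;
        transmittance *= stepTrans;
        if (transmittance < 1e-3f) break;                  // early termination
    }
    return radiance;
}

The shadow term is just that second raymarch from each sample point toward the light, which is why adding more lights makes renders slower in a hurry.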

So obviously volumetric renderers are pretty good for rendering clouds, since one can simply represent a cloud as a bunch of discrete points where each point has some density value. However, discretizing the world has a distinct disadvantage: artifacting. In the above render, some pixel-y artifacting is visible because the voxel grid I used wasn't high enough resolution to make individual voxels indistinguishable. The problem is even more obvious in this render, where I stuck the camera right up into a cloud:

 

(sidenote for those reading out of interest in CIS460: I implemented multiple arbitrary light sources in my renderer, which is where those colors are coming from) 

There are four ways to deal with the artifacting issue. The first is to simply move the camera further away. Once the camera is sufficiently far away, even a relatively low-resolution grid will look pretty smooth:

 

A second way is to simply dramatically increase the resolution of the voxel grid. This technique can be very, very, very memory expensive, though. Imagine a 100x100x100 voxel grid where each voxel requires 4 bytes of memory... the total memory required is about 3.8 MB, which isn't bad at all. But let's say we want a grid 5 times higher in resolution... a 500^3 grid needs 476 MB! Furthermore, a 1000x1000x1000 grid requires 3.72 GB! Of course, we could try to save memory by only storing non-empty voxels through the use of a hashmap or something, but that is more computationally expensive and gives no benefit in the worst-case scenario of every voxel having some density.
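Just to illustrate the hashmap idea, a sparse grid might look something like the sketch below. This is not my actual implementation, and a real version would also want to erase entries that get set back to zero:

#include <cstdint>
#include <unordered_map>

// Sparse voxel storage sketch: only non-empty voxels get an entry, so
// memory scales with the number of occupied voxels rather than dim^3.
struct SparseVoxelGrid {
    int dim;                                          // grid is dim x dim x dim
    std::unordered_map<std::uint64_t, float> cells;   // packed index -> density

    std::uint64_t key(int x, int y, int z) const {
        return (std::uint64_t(x) * dim + y) * dim + z;
    }
    void set(int x, int y, int z, float density) {
        if (density > 0.0f) cells[key(x, y, z)] = density;
    }
    float get(int x, int y, int z) const {
        auto it = cells.find(key(x, y, z));
        return it == cells.end() ? 0.0f : it->second;  // absent means empty
    }
};

Every lookup now costs a hash instead of an array index, and for a fully dense cloud the map's per-entry overhead actually makes things worse than the flat array.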

A third alternative is to use trilinear interpolation or some other interpolation scheme to smooth out the voxel grid as it's being sampled.
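In code, trilinear sampling looks something like the following sketch, where getVoxel is a hypothetical raw lookup that returns the stored density at integer coordinates (clamped at the grid edges):

#include <cmath>

float getVoxel(int x, int y, int z);   // hypothetical raw voxel lookup

float lerp(float a, float b, float w) { return a + w * (b - a); }

// Blend the eight voxels surrounding a continuous sample point:
// interpolate along x on the cell's four x-edges, then along y, then z.
float sampleTrilinear(float x, float y, float z) {
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    float fx = x - x0, fy = y - y0, fz = z - z0;       // fractional offsets

    float c00 = lerp(getVoxel(x0, y0,   z0  ), getVoxel(x0+1, y0,   z0  ), fx);
    float c10 = lerp(getVoxel(x0, y0+1, z0  ), getVoxel(x0+1, y0+1, z0  ), fx);
    float c01 = lerp(getVoxel(x0, y0,   z0+1), getVoxel(x0+1, y0,   z0+1), fx);
    float c11 = lerp(getVoxel(x0, y0+1, z0+1), getVoxel(x0+1, y0+1, z0+1), fx);

    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
}

This technique can lead to some fairly nice results: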


At least in the case of my renderer, there is a fourth way to deal with the artifacting: instead of preloading the voxel buffer with values from Perlin noise, why not get rid of the notion of a discretized voxel buffer altogether and directly sample the Perlin noise function while raymarching? The result would indeed be a perfectly smooth, artifact-free render, but the computational cost is extraordinarily high compared to using a voxel buffer.
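The change itself is tiny; density simply comes straight from the noise function at march time (perlin here stands in for a hypothetical 3D Perlin noise implementation):

float perlin(float x, float y, float z);   // hypothetical 3D Perlin noise

// No voxel buffer, no grid resolution, no artifacts... but every sample
// along every ray now pays for a full Perlin evaluation instead of a
// cheap buffer lookup.
float densityAt(float x, float y, float z, float frequency) {
    float n = perlin(x * frequency, y * frequency, z * frequency);
    return n > 0.0f ? n : 0.0f;            // treat negative noise as empty
}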

Of course, one could just box blur the render afterwards as well. But doing so is sort of cheating. 

I also played with trying to get my clouds to self-illuminate, with the hope of eventually making explosion-type things. Ideally I would have done this by properly implementing a physically accurate black body system, but I did not have much time before the assignment was due to implement such a system. So instead, my friend Stewart Hills and I came up with a fake black body system where the emittance of each voxel is simply determined by how far the voxel is from the outside of the cloud. For each voxel, simply raycast in several random directions until each raymarch hits zero density, pick the shortest distance, and plug that distance into some exponential falloff curve to get the voxel's emittance.
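A rough sketch of the hack, under the same caveat as the earlier snippets (densityAt and randomDirection are hypothetical helpers, and the exact falloff curve is just whatever looked good):

#include <cmath>

struct Vec3 { float x, y, z; };
float densityAt(const Vec3& p);     // hypothetical density lookup
Vec3 randomDirection();             // hypothetical random unit vector

// Approximate a voxel's depth inside the cloud: march outward in several
// random directions and keep the shortest distance to zero density.
float emittance(Vec3 p, int numRays, float step, float maxDist, float falloff) {
    float shortest = maxDist;
    for (int i = 0; i < numRays; ++i) {
        Vec3 d = randomDirection();
        float t = 0.0f;
        while (t < shortest) {
            Vec3 q = { p.x + t * d.x, p.y + t * d.y, p.z + t * d.z };
            if (densityAt(q) <= 0.0f) break;   // this ray escaped the cloud
            t += step;
        }
        if (t < shortest) shortest = t;        // keep the shortest escape
    }
    // Plug the distance into an exponential curve; the shape of the curve
    // (and how strongly depth drives the glow) is purely an artistic knob.
    return std::exp(-falloff * shortest);
}

Here's a self-glowing cloud: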

 
...not even close to physically accurate, but pretty good-looking for a hack that was cooked up in a few hours! A close-up shot:


So! The volumetric renderer was definitely a fun assignment, and now I've got a cool way to make clouds! Hopefully I'll be able to integrate this renderer into some future projects!

Thursday, October 6, 2011

Building/Installing Alembic for OSX

Alembic is a new open-source computer graphics interchange framework being developed by Sony Imageworks and ILM. The basic idea is that moving animation rigs and data between packages can be a very tricky procedure, since every package has its own way of handling animation, so why not bake all of that animation data out into a common interchange format? For example, instead of having to import a Maya rig into Houdini, you could rig and animate in Maya, bake the animation out to Alembic, bring that into Houdini to run simulations with, and then bake out the animation and bring it back into Maya. This is a trend that a number of studios, including Sony, ILM, and Pixar, have been moving toward for some time.

I’ve been working on a project lately (more on that later) that makes use of Alembic, but I found that the only way to actually get Alembic is to build it from source. That’s not terribly difficult, but there aren’t really any guides out there for folks who might not be as comfortable with building things from source. So, I wrote up a little guide!

Here’s how to build Alembic for OSX (10.6 and 10.7):

1. Alembic has a lot of dependencies that can be annoying to build/install by hand, so we’re going to cheat and use Homebrew. To install Homebrew:

/usr/bin/ruby -e "$(curl -fsSL https://raw.github.com/gist/323731)"

2. Get/build/install cmake with Homebrew:

brew install cmake

3. Get/build/install Boost with Homebrew:

brew install boost

4. Get/build/install HDF5 with Homebrew:

brew install hdf5

Homebrew has to build HDF5 from source and run its whole make install process, so this step may take some time. Be patient.

5. Unfortunately, ilmbase is not a standard UNIX package, so we can’t use Homebrew. We’ll have to build ilmbase manually. Get it from:

http://download.savannah.nongnu.org/releases/openexr/ilmbase-1.0.2.tar.gz

Untar/unzip to a readily accessible directory and cd into the ilmbase directory. Run:

./configure

After that finishes, we get to the annoying part: by default, ilmbase makes use of a deprecated GCC 3.x compiler flag called -Wno-long-double, which no longer exists in GCC 4.x. We’ll have to deactivate this flag in ilmbase’s makefiles manually in order to build correctly. In each of the following files:

/Half/Makefile
/HalfTest/Makefile
/Iex/Makefile
/IexTest/Makefile
/IlmThread/Makefile
/Imath/Makefile
/ImathTest/Makefile

Find the following line:

CXXFLAGS = -g -O2 -D_THREAD_SAFE -Wno-long-double

and delete it from the makefile.

Once all of that is done, you can run make and then make install like normal.

Now move the ilmbase folder somewhere safe. Something like /Developer/Dependencies might work, or alternatively /usr/include/.

6. Time to actually build Alembic. Get the source tarball from:

http://code.google.com/p/alembic/wiki/GettingAlembic

Untar/unzip into a readily accessible directory and then create a build root directory parallel to the source root you just created:

mkdir ALEMBIC_BUILD

The build root doesn’t necessarily have to be parallel, but here we’ll assume it is for the sake of consistency.

7. Now cd into ALEMBIC_BUILD and bootstrap the Alembic build process. The bootstrap script is a python script:

python ../[Your Alembic Source Root]/build/bootstrap/alembic_bootstrap.py

The script will ask you for a whole bunch of paths:

For “Please enter the location where you would like to build the Alembic”, enter the full path to your ALEMBIC_BUILD directory.

For “Enter the path to lexical_cast.hpp:”, enter the full path to your lexical_cast.hpp, which should be something like /usr/local/include/boost/lexical_cast.hpp

For “Enter the path to libboost_thread:”, your path should be something like /usr/local/lib/libboost_thread-mt.a

For “Enter the path to zlib.h”, your path should be something like /usr/include/zlib.h

For “Enter the path to libz.a”, we’re actually not going to link against libz.a. We’ll be using libz.dylib instead, which should be at something like /usr/lib/libz.dylib

For “Enter the path to hdf5.h”, your path should be something like /usr/local/include/hdf5.h

For “Enter the path to libhdf5.a”, your path should be something like /usr/local/Cellar/hdf5/1.x.x/lib/libhdf5.a (unless you did not use Homebrew for installing hdf5, in which case libhdf5.a will be in whatever lib directory you installed it to)

For “Enter the path to ImathMath.h”, your path should be something like /usr/local/include/OpenEXR/ImathMath.h

For “Enter the path to libImath.a”, your path should be something like /usr/local/lib/libImath.a

Now hit enter, and let the script finish running!

8. If everything is bootstrapped correctly, you can now run make. This will take a while; be patient.

9. Once the make finishes successfully, run make test to check for any problems.

10. Finally, run make install, and we’re done! Alembic should install to something like /usr/bin/alembic-1.x.x/.