Friday, May 25, 2012

New Photoblog!

I have a new photoblog!

Karl Has A Camera

Why start a photoblog when I already have this blog? Well, I originally started this blog as a general catch-all for my projects, adventures, and such, but over the past year or so it has evolved into a blog focused heavily on computer graphics projects, hence the name change from "Karu Blog" to "Code and Visuals" a while back. As such, I've decided to spin off my photos and adventuring into "Karl Has A Camera" in order to keep things more organized.

I'm going to keep my backup archive (explanation here) unified, but from now on, for my public-facing stuff, "Code and Visuals" is for computer graphics projects and occasional doodles and art projects, and "Karl Has A Camera" is for photos and adventuring.

My first post on "Karl Has A Camera" is about East Nanjing Road in Shanghai!

Actually, this reminds me: I'm also currently redesigning yiningkarlli.com and Likimmun, and working on the waaaayyyy overdue Omjii relaunch. Those will all be coming soon!

We now return to our regularly scheduled computer graphics programming.

Sunday, May 20, 2012

Subsurface Scattering and New Name

I implemented subsurface scattering in my renderer!

Here's a Stanford Dragon in a totally empty environment with just one light source providing illumination. The dragon is made up of a translucent purple jelly-like material, showing off the subsurface scattering effect:


Subsurface scattering is an important behavior that light exhibits upon hitting certain translucent materials; a normal transmissive material simply transports light through and out the other side, but a subsurface scattering material attenuates and scatters light inside itself before releasing it at a point not necessarily along a straight line from the entry point. This is what gives skin, translucent fruit, marble, and a whole host of other materials their distinctive look.

There are currently a number of methods to rapidly approximate subsurface scattering, including some screen-space techniques that are actually fast enough for use in realtime renderers. However, my implementation at the moment is purely brute-force Monte Carlo; while extremely physically accurate, it is also very, very slow. In my implementation, when a ray enters a subsurface scattering material, I generate a random scatter direction via isotropic scattering, and then attenuate the accumulated light based on an absorption coefficient defined for the material. This approach is very similar to the one taken by Peter and me in our GPU pathtracer.
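To illustrate the core idea, here's a rough Python sketch of the two pieces involved: uniform isotropic direction sampling and Beer-Lambert attenuation. The function names are purely illustrative, and this is just a sketch of the technique, not my renderer's actual code:

    import math
    import random

    def sample_isotropic_direction():
        # Uniformly sample a direction over the unit sphere (isotropic phase
        # function): cos(theta) is uniform in [-1, 1], phi in [0, 2*pi).
        z = 1.0 - 2.0 * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = 2.0 * math.pi * random.random()
        return (r * math.cos(phi), r * math.sin(phi), z)

    def attenuate(throughput, absorption, distance):
        # Beer-Lambert falloff: light traveling `distance` through the medium
        # is attenuated per channel by exp(-absorption * distance).
        return tuple(t * math.exp(-a * distance)
                     for t, a in zip(throughput, absorption))

Each bounce inside the medium picks a new direction with the first function and scales the ray's accumulated color with the second, over and over until the ray exits the material.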

At some point in the future I might try out a faster approximation method, but for the time being, I'm pretty happy with the visual result that brute-force Monte Carlo scattering produces.

Here's the same subsurface scattering dragon from above, but now in the Cornell Box. Note the cool colored soft shadows beneath the dragon:


Also, I've finally settled on a name for my renderer project: Takua Render! So, that is what I shall be calling my renderer from now on!

Saturday, May 5, 2012

More Fun with Jello

At Joe's request, I made another jello video! Joe suggested a video showing the simulation both in the simulator's own GL view and rendered out from Maya, so this video does just that. The starting portion of the video shows what the simulation looks like in the simulator's GL view, and then shifts to the final render (done with Vray; my pathtracer isn't ready yet!). The GL and final render views don't quite line up with each other perfectly, but it's close enough that you get the idea.

There is a slight change in the tech involved too: I've upgraded my jello simulator's spring array, so simulations should be more stable now. The change isn't terribly dramatic; all I did was add more bend and shear springs to the simulation, so jello cubes now "try" harder to return to a perfect cube shape.
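For reference, a mass-spring jello cube typically uses three families of springs: structural springs between face neighbors, shear springs along the face and cube diagonals, and bend springs that skip over a particle. Here's a rough Python sketch of how the full set of connections can be enumerated over a regular lattice; the offsets are chosen so each particle pair is counted exactly once, but this is just an illustration, not my simulator's actual code:

    import itertools

    # Canonical offsets (first nonzero component positive) so that each
    # particle pair gets exactly one spring.
    STRUCTURAL = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]            # face neighbors
    SHEAR = [(1, 1, 0), (1, -1, 0), (1, 0, 1), (1, 0, -1),    # face diagonals
             (0, 1, 1), (0, 1, -1),
             (1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1)]  # cube diagonals
    BEND = [(2, 0, 0), (0, 2, 0), (0, 0, 2)]                  # skip-one neighbors

    def build_springs(nx, ny, nz):
        # Return a list of ((i, j, k), (i2, j2, k2), kind) spring connections
        # for a regular nx * ny * nz jello lattice.
        springs = []
        for i, j, k in itertools.product(range(nx), range(ny), range(nz)):
            for kind, offsets in (("structural", STRUCTURAL),
                                  ("shear", SHEAR),
                                  ("bend", BEND)):
                for di, dj, dk in offsets:
                    i2, j2, k2 = i + di, j + dj, k + dk
                    if 0 <= i2 < nx and 0 <= j2 < ny and 0 <= k2 < nz:
                        springs.append(((i, j, k), (i2, j2, k2), kind))
        return springs

The structural springs hold the lattice together, the shear springs resist sideways sliding, and the bend springs resist folding, which together is what pulls the cube back toward its rest shape.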

This video is making use of my Vray white-backdrop studio setup! The pitcher was just a quick 5-minute model; nothing terribly interesting there.

 

...and of course, some stills:
 
Smoke Sim + Volumetric Renderer

Something I've had on my list of things to do for a few weeks now is mashing up my volumetric renderer from CIS460 with my smoke simulator from CIS563.

Now I can cross that off my list! Here is a 100x100x100 grid smoke simulation rendered out with pseudo Monte Carlo black body lighting (described in my volumetric renderer post):

 

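The pseudo black body lighting itself is covered in the volumetric renderer post, but as a rough illustration of the underlying idea, here's a Python sketch that maps a temperature to an RGB emission color by sampling Planck's law at representative red, green, and blue wavelengths. The wavelength choices and the normalization here are just illustrative, not what my renderer actually uses:

    import math

    # Physical constants (SI units)
    H = 6.62607015e-34   # Planck constant
    C = 2.99792458e8     # speed of light
    KB = 1.380649e-23    # Boltzmann constant

    def planck(wavelength, temperature):
        # Spectral radiance of a black body at the given wavelength (m)
        # and temperature (K), via Planck's law.
        a = 2.0 * H * C * C / wavelength**5
        b = math.exp(H * C / (wavelength * KB * temperature)) - 1.0
        return a / b

    def blackbody_rgb(temperature):
        # Sample Planck's law at rough R, G, B wavelengths and normalize
        # against the brightest channel to get an emission color.
        wavelengths = (700e-9, 546e-9, 435e-9)
        radiance = [planck(w, temperature) for w in wavelengths]
        peak = max(radiance)
        return tuple(r / peak for r in radiance)

Lower temperatures come out red-orange and higher temperatures shift toward white and blue, which is exactly the look you want for hot smoke and fire.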
My approach to integrating the two was simply to pipeline them rather than actually merge the codebases. I added a small extension to the smoke simulator that outputs the smoke grid to the same voxel file format the volumetric renderer reads in, and then wrote a small Python script that iterates over all of the voxel files in a folder and calls the volumetric renderer on each one.
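The script really is tiny; it amounts to something like the following, where the renderer binary name, folder, and file extension are hypothetical stand-ins for whatever the actual setup uses:

    import glob
    import subprocess

    # Hypothetical names for illustration; swap in the real renderer binary,
    # voxel folder, and file extension.
    RENDERER = "./volumetric_renderer"
    VOXEL_DIR = "voxel_frames"

    # Render each exported voxel frame in order, one renderer run per frame.
    for voxel_file in sorted(glob.glob(VOXEL_DIR + "/*.voxel")):
        subprocess.run([RENDERER, voxel_file], check=True)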

I'm actually not entirely happy with the render... I don't think I picked very good settings for the pseudo black body, so a lot of the render is overexposed. I'll probably tinker with that some later and re-render the whole thing, but before I do, I want to move the volumetric renderer onto the GPU with CUDA. Even with multithreading via OpenMP, the render times per frame are still too high for my liking... Anyway, here are some stills!