Thursday, December 22, 2011

Basic Raytracer and Fun with KD-Trees

The last assignment of the year for CIS460/560 (I'm still not sure what I'm supposed to call that class) is the dreaded RAYTRACER ASSIGNMENT.

The assignment is actually pretty straightforward: implement a recursive, direct-lighting-only raytracer with support for Blinn-Phong shading and basic primitive shapes (spheres, boxes, and polygon extrusions). In other words, pretty much a barebones implementation of the original Turner Whitted raytracing paper.

I've been planning on writing a global-illumination renderer (perhaps based on pathtracing or photon mapping?) for a while now, so my own personal goal with the raytracer project was to use it as a testbed for some things that I know I will need for my GI renderer project. With that in mind, I decided from the start that my raytracer should support rendering OBJ meshes and include some sort of acceleration system for OBJ meshes.

The idea behind the acceleration system goes like this: in a raytracer, one obviously needs to cast rays into the scene and track how they bounce around to get a final image. That means every ray needs to be intersection tested against the objects in the scene in order to determine which ray is hitting which object. Intersection testing against mathematically defined primitives is simple, but OBJ meshes present more of a problem; since an OBJ mesh is composed of a bunch of triangles or polygons, the naive way to intersection test against an OBJ mesh is to check for ray intersections with every single polygon inside of the mesh. This naive approach gets extremely expensive extremely quickly, so a better approach is to use some sort of spatial data structure to quickly figure out which polygons are within the vicinity of the ray and therefore actually need intersection testing.
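Just to make the cost concrete, the naive approach boils down to something like this (a rough sketch only; the Ray/Triangle/Intersection types and the intersectTriangle helper are made-up stand-ins, not my actual class names):

#include <vector>
#include <limits>

// Stand-in types for the sketch; a real raytracer's classes will look different.
struct Ray { /* origin, direction */ };
struct Triangle { /* three vertex positions */ };
struct Intersection {
    bool hit = false;
    float t = std::numeric_limits<float>::max(); // distance along the ray to the hit
};

// Assumed helper: a single ray/triangle test (e.g. Moller-Trumbore), defined elsewhere.
Intersection intersectTriangle(const Ray& ray, const Triangle& tri);

// Naive mesh intersection: test the ray against every single polygon in the
// mesh and keep the closest hit. Cost grows linearly with polygon count,
// which is exactly why a 500000+ poly mesh hurts without acceleration.
Intersection intersectMeshNaive(const Ray& ray, const std::vector<Triangle>& tris) {
    Intersection closest;
    for (const Triangle& tri : tris) {
        Intersection isect = intersectTriangle(ray, tri);
        if (isect.hit && isect.t < closest.t) {
            closest = isect;
        }
    }
    return closest;
}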

After talking with Joe and trawling around on Wikipedia for a while, I picked a KD-Tree as my spatial data structure for accelerated mesh intersection testing. I won't go into the details of how KD-Trees work, as the Wikipedia article does a better job of it than I ever could. I will note, however, that the main resources I ended up pulling information from while looking up KD-Tree stuff were Wikipedia, Jon McCaffrey's old CIS565 slides on spatial data structures, and the fantastic PBRT book that Joe pointed me towards.
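For a rough sense of the shape of the thing, a KD-tree node doesn't need to hold much (this is just a generic sketch, nothing specific to my implementation or to the assignment):

#include <vector>

// Generic KD-tree node sketch: interior nodes split space along one axis at
// some position; leaves just hold indices into the mesh's triangle list. A
// ray then only has to test the triangles in the leaves its path actually
// passes through, instead of every triangle in the mesh.
struct KDNode {
    int axis = 0;                      // split axis: 0 = x, 1 = y, 2 = z
    float splitPosition = 0.0f;        // where along that axis the split lies
    KDNode* left = nullptr;            // child on the "below split" side
    KDNode* right = nullptr;           // child on the "above split" side
    std::vector<int> triangleIndices;  // only non-empty for leaf nodes
};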

Implementing the KD-Tree for the first time took me the better part of two weeks, mainly because I was misunderstanding how the surface area splitting heuristic works. Unfortunately, I probably can't post actual code for my raytracer, since this is a class assignment that will be repeated in future incarnations of the class. However, I can show images!

The KD-Tree meant I could render meshes in a reasonable amount of time, so I rendered an airplane:


The airplane took about a minute or so to render, which got me wondering how well my raytracer would work if I threw the full 500000+ poly Stanford Dragon at it. This render took about five or six minutes to finish (without the KD-Tree in place, this same image takes about 30 minutes to render):


Of course, the natural place to go after one dragon is three dragons. Three dragons took about 15 minutes to render, which is pretty much exactly a three-fold increase over one dragon. That means my renderer's performance scales more or less linearly, which is good.


For fun, and because I like space shuttles, here is a space shuttle. Because the space shuttle has a really low poly count, this image took under a minute to render:


For reflections, I took a slightly different approach from the typical recursive method. The normal recursive approach to a raytracer is to begin with one ray and trace that ray completely through recursion to its recursion depth limit before moving on to the next pixel and ray. However, such an approach might not actually be ideal in a GI renderer. For example, from what I understand, a better approach in pathtracing is to trace everything iteratively; that is, trace the first bounce for all rays and store where the rays are, then trace the second bounce for all rays, then the third, and so on and so forth. Basically, such an approach allows one to set an unlimited trace depth and just let the renderer trace and trace and trace until one stops it, but the corresponding cost of such a system is slightly higher memory usage, since ray positions need to be stored between iterations.
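In sketch form, the iterative idea looks roughly like this (not my assignment code; the PathState bookkeeping and the shadeDirect/spawnNextBounce helpers are made-up names for illustration):

#include <vector>

// Stand-in types and helpers for the sketch.
struct Ray { /* origin, direction */ };
struct Color { float r = 0, g = 0, b = 0; };
struct PathState {
    Ray ray;            // the ray to trace on the current bounce
    int pixel = 0;      // which pixel this path contributes to
    Color throughput;   // attenuation from earlier bounces, seeded to (1,1,1) with the camera rays
    bool alive = true;  // false once the path leaves the scene or stops reflecting
};

Color shadeDirect(const Ray& ray, PathState& state);  // direct lighting at the hit point (assumed)
bool  spawnNextBounce(PathState& state);              // builds the reflected ray, updates throughput (assumed)

// Iterative tracing: process bounce 1 for every path, then bounce 2 for every
// path, and so on. The depth limit can be arbitrarily large (or dropped); the
// price is storing every path's state between bounces.
void traceIteratively(std::vector<PathState>& paths, std::vector<Color>& framebuffer, int maxBounces) {
    for (int bounce = 0; bounce < maxBounces; ++bounce) {
        for (PathState& p : paths) {
            if (!p.alive) continue;
            Color c = shadeDirect(p.ray, p);
            framebuffer[p.pixel].r += p.throughput.r * c.r;
            framebuffer[p.pixel].g += p.throughput.g * c.g;
            framebuffer[p.pixel].b += p.throughput.b * c.b;
            p.alive = spawnNextBounce(p);
        }
    }
}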

Adding reflections did impact my render times pretty dramatically. I have a suspicion that both my intersection code and my KD-Tree are actually far from ideal, but I'll have to look at that later. Here's a test with reflections with the airplane:


...and here is a test with three reflective dragons. This image took foooorrreeevvveeeerrrr to render.... I actually do not know how long, as I let it run overnight:


I also added support for multiple lights with varying color support:


Here are some more images rendered with my raytracer:




In conclusion, the raytracer was a fun final project. I don't think my raytracer is even remotely suitable for actual production use, and I don't plan on using it for any future projects (unlike my volumetric renderer, which I think I will definitely be using in the future). However, I will definitely be using stuff I learned from the raytracer in my future GI renderer project, such as the KD-tree stuff and the iterative raytracing method. I will probably have to give my KD-tree a total rewrite, since it is really really far from optimal here, so that is something I'll be starting over winter break! Next stop, GI renderer, CIS563, and CIS565!

As an amusing parting note, here is the first proper image I ever got out of my raytracer. Awww yeeeaaahhhhhh:

Friday, October 14, 2011

A Volumetric Renderer for Rendering Volumes

The first assignment of the semester for CIS460 was to write, from scratch in C++, a volumetric renderer. Quite simply, a volumetric renderer is a program that can create a 2D image from a 3D discretized data set. Such a data set is more often referred to as a voxel grid. In other words, a volumetric renderer makes pictures from voxels. Such renderers are useful in visualizing medical imaging data and some forms of 3D scans and blah blah blah...

...or you can make pretty clouds.


One of the first things I ever tried to make when I first was introduced to Maya was a cloud. I quickly learned that there simply is no way to get a nice fluffy cloud using polygonal modeling techniques. Ever since then I've kept the idea of making clouds parked in the back of my head, so when we were assigned the task of writing a volumetric renderer that could produce clouds, obviously I was pretty excited. 

The coolest part of studying computer graphics from the computer science side of things has got to be the whole idea of "well, I want to make X, but I can't seem to find any tool that can do X, so I guess.... I'LL JUST WRITE MY OWN PROGRAM TO MAKE X." 

I won't go into detailed specifics about implementing the volumetric renderer, as that is a topic well covered by many papers written by authors much smarter than me. Also, future CIS460 students may stumble across this blog, and half the fun of the assignment is figuring out the detailed implementation for oneself. I don't want to ruin that for them ;) Instead, I'll give a general run-through of how this works. 

The way the volumetric renderer works is pretty simple. You start with a big ol' grid of voxels, called... the voxel grid or voxel buffer. From the camera, you shoot an imaginary ray through each pixel of what will be the final picture and trace that ray to see if it enters the voxel buffer. If the ray does indeed hit the voxel buffer, then you slowly sample along the ray a teeny step at a time and accumulate the color of the pixel based on the densities of the voxels traveled through. Lighting information is easy too: for each voxel reached, figure out how much stuff there is between that voxel and any light sources, and use a fancy equation to weight the amount of shadow a voxel receives. "But where does that voxel grid come from?", you may wonder. In the case of my renderer, the voxel grid can either be loaded in from text files containing voxel data in a custom format, or the grid can be generated by sampling a Perlin noise function for each voxel in the grid. 
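In very rough sketch form, the core loop looks something like this. This is just the textbook front-to-back accumulation; the Vec3/VoxelGrid types, the grid entry/exit helpers, and especially the attenuate() and lightContribution() placeholders are where the actual assignment details live, so nothing here is spoiler material:

// Stand-in declarations for the sketch.
struct Vec3 { float x, y, z; };
struct Color { float r = 0, g = 0, b = 0; };
struct Ray { Vec3 origin; Vec3 direction; };
struct VoxelGrid { float sample(const Vec3& p) const; /* density lookup, assumed */ };

float tEnterGrid(const Ray& ray, const VoxelGrid& grid);       // where the ray enters the grid (assumed)
float tExitGrid(const Ray& ray, const VoxelGrid& grid);        // where the ray leaves the grid (assumed)
float attenuate(float density, float stepSize);                // placeholder falloff per step
Color lightContribution(const Vec3& p, const VoxelGrid& grid); // placeholder lighting/shadow lookup

// March along the ray in teeny steps, accumulating color from the densities
// passed through while tracking how much light can still get through.
Color marchRay(const Ray& ray, const VoxelGrid& grid, float stepSize) {
    Color accumulated;
    float transmittance = 1.0f;
    float tEnd = tExitGrid(ray, grid);
    for (float t = tEnterGrid(ray, grid); t < tEnd; t += stepSize) {
        Vec3 p = { ray.origin.x + ray.direction.x * t,
                   ray.origin.y + ray.direction.y * t,
                   ray.origin.z + ray.direction.z * t };
        float density = grid.sample(p);
        if (density <= 0.0f) continue;
        float alpha = attenuate(density, stepSize);
        Color c = lightContribution(p, grid);
        accumulated.r += transmittance * alpha * c.r;
        accumulated.g += transmittance * alpha * c.g;
        accumulated.b += transmittance * alpha * c.b;
        transmittance *= (1.0f - alpha);
        if (transmittance < 0.001f) break; // early out: nothing behind this point will show
    }
    return accumulated;
}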

So obviously volumetric renderers are pretty good for rendering clouds, as one can simply represent a cloud as a bunch of discrete points where each point has some density value. However, discretizing the world has a distinct disadvantage: artifacting. In the above render, some pixel-y artifacting is visible because the voxel grid I used wasn't of sufficiently high resolution to make each voxel indistinguishable. The problem is even more obvious in this render, where I stuck the camera right up into a cloud:

 

(sidenote for those reading out of interest in CIS460: I implemented multiple arbitrary light sources in my renderer, which is where those colors are coming from) 

There are four ways to deal with the artifacting issue. The first is to simply move the camera further away. Once the camera is sufficiently far away, even a relatively low resolution grid will look pretty smooth: 

 

A second way is to simply dramatically increase the resolution of the voxel grid. This technique can be very very very memory expensive though. Imagine a 100x100x100 voxel grid where each voxel requires 4 bytes of memory... the total memory required is about 3.8 MB, which isn't bad at all. But let's say we want a grid 5 times higher in resolution... a 500^3 grid needs 476 MB! Furthermore, a 1000x1000x1000 grid requires 3.72 GB! Of course, we could try to save memory by only storing non-empty voxels through the use of a hashmap or something, but that is more computationally expensive and gives no benefit in the worst case scenario of every voxel having some density.
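The arithmetic behind those numbers is nothing fancy, just resolution cubed times bytes per voxel:

// Memory for a dense grid of 4-byte voxels: resolution^3 voxels * 4 bytes each.
double gridMemoryMB(int res) {
    double bytes = static_cast<double>(res) * res * res * 4.0;
    return bytes / (1024.0 * 1024.0);
}
// gridMemoryMB(100)  ->    ~3.8 MB
// gridMemoryMB(500)  ->  ~476.8 MB
// gridMemoryMB(1000) -> ~3814.7 MB, i.e. about 3.72 GB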

A third alternative is to use trilinear interpolation or some other interpolation scheme to smooth out the voxel grid as it's being sampled. This technique can lead to some fairly nice results:


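For anyone curious, trilinear sampling just blends the eight voxels surrounding the sample point instead of snapping to the nearest one (a generic sketch; the VoxelGrid type and its at() accessor are stand-ins for whatever your grid class looks like):

#include <cmath>

struct VoxelGrid {
    float at(int x, int y, int z) const; // stored density at integer voxel coords (assumed)
};

// Blend the eight voxels around (x, y, z), weighted by how close the point is to each.
float sampleTrilinear(const VoxelGrid& grid, float x, float y, float z) {
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    float fx = x - x0, fy = y - y0, fz = z - z0;
    auto lerp = [](float a, float b, float t) { return a + t * (b - a); };
    float c00 = lerp(grid.at(x0, y0,     z0    ), grid.at(x0 + 1, y0,     z0    ), fx);
    float c10 = lerp(grid.at(x0, y0 + 1, z0    ), grid.at(x0 + 1, y0 + 1, z0    ), fx);
    float c01 = lerp(grid.at(x0, y0,     z0 + 1), grid.at(x0 + 1, y0,     z0 + 1), fx);
    float c11 = lerp(grid.at(x0, y0 + 1, z0 + 1), grid.at(x0 + 1, y0 + 1, z0 + 1), fx);
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
}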
At least in the case of my renderer, there is a fourth way to deal with the artifacting: instead of preloading the voxel buffer with values from Perlin noise, why not just get rid of the notion of a discretized voxel buffer altogether and directly sample the Perlin noise function when raymarching? The result would indeed be a perfectly smooth, artifact free render, but the computational cost is extraordinarily high compared to using a voxel buffer.

Of course, one could just box blur the render afterwards as well. But doing so is sort of cheating. 

I also played with trying to get my clouds to self illuminate, with the hope of possibly eventually making explosion type things. Ideally I would have done this by properly implementing a physically accurate black body system, but I did not have much time to implement such a system before the assignment was due. So instead, my friend Stewart Hills and I came up with a fake black body system where the emittance of each voxel was simply determined by how far the voxel is from the outside of the cloud. For each voxel, simply raycast in several random directions until each raymarch hits zero density, pick the shortest distance, and plug that distance into some exponential falloff curve to get the voxel's emittance. Here's a self-glowing cloud:

 
...not even close to physically accurate, but pretty good looking for a hack that was cooked up in a few hours! A closeup shot: 


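For the curious, the hack boils down to something like this (a sketch with made-up helper names; randomDirection() and marchUntilEmpty() stand in for whatever sampling and marching code you already have, and the falloff constant is purely an artistic knob):

#include <algorithm>
#include <cmath>
#include <limits>

struct Vec3 { float x, y, z; };
struct VoxelGrid { float sample(const Vec3& p) const; };

Vec3  randomDirection();                                                          // uniform direction on the sphere (assumed)
float marchUntilEmpty(const Vec3& start, const Vec3& dir, const VoxelGrid& grid); // distance marched until density hits zero (assumed)

// Estimate how close a voxel is to the outside of the cloud by marching in a
// handful of random directions and keeping the shortest escape distance.
float depthFromSurface(const Vec3& voxelCenter, const VoxelGrid& grid, int numRays) {
    float shortest = std::numeric_limits<float>::max();
    for (int i = 0; i < numRays; ++i) {
        shortest = std::min(shortest, marchUntilEmpty(voxelCenter, randomDirection(), grid));
    }
    return shortest;
}

// Emittance is then just some exponential curve over that distance; the
// constant (and which way you want the curve to lean) is an artistic choice.
float fakeEmittance(float distanceToSurface, float falloff) {
    return std::exp(-falloff * distanceToSurface);
}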
So! The volumetric renderer was definitely a fun assignment, and now I've got a cool way to make clouds! Hopefully I'll be able to integrate this renderer into some future projects!

Thursday, October 6, 2011

Building/Installing Alembic for OSX

Alembic is a new open-source computer graphics interchange framework being developed by Sony Imageworks and ILM. The basic idea is that moving animation rigs and data and whatnot between packages can be a very tricky procedure since every package has its own way to handle animation, so why not bake out all of that animation data into a common interchange format? So, for example, instead of having to import a Maya rig into Houdini, you could rig/animate in Maya, bake out the animation to Alembic, bring that into Houdini to conduct simulations with, and then bake out the animation and bring it back into Maya. This is a trend that a number of studios including Sony, ILM, Pixar, etc. have been moving toward for some time.

I’ve been working on a project lately (more on that later) that makes use of Alembic, but I found that the only way to actually get Alembic is to build it from source. That’s not terribly difficult, but there aren’t really any guides out there for folks who might not be as comfortable with building things from source. So, I wrote up a little guide!

Here’s how to build Alembic for OSX (10.6 and 10.7):

1. Alembic has a lot of dependencies that can be annoying to build/install by hand, so we’re going to cheat and use Homebrew. To install Homebrew:

/usr/bin/ruby -e "$(curl -fsSL https://raw.github.com/gist/323731)"

2. Get/build/install cmake with Homebrew:

brew install cmake

3. Get/build/install Boost with Homebrew:

brew install Boost

4. Get/build/install HDF5 with Homebrew:

brew install HDF5

Homebrew has to build and make install HDF5 from source, so this may take some time to run. Be patient.

5. Unfortunately, ilmbase is not a standard UNIX package, so we can’t use Homebrew. We’ll have to build ilmbase manually. Get it from:

http://download.savannah.nongnu.org/releases/openexr/ilmbase-1.0.2.tar.gz

Untar/unzip to a readily accessible directory and cd into the ilmbase directory. Run:

./configure

After that finishes, we get to the annoying part: ilmbase by default makes use of a deprecated GCC 3.x compiler flag called Wno-long-double, which no longer exists in GCC 4.x. We’ll have to deactivate this flag in ilmbase’s makefiles manually in order to build correctly. In each and every one of the following files:

/Half/Makefile
/HalfTest/Makefile
/Iex/Makefile
/IexTest/Makefile
/IlmThread/Makefile
/Imath/Makefile
/ImathTest/Makefile

Find the following line:

CXXFLAGS = -g -O2 -D_THREAD_SAFE -Wno-long-double

and delete it from the makefile.

Once all of that is done, you can make and then make install like normal.

Now move the ilmbase folder to somewhere safe. Something like /Developer/Dependencies might work, or alternatively /usr/include/

6. Time to actually build Alembic. Get the source tarball from:

http://code.google.com/p/alembic/wiki/GettingAlembic

Untar/unzip into a readily accessible directory and then create a build root directory parallel to the source root you just created:

mkdir ALEMBIC_BUILD

The build root doesn’t necessarily have to be parallel, but here we’ll assume it is for the sake of consistency.

7. Now cd into ALEMBIC_BUILD and bootstrap the Alembic build process. The bootstrap script is a python script:

python ../[Your Alembic Source Root]/build/bootstrap/alembic_bootstrap.py

The script will ask you for a whole bunch of paths:

For “Please enter the location where you would like to build the Alembic”, enter the full path to your ALEMBIC_BUILD directory.

For “Enter the path to lexical_cast.hpp:”, enter the full path to your lexical_cast.hpp, which should be something like /usr/local/include/boost/lexical_cast.hpp

For “Enter the path to libboost_thread:”, your path should be something like /usr/local/lib/libboost_thread-mt.a

For “Enter the path to zlib.h”, your path should be something like /usr/include/zlib.h

For “Enter the path to libz.a”, we’re actually not going to link against libz.a. We’ll be using libz.dylib instead, which should be at something like /usr/lib/libz.dylib

For “Enter the path to hdf5.h”, your path should be something like /usr/local/include/hdf5.h

For “Enter the path to libhdf5.a”, your path should be something like /usr/local/Cellar/hdf5/1.x.x/lib/libhdf5.a (unless you did not use Homebrew for installing hdf5, in which case libhdf5.a will be in whatever lib directory you installed it to)

For “Enter the path to ImathMath.h”, your path should be something like /usr/local/include/OpenEXR/ImathMath.h

For “Enter the path to libImath.a”, your path should be something like /usr/local/lib/libImath.a

Now hit enter, and let the script finish running!

8. If everything is bootstrapped correctly, you can now make. This will take a while, be patient.

9. Once the make finishes successfully, run make test to check for any problems.

10. Finally, run make install, and we’re done! Alembic should install to something like /usr/bin/alembic-1.x.x/.

Monday, September 5, 2011

Installing Numpy for Maya 2012 64-bit on OSX 10.7

On OSX 10.6, installing Numpy for Maya 2012 was simple enough. You could do it either by directly copying the Numpy install folder into the site-packages folder of Maya's bundled Python or by adding a sys.path.append to Maya's userSetup.py. The process was quite simple since OSX 10.6's default preinstalled version of Python was 2.6.x and Maya 2012 uses Python 2.6.x as well.

However, OSX 10.7 comes with Python 2.7.x, so a few extra steps are needed:

For Maya 2012 64-bit:
1. OSX 10.7 comes with Python 2.7.x, but we need 2.6.x, so install 2.6.x using the official installer from here: http://www.python.org/ftp/python/2.6.6/python-2.6.6-macosx10.3.dmg
2. Since we're using 64-bit Maya with 64-bit Python, we'll need a 64-bit build of Numpy. The official version distributed on scipy.numpy.org is 32-bit only. Thankfully, there is an unofficial 64-bit build in the form of the Scipy Superpack for Mac OSX. Even though we're on OSX 10.7, we'll want the OSX 10.6 variety of the script, since the OSX 10.7 version is dependent on Python 2.7.x: http://idisk.mac.com/fonnesbeck-Public/superpack_10.6_2011.07.10.sh

EDIT (01/12/2012): I've been informed by Michael Frederickson that the link originally posted to the unofficial 64 bit Scipy Superpack build for 10.6 no longer works. Fortunately, I've backed up both the script and the required dependencies. The install script can be found here: http://yiningkarlli.com/files/osx10.7numpy2.6/superpack_10.6_2011.07.10.sh

3. Go to where the script downloaded to and in Terminal:
chmod +x superpack_10.6_2011.07.10.sh
./superpack_10.6_2011.07.10.sh

If you don't already have GNU Fortran, make sure to answer 'yes' when the script asks.
4. Once the script is done installing, in Terminal:
ls /Library/Python/2.7/site-packages/ | grep numpy
You should get something like: numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg
Even though we installed Numpy for Python 2.6.x, on Lion it installs to the 2.7 folder for some reason. No matter, you can either leave it there or move it to 2.6.
5. Go to /Users/[your username]/Library/Preferences/Autodesk/maya/2012-x64/scripts
6. If you don't have a file named userSetup.py, make one and open it in a text editor. If you already have one, just open it.
7. Add these lines to the file:
import os
import sys
sys.path.append('/Library/Python/2.7/site-packages/[thing you got from step 4]')
8. Sidenote: installing Python 2.6.x sets your default OSX Python to 2.6.x, but if you want to go back to 2.7.x, just edit your ~/.bash_profile and remove these lines:
PATH="/Library/Frameworks/Python.framework/Versions/2.6/bin:${PATH}"
export PATH


....and you should be done! In Maya, you should be able to just use import numpy and you'll be good to go!

Thursday, September 1, 2011

Why Backups are Important

Here is a cautionary tale about why backing up one's harddrive is EXTREMELY IMPORTANT.

Over the summer, I started making a little scene based off of the GH House Challenge from RonenBekerman.com, partially as a way to learn Vray and partially just for fun. I was working off of my laptop for the entire project, since I was in California at the time and didn't have access to more powerful machines at home. Being out in California for the summer, I brought as little stuff with me as possible.

One of the things I decided to leave home was my backup Time Machine drive. "Oh, I won't need this over the summer, what are the odds of file corruption or harddrive issues anyhow? I'll be fine", I thought to myself.

Which means, of course, that halfway through the summer a bunch of my files got corrupted and were therefore lost forever, and of course that block of lost data included my in-progress GH House project. NEVER ASSUME THAT YOU DO NOT NEED BACKUP.

What follows are some random in-progress renders that survived through being in posts I made to Facebook and Tumblr.

Here are a series of small in-progress renders showing shading and lighting tests:
 


I also started playing with some ideas for the interior:
 

...and finally, some larger in-progress renders. These renders represent where the project was when I lost all of the data:
 




In the end, the fact that I lost the project isn't as important as the fact that I learned quite a lot from tinkering with this project. However, losing all of the data for this project was definitely a major bummer. But, lesson learned: BACK UP ALL THE TIME.

Sunday, May 8, 2011

Animation Final Project Stills

For my Computer Animation class's final, I decided to go for a change in pace and work in 2D instead of in Maya. I want to tweak a few things before I post the finished animation, but I have two more finals to get through first. So for now, here are some stills:











Thursday, May 5, 2011

Why cd when you can go?

I learned a sweet trick from fellow Penn CIS student Alexey Komissarouk's blog today: the 'go' command!

So in a standard *nix bash CLI, you have your typical cd command. We all know how to use cd.

But have you ever accidentally cd'd a file? "cd /stuff/blah.txt" makes no sense and just gets you a "Not a directory" error. So then you have to backtrack and use vim or emacs or nano or whatever... blarg. If you're using emacs or vim, you like efficiency and you've already lost efficiency by wasting a perfectly good moment trying to cd into a file.

Enter the 'go' command!

Add this bit of code to your .bashrc file and replace $EDITOR with the CLI text editor of your choice:

go()
{
    # If the argument is a regular file, open it in your editor of choice...
    if [ -f "$1" ]
    then
        $EDITOR "$1"
    # ...otherwise treat it as a directory: cd into it and list its contents.
    else
        cd "$1" && ls
    fi
}

and you're all done! Now when you go to a directory, bash will cd and when you go to a file, bash will fire up vim or emacs or whatever.

As a side note, it might be fun to modify the 'go' command even further to automatically launch actions for other filetypes as well, like run javac whenever a .java is encountered or launch .jar files or run gcc or make whenever C++ makefiles are encountered. That's left as an exercise to the reader though ;)

Friday, April 29, 2011

Chairs…. now with Balloons!

Oops, I haven't posted in a while...

A few weeks back I decided to try out overhauling one of my previous projects with VRay. I figured the chairs project would be fun, so...


Wwwwaaaayyyy prettier than before. I really like VRay, although I feel that setting it up is a bit more involved than setting up MentalRay. Still haven't made too many inroads with Photorealistic Renderman, so I can't comment on that quite yet.

Oh, also, as you can see, I added balloons too. I like balloons.


I decided to add balloons after seeing an article on RonenBekerman.com a while back about shading balloons using VRay in 3DSMax. I'm using VRay in Maya, however, so I had to figure out how to recreate the shader in Maya's Hypershade. The shader network wound up looking like this:


It's *almost* fully procedural, minus that one black and white ramp image that I wound up using for a lot of things. Replacing that image with a procedural ramp shader to make the entire shader fully procedural probably wouldn't be very hard at all, but I got lazy :p

I was originally going to post breakdowns of all of the settings for each node in the shading network as well, but again, I'm lazy. So instead, here's the shader in a Maya .ma file!

A few more renders:







As soon as my last finals are over in about a week, I'll catch up with my backlog of things that need to be posted. I'm planning on posting a series of posts introducing some concepts in graphics programming that I learned in CIS277 this semester. I'm not going to go super duper in depth (for that, take CIS277! Dr. Norm Badler is an awesome professor.), but at the very least I'll highlight some of the cooler things I learned. That class was really neat; we wound up writing our own 2D animation software from scratch, and our final team project assignment was to build our own 3D modeling software. Basically, we made mini-Maya. My team (Adam Mally, Stewart Hills, and me) got some really neat stuff to work.

Speaking of Stewart, Stewart and I both will be interning at Pixar this summer! We got into their Pixar Undergraduate Program... uh... program. PUP essentially is a 10 week crash course on Pixar's production pipeline, so we'll be learning about everything from modeling to simulation to using Photorealistic Renderman. I'm really looking forward to that. :D

Monday, March 28, 2011

VRay Tree

After being frustrated with Mentalray for a few weeks, I've decided to start experimenting with VRay. VRay is... pretty amazing.

I've been continuing my tree experiments using VRay. VRay's Sun&Sky system is much nicer than Mentalray's system and VRay has this crazy useful two-sided material for flat two dimensional planes... such as leaves. Here's what I managed to cook up over the weekend:



I'm still working out some kinks in my new tree workflow. I'll post a full breakdown in a few days.

Saturday, March 19, 2011

Mo’ Tree (and Grass) Experimenting

I experimented with subsurface scatter based shaders for plant leaves today! I'm still working on it, so I won't be writing up what I've found until a bit later. But for now, here's what I've managed to get!



The grass is just Maya fur with a custom shader (woot subsurface scatter!) and the tree is modeled after a Japanese Maple and was made the same way as the one I posted a few days back.

Back to exploring!

Friday, March 18, 2011

Things I'm Bad At...

I am really. really. really. really. really. really. really. bad at painting. Like, it's not even funny how bad I am at painting, both in Photoshop and with real paint.

So I'm going to practice until I get better!

For the past few days I've been playing with using a limited color palette. So... here's what I made.



Blah. Blaarrrggggg. I'm not happy with it at all. But... it's where I'm starting from. So... let this be a base line. Hopefully I'll learn quickly...

Thursday, March 17, 2011

Autumn Tree!

Every couple of months I find myself trying to make trees again in Maya. Today I found myself tackling the tree problem yet again...

I've found that using XFrog's plant modeler program is my favorite way to create base meshes for plants. It sure as heck beats hand modeling all of those leaves... Speaking of models and leaves, the method I've settled on for tree leaves is to just use planes where the leaves should go and then make the planes look like leaves through alpha mapping.

Anyhoo, here's where I managed to get tonight:









The displacement on the bark is really really weird right now and the color of the leaves is weird too. I think I'm going to try subsurface scattering on the leaves... see if that helps. More updates later...

Friday, March 11, 2011

Demoreel Update!

So... after interviewing with Paul Kanyuk from Pixar, I've decided to update my reel a bit...



Breakdown is here (PDF).

So why the updated reel? Interviewing with Paul and chatting with the other two people from Pixar that visited Penn was really interesting. Paul had a lot of suggestions for my work during our interview, so I've decided to go ahead and incorporate a lot of the changes that Paul suggested.

So changelog time!

Overall Changes:
-New song! The new song is an instrumental version of Baby Universe from the We Love Katamari OST. As usual, the version of the reel I'm actually sending out to studios for internships has no music, though.
-I've replaced "Postcards from Prague" with a new project, "Chairs"
-Shuffled around the order of some pieces.

Apples:
-Recomposited with slightly better z-depth using a new depth of field plugin for After Effects I found called Frischluft Lenscare. Apparently Alex Roman uses it, and if Alex Roman uses it, then gosh golly I'd better give it a try. Hahaha.
-Slightly tweaked color grading
-There's a little more footage of the second shot of the apples bouncing than there was in the previous reel

Hermit Crab:
-Recomposited with tweaked ambient occlusion in the turntable. Paul pointed out that there were some odd light leaking issues on the underside of the shell's opening, so I've increased the intensity of the AO there to try to make it a bit darker.
-Fixed a small problem with the transition between the Untextured Lambert and the Fully Textured parts of the turntable

White Room:
-Every shot's depth of field has been redone using Frischluft Lenscare
-The first shot of the underside of the stairs was lengthened, rerendered, recomposited, and re-color graded.
-The second shot has new color grading and altered AO.
-The third shot was rerendered with new contrast settings and re-color graded.

Clock:
-Paul pointed out that a major flaw with the clock was that the highlight on the glass washed out everything under the glass, so I fixed that by slightly changing the reflective properties of the glass and giving the glass more of a curve to break up the highlight.
-Turntables were sped up to help the reel's overall pacing
-Textures on the clock face are sharper than before

Raincoat Girls:
-Turntable was rerendered to get rid of the light blue band that appeared partway through the turntables in the previous reel
-Environment shots were re color graded

Monday, February 21, 2011

Demoreel and New Site!

Look! I finally cut together a demoreel!



I'm getting interviewed by PIXAR! :DDDDDDDDD Actually, Pixar gets a slightly different version of my reel. The Pixar version simply has no music... they're not very big fans of music in demoreels, apparently.

Which is why I got off of my lazy bum and finally cut a reel together.

Oh, I have a reel breakdown too! Check it out here (PDF).

The song in my reel is "I Like Van Halen Because My Sister Says They Are Cool" by El Ten Eleven. I've recut it slightly to fit the length of the reel.

I also finally put together a personal site thing at www.yiningkarlli.com.

Speaking of new sites... I should probably get around to redesigning Omjii.com soon. Hm.

Saturday, February 19, 2011

How 'Bout Them Apples?

Earlier this week my mom gave my roommates and me a gigormous sack of apples, so I've been eating apples all week. Which is good, because I love apples. :)

So I had an apple sitting on my desk, and I had Maya open, and I was a little bit bored, so... I made some apples in Maya!



Over the past few months I've developed a bit of an... odd... workflow for texturing/shading irregularly shaped objects (apples... muddy boots... hermit crabs...). I start with modeling and whatnot in Maya, as usual:





Then I go into Photoshop and use various reference (images found online, photos taken with my Nikon D60, etc) to paint a tile of the texture I want. For example, for the apples I took some photos of the apples and then extracted textures from the photos to create this texture tile:



Next, I bring the object mesh and the texture tile into Mudbox and use Mudbox's projection stencil tool to paint the mesh using the texture tile as the stencil. The nice thing about bringing things into Mudbox for texturing is that I don't really have to worry too much about UV mapping. Mudbox will automagically take care of all of the UV stuff as long as the imported mesh doesn't have any overlapping UV coordinates. So instead of messing with the UV editor in Maya before texturing, I can just use Maya's Automatic UV mapping tool to make sure that no UVs overlap and bring that into Mudbox. After painting in Mudbox, I got a texture image like this:



After texture painting in Mudbox, deriving spec and bump maps in Photoshop is a relatively straightforward affair. Once texturing and shading is done, I render out the beauty pass and z-depth pass and other passes...



...and bring all those passes into After Effects for compositing and color grading, and I'm done!