I've been interested in 3D graphics for many years. I'm currently working on academic projects involving fast methods for simulating deformable bodies in graphics and engineering. Graphics is a fascinating field, and the best part is that you get really interesting images as output :)
|3D Modeling Projects||3D Image Gallery|
Environment mapping is a graphics technique that involves taking a single image that records light from all directions. This image is painted onto the surface of a 3D object as if that light were being reflected from it. The result is that the object appears to have a mirrored finish and to be embedded into the scene that was initially photographed.
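As a sketch of the lookup involved (not code from this project), sphere mapping turns a surface reflection vector into texture coordinates in the mirrored-ball image. The formula below is the classic sphere-map remapping, assuming a unit reflection vector with +z pointing toward the viewer; the function name is mine:

```python
import math

def sphere_map_uv(rx, ry, rz):
    # Classic sphere-map lookup: offset the unit reflection vector
    # along +z, normalize, and remap x/y from [-1, 1] into [0, 1]
    # to index the mirrored-ball image.
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return rx / m + 0.5, ry / m + 0.5
```

A reflection straight back at the viewer, (0, 0, 1), lands at the center of the image, (0.5, 0.5); grazing reflections land near the rim.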
A 360 degree image may be obtained by photographing a mirrored sphere from a great distance. Alternatively, I synthesized such an image from two fisheye images taken with my digital camera and a 180 degree fisheye lens. Two such images are shown below; the second image is taken "backwards" from the first.
|Fisheye front image||Fisheye rear image|
Next, both images are combined into a single 360 degree image, as if a mirrored ball had been placed into the scene and photographed. This is accomplished by associating a direction with every pixel in each fisheye image. The 360 degree image is formed by sampling the light from every such direction in the two fisheye images. Two such images can be created, depending on which fisheye image is considered to be "in front". In theory these two images are equivalent, but due to sampling limitations and pixel size, either the front or the back will be sampled in more detail.
A slight discontinuity occurs at roughly 70% of the width of the circle due to an imperfect match between the two fisheye images, caused by imperfections in rotating the camera.
|Environment Map A||Environment Map B|
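A minimal sketch of the pixel-to-direction association, assuming an ideal equidistant fisheye projection (real lenses deviate slightly; the function name and parameters are mine, not from the original code):

```python
import math

def fisheye_to_direction(px, py, cx, cy, radius):
    # Map a pixel in a 180-degree fisheye image (center cx, cy,
    # circle radius in pixels) to a unit view direction, assuming the
    # equidistant projection: the angle from the optical axis (+z)
    # grows linearly with distance from the image center.
    dx, dy = (px - cx) / radius, (py - cy) / radius
    r = math.hypot(dx, dy)
    if r > 1.0:
        return None                    # outside the fisheye circle
    if r == 0.0:
        return (0.0, 0.0, 1.0)         # image center looks along +z
    theta = r * math.pi / 2.0          # 0 at center, 90 deg at rim
    s = math.sin(theta) / r
    return (dx * s, dy * s, math.cos(theta))
```

Sampling every direction on the sphere against both fisheye images (the rear image covering the directions behind the camera) yields the combined mirrored-ball view.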
The result of placing an object in the environment is shown below. Here a mirrored double-donut is placed into the Quad at the University of Washington.
Below are some more images, all rendered in real time using a modern graphics card. The effect is much more striking when the object is being spun in 3D by the user.
|Another pose||A single donut|
I worked for McNeel and Associates, the makers of Rhino 3D, writing CAD translator modules. I implemented the Parasolid export module to allow Rhino to communicate better with Unigraphics and SolidWorks. I'm a big fan of Rhino! It's the best free-form surface modeler around. The software team at McNeel is a great group. To learn more about my work with Rhino, please visit my projects page.
I also enjoy using Inspire3d from NewTek, the makers of LightWave 3D. Inspire and LightWave share a very open plugin architecture for feature expansion.
I've authored the plugin SuperSize, which allows greater control over the scaling and positioning of objects in Inspire and LightWave than the standard tools do. A screenshot of SuperSize is below. If you're interested in learning more about SuperSize, or downloading it, please click here.
Static images don't really do this project justice. The course centered on the animation of structures of linked bodies, like a human skeleton. The screenshots below are from a final project I worked on with Joel Hindorff. The goal was to take a given crude, hand-made animation of a walk cycle and "physify" it (add real-world physics) by imposing assumed ground constraints, softening any spikes in joint torques, and stabilizing the velocity of the center of mass. Again, the screenshots do not do it justice; this was a very challenging project!
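The torque-softening step can be pictured with a toy rate clamp (purely illustrative; this is not the method the project actually used, and the function name is mine):

```python
def soften_torques(torques, max_step):
    # Limit frame-to-frame jumps in a joint-torque curve so that a
    # spike is spread over several frames instead of landing all at
    # once. A stand-in illustration only, not the course project code.
    out = [torques[0]]
    for t in torques[1:]:
        step = max(-max_step, min(max_step, t - out[-1]))
        out.append(out[-1] + step)
    return out
```

For example, a one-frame spike of 10 with a step limit of 1 is flattened to a gentle bump: `soften_torques([0.0, 10.0, 0.0], 1.0)` gives `[0.0, 1.0, 0.0]`.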
I was lucky enough to have such great partners in Kuan Yong and Patti McLain that our team won grand prize at the 1998 CS348B Final Ray Tracing competition! Grand prize was an all-expenses-paid trip to Siggraph '98 in Orlando, Florida!!
The competition rules were to accurately generate images of a real-world object. Mind you, the only software legal for the competition was a C/C++ compiler!! All of the following images were ray traced using our own engine, written from scratch over the course of the quarter. To view all gallery entries, check out the Stanford CS348B gallery homepage. (Instructor: Marc Levoy)
This fractal mountain was modeled using midpoint displacements, following the method presented in The Science of Fractal Images. I added procedural snow growth taking altitude and slope into account: high and/or flat areas were much more likely to hold snow than low or steep areas. Rivers were grown by beginning at random nucleation points and following the gradient along the path of steepest descent. If a local valley was encountered, the water was pooled up in a spiral fashion (or, failing that, by random jumps) until the path could continue down to the sea. I was amazed at the realism produced in the aerial map. Notice the convincing formation of fans, deltas, and snow coverage.
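The midpoint-displacement idea can be sketched in one dimension (the terrain version in The Science of Fractal Images applies the same recursion over a 2D grid; parameter names here are mine):

```python
import random

def midpoint_displace(left, right, depth, roughness=0.5, seed=0):
    # Start with two endpoint heights; at each level, insert the
    # midpoint of every segment, perturbed by a random offset whose
    # range shrinks by `roughness` per level. Smaller roughness
    # yields smoother ridges.
    rng = random.Random(seed)
    heights = [left, right]
    scale = roughness
    for _ in range(depth):
        new = []
        for a, b in zip(heights, heights[1:]):
            new += [a, (a + b) / 2.0 + rng.uniform(-scale, scale)]
        new.append(heights[-1])
        heights = new
        scale *= roughness
    return heights
```

Snow and river growth then operate on the resulting height field: altitude and slope thresholds for snow, steepest-descent walks for rivers.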
The globe shows off Patti's distributed ray tracing. Instead of the standard diffuse/specular models, a probability density function was used to spawn packets of reflected rays, resulting in soft-focus refraction, smooth shadows, and smooth reflectance. Note the bluish (very) diffuse reflection of the globe on the ground surface.
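In spirit, the ray-packet idea looks like the sketch below: jitter the mirror direction and average what the rays return (a crude stand-in for the actual density function used, with invented names):

```python
import math
import random

def sample_glossy_rays(reflect_dir, n, spread, seed=0):
    # Spawn a packet of unit-length rays clustered around the
    # mirror-reflection direction. Tracing all of them and averaging
    # the returned radiance softens reflections; the same trick on
    # shadow rays and lens samples gives soft shadows and soft focus.
    rng = random.Random(seed)
    rays = []
    for _ in range(n):
        d = [c + rng.uniform(-spread, spread) for c in reflect_dir]
        norm = math.sqrt(sum(c * c for c in d))
        rays.append(tuple(c / norm for c in d))
    return rays
```

A wider `spread` gives blurrier reflections at the cost of more noise for the same number of rays.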
The cards were a proof of concept for our textures (diffuse, specular, ambient, bump, and transparency maps). Subtle bump maps were used to enhance the surface finish, and transparency maps were used to round the corners of the single-polygon cards. The card images were scanned from a real deck.
Three views of the Smoking Cigarette, plus a detail of the tip. The cigarette was our final real-world object. The smoke was generated by a volumetric algorithm implemented by Kuan, who was also responsible for the caustic reflections off of the water onto the column. The smoke is also animated! Check out the Animated Smoke Movie (approx. 1MB). The tip was actually modeled in Matlab using a custom script to generate SGI OpenInventor model files. It contains about 2000 similarly sized polys arranged in concentric rings, with various noise functions applied to the vertices. Our rather severe bump mapping is given away by the edge of the rock wall. The top three entries in the contest were known as the "Three Sins": the Smoke, Drink, and Gambling scenes stole top honors...
The white circle is actually a ray traced image of a 100% white sphere. The rotated ellipse is actually the same sphere to which affine transformations have been applied. OK, not too impressive, but believe it or not, getting these was a big milestone on the way to generating the images below!
The images on the left and the right are the same scene! The left image has false color applied with no recursion. The right image has a reflection/refraction recursion limit of 20.
This is the result of two weeks of non-stop programming. The first ray tracer was required to handle diffuse/specular highlights, reflection, and refraction.
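The recursion cap is what keeps mutually reflective surfaces from recursing forever. A toy model (not the contest code) of light bouncing between two facing mirrors shows why the contribution converges well before a limit like 20:

```python
def bounce(depth, reflectivity=0.9, emitted=0.1, max_depth=20):
    # Each recursion level adds a little emitted light plus an
    # attenuated copy of the next bounce; the depth cap truncates
    # the geometric series once deeper bounces stop mattering.
    if depth > max_depth:
        return 0.0
    return emitted + reflectivity * bounce(depth + 1, reflectivity,
                                           emitted, max_depth)
```

With 90% reflectivity, bounce 21 contributes under 11% of the first bounce, so truncating there changes the image very little.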
The second ray tracer was accelerated with bounding volume hierarchies to handle scenes with large numbers of elements. These scenes would have taken the first ray tracer hours. Per-vertex normals and materials were implemented. Rendering times are for a 250MHz Silicon Graphics Indigo2.
|Chess Set||Polygons: 5951||Time: 44s|
|Olympic Pavilion||Polygons: 5339||Time: 4m 12s|
|Sculpture||Polygons: 10365||Time: 5m 4s|
|Shell||Polygons: 5169||Time: 10m 7s|
(Yeah, my times were only middling, mostly due to interpolating shading parameters at all intersections, not just the ones that matter. That's what I get for being a Mechanical Engineer. Come to think of it, no one on our team was a CS major :)
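The BVH speedup comes from a cheap reject test at every node of the hierarchy; a minimal slab test, sketched in Python rather than the original C++ (and assuming no zero direction components):

```python
def ray_hits_box(origin, direction, box_min, box_max):
    # Slab test: intersect the ray against the three axis-aligned
    # slabs of a bounding box. If the box is missed, everything the
    # BVH node contains can be skipped without per-primitive tests.
    # Assumes every direction component is nonzero.
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        t1, t2 = (lo - o) / d, (hi - o) / d
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax
```

Because most rays miss most boxes, whole subtrees of the scene drop out after a handful of divides and compares, which is what turns hours into minutes on scenes like these.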
These images are from CS248 (the precursor to CS348; Instructor: Pat Hanrahan). Here are 3 textured teapots: marble, wood, and superdiscolicous. (This was a required image!)
My first ray tracing! Well, my first useful tracing, anyway. This is a still frame from a very stylized movie of an amusement park ride of the future, part of a sophomore mechanical design course. The movie was about 1 minute long at 30fps. It was rendered with POV-Ray, the best freeware ray tracer money can buy! The movie was output to video on a Macintosh Quadra 660AV.