
An Asian Exhibition
 Apsara between two Dragons and Mount Meru 
Rendering Competition 2007
by Richard Socher and Miguel Granados
Final Image
Example:
Code:
Explanation:
A fixed number of directional lights is added to the scene based on the illuminance values in a high dynamic range (HDR) image.
The direction and intensity of the lights are computed with the median cut algorithm.
In the input image, columns and rows correspond to the spherical coordinates phi and theta of an HDR environment photograph.
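The splitting step of the median cut algorithm can be sketched as follows. This is a simplified, hypothetical implementation (names like Region, Light, and medianCut are illustrative): each region of the luminance image is recursively split along its longer axis where the summed luminance is roughly halved, and each final region contributes one light at its luminance centroid. The real code additionally converts pixel centroids back to directions via phi/theta.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Region { int x0, y0, x1, y1; };   // half-open pixel rectangle
struct Light  { double x, y, energy; };  // pixel-space centroid + summed luminance

// Sum of luminance over a rectangular region of a row-major grid of width w.
static double regionSum(const std::vector<double>& lum, int w, const Region& r) {
    double s = 0.0;
    for (int y = r.y0; y < r.y1; ++y)
        for (int x = r.x0; x < r.x1; ++x)
            s += lum[y * w + x];
    return s;
}

// Recursively split each region along its longer axis at the column/row where
// the summed luminance is (approximately) halved; after `depth` levels there
// are up to 2^depth regions, each contributing one directional light.
static void medianCut(const std::vector<double>& lum, int w,
                      Region r, int depth, std::vector<Light>& out) {
    if (depth == 0) {
        // Place the light at the luminance centroid of the region.
        double s = 0, cx = 0, cy = 0;
        for (int y = r.y0; y < r.y1; ++y)
            for (int x = r.x0; x < r.x1; ++x) {
                double l = lum[y * w + x];
                s += l; cx += l * (x + 0.5); cy += l * (y + 0.5);
            }
        if (s > 0) out.push_back({cx / s, cy / s, s});
        return;
    }
    double half = 0.5 * regionSum(lum, w, r);
    bool splitX = (r.x1 - r.x0) >= (r.y1 - r.y0);   // split the longer axis
    Region a = r, b = r;
    double acc = 0.0;
    if (splitX) {
        int cut = r.x0 + 1;
        for (; cut < r.x1; ++cut) {
            acc += regionSum(lum, w, {cut - 1, r.y0, cut, r.y1});
            if (acc >= half) break;
        }
        a.x1 = cut; b.x0 = cut;
    } else {
        int cut = r.y0 + 1;
        for (; cut < r.y1; ++cut) {
            acc += regionSum(lum, w, {r.x0, cut - 1, r.x1, cut});
            if (acc >= half) break;
        }
        a.y1 = cut; b.y0 = cut;
    }
    medianCut(lum, w, a, depth - 1, out);
    medianCut(lum, w, b, depth - 1, out);
}
```

Because each region carries roughly equal energy, bright spots in the environment map (the sun, windows) end up covered by many small regions and thus many lights, while dim areas are represented coarsely.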
Reference:
Paul Debevec. Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-Based
Graphics with Global Illumination and High Dynamic Range Photography.
Computer Graphics (Proceedings of SIGGRAPH 98), 32(4):189-198, 1998.
Example:
In the final image, or in a simpler setting:
mountEdgy (before there were smooth triangles) 
131072 smooth triangles 
mountain with height field after tone mapping 



Code:
FractalGeometryObjects.hxx
Explanation:
Fractal geometry is created by defining four points and a maximum height. The midpoints of the four edges and
the centroid of the quadrilateral are computed, and the centroid is offset by a random height change.
In the first recursion step this offset is bounded by the maximum height, which is passed as a parameter.
The offset becomes smaller in each step. The centroid, together with two adjacent edge midpoints and one
original corner, forms one of four new quadrilaterals in each recursion step. Each quadrilateral is then used
as a new starting point for further subdivision. This is done n times. In a final step every quadrilateral
is split into two triangles.
The first step of the algorithm is shown in the following figure:
Depending on its height, each triangle is assigned a different color, giving green, grey, and white mountain regions.
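The subdivision step described above can be sketched as follows. This is an illustrative reimplementation, not the code from FractalGeometryObjects.hxx: names (Vec3, Quad, subdivide) are assumptions, and the offset bound is halved per level as one plausible choice for "becomes smaller in each step".

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 mid(const Vec3& a, const Vec3& b) {
    return {0.5 * (a.x + b.x), 0.5 * (a.y + b.y), 0.5 * (a.z + b.z)};
}

using Quad = std::array<Vec3, 4>;

// Split one quad into four children: edge midpoints plus a centroid whose
// height is displaced randomly in [-maxOffset, maxOffset]. The bound is
// halved at each recursion level, so detail shrinks with scale.
static void subdivide(const Quad& q, double maxOffset, int depth,
                      std::mt19937& rng, std::vector<Quad>& out) {
    if (depth == 0) { out.push_back(q); return; }
    std::uniform_real_distribution<double> d(-maxOffset, maxOffset);
    Vec3 m01 = mid(q[0], q[1]), m12 = mid(q[1], q[2]);
    Vec3 m23 = mid(q[2], q[3]), m30 = mid(q[3], q[0]);
    Vec3 c = mid(m01, m23);   // centroid of the quad
    c.z += d(rng);            // random height displacement
    // Each child = one corner + two adjacent edge midpoints + the centroid.
    Quad children[4] = {
        {q[0], m01, c, m30}, {m01, q[1], m12, c},
        {c, m12, q[2], m23}, {m30, c, m23, q[3]},
    };
    for (const Quad& child : children)
        subdivide(child, 0.5 * maxOffset, depth - 1, rng, out);
}
```

After n levels this yields 4^n quadrilaterals, each of which is finally split into two triangles for rendering.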
Reference
Subdivision idea from Paul Bourke and the lecture.
The color change is a quick modification to make it look better.
Example:
In the final image, on the floor, the bases for the exhibition pieces, etc.
Reflective (blue), refractive (green), and transparent (green) spheres. 
At an early stage, you can look behind the mountain. 


Code:
Explanation:
The shader is driven entirely by the material. We tried for a while to use .mtl files exported directly from Blender.
However, since they are not standards-conformant and do not use proper indices for the principal illumination model,
they have to be modified slightly to fit our ray tracer.
Snell's law is used for refraction, and the formula we derived as an exercise is used for reflection; the Fresnel term
determines the influence of each.
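The three ingredients can be sketched as below. This is a generic illustration, not our shader code; in particular we substitute Schlick's approximation for the Fresnel term here, where the full Fresnel equations would be a drop-in replacement. Vectors are assumed unit length, with the normal facing the incoming ray.

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };
static V3 operator*(double s, const V3& v) { return {s * v.x, s * v.y, s * v.z}; }
static V3 operator-(const V3& a, const V3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 operator+(const V3& a, const V3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static double dot(const V3& a, const V3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror reflection of incoming direction i about normal n.
static V3 reflect(const V3& i, const V3& n) { return i - (2.0 * dot(i, n)) * n; }

// Snell's law with eta = n1/n2. Returns false on total internal reflection.
static bool refract(const V3& i, const V3& n, double eta, V3& t) {
    double cosI = -dot(i, n);
    double sin2T = eta * eta * (1.0 - cosI * cosI);
    if (sin2T > 1.0) return false;   // total internal reflection
    double cosT = std::sqrt(1.0 - sin2T);
    t = eta * i + (eta * cosI - cosT) * n;
    return true;
}

// Schlick's approximation to the Fresnel reflectance for media n1 -> n2;
// the result weights the reflected vs. refracted contribution.
static double fresnelSchlick(double cosI, double n1, double n2) {
    double r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;
    return r0 + (1.0 - r0) * std::pow(1.0 - cosI, 5.0);
}
```

At normal incidence on glass (n1 = 1, n2 = 1.5) the Fresnel term is about 0.04, so only 4% of the light is reflected and the rest is refracted; at grazing angles the reflected share approaches 1.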
Reference
Snell's Law and
James Foley, Andries van Dam, Steven Feiner, and John Hughes. Computer Graphics: Principles
and Practice, Second Edition in C. Addison-Wesley, 1997.
Example:
Only the blue sphere is really sharp; the red one behind it is blurred.

Code:
Explanation:
We define a distance from the camera that shall remain sharp. The origin of each primary ray is then perturbed slightly,
and the new direction is computed by subtracting the new origin from the ray's intersection point with the focal plane.
This way everything on the focal plane stays sharp, while objects in front of or behind it are blurred.
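The per-ray perturbation can be sketched as follows (a hypothetical illustration; the function and parameter names are assumptions, and the lens plane is taken to be the camera's xy-plane for simplicity):

```cpp
#include <cassert>
#include <cmath>
#include <random>

struct P3 { double x, y, z; };

static P3 normalize(const P3& v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Given a primary ray (origin o, unit direction d), jitter the origin inside
// a square lens of half-width `aperture` and re-aim the ray at the point on
// the focal plane, so that point stays sharp while everything else blurs.
static void dofRay(const P3& o, const P3& d, double focalDist, double aperture,
                   std::mt19937& rng, P3& oNew, P3& dNew) {
    // The point that must remain in focus: where the primary ray meets the
    // focal plane at distance focalDist.
    P3 focus = {o.x + focalDist * d.x, o.y + focalDist * d.y, o.z + focalDist * d.z};
    std::uniform_real_distribution<double> u(-aperture, aperture);
    oNew = {o.x + u(rng), o.y + u(rng), o.z};
    dNew = normalize({focus.x - oNew.x, focus.y - oNew.y, focus.z - oNew.z});
}
```

Averaging many such jittered rays per pixel produces the blur: all of them pass through the same focal-plane point, so only geometry off that plane is smeared.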
Reference
Andrew Glassner. Principles of Digital Image Synthesis. Morgan Kaufmann, 1995.
Example:
In the final image. We use high dynamic range illumination and textures, hence we needed to apply tone mapping in order to save a suitable image.
High luminance cut of the HDR result 
Low luminance cut of the HDR result 
Tone-mapped image using Ward's operator



The rendered HDR image can be seen
using the HDRView software.
Code:
Explanation and Reference
We use Ward's operator to preserve perceived contrast, as explained in:
G. Ward. A contrast-based scale factor for luminance display. In Graphics Gems IV, chapter VII.2, pp. 415-421.
There is also a nice wrap-up in this thesis.
The computed scale factor expresses the minimally discernible luminance change.
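Ward's operator reduces to a single multiplicative scale factor applied to every world luminance. A sketch, assuming a maximum display luminance of 100 cd/m^2 and a log-average adaptation luminance (function names are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Log-average of the scene luminances; a small bias avoids log(0) for black pixels.
static double logAverageLuminance(const std::vector<double>& lum) {
    double s = 0.0;
    for (double l : lum) s += std::log(1e-8 + l);
    return std::exp(s / lum.size());
}

// Ward's contrast-based scale factor (Graphics Gems IV): world luminance Lw
// maps to display luminance sf * Lw, where Lwa is the adaptation (log-average)
// luminance and Ldmax the maximum display luminance.
static double wardScaleFactor(double Lwa, double Ldmax = 100.0) {
    double num = 1.219 + std::pow(0.5 * Ldmax, 0.4);
    double den = 1.219 + std::pow(Lwa, 0.4);
    return std::pow(num / den, 2.5);
}
```

Note the behavior: when the adaptation luminance equals half the display maximum the factor is exactly 1, dark scenes are scaled up, and very bright scenes are scaled down, which is what preserves just-noticeable contrast steps across the mapping.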
Code:
Explanation:
Implementation of a kd-tree structure using the Surface Area Heuristic (SAH).
Both the naive and the SAH versions of the kd-tree were implemented, with the following features:

                            SAH kd-tree                              Naive kd-tree
Splitting location:         Minimizing traversal/intersection cost   Middle point of bounding box
Splitting plane:            Minimizing traversal/intersection cost   Round robin
Recursion stop criterion:   Minimizing traversal/intersection cost   Maximum depth and minimum number of primitives
Construction time:          O(N*log^2(N))                            O(N*log(N))
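The cost that the SAH build minimizes can be sketched as follows: the expected cost of a split is the traversal cost plus the intersection cost of each child, weighted by the probability (surface-area ratio) that a ray entering the parent box also enters that child. The cost constants here are illustrative, not the ones used in our build.

```cpp
#include <cassert>
#include <cmath>

struct Box { double min[3], max[3]; };

static double surfaceArea(const Box& b) {
    double dx = b.max[0] - b.min[0];
    double dy = b.max[1] - b.min[1];
    double dz = b.max[2] - b.min[2];
    return 2.0 * (dx * dy + dy * dz + dz * dx);
}

// Expected cost of splitting `parent` at `pos` on `axis`, with nLeft/nRight
// primitives landing in each child. The build evaluates this at candidate
// positions and keeps the cheapest; if no split beats the leaf cost, it stops.
static double sahCost(const Box& parent, int axis, double pos,
                      int nLeft, int nRight,
                      double cTrav = 1.0, double cIsect = 1.5) {
    Box left = parent, right = parent;
    left.max[axis] = pos;
    right.min[axis] = pos;
    double sa = surfaceArea(parent);
    return cTrav + cIsect * (surfaceArea(left)  / sa * nLeft +
                             surfaceArea(right) / sa * nRight);
}
```

Primitives straddling the split plane are counted in both children, so a plane cutting through many triangles is automatically penalized, which is why SAH trees place planes along object boundaries and empty space.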
Benchmark:
Benchmarks were performed using four different resolutions of the Stanford dragon model.
Each image was rendered with 9 samples per pixel and 64 light sources.

Number of triangles   Render time SAH (s)   Render time Naive (s)   Improvement
11102                 85.71                 146.63                  41% less time
47794                 101.19                156.39                  35% less time
202520                119.25                171.80                  31% less time
871306                117.96                186.70                  37% less time

Reference:
Ingo Wald and Vlastimil Havran. On building fast kd-trees for ray tracing, and on doing that in
O(N log N). In Proceedings of the 2006 IEEE Symposium on Interactive Ray Tracing, 2006.
The woman model came from http://modelsbank.3dm3.com; we changed some details.
The high dynamic range images used were taken from the Light Probe Image Gallery.
The small dragon model is taken from the Stanford 3D Scanning Repository.
The big dragon was freely available on a graphics website.
The model was created in Blender by Richard Socher.
See some more pictures we created along the way.

