Friday, October 12, 2012

Mesh Loader

A gourd using the obj file from http://people.sc.fsu.edu/~jburkardt/data/obj/obj.html
Ran this on my friend's desktop as my graphics card gave up when I tried running it
1000 iterations

Incorporated an OBJ loader into the path tracer. I've been making too many mistakes while writing code and need to concentrate more, but I finally got it working. My laptop cannot handle too complex a scene, as I did not implement a k-d tree or any other acceleration structure to reduce the number of intersection tests. Had fun getting the data onto the GPU. There are a lot of areas where the code could be cleaned up, but I don't have time right now as the project is due today. Following is a mesh loaded into the scene.

3000 iterations
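Without an acceleration structure, every ray gets tested against every triangle of the loaded mesh, which is why complex scenes choke. A minimal sketch of the per-triangle test (the Moller-Trumbore algorithm; the vector type and names here are my own, not the actual project code):

```cpp
#include <cmath>

// Minimal vector type for the sketch.
struct Vec3 {
    float x, y, z;
};

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore ray/triangle test. Returns true on a hit and writes the
// ray parameter t. With no k-d tree, every mesh triangle gets this test
// for every ray, every bounce.
bool intersectTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float* t) {
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;  // ray parallel to triangle
    float invDet = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;
    *t = dot(e2, q) * invDet;
    return *t > EPS;  // hit must be in front of the ray origin
}
```

On the GPU this runs per ray in the intersection kernel, which is exactly the cost an acceleration structure would cut down.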

Also, you can see motion blur for an object, which is just a hardcoded translation. I haven't considered frames for calculating motion blur. My goal was to create a spline from the frame data and then render the scene based on the spline. This will have to wait, as I have a lot of other stuff to take care of right now.
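A minimal host-side sketch of how per-ray time sampling can drive this kind of translational blur, assuming a simple linear move over the shutter interval (the endpoints and names are made up for illustration, not the actual hardcoded values):

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// The object moves linearly from p0 to p0 + delta over the shutter
// interval [0, 1). Each ray evaluates the object at its own time sample.
Vec3 positionAtTime(Vec3 p0, Vec3 delta, float time) {
    return {p0.x + delta.x * time, p0.y + delta.y * time, p0.z + delta.z * time};
}

// One uniform shutter time per path-traced ray; averaging over many
// iterations then produces the blur for free.
float sampleShutterTime(std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    return uni(rng);
}
```

A spline-based version would replace the linear interpolation in positionAtTime with a spline evaluated at the sampled time.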

Still to come when I get time (hopefully):
  • Spline based motion blur
  • Subsurface scattering
My Fresnel term was behaving weirdly; I have rectified it now. An image showing reflection, refraction, glossy and diffuse surfaces.
Reflection, refraction and diffuse surfaces
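For reference, a common sanity check for a misbehaving Fresnel term is Schlick's approximation, which stays close to the full dielectric equations at typical refractive indices (this is a generic sketch, not necessarily what my tracer uses):

```cpp
#include <cmath>

// Schlick's approximation to the Fresnel reflectance of a dielectric.
// n1 and n2 are the refractive indices on either side of the surface;
// cosTheta is the cosine of the angle between the incident ray and normal.
float schlickFresnel(float n1, float n2, float cosTheta) {
    float r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;                              // reflectance at normal incidence
    float c = 1.0f - cosTheta;
    return r0 + (1.0f - r0) * c * c * c * c * c;
}
```

Two quick checks: air to glass at normal incidence should give about 0.04, and grazing incidence should approach 1.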

Thursday, October 11, 2012

Depth of Field contd...

 
 Reflecting sphere in focus. 

Finally got some time to work on the project. Depth of field is working correctly now, as you can see from the image above. For this effect, I moved the image plane behind the camera and tried to mimic a true camera. The camera was changed from a pinhole camera to one with an aperture, which is controlled by the user. The user can now choose where to focus by passing a parameter for the distance of the object from the camera.

The algorithm is as follows:
  • Shoot a ray from the image pixel passing through the center of the lens into the scene.
  • Calculate the point of intersection with the focal plane.
  • Calculate the radius of the actual lens depending on its f-Number (aperture value).
Shoot random rays from within the aperture toward this point. Smaller f-numbers (a bigger lens opening) cause objects farther from the focal plane to be more out of focus; more of the scene comes into focus as the f-number increases.
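The steps above can be sketched as a thin-lens ray generator. The names and the exact camera model here are my own simplification (the camera looks down +z; u1 and u2 are uniform random numbers in [0, 1)):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Ray { Vec3 orig, dir; };

// Thin-lens depth of field: take the pinhole ray through the lens centre,
// find where it crosses the focal plane, then shoot a new ray from a
// random point on the aperture disk toward that focal point.
Ray depthOfFieldRay(Ray pinhole, float focalDistance, float focalLength,
                    float fNumber, float u1, float u2) {
    // Lens radius from the f-number: aperture diameter = focalLength / fNumber.
    float lensRadius = 0.5f * focalLength / fNumber;

    // Point where the pinhole ray crosses the focal plane.
    float t = focalDistance / pinhole.dir.z;
    Vec3 focus = {pinhole.orig.x + t * pinhole.dir.x,
                  pinhole.orig.y + t * pinhole.dir.y,
                  pinhole.orig.z + t * pinhole.dir.z};

    // Uniform sample on the lens disk.
    float r = lensRadius * std::sqrt(u1);
    float phi = 6.2831853f * u2;
    Vec3 o = {pinhole.orig.x + r * std::cos(phi),
              pinhole.orig.y + r * std::sin(phi),
              pinhole.orig.z};

    // Re-aim at the focal point so objects on the focal plane stay sharp.
    Vec3 d = {focus.x - o.x, focus.y - o.y, focus.z - o.z};
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return {o, {d.x / len, d.y / len, d.z / len}};
}
```

As the f-number grows, lensRadius shrinks and the camera degenerates back toward the pinhole case shown in the last image.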
Below are some images with different f-numbers (aperture values):

 Aperture: f/1.8 (F-Number: 1.8)
 
Aperture: f/8 (F-Number: 8)

Aperture: f/18 (F-Number: 18)
 
Pinhole camera
 

Tuesday, October 9, 2012

Depth of field


I am currently working on the depth of field effect; this is just an initial image using it. Right now I am not able to focus at a chosen distance, as it instead zooms into the image. Looking into this and will post my method and better images soon. I have to shift focus to some other work I have and will hopefully be able to come back to this in a day or two.

Monday, October 8, 2012

Basic Tracer



Got the basic path tracer working. The basic algorithm is:
  • Shoot rays from each pixel, jittering their positions within the pixel (this gives anti-aliasing).
  • Trace the rays into the scene.
  • On hitting a diffuse surface, generate random rays based on a cosine distribution over the hemisphere around the normal.
  • For a specular surface, reflect the ray.
  • For a transparent object, refract and reflect (only one ray is generated at a time, chosen by the Russian roulette technique).
  • If the ray hits a light source, multiply all the colors accumulated along the way and add the result to the pixel; otherwise no color is added to the pixel.
  • Keep repeating the above procedure until the noise in the image reduces and the image converges.
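The diffuse step above hinges on cosine-distributed sampling. One common way to do it is Malley's method: sample a unit disk uniformly and project up onto the hemisphere (a generic sketch, not the project's actual code; u1 and u2 are uniform random numbers in [0, 1)):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Cosine-weighted direction over the hemisphere about normal n.
// Sampling a disk and lifting it onto the sphere yields a pdf
// proportional to cos(theta), matching the diffuse BRDF.
Vec3 cosineSampleHemisphere(Vec3 n, float u1, float u2) {
    float r = std::sqrt(u1);
    float phi = 6.2831853f * u2;
    float x = r * std::cos(phi);
    float y = r * std::sin(phi);
    float z = std::sqrt(1.0f - u1);  // lift onto the unit sphere

    // Build an orthonormal basis (t, b, n) around the normal.
    Vec3 a = std::fabs(n.x) > 0.9f ? Vec3{0.0f, 1.0f, 0.0f}
                                   : Vec3{1.0f, 0.0f, 0.0f};
    Vec3 t = {n.y * a.z - n.z * a.y, n.z * a.x - n.x * a.z, n.x * a.y - n.y * a.x};
    float tl = std::sqrt(t.x * t.x + t.y * t.y + t.z * t.z);
    t = {t.x / tl, t.y / tl, t.z / tl};
    Vec3 b = {n.y * t.z - n.z * t.y, n.z * t.x - n.x * t.z, n.x * t.y - n.y * t.x};

    // Transform the local sample into world space.
    return {x * t.x + y * b.x + z * n.x,
            x * t.y + y * b.y + z * n.y,
            x * t.z + y * b.z + z * n.z};
}
```

The returned direction is unit length and always lies on the same side of the surface as the normal.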

Current state:
  • Diffuse surface getting rendered.
  • Stream compaction working on the GPU using Thrust. I bundled the rays as done in my ray tracer and call thrust::remove_if to remove rays that are done processing. This greatly reduces the number of rays shot for the next bounce.
  • Reflection accommodated in objects.
  • Refraction working based on the material's refractive index (using the Fresnel equations and Snell's law).
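The compaction step runs on the device with thrust::remove_if; its semantics are the same as std::remove_if, so a host-side analogue looks like this (the PathRay struct and its fields are made up for illustration):

```cpp
#include <algorithm>
#include <vector>

// A ray is "done" once it has terminated: hit a light, left the scene,
// or was killed by Russian roulette.
struct PathRay {
    int pixelIndex;
    bool terminated;
};

// Removes finished rays so the next bounce only launches live ones.
// The real code does the same thing on device memory via thrust::remove_if.
void compactRays(std::vector<PathRay>& rays) {
    rays.erase(std::remove_if(rays.begin(), rays.end(),
                              [](const PathRay& r) { return r.terminated; }),
               rays.end());
}
```

Keeping the pixel index inside each ray is what lets the kernel launch over the compacted array while still writing results back to the right pixels.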
Faced a few issues along the way:
  • Screwed up while using stream compaction, which wasted a lot of my time. I was about to start on my own implementation when I figured it out.
  • Wrote code for the Cook-Torrance BRDF, but I'm having some problems using it correctly; objects black out with it. Have to look more into that.
Next steps:
  • Currently looking into a depth of field implementation. Planning to move the screen behind the lens, like a real camera, and allow users to focus on objects in the scene. Let's see how it goes.
  • Implement an OBJ loader to load polygonal models.
  • Solve the Cook-Torrance BRDF issue.
  • Maybe implement more BSDFs.
  • Look into sub-surface scattering.
  
The above image clearly shows color bleeding, soft shadows, anti-aliasing and caustics, which we get for free with path tracing. It was created using 200 iterations. I currently get around 6-7 frames per second on my 650M card.

Sunday, October 7, 2012

Introduction

I was working on a CUDA ray tracer (http://cudaraytracer.blogspot.com/) as an initial stab into a GPU path tracer that I want to build. This is part of my GPU coursework at University of Pennsylvania.
 
I am using a GeForce GT 650M card with 384 CUDA cores. Specifications can be found here.
 
I will be working off my ray tracer which was built on a base code provided by the TA for the class.
 
We need to implement the following features:
  • Full global illumination (including soft shadows, color bleeding, etc.) by path tracing rays through the scene.
  • Properly accumulating emittance and colors to generate a final image.
  • Supersampled antialiasing.
  • Parallelization by ray instead of by pixel via stream compaction.
  • Perfect specular reflection.
At least two of the following features also need to be implemented:
  • Additional BRDF models, such as Cook-Torrance, Ward, etc. Each BRDF model may count as a separate feature. 
  • Texture mapping
  • Bump mapping
  • Translational motion blur
  • Fresnel-based Refraction, i.e. glass
  • OBJ Mesh loading and rendering without KD-Tree
  • Interactive camera
  • Integrate an existing stackless KD-Tree library, such as CUKD (https://github.com/unvirtual/cukd)
  • Depth of field
Alternatively, implementing just one of the following features can satisfy the "pick two" feature requirement, since these are correspondingly more difficult problems:
  • Physically based subsurface scattering and transmission
  • Implement and integrate your own stackless KD-Tree from scratch.
  • Displacement mapping
  • Deformational motion blur
Really excited to start off!!!