Got the basic path tracer working. The basic algorithm is:
- Shoot rays from each pixel, jittering the sample position within the pixel.
- Trace the rays into the scene.
- On hitting a diffuse surface, generate a new ray from a cosine-weighted distribution over the hemisphere around the surface normal (see the sampling sketch after this list).
- For a specular surface, reflect the ray.
- For a transparent object, either reflect or refract (only one ray is generated per bounce, chosen with the Russian Roulette technique).
- If the ray hits a light source, multiply the colors accumulated along the path and add the result to the pixel; otherwise, no color is added to the pixel.
- Keep repeating the above procedure until the noise in the image reduces and the result converges.
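For the diffuse bounce, cosine-weighted hemisphere sampling looks roughly like the sketch below. This is only a minimal illustration, not the project's code: the function and variable names are mine, it uses only CUDA's built-in float3, and u1, u2 are assumed to be uniform random numbers in [0, 1) (e.g. from curand).

```cpp
// Minimal sketch of cosine-weighted hemisphere sampling around a unit-length
// surface normal n. u1 and u2 are uniform random numbers in [0, 1).
__device__ float3 cosineSampleHemisphere(float3 n, float u1, float u2)
{
    // Malley's method: sample a unit disk, then project up onto the hemisphere,
    // which gives a cosine-weighted distribution of directions.
    const float PI  = 3.14159265358979f;
    const float r   = sqrtf(u1);
    const float phi = 2.0f * PI * u2;
    const float x   = r * cosf(phi);
    const float y   = r * sinf(phi);
    const float z   = sqrtf(fmaxf(0.0f, 1.0f - u1));  // height above the disk

    // Build an orthonormal basis (t, b, n) around the normal.
    float3 a = (fabsf(n.x) > 0.9f) ? make_float3(0.f, 1.f, 0.f)
                                   : make_float3(1.f, 0.f, 0.f);
    float3 t = make_float3(a.y * n.z - a.z * n.y,     // t = normalize(cross(a, n))
                           a.z * n.x - a.x * n.z,
                           a.x * n.y - a.y * n.x);
    float invLen = rsqrtf(t.x * t.x + t.y * t.y + t.z * t.z);
    t = make_float3(t.x * invLen, t.y * invLen, t.z * invLen);
    float3 b = make_float3(n.y * t.z - n.z * t.y,     // b = cross(n, t)
                           n.z * t.x - n.x * t.z,
                           n.x * t.y - n.y * t.x);

    // Transform the local-space sample into world space.
    return make_float3(t.x * x + b.x * y + n.x * z,
                       t.y * x + b.y * y + n.y * z,
                       t.z * x + b.z * y + n.z * z);
}
```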
Current state:
- Diffuse surfaces are rendering correctly.
- Stream compaction is working on the GPU using thrust. I bundle the rays as in my ray tracer and then call thrust::remove_if to remove rays that are done processing, which greatly reduces the number of rays shot in the next bounce (see the compaction sketch after this list).
- Reflection is supported on objects.
- Refraction is working based on the material's refractive index (using the Fresnel equations and Snell's law; a sketch follows this list).
- A screw-up while using stream compaction wasted a lot of my time; I was about to start on my own implementation when I figured it out.
- Wrote code for the Cook-Torrance BRDF, but I am having some problems using it correctly: objects black out with it. I have to look into that further (a reference sketch follows this list).
- Currently looking into a depth-of-field implementation. I plan to move the screen behind the lens, like a real camera, and let users focus on objects in the scene. Let's see how it goes (a thin-lens sketch also follows this list).
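The stream compaction step with thrust::remove_if looks roughly like this. The ray struct, its fields, and the function names below are hypothetical stand-ins rather than the project's actual types; the point is that the predicate is device-callable and that remove_if returns the new end of the live-ray range.

```cpp
#include <thrust/remove.h>
#include <thrust/execution_policy.h>

// Hypothetical ray bundle for one path; field names are illustrative only.
// (Compiled as part of a .cu file so float3 is available.)
struct PathRay
{
    float3 origin;
    float3 direction;
    float3 throughput;   // color multiplier accumulated along the path so far
    int    pixelIndex;   // where to write the result when the path terminates
    bool   alive;        // false once the path has terminated
};

// Predicate for thrust::remove_if: true for rays that should be removed.
struct IsTerminated
{
    __host__ __device__ bool operator()(const PathRay& r) const
    {
        return !r.alive;
    }
};

// Compact the device-side ray pool in place and return the new ray count, so
// the next bounce only launches threads for rays that are still alive.
int compactRays(PathRay* d_rays, int numRays)
{
    PathRay* newEnd = thrust::remove_if(thrust::device,
                                        d_rays, d_rays + numRays,
                                        IsTerminated());
    return static_cast<int>(newEnd - d_rays);
}
```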
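For the transparent materials, one way to put Snell's law, a Fresnel term, and the Russian Roulette choice together is sketched below. It uses Schlick's approximation instead of the full Fresnel equations, and the function and helper names are mine, not the project's; u is a uniform random number in [0, 1).

```cpp
// Small vector helpers used by the sketches below.
__device__ float  dot3(float3 a, float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
__device__ float3 scale3(float3 v, float s) { return make_float3(v.x*s, v.y*s, v.z*s); }
__device__ float3 add3(float3 a, float3 b)  { return make_float3(a.x+b.x, a.y+b.y, a.z+b.z); }

// Returns the new ray direction for a dielectric hit. Assumes I and N are unit
// vectors with dot(I, N) < 0 (N faces the incoming ray) and
// etaRatio = n_incident / n_transmitted.
__device__ float3 reflectOrRefract(float3 I, float3 N, float etaRatio, float u)
{
    float cosI  = -dot3(I, N);
    float sin2T = etaRatio * etaRatio * (1.0f - cosI * cosI);   // Snell's law

    // Perfect mirror reflection of I about N.
    float3 reflected = add3(I, scale3(N, 2.0f * cosI));

    if (sin2T > 1.0f)                 // total internal reflection
        return reflected;

    float cosT = sqrtf(1.0f - sin2T);

    // Schlick's approximation to the Fresnel reflectance.
    float r0 = (1.0f - etaRatio) / (1.0f + etaRatio);
    r0 *= r0;
    float c  = (etaRatio <= 1.0f) ? (1.0f - cosI) : (1.0f - cosT);
    float fresnel = r0 + (1.0f - r0) * c * c * c * c * c;

    // Russian Roulette: reflect with probability 'fresnel', refract otherwise.
    // Because the survival probability equals the Fresnel weight, the path
    // throughput needs no extra scale factor here.
    if (u < fresnel)
        return reflected;

    // Refracted direction from the vector form of Snell's law.
    return add3(scale3(I, etaRatio), scale3(N, etaRatio * cosI - cosT));
}
```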
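For reference on the Cook-Torrance issue, a standard form of the specular term is sketched below (Beckmann distribution, Schlick Fresnel, and the classic geometric attenuation term, with the common 4·(N·L)·(N·V) normalization; some formulations use pi instead of 4). It is not the project's code and reuses the dot3 helper from the refraction sketch; the dot products are clamped away from zero, which is one of the usual suspects when the output goes black.

```cpp
// Reference sketch of a Cook-Torrance specular term. n = normal, v = direction
// toward the viewer, l = direction toward the light (all unit length);
// roughness is the Beckmann slope parameter and f0 the reflectance at normal
// incidence. Uses dot3 from the refraction sketch above.
__device__ float cookTorranceSpecular(float3 n, float3 v, float3 l,
                                      float roughness, float f0)
{
    // Half vector between view and light directions.
    float3 h = make_float3(v.x + l.x, v.y + l.y, v.z + l.z);
    float invLen = rsqrtf(h.x*h.x + h.y*h.y + h.z*h.z);
    h = make_float3(h.x*invLen, h.y*invLen, h.z*invLen);

    // Clamp the dot products so the denominator never hits zero.
    float NdotL = fmaxf(dot3(n, l), 1e-4f);
    float NdotV = fmaxf(dot3(n, v), 1e-4f);
    float NdotH = fmaxf(dot3(n, h), 1e-4f);
    float VdotH = fmaxf(dot3(v, h), 1e-4f);

    // Beckmann microfacet distribution D.
    float m2     = roughness * roughness;
    float NdotH2 = NdotH * NdotH;
    float D = expf((NdotH2 - 1.0f) / (m2 * NdotH2))
            / (3.14159265f * m2 * NdotH2 * NdotH2);

    // Classic geometric attenuation term G.
    float G = fminf(1.0f, fminf(2.0f * NdotH * NdotV / VdotH,
                                2.0f * NdotH * NdotL / VdotH));

    // Schlick approximation to the Fresnel term F.
    float F = f0 + (1.0f - f0) * powf(1.0f - VdotH, 5.0f);

    return (D * G * F) / (4.0f * NdotL * NdotV);
}
```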
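For depth of field, a common alternative to the screen-behind-the-lens setup is the thin-lens camera model: jitter the ray origin over a lens aperture and re-aim it at the focus distance. The sketch below shows only that standard approach, with hypothetical parameter names; camRight and camUp are the camera's basis vectors, pinholeDir is the unit direction a pinhole camera would use for this sample, and u1, u2 are uniform random numbers in [0, 1).

```cpp
// Thin-lens depth-of-field sketch: geometry near the focus distance stays
// sharp, everything else blurs with the aperture size. The focus distance is
// measured along the pinhole direction here, a common simplification.
__device__ void thinLensRay(float3 camPos, float3 pinholeDir,
                            float3 camRight, float3 camUp,
                            float apertureRadius, float focalDistance,
                            float u1, float u2,
                            float3* rayOrigin, float3* rayDir)
{
    // Point that the un-jittered pinhole ray would hit at the focus distance.
    float3 focusPoint = make_float3(camPos.x + pinholeDir.x * focalDistance,
                                    camPos.y + pinholeDir.y * focalDistance,
                                    camPos.z + pinholeDir.z * focalDistance);

    // Sample a point on the lens aperture (uniform disk via polar coordinates).
    float r     = apertureRadius * sqrtf(u1);
    float theta = 2.0f * 3.14159265f * u2;
    float lx = r * cosf(theta);
    float ly = r * sinf(theta);

    // Offset the ray origin on the lens and re-aim it at the focus point.
    float3 origin = make_float3(camPos.x + camRight.x * lx + camUp.x * ly,
                                camPos.y + camRight.y * lx + camUp.y * ly,
                                camPos.z + camRight.z * lx + camUp.z * ly);
    float3 dir = make_float3(focusPoint.x - origin.x,
                             focusPoint.y - origin.y,
                             focusPoint.z - origin.z);
    float invLen = rsqrtf(dir.x*dir.x + dir.y*dir.y + dir.z*dir.z);

    *rayOrigin = origin;
    *rayDir    = make_float3(dir.x * invLen, dir.y * invLen, dir.z * invLen);
}
```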
Next steps:
- Implement an OBJ loader to load polygonal models.
- Solve the Cook-Torrance BRDF issue.
- Maybe implement more BSDFs.
- Look into sub-surface scattering.
The above image clearly shows color bleeding, soft shadows, anti-aliasing, and caustics, which we get for free with path tracing. It was rendered with 200 iterations. Currently I get around 6-7 frames per second on my 650M card.