This project implements stereoscopic 3D rendering using the anaglyph technique. The scene is ray traced from two camera positions, producing two distinct images that are filtered into different color channels and superimposed. Matching colored lenses then block one of the images for each eye, so the brain fuses the pair and perceives depth. The code defines the `RT3DCamera` class, which computes the two eye positions from the geometry in the scene, and the rendering path adds per-channel pixel color accumulation and filter settings for popular anaglyph schemes such as Red-Cyan and Green-Magenta.
3D Anaglyph Implementation Demo
Joshua Smith and Garrick Solberg
CSS 552 – Topics in Rendering
Project Goal
• Implement stereoscopic “3D” via anaglyph rendering
• Depth is achieved by ray tracing from two camera positions; each position records different color channels, resulting in two images superimposed upon each other
• The brain is tricked into seeing one image and perceiving depth via colored filters that block one of the two images for each eye (a minimal compositing sketch follows below)
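The channel compositing idea can be sketched outside the ray tracer: take a shaded color for each eye, multiply each by its lens filter, and sum. The snippet below is only an illustration and is not part of the course framework; the sample colors are made up, Vector3 comes from System.Numerics, and the filter values match the Red-Cyan scheme used later in SetFilters.

    using System;
    using System.Numerics;

    class AnaglyphCompositeDemo
    {
        static void Main()
        {
            // Hypothetical shaded colors seen from the left and right eye positions.
            Vector3 leftSample  = new Vector3(0.8f, 0.4f, 0.2f);
            Vector3 rightSample = new Vector3(0.7f, 0.5f, 0.3f);

            // Red-Cyan filters: the left eye keeps red, the right eye keeps green + blue.
            Vector3 leftLens  = new Vector3(1, 0, 0);
            Vector3 rightLens = new Vector3(0, 1, 1);

            // Component-wise multiply, then add: each channel of the final pixel
            // comes from exactly one eye, so the glasses can separate them again.
            Vector3 pixel = leftSample * leftLens + rightSample * rightLens;

            Console.WriteLine(pixel); // <0.8, 0.5, 0.3>
        }
    }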
RT3DCamera

public class RT3DCamera : RTCamera
{
    protected Vector3 leftCamera, rightCamera;

    public RT3DCamera(CommandFileParser parser, SceneDatabase sceneDB) : base(parser)
    {
        computeCameraPositions(sceneDB);
    }

    public void computeCameraPositions(SceneDatabase sceneDB)
    {
        …
        for (int i = 0; i < sceneDB.GetNumGeom(); i++)
        {
            RTGeometry g = sceneDB.GetGeom(i);
            maxDist = (mEye - g.Max).Length();
            minDist = (mEye - g.Min).Length();
            if (maxDist < minDist)
                curDist = maxDist;
            else
                curDist = minDist;
            if (curDist < shortestDist)
                shortestDist = curDist;
        } // end for

        float eyeSeparation = shortestDist / 30;
        leftCamera = mEye + mSideVec * eyeSeparation / 2;
        rightCamera = mEye - mSideVec * eyeSeparation / 2;
    }
} // end class
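The eye separation heuristic uses 1/30 of the distance from the eye to the nearest geometry bound, so scenes with close geometry get a narrow baseline and distant scenes a wider one. A standalone sketch of the same offset math, using System.Numerics.Vector3 in place of the framework's vector type (an assumption), might look like:

    using System.Numerics;

    static class EyeSeparation
    {
        // Offsets an eye position along the camera's side vector.
        // 'shortestDist' is assumed to be the distance to the nearest geometry bound.
        public static (Vector3 left, Vector3 right) Split(Vector3 eye, Vector3 sideVec, float shortestDist)
        {
            float eyeSeparation = shortestDist / 30f;   // heuristic from the slide above
            Vector3 half = sideVec * (eyeSeparation / 2f);
            return (eye + half, eye - half);            // left eye shifted one way, right eye the other
        }
    }

For example, with shortestDist = 6.0 and a unit-length side vector, eyeSeparation = 0.2, so each eye sits 0.1 units to either side of mEye.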
RTCore_Compute : ComputeImage

…
for (int i = 0; i < mImagespec.NumSamplesPerPixel; i++)
{
    …
    if (!mAnaglyph)
    {
        computePixelColor(r, ref pixelColor, SINGLE);
    }
    else
    {
        Ray left = new Ray(mCamera.LeftEyePosition, pixelPos);
        computePixelColor(left, ref pixelColor, LEFT);
        Ray right = new Ray(mCamera.RightEyePosition, pixelPos);
        computePixelColor(right, ref pixelColor, RIGHT);
    }
    …
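One design consequence visible in this loop: when mAnaglyph is set, every sample spawns two primary rays, one from each eye position toward the same pixel position, so the primary-ray cost roughly doubles relative to the single-camera path, and both results are accumulated into the same pixelColor with their respective lens filters.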
RTCore_Compute : ComputePixelColor

private void ComputePixelColor(Ray r, ref Vector3 pixelColor, int channel)
{
    IntersectionRecord rec = new IntersectionRecord();

    // what can we see?
    ComputeVisibility(r, rec, RTCore.kInvalidIndex);

    Vector3 sampleColor = mBgColor;

    // what color should it be?
    if (rec.GeomIndex != RTCore.kInvalidIndex)
        sampleColor = ComputeShading(rec, 0);

    if (channel == SINGLE)
        pixelColor += sampleColor;
    else if (channel == LEFT)
        pixelColor += sampleColor * leftLens;
    else if (channel == RIGHT)
        pixelColor += sampleColor * rightLens;
}
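Because each lens vector zeroes the channels owned by the other eye, the LEFT and RIGHT contributions accumulate into disjoint channels of pixelColor; that separation is what allows the colored glasses to deliver a different image to each eye.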
RTCore_Compute : SetFilters

private void SetFilters()
{
    if (mAnaglyphList == Anaglyph.No3D)
    {
        leftLens = new Vector3(1, 1, 1);
        rightLens = leftLens;
        mAnaglyph = false;
    }
    else
    {
        if (mAnaglyphList == Anaglyph.RedGreen)
        {
            leftLens = new Vector3(1, 0, 0);
            rightLens = new Vector3(0, 1, 0);
        }
        else if (mAnaglyphList == Anaglyph.RedCyan)
        {
            leftLens = new Vector3(1, 0, 0);
            rightLens = new Vector3(0, 1, 1);
        }
        else if (mAnaglyphList == Anaglyph.GreenMagenta)
        {
            leftLens = new Vector3(0, 1, 0);
            rightLens = new Vector3(1, 0, 1);
        }
        …
        mAnaglyph = true;
    }
}
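The same mapping could also be expressed as a small lookup that returns the lens pair for a given scheme. The sketch below is an illustrative alternative, not the project's code: the Anaglyph enum and filter values mirror the ones used above, and System.Numerics.Vector3 again stands in for the framework's vector type.

    using System.Numerics;

    enum Anaglyph { No3D, RedGreen, RedCyan, GreenMagenta }

    static class LensFilters
    {
        // Returns the (left, right) color filters for a given anaglyph scheme.
        public static (Vector3 left, Vector3 right) For(Anaglyph mode) => mode switch
        {
            Anaglyph.RedGreen     => (new Vector3(1, 0, 0), new Vector3(0, 1, 0)),
            Anaglyph.RedCyan      => (new Vector3(1, 0, 0), new Vector3(0, 1, 1)),
            Anaglyph.GreenMagenta => (new Vector3(0, 1, 0), new Vector3(1, 0, 1)),
            _                     => (new Vector3(1, 1, 1), new Vector3(1, 1, 1)), // No3D: pass-through
        };
    }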