
PBRT core


Presentation Transcript


  1. PBRT core Digital Image Synthesis Yung-Yu Chuang with slides by Pat Hanrahan

  2. This course • Study of how state-of-the-art ray tracers work

  3. Ray casting

  4. The 5D light field and the rendering equation (Kajiya 1986). (Slide figure: radiance leaving a surface point p.)
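     For reference, the rendering equation in the form commonly used alongside pbrt (a sketch; the slide's own notation may differ slightly):

     \[
       L_o(p, \omega_o) = L_e(p, \omega_o)
         + \int_{\mathcal{S}^2} f(p, \omega_o, \omega_i)\, L_i(p, \omega_i)\, |\cos\theta_i|\, d\omega_i
     \]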

  5. Basic components • Cameras • Ray-object intersection • Light distribution • Visibility • Surface scattering • Recursive ray tracing • Ray propagation (in volume)

  6. pbrt • pbrt (physically-based ray tracing) attempts to simulate the physical interaction between light and matter, based on ray tracing. • Structured using the object-oriented paradigm: abstract base classes are defined for the important entities and their interfaces. • It is easy to extend the system: new types inherit from the appropriate base class and are compiled and linked into the system.

  7. pbrt abstract classes (see source browser)
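     As an illustration of this design (a simplified sketch with stand-in types, not pbrt's actual class declarations), a new shape only needs to derive from the abstract base class and implement its virtual interface:

     // Simplified sketch of pbrt's abstract-base-class design; the names
     // (Ray, BBox, Shape, Disk) mimic pbrt, but the signatures are
     // illustrative, not the real pbrt-v2 interface.
     struct Ray  { float o[3], d[3]; };          // stub types for the sketch
     struct BBox { float pMin[3], pMax[3]; };

     class Shape {                               // abstract base class
     public:
         virtual ~Shape() {}
         virtual BBox ObjectBound() const = 0;   // every shape bounds itself
         virtual bool Intersect(const Ray &ray, float *tHit) const = 0;
     };

     // Extending the system: a new shape derives from Shape, implements the
     // virtual interface, and is simply compiled and linked in.
     class Disk : public Shape {
     public:
         explicit Disk(float radius) : radius(radius) {}
         BBox ObjectBound() const {
             return BBox{{-radius, -radius, 0.f}, {radius, radius, 0.f}};
         }
         bool Intersect(const Ray &ray, float *tHit) const {
             // the ray-disk intersection test would go here
             return false;
         }
     private:
         float radius;
     };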

  8. Phases of execution • main() in renderer/pbrt.cpp

     int main(int argc, char *argv[]) {
       Options options;
       vector<string> filenames;
       <Process command-line arguments>
       pbrtInit(options);
       if (filenames.size() == 0) {
         // Parse scene from standard input
         ParseFile("-");
       } else {
         // Parse scene from input files
         for (int i = 0; i < filenames.size(); i++)
           if (!ParseFile(filenames[i]))
             Error("Couldn't open …", filenames[i]);
       }
       pbrtCleanup();
       return 0;
     }

  9. Example scene

     LookAt 0 10 100   0 -1 0   0 1 0
     Camera "perspective" "float fov" [30]
     PixelFilter "mitchell" "float xwidth" [2] "float ywidth" [2]
     Sampler "bestcandidate"
     Film "image" "string filename" ["test.exr"]
          "integer xresolution" [200] "integer yresolution" [200]
     # this is a meaningless comment
     WorldBegin
     AttributeBegin
       CoordSysTransform "camera"
       LightSource "distant" "point from" [0 0 0] "point to" [0 0 1]
                   "color L" [3 3 3]
     AttributeEnd

     The statements before WorldBegin set rendering options. Each directive follows the pattern: identifier "type" parameter-list, where each parameter is written as "type name" [value].

  10. Example scene

     AttributeBegin
       Rotate 135 1 0 0
       Texture "checks" "color" "checkerboard"
               "float uscale" [8] "float vscale" [8]
               "color tex1" [1 0 0] "color tex2" [0 0 1]
       Material "matte" "texture Kd" "checks"
       Shape "sphere" "float radius" [20]
     AttributeEnd
     WorldEnd

  11. Scene parsing (Appendix B) • core/pbrtlex.ll and core/pbrtparse.yy • After parsing, a scene object is created (core/scene.*). (Slide figure: the aggregate holds the primitives; each primitive pairs a shape with a material.)

     class Scene {
       Primitive *aggregate;
       vector<Light *> lights;
       VolumeRegion *volumeRegion;
       BBox bound;
     };
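     The structure in the slide figure is essentially a composite: the aggregate forwards intersection queries to its primitives, and each primitive pairs a shape with a material. A minimal illustrative sketch (simplified stand-ins, not pbrt's real classes):

     #include <vector>

     struct Ray      { float o[3], d[3]; };
     struct Shape    { /* geometry */ };
     struct Material { /* scattering */ };

     struct Primitive {
         Shape    *shape;
         Material *material;
         bool Intersect(const Ray &ray) const { /* defer to shape */ return false; }
     };

     struct Aggregate {
         std::vector<Primitive> primitives;
         bool Intersect(const Ray &ray) const {
             // a real aggregate would use an acceleration structure here
             for (size_t i = 0; i < primitives.size(); ++i)
                 if (primitives[i].Intersect(ray)) return true;
             return false;
         }
     };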

  12. Rendering • Rendering is handled by the Renderer class.

     class Renderer {
       …
       // given a scene, render an image or a set of measurements
       virtual void Render(Scene *scene) = 0;
       // compute radiance along a ray, for Monte Carlo sampling
       virtual Spectrum Li(Scene *scn, RayDifferential &r,
         Sample *sample, RNG &rng, MemoryArena &arena,
         Intersection *isect, Spectrum *T) const = 0;
       // return transmittance along a ray
       virtual Spectrum Transmittance(Scene *scene,
         RayDifferential &ray, Sample *sample, RNG &rng,
         MemoryArena &arena) const = 0;
     };

     The latter two are usually relayed to the Integrators.

  13. SamplerRenderer

     class SamplerRenderer : public Renderer {
       …
     private:
       // SamplerRenderer Private Data
       // chooses samples on the image plane and for integration
       Sampler *sampler;
       // determines lens parameters (position, orientation,
       // focus, field of view); carries a film
       Camera *camera;
       // calculate the rendering equation
       SurfaceIntegrator *surfaceIntegrator;
       VolumeIntegrator *volumeIntegrator;
     };

  14. The main rendering loop • After the scene and the Renderer are constructed, Renderer::Render() is invoked.

  15. SamplerRenderer::Render()

     void SamplerRenderer::Render(const Scene *scene) {
       …
       // scene-dependent initialization, such as building a photon map
       surfaceIntegrator->Preprocess(scene, camera, this);
       volumeIntegrator->Preprocess(scene, camera, this);
       // the sample structure depends on the types of the integrators
       Sample *sample = new Sample(sampler, surfaceIntegrator,
                                   volumeIntegrator, scene);
       int nPixels = camera->film->xResolution *
                     camera->film->yResolution;
       int nTasks = max(32 * NumSystemCores(), nPixels / (16*16));
       nTasks = RoundUpPow2(nTasks);

     We want enough tasks to keep all cores busy (see the histogram on the next slide): with too few tasks some cores go idle, but tasks also carry overhead, so we do not want too many either. The heuristic creates at least 32 tasks per core, makes each task roughly a 16×16-pixel tile, and rounds the count up to a power of two, which is easier to subdivide.
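     As a worked example, take the 200×200 image from the example scene on, say, a 4-core machine (the core count is an assumption for illustration): nPixels = 40000, so nPixels / (16*16) = 156, while 32 * NumSystemCores() = 128; nTasks = max(128, 156) = 156, and RoundUpPow2(156) = 256 tasks.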

  16. Histogram of running time • The longest-running task took 151 times longer than the shortest one.

  17. SamplerRenderer::Render() (continued)

       vector<Task *> renderTasks;
       for (int i = 0; i < nTasks; ++i)
         // all information the task needs about the renderer must be
         // passed in; the last two arguments are the task id and the
         // total number of tasks
         renderTasks.push_back(new SamplerRendererTask(scene,
           this, camera, reporter, sampler, sample,
           nTasks-1-i, nTasks));
       EnqueueTasks(renderTasks);
       WaitForAllTasks();
       for (int i = 0; i < renderTasks.size(); ++i)
         delete renderTasks[i];
       delete sample;
       camera->film->WriteImage();
     }

  18. SamplerRendererTask::Run • When the task system decides to run a task on a particular processor, SamplerRendererTask::Run() is called.

     void SamplerRendererTask::Run() {
       // decide which part of the image this task is responsible for
       …
       int sampleCount;
       while ((sampleCount = sampler->GetMoreSamples(samples, rng)) > 0) {
         // Generate camera rays and compute radiance

  19. SamplerRendererTask::Run (continued)

         for (int i = 0; i < sampleCount; ++i) {
           // rayWeight accounts for vignetting; the ray
           // differentials are used for antialiasing
           float rayWeight = camera->GenerateRayDifferential(
             samples[i], &rays[i]);
           rays[i].ScaleDifferentials(
             1.f / sqrtf(sampler->samplesPerPixel));
           if (rayWeight > 0.f)
             Ls[i] = rayWeight * renderer->Li(scene, rays[i],
               &samples[i], rng, arena, &isects[i], &Ts[i]);
           else { Ls[i] = 0.f; Ts[i] = 1.f; }
         }
         for (int i = 0; i < sampleCount; ++i)
           camera->film->AddSample(samples[i], Ls[i]);
       }

  20. SamplerRenderer::Li

     Spectrum SamplerRenderer::Li(Scene *scene,
         RayDifferential &ray, Sample *sample, …,
         Intersection *isect, Spectrum *T) {
       Spectrum Li = 0.f;
       if (scene->Intersect(ray, isect))
         Li = surfaceIntegrator->Li(scene, this, ray,
                *isect, sample, rng, arena);
       else {
         // ray that doesn't hit any geometry
         for (int i = 0; i < scene->lights.size(); ++i)
           Li += scene->lights[i]->Le(ray);
       }
       Spectrum Lvi = volumeIntegrator->Li(scene, this, ray,
                        sample, rng, T, arena);
       return *T * Li + Lvi;
     }

  21. Surface integrator’s Li. (Slide figure: the outgoing radiance Lo returned by the surface integrator at the hit point.)

  22. SamplerRenderer::Li. (Slide figure: the surface term Li, the volume term Lvi, and the transmittance T that combine as T*Li + Lvi.)

  23. Whitted model

  24. Whitted integrator • in integrators/whitted.cpp

     class WhittedIntegrator : public SurfaceIntegrator

     Spectrum WhittedIntegrator::Li(Scene *scene,
         Renderer *renderer, RayDifferential &ray,
         Intersection &isect, Sample *sample, RNG &rng,
         MemoryArena &arena) const {
       BSDF *bsdf = isect.GetBSDF(ray, arena);
       // Initialize common variables for Whitted integrator
       const Point &p = bsdf->dgShading.p;
       const Normal &n = bsdf->dgShading.nn;
       Vector wo = -ray.d;

     (Slide figure: geometry at the hit point p with normal n and directions ωo and ωi.)

  25. Whitted integrator (continued)

       // Compute emitted light if ray hits a light source
       L += isect.Le(wo);
       // Add contribution of each light source (direct lighting)
       for (int i = 0; i < scene->lights.size(); ++i) {
         Vector wi;
         float pdf;
         VisibilityTester visibility;
         Spectrum Li = scene->lights[i]->Sample_L(p,
           isect.rayEpsilon, LightSample(rng), ray.time,
           &wi, &pdf, &visibility);
         if (Li.IsBlack() || pdf == 0.f) continue;
         Spectrum f = bsdf->f(wo, wi);
         if (!f.IsBlack() && visibility.Unoccluded(scene))
           L += f * Li * AbsDot(wi, n) *
                visibility.Transmittance(scene, renderer,
                  sample, rng, arena) / pdf;
       }

  26. Whitted integrator (continued)

       if (ray.depth + 1 < maxDepth) {
         // Trace for specular reflection and refraction
         L += SpecularReflect(ray, bsdf, rng, isect, renderer,
                scene, sample, arena);
         L += SpecularTransmit(ray, bsdf, rng, isect, renderer,
                scene, sample, arena);
       }
       return L;
     }
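     Putting slides 24–26 together, the quantity the Whitted integrator estimates can be summarized as follows (a sketch of the estimator implemented above, where V_j, T_j, and p_j are the visibility, transmittance, and light-sampling pdf for light j):

     \[
       L_o(p, \omega_o) \approx L_e(p, \omega_o)
         + \sum_{j} \frac{f(p, \omega_o, \omega_j)\, L_j(p, \omega_j)\, |\cos\theta_j|\, V_j\, T_j}{p_j(\omega_j)}
         + L_{\text{specular reflect}} + L_{\text{specular transmit}}
     \]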

  27. SpecularReflect • a utility function in core/integrator.cpp

     Spectrum SpecularReflect(RayDifferential &ray, BSDF *bsdf,
         RNG &rng, const Intersection &isect, Renderer *renderer,
         Scene *scene, Sample *sample, MemoryArena &arena) {
       Vector wo = -ray.d, wi;
       float pdf;
       const Point &p = bsdf->dgShading.p;
       const Normal &n = bsdf->dgShading.nn;
       Spectrum f = bsdf->Sample_f(wo, &wi, BSDFSample(rng), &pdf,
         BxDFType(BSDF_REFLECTION | BSDF_SPECULAR));

  28. SpecularReflect (continued)

       Spectrum L = 0.f;
       if (pdf > 0.f && !f.IsBlack() && AbsDot(wi, n) != 0.f) {
         <Compute ray differential _rd_ for specular reflection>
         Spectrum Li = renderer->Li(scene, rd, sample, rng, arena);
         L = f * Li * AbsDot(wi, n) / pdf;
       }
       return L;
     }
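     The assignment to L is the standard single-sample Monte Carlo estimator of the scattering integral, stated here for reference:

     \[
       L \approx \frac{f(p, \omega_o, \omega_i)\, L_i(p, \omega_i)\, |\cos\theta_i|}{p(\omega_i)}
     \]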

  29. Code optimization • Two commonly used tips: • Division, square root, and trigonometric functions are among the slowest operations (10–50 times slower than addition or multiplication). Multiply by 1/r instead of dividing by r. • Be cache conscious.
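     A hypothetical example of the reciprocal trick (illustrative code, not from pbrt): hoist the division out of the inner loop and multiply by the precomputed reciprocal instead.

     // Replace repeated division with one reciprocal multiply.
     void NormalizeWeights(float *w, int n, float total) {
         // slow: for (int i = 0; i < n; ++i) w[i] = w[i] / total;
         float invTotal = 1.f / total;      // one divide instead of n
         for (int i = 0; i < n; ++i)
             w[i] *= invTotal;              // multiplies are much cheaper
     }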

  30. Cache-conscious programming • alloca • AllocAligned(), FreeAligned(): make sure that memory is cache-aligned • Use unions and bitfields to reduce size and increase locality • Split data into hot and cold parts (see the sketch below)
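     A minimal sketch of the hot/cold split (illustrative names, not pbrt data structures): the fields touched in the inner loop stay in a small, densely packed struct, and rarely used fields move behind a pointer so that more hot records fit in each cache line.

     #include <string>

     // Rarely accessed ("cold") data lives behind a pointer.
     struct ParticleColdData {
         std::string debugName;
         double creationTime;
     };

     // Frequently accessed ("hot") data stays small and contiguous,
     // so more particles fit in each cache line during the update loop.
     struct Particle {
         float position[3];
         float velocity[3];
         ParticleColdData *cold;    // only touched on the slow path
     };

     void Advance(Particle *particles, int n, float dt) {
         for (int i = 0; i < n; ++i)        // tight loop touches hot data only
             for (int j = 0; j < 3; ++j)
                 particles[i].position[j] += particles[i].velocity[j] * dt;
     }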

  31. Cache-conscious programming • Arena-based allocation allows faster allocation and better locality because of the contiguous addresses. • Blocked 2D array, used for the film (see the sketch below). (Slide figure: a w×h array stored as square blocks rather than in row-major order.)
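     A simplified sketch of blocked 2D array indexing (in the spirit of pbrt's BlockedArray, but the block size, names, and layout here are assumptions, not its actual implementation): elements are stored block by block, so accesses that are nearby in both x and y tend to share cache lines.

     #include <vector>

     // Elements are stored in contiguous 4x4 blocks; blocks themselves are
     // laid out row-major across the array.
     class Blocked2D {
     public:
         Blocked2D(int w, int h)
             : width(w), height(h),
               xBlocks((w + B - 1) / B),
               data(((w + B - 1) / B) * ((h + B - 1) / B) * B * B) {}

         float &operator()(int x, int y) {
             int bx = x / B, oy = y % B;        // block column, offset in block
             int by = y / B, ox = x % B;        // block row, offset in block
             int block = by * xBlocks + bx;     // which block, row-major
             return data[block * B * B + oy * B + ox];
         }

     private:
         static const int B = 4;                // block width (power of two)
         int width, height, xBlocks;
         std::vector<float> data;
     };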
