(image: four cornell box variants)

PERCEPTUUM - details

a global illumination renderer development project

basic features

  • c++ on win32
  • renderer
  • global illumination by monte carlo path tracing and photon mapping
  • progressive refinement
  • temporal rendering (time-varying scenes)
  • frame-coherent sampling
  • triangle mesh models
  • instancing hierarchy
  • multi-parameter image texturing
  • brdf light interaction
  • cie XYZ light
  • homogeneous medium fogging approximation
  • tone-mapped png (plain/srgb) image output
  • ward-realpixels png image output

rendering sequence

  1. read in options, scene, sample control ordinates and accumulation; construct illumination and rasterizer
  2. update scene for time point
  3. make illumination from scene and sample control
  4. rasterize scene to image accumulation with sample control and illumination
  5. analyse image accumulation
  6. if the image is not complete and the user hasn't interrupted, repeat from step 2
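
in code, the sequence is a simple loop. a minimal c++ sketch, where every name is a hypothetical stand-in rather than the actual interface:

  // minimal sketch of the rendering sequence; every name here is a
  // hypothetical stand-in, not the actual code.
  void render( const Options& options )
  {
     Scene         scene  ( options );                    // 1. read in
     SampleControl samples( options );
     Accumulation  accum  ( options );
     Rasterizer    raster ( options );

     for( int frame = accum.getFrameCount();  ;  ++frame )
     {
        scene.update( samples.getTime( frame ) );         // 2. time point
        Illumination illum( scene, samples, frame );      // 3. illumination
        raster.render( scene, samples, illum, accum );    // 4. rasterize
        accum.analyse();                                  // 5. analyse

        if( accum.isComplete() || isUserInterrupted() )   // 6. loop test
        {
           break;
        }
     }
  }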

progressive refinement

a rendering is split into frames. each frame is a limited sampling of the whole, and a series of frames is a series of refinements. the mean of any set of frames - or of the series so far - is a candidate image.

image accumulation

the image is continuously accumulated from frames to form a running mean. along with the mean are stored other statistics for each pixel, presently count and variance. from these statistics some indication can be gained of where samples are needed more than elsewhere, and perhaps of whether the image has converged enough to be considered complete. analysis is done after each frame.

since each pixel is handled separately, a subset of the pixels can be sampled and accumulated each frame.
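
a self-contained sketch of one way to hold such per-pixel statistics - Welford's online update, which maintains count, mean and variance incrementally (an illustration; the actual statistics code may differ):

  #include <cstdint>

  // per-pixel running statistics, one instance per pixel (per channel,
  // or on luminance). welford's online update: count, mean and variance
  // maintained incrementally as each frame's sample arrives.
  struct PixelStats
  {
     double       mean  = 0.0;
     double       m2    = 0.0;   // sum of squared deviations from mean
     std::int64_t count = 0;

     void add( double sample )
     {
        ++count;
        const double delta = sample - mean;
        mean += delta / double(count);
        m2   += delta * (sample - mean);
     }

     double variance() const
     {
        return (count > 1) ? (m2 / double(count - 1)) : 0.0;
     }
  };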

sampling control

all stochastic generators are centralized for ease of persistence and determinism, enabling suspension, continuation and repeatability of renders.

the frame number provides the index for most of the sampling control, the quasi/pseudo/random sequences advancing in step with the frame. this frame-coherent sampling eliminates image-wide monte carlo style point noise in exchange for localised strobe-like artifacts, which generally looks better.

main sampling dimensions are: scene time, light emission positions and directions, ray/photon interaction direction and absorption, subpixel ray start position, lens position.

the top-level, frame-stepped sequences need to be quasi-random, like the halton sequence, or pseudo-random, like standard generators, since they must gradually fill their range over an unknown sequence length - like an incrementally refined stratification.
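
the halton sequence fits the persistence requirement too, since each element is a pure function of its index - the radical inverse of the index in a prime base - so a stored frame number is enough to resume it exactly. a self-contained sketch:

  // one halton element, by index: the radical inverse of the index in a
  // (prime) base. a pure function of the index, so a suspended render
  // can resume from just the stored frame number.
  double halton( int index, int base )
  {
     double fraction = 1.0;
     double result   = 0.0;
     while( index > 0 )
     {
        fraction /= double(base);
        result   += fraction * double(index % base);
        index    /= base;
     }
     return result;
  }

  // base 2 yields: halton(1,2) = 0.5, halton(2,2) = 0.25,
  // halton(3,2) = 0.75, halton(4,2) = 0.125, ...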

scene/objects/modelling

the overall scene contains an objects collection, and a materials collection.

strangely, the objects collection is also a tree, since you can navigate upwards in a tree-like way. it is the sets of transforms that make it tree-like: the objects pointed at are effectively singletons of their type. the objects are 'types'; what makes them 'instances' is the transform used to address them. rendering is not of objects in the scene but of the scene tree - render the root (the scene), which renders its sub-objects, and so on recursively. to index the scene, though, just iterate through the collection linearly, indexing each object on its own, non-recursively: for indexing the transforms are irrelevant, and the sub-objects are just pointers to 'types' which appear in the collection once.
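
the two traversals, sketched with purely hypothetical interfaces:

  // rendering: recursive, down the instance tree, composing transforms
  void renderObject( const ObjectRenderable& object, const Transform& outer )
  {
     object.render( outer );                        // this node
     for( const Part& part : object.getParts() )    // then each sub-part,
     {                                              // through its own
        renderObject( part.getObject(),             // outward transform
                      compose( outer, part.getTransformOut() ) );
     }
  }

  // indexing: linear, non-recursive - transforms irrelevant, each
  // 'type' appears in the collection exactly once
  void indexScene( const std::vector<ObjectRenderable*>& objects,
                   Index& index )
  {
     for( ObjectRenderable* object : objects )
     {
        object->addToIndex( index );
     }
  }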

objects are principally defined by the ability to: intersect a ray, give qualities at an intersection point, provide a mesh (for projective rendering), and give an emission point.

object types:

  • ObjectRenderable
    • bounding box - axis aligned
    • overall emittance - precalculated
    • overall specularity - precalculated
  • ObjectMesh (inherits ObjectRenderable)
    • a terminal object with no accessible subobjects, like a box, sphere or torus.
    • contains a collection of vertices, and a collection of triangles whose corners are indices into the vertex collection. the mesh must form a single, non-intersecting, closed surface, made of triangle faces.
    • contains a material index into the scene materials collection.
  • ObjectInstance (inherits ObjectRenderable)
    • a collection of sub-parts, each having (or defined algorithmically): an index/pointer to another ObjectRenderable, and a pair of opposing transforms
    • transform out to project, transform ray in to intersect.
  • ObjectInstanceMoving (inherits ObjectInstance)
    • adds time dependent 'drivers' for the transforms in its base.
    • currently works by linearly interpolating start/end positions and rotations.
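
in skeletal c++ the hierarchy might look as below; the members shown are assumptions read off the descriptions above, not the real declarations:

  #include <vector>

  struct BoundingBox { float min[3], max[3]; };   // axis aligned
  struct Transform   { float matrix[16];      };

  class ObjectRenderable
  {
  public:
     virtual ~ObjectRenderable() {}
     BoundingBox bound;         // axis aligned, precalculated
     float       emittance;     // overall, precalculated
     float       specularity;   // overall, precalculated
  };

  class ObjectMesh : public ObjectRenderable
  {
  public:
     struct Vertex   { float position[3]; float textureUV[2]; };
     struct Triangle { int   corner[3]; };   // indices into vertices
     std::vector<Vertex>   vertices;
     std::vector<Triangle> triangles;
     int                   materialIndex;    // into the scene materials
  };

  class ObjectInstance : public ObjectRenderable
  {
  public:
     struct Part
     {
        const ObjectRenderable* object;    // shared 'type', not owned
        Transform               outward;   // transform out to project
        Transform               inward;    // transform ray in to intersect
     };
     std::vector<Part> parts;
  };

  class ObjectInstanceMoving : public ObjectInstance
  {
  public:
     // time-dependent 'drivers', one per part: currently linear
     // interpolation between start and end position/rotation
     struct Driver { Transform start, end; };
     std::vector<Driver> drivers;
  };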

for movement, instance transforms can be time-parameterised (perhaps defined at a finite resolution, like cubic or linear splines - analogous to the discrete triangles in a mesh - so that the bounds can be determinate). changing shape is just different sub-objects moved by different transforms, or concrete objects altering their internal representation.

objects must be single closed surfaces, topologically like a sphere or higher genus - otherwise medium tracking won't work.

translucent objects ought not overlap - it will cope, but won't really be physically sensible. the eyepoint and lightsources ought not be inside any objects - it will cope, but won't really work properly.

texturing is handled by the materials. each mesh object has an index/pointer to a material in the scene collection, and its triangles have texture xy coords at each corner. a material can supply surface qualities for any texture xy coord query, by looking in its collection of image texture maps, one for each quality parameter. quality parameters are general, brdf-model-applicable characteristics, based on the ward brdf: diffuse and specular reflectivity, x and y roughness, transparency, emitted, tangent, normal.
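
a sketch of the quality query, assuming one image map per parameter, all read at the same texture coordinate (names and types are illustrative, not the actual code):

  // SurfaceQualities and ImageMap are assumed defined elsewhere
  class Material
  {
  public:
     SurfaceQualities getQualities( float u, float v ) const
     {
        SurfaceQualities q;
        q.reflectivityDiffuse  = reflectivityDiffuseMap_ .at( u, v );
        q.reflectivitySpecular = reflectivitySpecularMap_.at( u, v );
        q.roughnessX           = roughnessXMap_.at( u, v );
        // ... likewise roughness y, transparency, emitted, tangent
        // and normal, each from its own map
        return q;
     }

  private:
     ImageMap reflectivityDiffuseMap_, reflectivitySpecularMap_,
              roughnessXMap_;   // ... one map per quality parameter
  };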

scene indexing

not done yet. presently, tracing is helped by the instance hierarchy bounds.

light interaction

brdf form with stochastic distribution function

transmission faked by using the brdf on both sides

implementations: perfect mirror specular, perfect uniform diffuse, normalized phong, ward
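
as one concrete instance of the 'stochastic distribution function' form, here is the standard cosine-weighted direction sampling for the perfect uniform diffuse brdf (a textbook construction, not necessarily the code used here):

  #include <cmath>

  // perfect uniform diffuse: sample an outgoing direction with density
  // proportional to cos(theta) about the +z surface normal, from two
  // uniform random numbers u1, u2 in [0,1).
  void sampleDiffuseDirection( double u1, double u2, double direction[3] )
  {
     const double r   = std::sqrt( u1 );          // radius on unit disk
     const double phi = 6.283185307179586 * u2;   // 2 pi * u2
     direction[0] = r * std::cos( phi );
     direction[1] = r * std::sin( phi );
     direction[2] = std::sqrt( 1.0 - u1 );        // = cos(theta)
  }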

general tests and visualizations

illumination

direct photon mapping with separate direct lighting:

  • emitter paths have their emit position chosen stochastically by power, and propagate power entirely stochastically (no scaling); no emission is added at any bounces (all emissive energy enters at the start of the paths). photons are stored in the photon map for 2nd and later surface interactions after emission. the path is ended by russian roulette (sketched after this list).
  • direct lighting positions are chosen stochastically by power.
  • eye paths are partly stochastic, with multiplicative cos and reflectivity. specular interaction propagates the path, possibly stopped by a threshold returning zero. diffuse interaction joins the path to illumination, which is direct lighting from the chosen position plus a local gather from the photon map through the surface brdf.
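
the two termination/weighting schemes mentioned in the list, in minimal form (sketches of the standard techniques, not the exact code):

  // russian roulette for emitter paths: power is never scaled ('no
  // scaling' above); instead the photon survives a bounce with
  // probability equal to the surface reflectivity, which keeps the
  // expected propagated power correct.
  bool photonSurvivesBounce( double reflectivity, double uniformRandom )
  {
     return uniformRandom < reflectivity;
  }

  // eye paths instead carry a multiplicative weight of cos and
  // reflectivity; a specular path ends when the weight falls below a
  // threshold, returning zero.
  bool eyePathContinues( double& pathWeight, double cosTheta,
                         double reflectivity, double threshold )
  {
     pathWeight *= cosTheta * reflectivity;
     return pathWeight >= threshold;
  }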

rationale:

  • good sharp shadows where they're usually produced - by direct lighting
  • good soft shadows where they're usually produced - by indirect lighting
  • faster running
  • simpler code
  • but: fails to keep sharp shadows produced by direct lighting through glass or mirror... (hack in a special case to allow shadow rays to see through flat glass - covers most occurring failures)
  • alternatively, indirect photon mapping (final gather at each join):
    • more robust
    • handles all illumination equally
    • keeps sharp shadows produced by indirect lighting, but still not through glass or mirror...
    • needs fewer photons
    • but: slower - has extra photon map local gathers and numerical distribution generation for each pixel

the illumination is calculated for the scene each frame, and is then queryable at any point in the scene for that frame.

emitter path starts are chosen by importance sampling by wattage of the emitting objects in the scene. this is done hierarchically, at present down to the triangle level, though it allows going down to the texel level. there's no distinct lightsource object type and all objects can be emissive, but a threshold for classifying objects as emitters or not is a rendering setting.

the energy put into the start of each path is the total scene emission wattage divided by the number of paths. total scene emission, in watts, equals the mean radiant exitance times area of each textured triangle, summed over all triangles.
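
a flat, self-contained sketch of that wattage-proportional choice and the per-path power (the actual selection is hierarchical, as noted above):

  #include <vector>
  #include <algorithm>

  // choose an emitter triangle with probability proportional to its
  // wattage, by inverting a cumulative wattage table with a uniform
  // random number in [0,1).
  int chooseEmitterTriangle( const std::vector<double>& triangleWatts,
                             double uniformRandom, double& totalWatts )
  {
     // (in practice, built once per frame, not per path)
     std::vector<double> cumulative;
     cumulative.reserve( triangleWatts.size() );
     double sum = 0.0;
     for( double watts : triangleWatts )
     {
        sum += watts;
        cumulative.push_back( sum );
     }
     totalWatts = sum;

     return int( std::lower_bound( cumulative.begin(), cumulative.end(),
                                   uniformRandom * sum ) -
                 cumulative.begin() );
  }

  // each path then starts with power: totalWatts / numberOfPaths.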

rasterization

a projective render replaces the initial eye ray tracing step.

subpixel position and lens position determine projection

trace pass: if a surface point is above the shininess threshold for the frame, then instead of being illuminated it's traced into. tracing continues until russian roulette absorption, or threshold absorption, or a surface hit below the shininess threshold. this last path node, furthest from the eye, is illuminated, and the light is carried back along the path as radiance, with any emission from hit surfaces added along the way.
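
in outline the trace pass might look as below; every type and call is an assumed stand-in for the real interfaces:

  Vector3 tracePath( Ray ray, const Scene& scene, const Illumination& illum,
                     SampleControl& samples, float shininessThreshold )
  {
     Vector3 radiance( 0.0f );
     Vector3 weight  ( 1.0f );

     for( ;; )
     {
        Intersection hit;
        if( !scene.intersect( ray, hit ) )
        {
           break;
        }

        radiance += weight * hit.getEmission();   // emission added on the way

        if( hit.getSpecularity() < shininessThreshold )
        {
           radiance += weight * illum.at( hit );  // illuminate the last node
           break;
        }

        Vector3 nextDirection;
        if( !hit.getBrdf().sample( ray, hit, samples, nextDirection ) )
        {
           break;                                 // roulette or threshold
        }                                         // absorption

        weight *= hit.getReflectivity();
        ray     = Ray( hit.getPosition(), nextDirection );
     }

     return radiance;
  }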

illumination interpolation

2004-08-22