Tuesday, December 11, 2007

Top-down tree & lightcut cost

Over the last 2 weeks I:
  • Implemented a top-down light tree builder & experimented with some modifications to the heuristic used
  • Estimated the cost of several parts of the lightcuts algorithm in a few scenes
The top-down tree building algorithm I implemented is a simple greedy approach (driven by the heuristic), similar to the ones used to build ray intersection acceleration structures with the SAH. Implementing this efficiently for a significant number of lights (e.g. 100k) poses some problems for the cone angle term in the heuristic used by the authors of the Lightcuts paper. Currently I calculate this angle for all split candidates in a single sweep, which obviously results in suboptimal angles. Another restriction of the current top-down approach is that the lights are always split into 2 spatially non-overlapping groups. Most of the time this poses no problems and has the nice side effect of producing clusters whose lights fill their bounding volume a lot better than the clusters formed by the bottom-up approach.
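As a rough illustration, a minimal C++ sketch of one such greedy split step could look like the code below. It sorts the lights along one axis and scores every candidate split position with a (heavily simplified) cluster cost; all names (Light, Bounds, clusterCost, findBestSplit) and the cost placeholder are mine for illustration, not the actual implementation, and the real heuristic also includes intensity and cone angle terms.

#include <algorithm>
#include <limits>
#include <vector>

struct Light { float pos[3]; /* intensity, orientation, ... */ };

struct Bounds {
    float lo[3] = {  1e30f,  1e30f,  1e30f };
    float hi[3] = { -1e30f, -1e30f, -1e30f };
    void expand(const Light& l) {
        for (int i = 0; i < 3; ++i) {
            lo[i] = std::min(lo[i], l.pos[i]);
            hi[i] = std::max(hi[i], l.pos[i]);
        }
    }
};

// Simplified stand-in for the cluster cost: squared bounding box diagonal times
// light count (the real metric also weighs intensity and the cone angle).
float clusterCost(const Bounds& b, int numLights) {
    float d2 = 0.f;
    for (int i = 0; i < 3; ++i) {
        float e = b.hi[i] - b.lo[i];
        d2 += e * e;
    }
    return d2 * numLights;
}

// Greedy split of the light range [begin, end) along one axis: sort the lights,
// cache the cost of every right-hand cluster in one backward sweep, then sweep
// forward over the left-hand clusters to find the cheapest split position.
int findBestSplit(std::vector<Light>& lights, int begin, int end, int axis) {
    std::sort(lights.begin() + begin, lights.begin() + end,
              [axis](const Light& a, const Light& b) { return a.pos[axis] < b.pos[axis]; });

    int n = end - begin;
    std::vector<float> rightCost(n, 0.f);
    Bounds right;
    for (int i = n - 1; i > 0; --i) {
        right.expand(lights[begin + i]);
        rightCost[i] = clusterCost(right, n - i);
    }

    float bestCost = std::numeric_limits<float>::max();
    int bestSplit = begin + n / 2;               // fall back to a median split
    Bounds left;
    for (int i = 1; i < n; ++i) {
        left.expand(lights[begin + i - 1]);
        float cost = clusterCost(left, i) + rightCost[i];
        if (cost < bestCost) { bestCost = cost; bestSplit = begin + i; }
    }
    return bestSplit;
}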
After visualizing some of the trees resulting from the top-down and bottom-up approaches with the heuristic as proposed in the Lightcuts paper, it was clear that the clusters formed for indirect lights are suboptimal: they tend to extend into empty space instead of following the geometry closely. To suppress this I added the size of the bounding box in the direction of the normal as an extra term to the heuristic. I also changed the coefficient of the angle term and the new term from the squared diagonal length of the entire scene to the squared diagonal length of the current box being split. Using this heuristic in the conference scene with a glossy floor resulted in a modest improvement in cut size (15%) for a similar tree build time.
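For concreteness, this is how the modified metric could look next to the original one; it is only a sketch under my own assumptions (in particular how the normal-extent term is normalized and combined), not the actual code:

struct Cluster {
    float intensity;     // summed intensity of the lights in the cluster
    float diag2;         // squared diagonal of the cluster bounding box
    float cosHalfAngle;  // cosine of the bounding cone half angle
    float normalExtent;  // bounding box extent along the representative normal,
                         // normalized by the box diagonal (assumption)
};

// Original paper metric: I * (diag^2 + sceneDiag2 * (1 - cos)^2),
// with sceneDiag2 the squared diagonal of the whole scene.
float paperMetric(const Cluster& c, float sceneDiag2) {
    float a = 1.f - c.cosHalfAngle;
    return c.intensity * (c.diag2 + sceneDiag2 * a * a);
}

// Modified metric: the coefficient becomes the squared diagonal of the box
// currently being split, and the (normalized) extent along the normal is
// added as an extra term next to the angle term.
float modifiedMetric(const Cluster& c, float splitBoxDiag2) {
    float a = 1.f - c.cosHalfAngle;
    return c.intensity * (c.diag2 + splitBoxDiag2 * (a * a + c.normalExtent));
}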

Following a suggestion in last week's meeting, I also tried to estimate the cost of several parts of the lightcuts algorithm. The total render time can be split as:
Ttot = Teval + Tbound + Titer

with
  • Ttot: the total render time

  • Teval: the time required for evaluating the clusters (i.e. shadow ray, BSDF & geometric term)

  • Tbound: the time for evaluating the upper bounds for the clusters

  • Titer: the remaining time (among other things, pushing/popping on the priority queue)
To estimate these I rendered the same scene several times (the individual terms then follow by subtraction, as worked out after the list):
  • lightcuts (Ttot)

  • lightcuts + an extra evaluation of the resulting cut (Ttot + Teval)

  • lightcuts + an extra evaluation of all bounds which also requires storing these in a list during cut calculation (Ttot + Tbound)

  • lightcuts + resulting cut + bounds (Ttot + Teval + Tbound)
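With the four measured render times denoted T1 to T4 in the order listed above (so T1 = Ttot, T2 = Ttot + Teval, T3 = Ttot + Tbound and T4 = Ttot + Teval + Tbound), the components follow by simple subtraction:

Teval = T2 - T1
Tbound = T3 - T1
Titer = T1 - (T2 - T1) - (T3 - T1)

and T4 - T1 = Teval + Tbound serves as a consistency check on the two differences.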
The scenes used were the conference scene and the statues scene (once diffuse only and once diffuse & glossy). For all scenes Titer is around 20% of the total time. The cost of the bounds varies a bit more: 35% for the conference scene (the cos light sources have a more expensive bound), 12% for the diffuse statues scene (the directional light bound is trivial) and 21% for the glossy statues scene (the glossy bound is more expensive than a diffuse bound). The resulting Teval values are 43%, 66% and 58% respectively (same order as before), so a modest improvement is possible if (part of) the cuts could be reused. Reusing cuts could be done in a way similar to the reconstruction cuts algorithm proposed in the paper, but by completely evaluating (and if necessary refining) the cut, the result stays identical to the lightcuts result. The reconstruction cuts approach, however, could blur or miss some fine details (e.g. shadow boundaries).