
Digging Deep: Precomputing Parametric Objects

by Mike Kost

Introduction

Parametric objects can be used to create fantastic shapes, but they clobber rendering time. Precomputing is a powerful way to mitigate the rendering-time penalty associated with parametric objects. This Digging Deep highlights the advantages of precomputing parametric objects in POV-Ray 3.6.

Quick Reading

Some documentation for a lead-in:

Precomputing

Parametric objects allow you to specify a shape by sweeping the variables U and V over ranges to generate the X, Y, and Z coordinates of the surface. This is great for specifying the surface, but a ray tracer needs to be able to tell when a ray intersects the object. Taking an <X, Y, Z> coordinate and determining whether it maps to a U, V pair is mathematically difficult relative to other POV-Ray primitives. This is where precomputing comes in. Precomputing generates arrays that map U,V pairs to ranges of <X,Y,Z> to help jump-start the reverse calculation. The downside of precomputing is that it increases the amount of time spent parsing the .pov file.
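To make the idea concrete, here is a minimal sketch (in Python, not POV-Ray's actual source) of what such a precomputed table looks like: the UV domain is cut into roughly 2^DEPTH cells, and each cell stores the min/max range of X, Y, and Z over that patch, so a ray that misses a cell's box can skip the expensive inversion for it. The function and grid layout here are illustrative assumptions, using the article's unit-sphere example.

```python
import math

def sphere(u, v):
    # The article's example surface: a unit sphere.
    return (math.sin(u) * math.cos(v),
            math.sin(u) * math.sin(v),
            math.cos(u))

def precompute_ranges(f, u_range, v_range, depth, samples=4):
    """Split the UV rectangle into a grid of side*side cells
    (side*side close to 2**depth) and record per-cell min/max
    ranges for x, y, and z by sampling each cell."""
    side = int(math.sqrt(2 ** depth))
    u0, u1 = u_range
    v0, v1 = v_range
    du = (u1 - u0) / side
    dv = (v1 - v0) / side
    cells = {}
    for i in range(side):
        for j in range(side):
            # Sample a small lattice of points inside cell (i, j),
            # including its corners.
            pts = [f(u0 + (i + a / (samples - 1)) * du,
                     v0 + (j + b / (samples - 1)) * dv)
                   for a in range(samples) for b in range(samples)]
            xs, ys, zs = zip(*pts)
            cells[(i, j)] = ((min(xs), max(xs)),
                             (min(ys), max(ys)),
                             (min(zs), max(zs)))
    return cells

# A depth-10 table: 32 x 32 = 1024 cells, each holding three ranges.
cells = precompute_ranges(sphere, (0, 2 * math.pi), (0, math.pi), depth=10)
```

This also hints at the parse-time cost: building the table means evaluating the surface functions many times before rendering even starts.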

Enabling Precomputing

Precomputing is enabled by inserting the precompute directive into a parametric object as shown below (it's a unit sphere):

parametric {
  function { sin(u)*cos(v) }
  function { sin(u)*sin(v) }
  function { cos(u) }

  <0,0>, <2*pi,pi>
  contained_by { sphere{ 0, 1.1 } }
  max_gradient ??    // supply the surface's maximum gradient here
  accuracy 0.0001
  precompute 10 x,y,z
  pigment { rgb 1 }
}
The modifiers following precompute are DEPTH and VarList. DEPTH specifies how many entries are present in the arrays, with the count being 2^DEPTH. VarList specifies which of the dimensions X, Y, and Z should be precomputed.

The Tests

To get a handle on the performance gains from precomputing, I created a test scene containing a relatively complex parametric object and rendered it with the following command line:

povray +W640 +H480 vase.pov

The result is a 640x480 image that looks remarkably like the one below.

Rendered scene

I ran this scene with several precompute depths on my 1 GHz AMD Athlon machine, and the results are below:

Precompute   Parse Time   Render Time   Total Time   % of No
Depth        (seconds)    (seconds)     (seconds)    Precompute
0 (Off)      1            7282          7283         100%
5            1            7186          7187         98.7%
10           1            6045          6046         83.0%
15           1            3951          3952         54.3%
20           23           1907          1930         26.5%

The most surprising result is that maxing out the precompute depth cuts the rendering time to roughly a quarter of the original. This is great! But, unfortunately, there's a snag.

Memory Consumption

The problem with precomputing is that it's memory intensive. After each rendering completed, I recorded the peak memory that POV-Ray reported for the run. The results are below:

Precompute   Memory       Delta From
Depth        (bytes)      0 (bytes)
0 (Off)      302,978      0
5            304,606      1,628
10           352,222      49,244
15           1,875,934    1,572,956
20           50,634,718   50,331,740

Wow! That roughly 75% reduction in rendering time cost about 50 MB. That much memory may be OK if I've got one parametric object in my scene, but if I'm crazy enough to have a ton of them, my poor computer is doomed. That huge speed-up has a trade-off. If we take a couple of minutes and analyze the memory consumption, the trade-off decision becomes easier to understand.

From the POV-Ray documentation, we know that the precompute memory, M, is proportional to 2^DEPTH. To go whole hog, let's assume this is true and try to fit the equation M = K * 2^DEPTH. We can do this by calculating K = M/2^DEPTH and seeing if a pattern emerges.

Precompute   Delta From    M / 2^DEPTH
Depth        0 (bytes)
0 (Off)      0             n/a
5            1,628         50.875
10           49,244        48.090
15           1,572,956     48.002
20           50,331,740    48.000

As we can see, K converges to 48 [1]. If we expand the equation to allow for some fixed overhead in using precomputing [2], the final memory cost of precomputing comes out to M = 48 * 2^DEPTH + 92. We can now figure out the memory usage for any depth level. What's most important here is that we can cut the memory usage in half for each depth we step back, if we're willing to trade off some speed. A depth of 19 will take about 25 MB and a depth of 18 about 12.5 MB, and we know the performance gain will land somewhere between 50% and 75% less rendering time.
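As a quick sanity check (in Python, not anything from POV-Ray itself), the fitted model can be tested against the measured deltas from the tables above:

```python
# Measured "Delta From 0" memory values (bytes) by precompute depth,
# taken from the tables in this article.
measured = {5: 1628, 10: 49244, 15: 1572956, 20: 50331740}

def precompute_bytes(depth, per_entry=48, overhead=92):
    """Fitted memory model: M = 48 * 2**DEPTH + 92 bytes.
    The 48 is consistent with 3 coordinates x 2 range entries
    x 8 bytes per double (see note [1])."""
    return per_entry * 2 ** depth + overhead

# The model reproduces every measured delta exactly.
for depth, delta in measured.items():
    assert precompute_bytes(depth) == delta

# Stepping back from depth 20 to 19 halves the table:
print(precompute_bytes(19))  # about 25 MB
```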

Conclusions

Parametric objects are still nasty beasts when it comes to rendering time, and this article has barely scratched the surface of the topic. Settings like max_gradient, proper bounding, and how large the object is will also dramatically affect rendering times, but when all is said and done, precompute is your last hope for some rendering-time pick-up.

And based on the results, it's easy to give a recommendation: unless you're dropping parametric objects like candy all over your scene, you should be running precompute with a depth of at least 15 to pick up rendering time. It costs about 1.5 MB of RAM and about a second in the parsing stage, which is easily recovered once POV-Ray glances at the parametric object.

Notes and Disclaimers.

[1] - 48 makes a lot of sense. We're precomputing ranges of coordinates for X, Y, and Z. It takes 2 entries to specify a range, so we have 3 coordinate ranges with 2 entries each, giving 6. 48 / 6 = 8 bytes per entry. Since POV-Ray works in floating point, I'd assume it's using double-precision numbers, which happen to be 8 bytes each. TA DA!

[2] - The math has got to be getting dull unless this stuff really interests you, so I pulled it down here. After estimating from the previous results that K was converging on 48, revise the original equation to M = K*2^DEPTH + Offset, which rearranges to Offset = M - K*2^DEPTH. Run the math, and it turns out the equation solves perfectly for precompute depths of 5, 10, 15, and 20 when Offset = 92. Case closed.

Published: 10/23/04
Last edited: 10/23/04

Copyright (C) 2004 Mike Kost