The modifiers following precompute are the DEPTH and the VarList. DEPTH specifies how many entries are to be present in the arrays, with the count being 2^DEPTH. VarList specifies which dimensions, x, y, and z, should be precomputed.

parametric {

function { sin(u)*cos(v) }

function { sin(u)*sin(v) }

function { cos(u) }

<0,0>, <2*pi,pi>

contained_by { sphere{0, 1.1} }

max_gradient ??

accuracy 0.0001

precompute 10 x,y,z

pigment {rgb 1}

}

povray +W640 +H480 vase.pov

The result is a 640x480 image that looks remarkably like the one below.

I ran this scene with several precompute depths on my AMD Athlon 1 GHz machine, and the results are below:

Precompute Depth | Parse Time (seconds) | Render Time (seconds) | Total Time (seconds) | % Of No Precompute
0 (Off)          | 1                    | 7282                  | 7283                 | 100%
5                | 1                    | 7186                  | 7187                 | 98.7%
10               | 1                    | 6045                  | 6046                 | 83.0%
15               | 1                    | 3951                  | 3952                 | 54.3%
20               | 23                   | 1907                  | 1930                 | 26.5%

The most surprising result is that maxing out the precompute depth cuts the rendering time to almost 1/4th of the original. This is great! But, unfortunately, there's a snag.
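The percentage column follows directly from the totals; as a sanity check on that 1/4th figure, here is a quick recomputation in Python (timing numbers copied from the table above):

```python
# Recompute the "% Of No Precompute" column from the measured
# total times (seconds), copied from the table above.
totals = {0: 7283, 5: 7187, 10: 6046, 15: 3952, 20: 1930}
baseline = totals[0]
for depth, total in totals.items():
    print(f"depth {depth:2d}: {100 * total / baseline:.1f}% of baseline")
```

At depth 20 this works out to 26.5%, i.e. the render finishes in just over a quarter of the unprecomputed time.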

Precompute Depth | Memory (bytes) | Delta From 0 (bytes)
0 (Off)          | 302,978        | 0
5                | 304,606        | 1,628
10               | 352,222        | 49,244
15               | 1,875,934      | 1,572,956
20               | 50,634,718     | 50,331,740

Wow! That 75% reduction in rendering time took about 50MB. That much memory may be OK if I've got 1 parametric object in my scene, but if I'm crazy enough to have a ton of them, my poor computer is doomed. That huge speed-up comes with a trade-off. If we take a couple of minutes and analyze the memory consumption, it'll become easier to understand the trade-off decision.

From the Povray documentation, we know that the precompute memory, M, is proportional to 2^DEPTH. To go whole hog, let's assume that this is true and try to fit the equation M = K * 2^DEPTH, estimating K = M/2^DEPTH from each row:

Precompute Depth | Delta From 0 (bytes) | M/2^Depth
0 (Off)          | 0                    | n/a
5                | 1,628                | 50.875
10               | 49,244               | 48.090
15               | 1,572,956            | 48.002
20               | 50,331,740           | 48.000
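The ratio column above can be reproduced in a few lines of Python (memory deltas copied from the table):

```python
# Estimate K in M = K * 2**DEPTH from the measured memory deltas
# (bytes), copied from the table above.
deltas = {5: 1_628, 10: 49_244, 15: 1_572_956, 20: 50_331_740}
for depth, delta in deltas.items():
    print(f"depth {depth:2d}: K = {delta / 2**depth:.3f}")
```

The estimate drifts high at shallow depths and settles as the 2^DEPTH term dominates.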

As we can see, K converges to 48 [1]. If we expand the equation to assume that there's some fixed overhead to using precomputing [2], the final memory cost of precomputing comes out to

M = 48 * 2^DEPTH + 92 bytes

And based on the results, it's easy to give a recommendation: unless you're dropping parametric objects like candy all over your scene, you should be running precompute with a depth of at least 15 to cut rendering time. It costs about 1.5 MB of RAM and about a second in the parsing stage, which is easily recovered once Povray glances at the parametric object.

[2] - The math has got to be getting dull unless this stuff really interests you, so I pulled it down here. After estimating that K was converging on 48 from the previous results, revise the original equation to be M = 48 * 2^DEPTH + C, where C is the fixed overhead. Plugging any row of the table into this equation gives C = 92 bytes.
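With K pinned at 48, the overhead falls out by subtracting the 48 * 2^DEPTH term from each measured delta; a quick check in Python (deltas copied from the memory table):

```python
# With K fixed at 48, back out the constant overhead C in
# M = 48 * 2**DEPTH + C from each measured memory delta (bytes).
deltas = {5: 1_628, 10: 49_244, 15: 1_572_956, 20: 50_331_740}
overheads = {depth: delta - 48 * 2**depth for depth, delta in deltas.items()}
print(overheads)  # 92 at every depth
```

Every depth yields the same constant, so the overhead term is well determined by the data.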

Published: 10/23/04

Last edited: 10/23/04

Copyright (C) 2004 Mike Kost