In a volume visualization system, interactivity is of key importance: the user must be able to freely move the viewpoint and zoom in and out. However, since the performance of the algorithm cannot be predicted for all types of datasets and transfer functions, an adaptive scheme for modifying rendering parameters is necessary to maintain responsiveness during interaction.
We identify the following rendering parameters that represent a trade-off between quality and speed: the image sample distance (the distance between rays in the image plane), the object sample distance (the distance between resample locations along a ray), and the reconstruction quality level (the methods used for resampling and gradient estimation).
Since the rendering time is approximately proportional to the number of rays cast, the image sample distance has a quadratic influence on the rendering time. The influence of the object sample distance depends strongly on the transfer function and the dataset itself; thus, in contrast to the image sample distance, there is no general rule. The same holds for resampling and gradient quality: although the different methods can be ordered according to their complexity, their influence on the actual render time cannot be predicted.
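The quadratic influence of the image sample distance can be verified with a quick calculation. The following sketch assumes a hypothetical 512×512 viewport; the function name is illustrative and not part of the system described here:

```python
def estimated_rays(width, height, image_sample_distance):
    """Approximate number of rays cast for a given image sample distance."""
    return (width / image_sample_distance) * (height / image_sample_distance)

# Doubling the image sample distance cuts the ray count, and hence the
# approximate render time, to one quarter.
full = estimated_rays(512, 512, 1.0)    # 262144.0 rays
coarse = estimated_rays(512, 512, 2.0)  # 65536.0 rays
```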
Our adaption scheme uses a user-supplied desired render time, minimum and maximum values for the image sample distance, minimum and maximum values for the object sample distance, and minimum and maximum values for the reconstruction quality level. The reconstruction quality level defines the methods used for resampling and gradient estimation, ordered from lowest to highest quality and complexity.
The basic adaption procedure given in Algorithm 11 computes new values for the image sample distance, object sample distance, and reconstruction quality level for a desired render time, based on the previous values of these parameters and the render time achieved with them. Two user-defined increments determine the steps in which the object sample distance and the reconstruction quality level are increased or decreased.
First, the image sample distance is adjusted according to the assumption that it has a quadratic influence on the render time. If the resulting image sample distance is lower than its minimum, it is set to the minimum and the object sample distance is adjusted instead. If the object sample distance cannot be adjusted, i.e., the resulting value is lower than its minimum, then it is set to the minimum and the reconstruction quality level is adjusted, if possible. Conversely, if the adjusted image sample distance is greater than its maximum, it is set to the maximum and the object sample distance is adjusted. If the object sample distance cannot be adjusted, i.e., the resulting value is greater than its maximum, then it is set to the maximum and the reconstruction quality level is adjusted, if possible.
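The adjustment procedure described above can be summarized in the following minimal sketch. All names and the specific parameter values are assumptions for illustration; the quadratic assumption means the render time scales with the inverse square of the image sample distance, so the distance that meets the target is the current one scaled by the square root of the ratio of actual to desired time:

```python
import math

def compute_adaption(i, o, q, t_actual, t_desired,
                     i_min, i_max, o_min, o_max, q_min, q_max,
                     delta_o, delta_q):
    """Adjust image sample distance i, object sample distance o, and
    reconstruction quality level q toward the desired render time."""
    # Assume t ~ k / i^2, so the i that meets t_desired is
    # i * sqrt(t_actual / t_desired).
    i = i * math.sqrt(t_actual / t_desired)
    if i < i_min:
        # Image sampling already at its finest: spend the remaining
        # budget on a finer object sample distance instead.
        i = i_min
        o = o - delta_o
        if o < o_min:
            # Object sampling also at its finest: raise quality level.
            o = o_min
            q = min(q + delta_q, q_max)
    elif i > i_max:
        # Too slow even at the coarsest image sampling: coarsen the
        # object sample distance.
        i = i_max
        o = o + delta_o
        if o > o_max:
            # Object sampling also at its coarsest: lower quality level.
            o = o_max
            q = max(q - delta_q, q_min)
    return i, o, q
```

For example, if the last frame took four times the desired render time, the image sample distance is doubled, quartering the number of rays.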
An application using this adaption scheme can supply different degrees of interactivity corresponding to different desired render times. For example, it is common to provide an interactive mode while the user modifies the viewing parameters (camera position, lighting settings, etc.) and a high-quality mode. The interactive mode would use a low desired render time, e.g., 0.1 seconds (10 frames/second). The high-quality mode would use a very high, or even infinite, desired render time to ensure the best possible quality. However, since the adaption of the object sample distance and the reconstruction quality level is only incremental, it is possible that a transition from interactive mode to high-quality mode does not lead to the best quality.
A solution to this problem is to base the adaption on the last known values for the current mode. For every desired render time, a table stores the values for image sample distance, object sample distance, and reconstruction quality level, together with the actual render time achieved with these settings. Before ComputeAdaption is called, the desired render time is used to retrieve these settings from the table. After rendering has been performed, the measured render time and the corresponding settings are written back into the table. This ensures that the adaption is always based on the last known values for the current render mode. An application can use this method to define any number of different render modes. It is even possible to define new modes at run-time by simply specifying a new desired render time: a value that is not found in the table causes a new entry filled with default values to be generated.
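The per-mode table can be sketched as follows. The class name, the default values, and the dictionary representation are assumptions for illustration; the key point is that each desired render time acts as a mode identifier, and unknown keys create fresh entries with defaults:

```python
class ModeTable:
    """Last known settings per render mode, keyed by the desired render time."""

    # Hypothetical defaults for a mode that is looked up for the first time.
    DEFAULT_ENTRY = {"i": 1.0, "o": 1.0, "q": 0, "time": 0.0}

    def __init__(self):
        self._table = {}

    def lookup(self, t_desired):
        # An unknown desired render time generates a new default-filled entry.
        return self._table.setdefault(t_desired, dict(self.DEFAULT_ENTRY))

    def store(self, t_desired, i, o, q, measured_time):
        # Write back the settings and the render time achieved with them.
        self._table[t_desired] = {"i": i, "o": o, "q": q, "time": measured_time}
```

A render loop would call `lookup` before ComputeAdaption and `store` after measuring the frame time, so each mode adapts from its own last known state.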
Figure 3.19 shows an example of the adaption scheme used in our prototype application. The interactive mode is activated when the user presses a mouse button and moves the mouse to rotate the camera. When the mouse button is held down longer than one second without moving the mouse, a preview mode rendering is automatically performed. A high quality mode rendering is performed as soon as the user releases the mouse button.
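The mode selection from this example can be expressed as a small decision function. The desired render time for the preview mode and all names below are assumptions for illustration; only the interactive value of 0.1 seconds and the unlimited high-quality budget come from the description above:

```python
# Hypothetical desired render times for the three modes.
T_INTERACTIVE = 0.1             # 10 frames/second while the camera moves
T_PREVIEW = 1.0                 # assumed budget for the preview mode
T_HIGH_QUALITY = float("inf")   # no time limit: best possible quality

def select_mode(button_down, seconds_since_last_motion):
    """Pick the desired render time from the current interaction state."""
    if not button_down:
        # Mouse button released: perform a high-quality rendering.
        return T_HIGH_QUALITY
    if seconds_since_last_motion > 1.0:
        # Button held still for over a second: render a preview.
        return T_PREVIEW
    # User is dragging: stay interactive.
    return T_INTERACTIVE
```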