Gradient Reconstruction

For volume rendering, not only the original function needs to be reconstructed. The first derivative of the three-dimensional function, called the gradient, is of particular importance, since it can be interpreted as the normal of an iso-surface. Gradients are used in shading and thus have considerable influence on image quality. Möller et al. [34] even argue that gradient reconstruction has a greater impact on image quality than function reconstruction itself. The ideal gradient reconstruction filter is the derivative of the sinc filter, called cosc, which again has infinite extent and therefore cannot be used in practice.

Möller et al. [34] have examined different methods for gradient reconstruction. They have identified the following basic approaches:

Derivative First (DF)
This method computes gradients at grid points of the rectilinear grid formed by the data samples. The gradient at a resample location is determined by interpolation between the gradients at the neighboring grid points.
Interpolation First (IF)
This method computes the derivative at the resample location from a set of additionally interpolated samples.
Continuous Derivative (CD)
This approach uses a derivative filter which is pre-convolved with the interpolation filter. The gradient at a resample location is computed by convolving the volume with this combined filter.
Analytic Derivative (AD)
This method uses a special gradient filter derived from the interpolation filter for gradient estimation.

In their work, they prove that DF, IF, and CD are numerically equivalent and show that the AD method delivers poor results in some cases. An important conclusion of their work is that the IF method outperforms the common DF method.

They state that the cost (using caching) of the DF method is $n E_D + m(4E_H)$ and the cost of the IF method is $m(E_D + E_H)$, where $n$ is the number of voxels, $m$ is the number of samples, $E_D$ is the computational effort of gradient estimation, and $E_H$ is the computational effort of the interpolation.
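
Under these cost expressions, the trade-off can be made explicit: the DF method is cheaper than the IF method whenever $n E_D + m(4E_H) < m(E_D + E_H)$, i.e., whenever

\begin{displaymath}
\frac{m}{n} > \frac{E_D}{E_D - 3E_H},
\end{displaymath}

which can only be satisfied if $E_D > 3E_H$.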

We recognize, however, that if an expensive gradient estimation method is used, i.e., $E_D$ is much larger than $E_H$, and the sampling rate is high, i.e., $m$ is much larger than $n$, the DF method has advantages. Since gradient estimation is only performed at grid points, a higher sampling rate does not increase the number of required gradient estimations. From a practical point of view, the DF method has a further advantage: modern CPUs provide SIMD extensions which allow the same operation to be performed on multiple data items simultaneously. For the DF method this means that, assuming the same interpolation method is used for gradient and function reconstruction, the interpolation of the function value and the gradient can be performed simultaneously (since the function value has to be interpolated anyway, gradient interpolation comes almost for free), as sketched below. With the IF method this is not possible, since different filters are used for function and gradient reconstruction.
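
As an illustration of this point, the following sketch interpolates the function value and the three gradient components in one pass using SSE intrinsics. The data layout, in which each grid point stores the 4-vector (value, $g_x$, $g_y$, $g_z$) in a single SSE register, as well as the function names are assumptions made for illustration only:

\begin{verbatim}
#include <xmmintrin.h>  /* SSE intrinsics */

/* Linear interpolation of a (value, gx, gy, gz) 4-vector: each SSE
   instruction processes all four components simultaneously. */
static inline __m128 lerp4(__m128 a, __m128 b, float t)
{
    __m128 w = _mm_set1_ps(t);
    return _mm_add_ps(a, _mm_mul_ps(w, _mm_sub_ps(b, a)));
}

/* Trilinear interpolation inside one cell.  c[0..7] hold value+gradient
   at the cell corners, ordered (x,y,z) = (0,0,0),(1,0,0),(0,1,0),(1,1,0),
   (0,0,1),(1,0,1),(0,1,1),(1,1,1); (fx,fy,fz) are the fractional offsets
   of the resample location within the cell. */
static inline __m128 trilinear4(const __m128 c[8],
                                float fx, float fy, float fz)
{
    __m128 x00 = lerp4(c[0], c[1], fx);
    __m128 x10 = lerp4(c[2], c[3], fx);
    __m128 x01 = lerp4(c[4], c[5], fx);
    __m128 x11 = lerp4(c[6], c[7], fx);
    __m128 y0  = lerp4(x00, x10, fy);
    __m128 y1  = lerp4(x01, x11, fy);
    return lerp4(y0, y1, fz);
}
\end{verbatim}

Interpolating the 4-vector in this way requires essentially the same number of instructions as interpolating the scalar function value alone.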

Central and intermediate differences are two of the most popular gradient estimation methods (see the sketch below). However, since they use only a small neighborhood, they are very sensitive to noise contained in the dataset. Filters which use a larger neighborhood therefore generally result in better image quality. This is especially true for medical datasets, which are often strongly undersampled in the z-direction.
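
For reference, a minimal central-difference estimator could look as follows (a sketch in C; the volume layout, the accessor macro, and the spacing parameter are assumptions made for illustration):

\begin{verbatim}
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;

/* Central differences at an interior grid point (x,y,z) of an
   nx*ny*nz volume stored as data[z*ny*nx + y*nx + x]; spacing holds
   the (possibly anisotropic) voxel distances along each axis.
   The caller must keep (x,y,z) at least one voxel away from the
   volume boundary. */
Vec3 central_difference(const float *data, int nx, int ny,
                        int x, int y, int z, Vec3 spacing)
{
    #define V(i, j, k) data[(size_t)(k) * ny * nx + (size_t)(j) * nx + (i)]
    Vec3 g;
    g.x = (V(x + 1, y, z) - V(x - 1, y, z)) / (2.0f * spacing.x);
    g.y = (V(x, y + 1, z) - V(x, y - 1, z)) / (2.0f * spacing.y);
    g.z = (V(x, y, z + 1) - V(x, y, z - 1)) / (2.0f * spacing.z);
    #undef V
    return g;
}
\end{verbatim}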

Neumann et al. [38] have presented a theoretical framework for gradient reconstruction based on linear regression, which generalizes many previous approaches. The approach linearly approximates the three-dimensional function $f(x,y,z)$ according to the following formula:


\begin{displaymath}
f(x,y,z) \approx Ax + By + Cz + D
\end{displaymath} (3.3)

The approximation tries to fit a 3D regression hyperplane onto the sampled values, assuming that the function changes linearly in the direction of the plane normal $n=(A,B,C)^T$. The value $D$ is the approximate density value at the origin of the local coordinate system. They derive a 4D error function and examine its partial derivatives with respect to the four unknown variables. Since these partial derivatives have to be zero at the minimum of the error function, this yields a system of linear equations. Assuming the voxels to be located at regular grid points leads to a diagonal coefficient matrix. Thus, the unknowns $A$, $B$, $C$, $D$ can be calculated by simple linear convolutions:


\begin{displaymath}
\begin{array}{l}
A = w_A \sum\limits_{k = 0}^{26} {w_k f_k x_k } ,\quad
B = w_B \sum\limits_{k = 0}^{26} {w_k f_k y_k } ,\quad
C = w_C \sum\limits_{k = 0}^{26} {w_k f_k z_k } ,\quad
D = w_D \sum\limits_{k = 0}^{26} {w_k f_k }
\end{array}\end{displaymath} (3.4)

with
\begin{displaymath}
\begin{array}{l}
w_A = \frac{1}{{\sum\nolimits_{k = 0}^{26} {w_k x_k^2 } }},\quad
w_B = \frac{1}{{\sum\nolimits_{k = 0}^{26} {w_k y_k^2 } }},\quad
w_C = \frac{1}{{\sum\nolimits_{k = 0}^{26} {w_k z_k^2 } }},\quad
w_D = \frac{1}{{\sum\nolimits_{k = 0}^{26} {w_k } }}
\end{array}\end{displaymath} (3.5)

The $w_k$ are the weights of the weighting function, an arbitrary spherically symmetric function which decreases monotonically with distance from the local origin. $k$ denotes the index of a voxel with offset $(x,y,z)^T$, where $x,y,z \in \{-1,0,1\}$, in the 26-neighborhood of the current voxel and is defined as:


\begin{displaymath}
k = 9(z + 1) + 3(y + 1) + (x + 1)
\end{displaymath} (3.6)
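
To make this derivation explicit, the weighted error function and, exemplarily, the minimum condition for $A$ can be written as follows (a sketch using the notation of Equations 3.4 to 3.6):

\begin{displaymath}
\begin{array}{l}
E(A,B,C,D) = \sum\limits_{k = 0}^{26} {w_k \left( A x_k + B y_k + C z_k + D - f_k \right)^2 } ,\\[1ex]
\frac{\partial E}{\partial A} = 2 \sum\limits_{k = 0}^{26} {w_k x_k \left( A x_k + B y_k + C z_k + D - f_k \right)} = 0
\end{array}\end{displaymath}

On a regular grid with a spherically symmetric weighting function, the mixed sums $\sum_k w_k x_k y_k$, $\sum_k w_k x_k z_k$, and $\sum_k w_k x_k$ vanish, so the condition reduces to $A \sum_k w_k x_k^2 = \sum_k w_k f_k x_k$, which is exactly the first expression in Equation 3.4; $B$, $C$, and $D$ follow analogously.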

The vector $(A,B,C)^T$ is an estimate of the gradient at the local origin, and the value $D$ is the filtered function value at the local origin. Using the filtered values instead of the original samples leads to a strong correlation between the data values and the estimated gradients. These low-pass filtered values come as a by-product of gradient estimation at little additional cost. Using this estimation method at an arbitrary resample location, however, requires additional computational effort: a matrix inversion and a matrix multiplication have to be performed at each location. Thus, gradient estimation using Neumann's approach is much cheaper if gradients are only computed at grid points. However, we do not pre-compute the gradients, since this would require a considerable amount of additional memory. Instead, the gradients and filtered values are computed on-the-fly for each cell, as sketched below. Trilinear interpolation is then used to calculate the function value and gradient at each resample location.
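
A minimal sketch of this per-cell computation, following Equations 3.4 to 3.6, is given below in C. The volume layout and the Gaussian weighting function are assumptions made for illustration; any spherically symmetric, monotonically decreasing weighting function could be used instead:

\begin{verbatim}
#include <math.h>
#include <stddef.h>

typedef struct { float A, B, C, D; } Regression; /* gradient (A,B,C), filtered value D */

/* Linear regression over the 26-neighborhood (Equations 3.4-3.6) at an
   interior grid point (x,y,z) of a volume stored as
   data[z*ny*nx + y*nx + x].  The Gaussian weight is an assumed choice. */
Regression regression_gradient(const float *data, int nx, int ny,
                               int x, int y, int z)
{
    float sA = 0, sB = 0, sC = 0, sD = 0;   /* weighted sums of Eq. 3.4   */
    float nA = 0, nB = 0, nC = 0, nD = 0;   /* normalizations of Eq. 3.5  */

    for (int dz = -1; dz <= 1; ++dz)
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                /* offset (dx,dy,dz) corresponds to index k of Eq. 3.6 */
                float w = expf(-(float)(dx*dx + dy*dy + dz*dz));
                float f = data[(size_t)(z + dz) * ny * nx
                             + (size_t)(y + dy) * nx + (size_t)(x + dx)];
                sA += w * f * dx;   nA += w * dx * dx;
                sB += w * f * dy;   nB += w * dy * dy;
                sC += w * f * dz;   nC += w * dz * dz;
                sD += w * f;        nD += w;
            }

    Regression r = { sA / nA, sB / nB, sC / nC, sD / nD };
    return r;
}
\end{verbatim}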

Computing everything on-the-fly has further advantages: since nothing is pre-computed, different gradient estimation and reconstruction filters can be implemented and switched at run-time without requiring any further pre-processing. It also mitigates the problem of using the filtered values instead of the original samples, because the original dataset is still present and the additional filtering can be disabled by the user. This is important in medical applications, since some fine details might disappear due to filtering.