Texture Mapping Techniques

With graphics hardware becoming increasingly powerful, researchers have started to utilize the features of commodity graphics hardware to perform volume rendering. These approaches exploit the increasing processing power and flexibility of the Graphics Processing Unit (GPU). Nowadays, GPU-accelerated solutions are capable of performing volume rendering at interactive frame rates for medium-sized datasets on commodity hardware.

Figure 2.4: Volume rendering using 2D textures. For each of the three major viewing axes, a stack of 2D textures is stored. During rendering the stack corresponding to the axis most parallel to the current viewing direction is chosen and rendered in back-to-front order as textured quads using alpha blending.
\includegraphics{state_of_the_art/images/texture2D.eps}

One method to exploit graphics hardware is based on 2D texture mapping [45]. This method stores a stack of slices for each major viewing axis in memory as two-dimensional textures. The stack whose slices are most parallel to the current viewing direction is chosen. These textures are then mapped onto object-aligned proxy geometry which is rendered in back-to-front order using alpha blending (see Figure 2.4). This approach corresponds to shear-warp factorization and suffers from the same problems, i.e., only bilinear interpolation is possible within the slices, and the sampling rate varies with the viewing direction.
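The two core steps of the 2D-texture approach, selecting the slice stack and compositing it back-to-front, can be sketched on the CPU as follows. This is an illustrative sketch, not an implementation from the literature; the function names and the RGBA slice representation are assumptions.

```python
import numpy as np

def pick_stack_axis(view_dir):
    """Return the major axis (0, 1, or 2) whose slice stack is most
    parallel to the given viewing direction, i.e., the axis with the
    largest absolute component of view_dir."""
    return int(np.argmax(np.abs(view_dir)))

def composite_back_to_front(slices_rgba):
    """Blend a list of RGBA slices (back-to-front order) with alpha
    blending: C_acc = C_slice * a_slice + C_acc * (1 - a_slice)."""
    h, w = slices_rgba[0].shape[:2]
    acc = np.zeros((h, w, 3))
    for s in slices_rgba:  # back-to-front
        a = s[..., 3:4]    # per-pixel opacity of the slice
        acc = s[..., :3] * a + acc * (1.0 - a)
    return acc
```

On the GPU, the blending step corresponds to rendering each slice as a textured quad with standard over-operator alpha blending enabled.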

Figure 2.5: Volume rendering using 3D textures. The volume is stored as a single 3D texture. A set of planes parallel to the image plane is rendered in back-to-front order using alpha blending with appropriately specified texture coordinates.
\includegraphics{state_of_the_art/images/texture3D.eps}

Approaches that use 3D texture mapping [2,7,55,31] upload the whole volume to the graphics hardware as a three-dimensional texture. The hardware is then used to map this texture onto polygons parallel to the viewing plane which are rendered in back-to-front order using alpha blending (see Figure 2.5). 3D texture mapping makes it possible to use the trilinear interpolation supported by the graphics hardware and provides a consistent sampling rate. A problem of these approaches is the limited amount of video memory. If a dataset does not fit into this memory, it has to be subdivided into blocks, which are uploaded and rendered separately, making the bus bandwidth a bottleneck. One way to overcome this limitation is the use of compression [11].
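The trilinear interpolation that the hardware performs when the 3D texture is sampled on the view-aligned polygons can be sketched as follows; the sketch interpolates a scalar volume at a continuous position by three successive linear interpolations (function name and array layout are illustrative assumptions).

```python
import numpy as np

def trilinear(volume, p):
    """Trilinearly interpolate a 3D scalar array at continuous point p.
    Assumes p lies within the volume so that an 8-voxel neighborhood
    exists around it."""
    x, y, z = p
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx, fy, fz = x - x0, y - y0, z - z0
    # 2x2x2 neighborhood of the sample position
    c = volume[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2].astype(float)
    c = c[0] * (1 - fx) + c[1] * fx      # linear interpolation along x
    c = c[0] * (1 - fy) + c[1] * fy      # linear interpolation along y
    return c[0] * (1 - fz) + c[1] * fz   # linear interpolation along z
```

In contrast to the 2D-texture approach, which is restricted to bilinear interpolation within the chosen slice stack, this filtering uses all eight surrounding voxels.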

The increasing programmability of graphics hardware has enabled several researchers to apply acceleration techniques to GPU-based volume rendering [46,18]. The performance of these approaches, however, is heavily dependent on the hardware implementation of specific features.
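A common example of such an acceleration technique is early ray termination: when compositing front-to-back, a ray can be terminated as soon as the accumulated opacity approaches one, since further samples contribute almost nothing. A CPU sketch of this idea (names and the sample representation are assumptions, not taken from the cited work):

```python
import numpy as np

def composite_front_to_back(samples_rgba, opacity_threshold=0.99):
    """Front-to-back compositing with early ray termination.
    samples_rgba is a sequence of (r, g, b, a) samples along one ray,
    ordered front to back; compositing stops once accumulated opacity
    exceeds opacity_threshold."""
    color = np.zeros(3)
    alpha = 0.0
    steps = 0
    for s in samples_rgba:  # front-to-back
        a = s[3]
        color += (1.0 - alpha) * a * np.asarray(s[:3], dtype=float)
        alpha += (1.0 - alpha) * a
        steps += 1
        if alpha >= opacity_threshold:
            break  # early ray termination: remaining samples are occluded
    return color, alpha, steps
```

How much such a test actually saves on the GPU depends on how efficiently the hardware supports conditional termination, which illustrates the feature dependence noted above.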