With graphics hardware becoming increasingly powerful, researchers have started to exploit the processing power and flexibility of the Graphics Processing Unit (GPU) for volume rendering. Nowadays, GPU-accelerated solutions are capable of rendering medium-sized datasets at interactive frame rates on commodity hardware.
One method to exploit graphics hardware is based on 2D texture mapping [45]. This method stores three stacks of object-aligned slices, one for each major viewing axis, as two-dimensional textures in video memory. The stack most parallel to the current viewing direction is chosen, and its textures are mapped onto object-aligned proxy geometry, which is rendered in back-to-front order using alpha blending (see Figure 2.4). This approach corresponds to the shear-warp factorization and suffers from the same problems: interpolation is only bilinear within the slices, and the sampling rate varies with the viewing direction.
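The following fragment is a minimal C++/OpenGL sketch of this scheme, not the implementation of [45]. It assumes an active fixed-function OpenGL context; the texture array sliceTex, the slice count, and the convention that viewDir points from the eye into the scene are all hypothetical.

    #include <GL/gl.h>
    #include <cmath>

    // Emits one vertex of an object-aligned slice: axis selects the slicing
    // axis, (u, v) span the slice in [-1, 1], and w is the slice position
    // along that axis.
    static void sliceVertex(int axis, float u, float v, float w)
    {
        float p[3];
        p[axis] = w;
        p[(axis + 1) % 3] = u;
        p[(axis + 2) % 3] = v;
        glTexCoord2f(0.5f * (u + 1.0f), 0.5f * (v + 1.0f));
        glVertex3fv(p);
    }

    // Renders the slice stack most parallel to the viewing direction,
    // back-to-front, with alpha blending. sliceTex holds one pre-uploaded
    // 2D texture per slice and per stack (a hypothetical layout).
    void renderSlices(const float viewDir[3], GLuint sliceTex[3][256],
                      int numSlices)
    {
        // The stack whose axis has the largest view-direction component
        // is the one most parallel to the viewing direction.
        int axis = 0;
        for (int i = 1; i < 3; ++i)
            if (std::fabs(viewDir[i]) > std::fabs(viewDir[axis]))
                axis = i;

        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // "over" blending

        for (int s = 0; s < numSlices; ++s) {
            // Traverse the stack starting at the slice farthest from the eye.
            int i = (viewDir[axis] > 0.0f) ? numSlices - 1 - s : s;
            float w = 2.0f * i / (numSlices - 1); // slice position in [0, 2]
            w -= 1.0f;                            // remap to [-1, 1]
            glBindTexture(GL_TEXTURE_2D, sliceTex[axis][i]);
            glBegin(GL_QUADS);
            sliceVertex(axis, -1.0f, -1.0f, w);
            sliceVertex(axis,  1.0f, -1.0f, w);
            sliceVertex(axis,  1.0f,  1.0f, w);
            sliceVertex(axis, -1.0f,  1.0f, w);
            glEnd();
        }
    }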
Approaches that use 3D texture mapping [2,7,55,31] upload the whole volume to the graphics hardware as a three-dimensional texture. The hardware is then used to map this texture onto polygons parallel to the viewing plane, which are rendered in back-to-front order using alpha blending (see Figure 2.5). 3D texture mapping allows the trilinear interpolation supported by the graphics hardware to be used and provides a consistent sampling rate. A problem of these approaches is the limited amount of video memory: if a dataset does not fit into this memory, it has to be subdivided into blocks, which are uploaded and rendered separately, making the bus bandwidth a bottleneck. One way to overcome this limitation is the use of compression [11].
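A minimal sketch of the upload step, again as illustrative C++/OpenGL rather than code from the cited papers; it assumes OpenGL 1.2 headers (for glTexImage3D) and an 8-bit scalar volume, and all names are hypothetical.

    #include <GL/gl.h>

    // Uploads an 8-bit scalar volume as a single 3D texture. GL_LINEAR on
    // a 3D texture requests hardware trilinear interpolation.
    GLuint uploadVolume(const unsigned char* voxels,
                        int dimX, int dimY, int dimZ)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
        // A single upload; if the volume exceeds video memory, it must be
        // split into blocks, each uploaded and rendered separately, which
        // is what makes the bus bandwidth the bottleneck.
        glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8, dimX, dimY, dimZ, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, voxels);
        return tex;
    }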
The increasing programmability of graphics hardware has enabled several researchers to apply acceleration techniques to GPU-based volume rendering [46,18]. The performance of these approaches, however, is heavily dependent on the hardware implementation of specific features.
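Early ray termination is one representative example of such an acceleration technique. The following is a generic front-to-back compositing loop with early ray termination, written as plain C++ for illustration rather than as a fragment program, and it is not the specific method of [46] or [18]; sampleVolume and classify are hypothetical stand-ins for a texture fetch and a transfer-function lookup.

    #include <cmath>

    struct RGBA { float r, g, b, a; };

    // Hypothetical stand-ins: an analytic sphere density replaces the
    // trilinear texture fetch, and a grayscale ramp replaces the
    // transfer-function lookup.
    float sampleVolume(const float p[3])
    {
        float r = std::sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
        return r < 1.0f ? 1.0f - r : 0.0f;
    }

    RGBA classify(float density)
    {
        return { density, density, density, 0.1f * density };
    }

    RGBA castRay(const float origin[3], const float dir[3],
                 float stepSize, int maxSteps)
    {
        RGBA acc = { 0.0f, 0.0f, 0.0f, 0.0f };
        float pos[3] = { origin[0], origin[1], origin[2] };
        for (int i = 0; i < maxSteps; ++i) {
            RGBA s = classify(sampleVolume(pos));
            // Front-to-back "over" compositing.
            float w = (1.0f - acc.a) * s.a;
            acc.r += w * s.r;
            acc.g += w * s.g;
            acc.b += w * s.b;
            acc.a += w;
            // Early ray termination: once the accumulated opacity is close
            // to one, further samples cannot change the pixel noticeably,
            // so the ray is aborted.
            if (acc.a > 0.99f)
                break;
            for (int k = 0; k < 3; ++k)
                pos[k] += stepSize * dir[k];
        }
        return acc;
    }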