Evenly-Spaced Streamlines In 3D

Using Illuminated Stream Lines

 

Motivation

 

Overview:

The purpose of this project is to combine two methods of flow visualisation. First, we use the "Evenly-Spaced Streamlines" algorithm (ESS) by Bruno Jobard and Wilfrid Lefer to evenly distribute streamlines over a 3D vector field. This algorithm produces excellent results for 2D pictures and can be extended to 3D in a rather straightforward way.

As it is difficult to distinguish the length, orientation and position of streamlines in a 3D picture because of the perspective, we want to improve spatial perception by adding illumination from a light source. Unfortunately, popular lighting models like Phong shading cannot easily be applied to streamlines, because lines have arbitrarily many normal vectors. M. Zöckler, D. Stalling, and H.-C. Hege developed an illumination method which does not depend on the normal vector, but entirely on the tangent vector of the streamline, and they called it "Illuminated Stream Lines". By using texture mapping and the texture matrix, a graphics library like OpenGL can be used to render the illumination correctly.

 

Mapping scalar values onto streamline attributes:

Besides combining two techniques of flow visualisation, we concentrated our efforts on the effective mapping of scalar values (velocity, pressure) onto streamline attributes like line width, opacity, color, streamline material and streamline density.

For mapping streamline density interactively, we developed a method that we call "Incremental streamline resolutions". This is an extension of the "Evenly-Spaced Streamlines" algorithm, with the number of resolutions as an additional input parameter. It works exactly like "Evenly-Spaced Streamlines", with the exception that it does not calculate all streamlines for the highest resolution at once.
Instead, it first calculates the streamlines for the lowest resolution. Then it reuses the calculated streamlines for the next resolution (by putting them back into the streamline queue => see the "Evenly-Spaced Streamlines" section). This step is repeated until the streamlines for the highest resolution are calculated.

 

In pseudo-code, the algorithm looks like the following:

rn = number of resolutions.
dsep = separating distance between streamlines of highest resolution.
q = ESS-streamline queue.
l = list of already calculated streamlines.
ESS = Evenly-Spaced Streamlines algorithm

 

q := {};
l := {};

for (int i = 1; i <= rn; i++)
{
    Put the already calculated streamlines from l into q;
    Apply ESS with separating distance dsep * rn / i;
    Assign the newly calculated streamlines a resolution index of i;
    Add the new streamlines to l;
}
 

Output:
l = {streamlines with resolution index};
q = {};    
 

For example, if the number of resolutions is 4, the algorithm calculates streamlines for 25%, 50%, 75% and finally 100% of the highest streamline resolution and assigns them resolution indices of 1 to 4.

Because streamlines of lower resolutions are reused for higher resolutions, there is no cutoff of lower-resolution streamlines in regions of higher streamline density. Therefore, with enough resolutions calculated, smooth and continuous changes of the streamline density according to some scalar value can be achieved.
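
To make the loop above more concrete, here is a minimal C++ sketch of the driver. The Streamline type and the applyESS() function are hypothetical placeholders, not the actual Flow3D interfaces.

#include <deque>
#include <vector>

// Hypothetical placeholder type; in Flow3D a streamline also stores its samplepoints.
struct Streamline { int resolutionIndex; };

// Placeholder declaration: runs ESS with the given separating distance,
// seeding from the streamlines already in the queue, and returns the
// streamlines created in this pass.
std::vector<Streamline> applyESS(std::deque<Streamline>& queue, float dsep);

// Incremental streamline resolutions: reuse the streamlines of every coarser
// resolution as seeds for the next finer one.
std::vector<Streamline> computeIncrementalResolutions(float dsepFinest, int rn)
{
    std::deque<Streamline> queue;    // ESS streamline queue (q)
    std::vector<Streamline> all;     // list of already calculated streamlines (l)

    for (int i = 1; i <= rn; ++i)
    {
        // Put the already calculated streamlines back into the queue.
        queue.assign(all.begin(), all.end());

        // Coarser separating distance for the lower resolutions.
        std::vector<Streamline> fresh = applyESS(queue, dsepFinest * rn / i);

        // Tag the new streamlines with the resolution index and keep them.
        for (Streamline& s : fresh)
        {
            s.resolutionIndex = i;
            all.push_back(s);
        }
    }
    return all;
}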

 

Problems:

The main problem of our project lies again in the perspective. Whereas the ESS algorithm works well in 2D, the even distribution of the streamlines is destroyed in 3D: there are regions with many streamlines and regions with few streamlines, and these regions change as the view direction changes. The ultimate goal is to achieve "Screen-Space Evenly-Spaced Streamlines".

 

Detailed description

 

Evenly-Spaced Streamlines

 

The idea behind the algorithm is to derive all possible streamlines at a distance dsep from an existing streamline. For that reason we use a queue. The algorithm goes like this:

1) We place the first streamline in the queue.

2) We take one streamline out of the queue, calculate all possible seedpoints for new streamlines at distance dsep, integrate the new streamlines and place them into the queue.

3) Repeat step 2 until the queue is empty. (A sketch of this loop follows below.)
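
The following C++ sketch shows how such a queue-driven loop can be organised. The helper functions computeSeedCandidates(), isFarEnoughFromExisting() and integrateStreamline() are hypothetical names standing in for the routines described in the following paragraphs, not the actual Flow3D code.

#include <queue>
#include <vector>

// Hypothetical types standing in for the actual Flow3D data structures.
struct Vec3 { float x, y, z; };
struct Streamline { std::vector<Vec3> points; };

// Placeholder declarations; their roles are described in the text below.
std::vector<Vec3> computeSeedCandidates(const Streamline& s, float dsep);
bool isFarEnoughFromExisting(const Vec3& p, float minDist);
Streamline integrateStreamline(const Vec3& seed, float dsep, float dtest);

std::vector<Streamline> evenlySpacedStreamlines(const Vec3& firstSeed,
                                                float dsep, float dtest)
{
    std::vector<Streamline> result;
    std::queue<Streamline> work;

    // 1) Place the first streamline in the queue.
    Streamline first = integrateStreamline(firstSeed, dsep, dtest);
    result.push_back(first);
    work.push(first);

    // 2) + 3) Process streamlines until the queue is empty.
    while (!work.empty())
    {
        Streamline current = work.front();
        work.pop();

        // Try every candidate seedpoint at distance dsep from this streamline.
        for (const Vec3& seed : computeSeedCandidates(current, dsep))
        {
            if (!isFarEnoughFromExisting(seed, dsep))
                continue;                          // too close to existing streamlines

            Streamline fresh = integrateStreamline(seed, dsep, dtest);
            result.push_back(fresh);
            work.push(fresh);
        }
    }
    return result;
}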

For the streamline integration we use another parameter dtest (test distance). dtest is in the range of 0.0 to 1.0 and is called test distance because at every integration step the streamline is tested whether it is still at a distance >= dsep*dtest from all existing streamlines. If not, the integration of this streamline is stopped. For the integration, an Euler, Runge-Kutta-2 or Runge-Kutta-4 operator can be used, for example.
To avoid having to perform the distance test against every samplepoint of every streamline, which would be very time consuming, an additional grid called the dsep-grid is used. The grid has a cell size of dsep, and (pointers to) all calculated samplepoints are inserted into this grid at the correct position. Now, for every streamline seedpoint, samplepoints closer than dsep can only lie in the cell to which the seedpoint itself belongs or in the neighbouring dsep-grid cells. Therefore the test only has to consider the samplepoints in those cells (9 cells in 2D and 27 in 3D), as sketched below.
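
Here is a minimal C++ sketch of such a grid test for the 3D case. The names and the hashing scheme are assumptions for illustration, not the actual Flow3D implementation; only the 27-cell lookup matters.

#include <cmath>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

// Spatial grid with cell size dsep; each cell stores the samplepoints that
// fall into it (here by value; Flow3D stores pointers to its samplepoints).
struct DsepGrid
{
    explicit DsepGrid(float dsep) : cellSize(dsep) {}

    int cellIndex(float coord) const { return static_cast<int>(std::floor(coord / cellSize)); }

    // Simple spatial hash of the three cell indices; collisions only add
    // extra candidates to the distance test, they never hide a samplepoint.
    long long key(int ix, int iy, int iz) const
    {
        return (static_cast<long long>(ix) * 73856093LL) ^
               (static_cast<long long>(iy) * 19349663LL) ^
               (static_cast<long long>(iz) * 83492791LL);
    }

    void insert(const Vec3& p)
    {
        cells[key(cellIndex(p.x), cellIndex(p.y), cellIndex(p.z))].push_back(p);
    }

    // True if p is at least minDist away from every stored samplepoint.
    // Because the cell size equals dsep and minDist <= dsep, only the cell
    // of p and its 26 neighbours have to be inspected.
    bool isFarEnough(const Vec3& p, float minDist) const
    {
        const int cx = cellIndex(p.x), cy = cellIndex(p.y), cz = cellIndex(p.z);
        const float minDist2 = minDist * minDist;
        for (int dz = -1; dz <= 1; ++dz)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                {
                    auto it = cells.find(key(cx + dx, cy + dy, cz + dz));
                    if (it == cells.end())
                        continue;
                    for (const Vec3& q : it->second)
                    {
                        const float ddx = p.x - q.x, ddy = p.y - q.y, ddz = p.z - q.z;
                        if (ddx * ddx + ddy * ddy + ddz * ddz < minDist2)
                            return false;          // too close to an existing samplepoint
                    }
                }
        return true;
    }

    float cellSize;
    std::unordered_map<long long, std::vector<Vec3>> cells;
};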

For 3D, the only additional difficulty compared to the 2D case is the calculation of the new streamline seedpoints. In 2D, you just iterate through every point of the streamline and take the 2 points at distance dsep along the normal to the streamline tangent vector as new seedpoints. In 3D, as mentioned before, there are arbitrarily many normal vectors and therefore arbitrarily many seedpoints. Therefore the following approach is taken: one normal vector is chosen and then rotated around the streamline n-1 times, with the streamline tangent vector as the rotation axis. So we get n candidates for a streamline seedpoint.
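
One possible way to generate these candidates is to build an orthonormal basis of the plane perpendicular to the tangent and rotate within it, which is equivalent to a Rodrigues rotation about the tangent axis. The sketch below uses hypothetical names, not the Flow3D code.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return Vec3{ a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(const Vec3& v)
{
    const float len = std::sqrt(dot(v, v));
    return Vec3{ v.x / len, v.y / len, v.z / len };
}

// Returns n candidate seedpoints on a circle of radius dsep around the
// samplepoint p, in the plane perpendicular to the unit tangent t.
std::vector<Vec3> seedCandidates(const Vec3& p, const Vec3& t, float dsep, int n)
{
    // Pick any vector not parallel to t and build two unit normals from it.
    const Vec3 helper = (std::fabs(t.x) < 0.9f) ? Vec3{ 1, 0, 0 } : Vec3{ 0, 1, 0 };
    const Vec3 n0 = normalize(cross(t, helper));   // first normal
    const Vec3 n1 = cross(t, n0);                  // second normal, completes the basis

    std::vector<Vec3> candidates;
    for (int i = 0; i < n; ++i)
    {
        // Rotating n0 around t by i * 2*pi/n stays in the plane spanned by n0 and n1.
        const float a = 2.0f * 3.14159265f * i / n;
        const float c = std::cos(a), s = std::sin(a);
        candidates.push_back(Vec3{ p.x + dsep * (c * n0.x + s * n1.x),
                                   p.y + dsep * (c * n0.y + s * n1.y),
                                   p.z + dsep * (c * n0.z + s * n1.z) });
    }
    return candidates;
}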

 

Illuminated Stream Lines
 

1) Mathematical background

 

We use the following notation:

I = light intensity

L = light vector

V = view vector

N = normal vector

T = tangent vector

R = reflection vector = the vector in the L-N plane that makes the same angle with N as L.

n = coefficient regulating the highlight intensity

k = ambient, diffuse and specular color components (k_a, k_d, k_s)

With the Phong shading model, the light intensity at any surface point is calculated by

I = k_a + k_d (L·N) + k_s (V·R)^n

Unfortunately, for line segments there is no unique R and N, but infinitely many, so which normal and reflection vector should we choose? We take the normal vector coplanar to L and T as N, and we take the reflection vector coplanar to L and T as R.
With this particular N, we can now project L into the line's tangent and normal space, yielding an orthogonal decomposition of L:

L = (L·T) T + (L·N) N

Because the decomposition is orthogonal, we can completely eliminate N from the diffuse reflection term with the help of Pythagoras' theorem:

L·N = sqrt(1 - (L·T)^2)

In a similar way, N can be eliminated from the specular reflection term as well, so that for fixed light and view directions the whole lighting model can be evaluated from the two dot products L·T and V·T alone.
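
Before turning to the hardware mapping, the following C++ sketch evaluates this lighting model directly in software from the tangent, using the definitions above. The vector helpers are hypothetical, and the code assumes unit vectors with L not parallel to T.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(const Vec3& a, const Vec3& b) { return Vec3{ a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  scale(const Vec3& a, float s)     { return Vec3{ a.x * s, a.y * s, a.z * s }; }
static Vec3  normalize(const Vec3& v)          { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

// Evaluates I = ka + kd*(L.N) + ks*(V.R)^n for a line with unit tangent T,
// unit light vector L and unit view vector V, with N and R chosen as above.
float lineIntensity(const Vec3& T, const Vec3& L, const Vec3& V,
                    float ka, float kd, float ks, float n)
{
    // N = normal vector coplanar to L and T: project L onto the plane
    // perpendicular to T and renormalize.
    const float lt = dot(L, T);
    const Vec3  N  = normalize(sub(L, scale(T, lt)));

    // R = reflection of L about N (lies in the L-N plane).
    const Vec3  R  = sub(scale(N, 2.0f * dot(L, N)), L);

    const float diffuse  = std::max(dot(L, N), 0.0f);          // equals sqrt(1 - (L.T)^2)
    const float specular = std::pow(std::max(dot(V, R), 0.0f), n);

    return ka + kd * diffuse + ks * specular;
}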

 

2) Hardware rendering

 

Diffuse term:

The illumination can be rendered using texture maps and the texture transformation matrix of OpenGL. Remember, the diffuse term of the Phong equation is given by

I_diffuse = k_d (L·N) = k_d * sqrt(1 - (L·T)^2)

We want to express the L-part of the dot product with the texture matrix and use the tangent vector as texture coordinate. The texture matrix is written as

        ( Lx/2  Ly/2  Lz/2  1/2 )
M_tex = (  0     0     0     0  )
        (  0     0     0     0  )
        (  0     0     0     1  )

Then the entry point into a one-dimensional texture map (note that this value is always between 0 and 1) is given by

s = (1 + L·T) / 2

Specular term:

If we also want to add the specular term of Phong's model, the view vector V comes into play, and we have to rewrite the texture matrix like this:

        ( Lx/2  Ly/2  Lz/2  1/2 )
M_tex = ( Vx/2  Vy/2  Vz/2  1/2 )
        (  0     0     0     0  )
        (  0     0     0     1  )

Then we get an entry point into a two-dimensional texture map:

(s, t) = ( (1 + L·T) / 2, (1 + V·T) / 2 )

Now we can use the following formula to create the 2-dimensional texture map:

tex(s, t) = k_a + k_d * sqrt(1 - (2s-1)^2) + k_s * (V·R)^n,

where the specular dot product V·R is evaluated for L·T = 2s-1 and V·T = 2t-1.
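
As an illustration of the diffuse case, here is a minimal OpenGL 1.x sketch (legacy fixed-function API). It assumes a constant directional light vector L and unit tangents passed as texture coordinates; the function names are ours, and error handling, texture object management and the analogous 2D specular setup are omitted.

#include <GL/gl.h>
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Builds a 1D texture storing ka + kd * sqrt(1 - (L.T)^2), indexed by s = (1 + L.T) / 2.
void createDiffuseLineTexture(int size, float ka, float kd)
{
    std::vector<GLubyte> tex(size);
    for (int i = 0; i < size; ++i)
    {
        const float lt = 2.0f * i / (size - 1) - 1.0f;                  // L.T in [-1, 1]
        const float intensity = ka + kd * std::sqrt(1.0f - lt * lt);
        tex[i] = static_cast<GLubyte>(std::min(intensity, 1.0f) * 255.0f);
    }
    glTexImage1D(GL_TEXTURE_1D, 0, GL_LUMINANCE, size, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, &tex[0]);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

// Loads the texture matrix whose first row is (Lx/2, Ly/2, Lz/2, 1/2), so that a
// tangent passed as texture coordinate is mapped to the entry point (1 + L.T) / 2.
void loadDiffuseTextureMatrix(const Vec3& L)
{
    // OpenGL expects column-major order.
    const GLfloat m[16] = {
        L.x * 0.5f, 0.0f, 0.0f, 0.0f,      // column 0
        L.y * 0.5f, 0.0f, 0.0f, 0.0f,      // column 1
        L.z * 0.5f, 0.0f, 0.0f, 0.0f,      // column 2
        0.5f,       0.0f, 0.0f, 1.0f       // column 3: translation and q row
    };
    glMatrixMode(GL_TEXTURE);
    glLoadMatrixf(m);
    glMatrixMode(GL_MODELVIEW);
}

// Renders one streamline as a line strip, passing the unit tangent of each
// samplepoint as 3D texture coordinate (points and tangents have equal size).
void drawIlluminatedStreamline(const std::vector<Vec3>& points,
                               const std::vector<Vec3>& tangents)
{
    glEnable(GL_TEXTURE_1D);
    glBegin(GL_LINE_STRIP);
    for (std::size_t i = 0; i < points.size(); ++i)
    {
        glTexCoord3f(tangents[i].x, tangents[i].y, tangents[i].z);
        glVertex3f(points[i].x, points[i].y, points[i].z);
    }
    glEnd();
    glDisable(GL_TEXTURE_1D);
}

The 2D specular case works analogously: a second matrix row containing V/2 produces the second texture coordinate (1 + V·T)/2, and a 2D texture built with the formula above is used instead of the 1D one.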

 

 

The program: Flow3D

 

 

The program is called Flow3D and is written in C++ with MFC support. For a detailed description please read the source documentation and the user manual. The documentation, the executable and the source code are free for download.

 

Files available for download:

 

1) Source code: Flow3D.zip

2) Executable: Flow3D_exe.zip

3) Source documentation: Flow3D_doc_html.zip

 

Rendering results:

 

The development of the streamline rendering can be seen here.

Apart from visualisations of fractal attractors, there are also some visualisations of real datasets, which can be inspected here.

A tool to render volume datasets as flow fields can be inspected here.

 

Memory consumption:

 

Every grid point is stored as a samplepoint. A samplepoint contains the following data:

Position: 3 floats
Vector: 3 floats for the unit vector, 1 float for the magnitude. (The unit vector can be used directly for the illumination calculation.)
Squared minimal distance (to other samplepoints): 1 float.
Additional scalar information: 1 float each.

Assume there is one additional scalar value like pressure => every samplepoint needs 9 floats plus one pointer = 40 bytes. An average grid size of 100x100x100, for instance, would need 40 000 000 bytes just for the sample points = ca. 38 MB.
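
As an illustration of this estimate, a samplepoint structure along these lines could look as follows; the member names are hypothetical and not the actual Flow3D class, and the byte count assumes 4-byte floats and 4-byte pointers (32-bit build).

// Hypothetical layout matching the 9 floats + 1 pointer counted above.
struct SamplePoint
{
    float        position[3];     // position: 3 floats
    float        direction[3];    // unit flow vector: 3 floats (used directly for illumination)
    float        magnitude;       // vector magnitude: 1 float
    float        minDistanceSq;   // squared minimal distance to other samplepoints: 1 float
    float        pressure;        // one additional scalar value: 1 float
    SamplePoint* next;            // one pointer, e.g. to the next point of the streamline
};
// 9 * 4 bytes + 4 bytes = 40 bytes per samplepoint on a 32-bit build.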

Streamline calculation adds new samplepoints. The number of new samplepoints depends on the chosen separating distance and the integrator step size, so no exact estimate can be made. For every new samplepoint, one pointer to this samplepoint is stored in the dsep-grid.

The number of samplepoints actually displayed is usually much lower than the number of samplepoints calculated, especially if dt is low, because a minimal distance to the previous samplepoint must be exceeded for a samplepoint to be rendered.

Two adjacent displayed samplepoints are put together into a streamline piece, and in the rendering step the streamlines are drawn as lines between the two samplepoints of each streamline piece. Besides the two pointers to the samplepoints, a streamline piece contains some rendering information for the line, like color, line width, opacity, scalar region index, resolution index and so on.
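
A corresponding sketch of such a streamline piece (again with hypothetical names, not the actual Flow3D class):

struct SamplePoint;                // as sketched in the previous section

// Two adjacent displayed samplepoints plus the per-line rendering attributes.
struct StreamlinePiece
{
    const SamplePoint* a;          // first displayed samplepoint
    const SamplePoint* b;          // second displayed samplepoint

    float color[4];                // RGBA color (alpha = opacity)
    float lineWidth;               // mapped from a scalar value, e.g. velocity
    int   scalarRegionIndex;       // scalar region the piece belongs to
    int   resolutionIndex;         // resolution index from the incremental ESS
};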
 

Running the program, I made the following observations about memory consumption:

After starting up, the application needed 8148 KB. After calculating a grid of 100x100x100, the application needed 55576 KB.
After the streamline calculation with reasonable parameter values, the application needed 58436 KB => the streamline calculation needed much less memory than the grid computation.

 

Oliver Mattausch, 9506125
Department of Computer Science TU WIEN