CITEC Virtual Reality Lab, Bielefeld University

GPU-accelerated Generation of Heat-Map Textures for Dynamic 3D Scenes: Visualizing the Distribution of Visual Attention in Real-Time

Figure 1: Real-time creation of attention heat-map textures on arbitrary 3D objects based on 3D gaze analysis. The picture shows a high-end offline rendering with mapped heat maps done in Blender. The model titled “BMW 3 Series Coupe” has been created by mikepan and released under the Creative Commons Attribution-ShareAlike 3.0 license at http://www.blendswap.com.

There are many applications for the visualization of 3D distributions (volumetric data) over 3D environments. Examples are the mapping of acoustic data (e.g., noise levels), traffic data (e.g., on streets or in shops), visual saliency, or visual attention maps for the analysis of human behavior. We developed a GPU-accelerated approach that represents such 3D distributions in object-based textures in real time. These distributions can then be visualized as heat maps that are overlaid on the original scene geometry using multi-texturing. Applications of this approach are demonstrated in the context of visualizing live data on visual attention as measured by a 3D eye-tracking system. The presented approach is unique in that it works with monocular and binocular data, respects the depth of focus, can handle moving objects, and outputs a set of persistent heat-map textures for creating high-end photo-realistic renderings.

Demonstration Video

Video 1: The video demonstrates the real-time acquisition of 3D gaze data and different options for visualization. It also shows how the program can be run with simulated eye gaze for testing purposes.

On our website you will find further information about eye tracking in 3D worlds.

The algorithm in a nutshell

The basic principle behind our approach is a specific texture workflow. Every model holds an Attention Texture that stores its attention values. As a result, every mesh region that is covered by texture coordinates can automatically store attention values in the corresponding regions of the Attention Texture.
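To make this layout concrete, the following sketch shows illustrative per-model bookkeeping in CUDA-flavored C++. The struct and helper names are assumptions for illustration, not the original implementation; the key point is that attention is stored per texel and addressed through the model's existing UV mapping.

#include <cuda_runtime.h>

// Illustrative per-model bookkeeping (names are assumptions, not the
// original implementation): each model owns its own Attention Texture,
// addressed through the model's existing UV coordinates.
struct AttentionModel {
    int     numVertices;
    float3* d_positions;  // world-space vertex positions (device memory)
    float2* d_uvs;        // texture coordinates into the Attention Texture
    float*  d_attention;  // Attention Texture, texWidth * texHeight floats
    int     texWidth;
    int     texHeight;
};

// A texel at UV (u, v) stores the accumulated attention for the mesh
// region mapped to it; regions without UV coverage receive no attention.
__device__ int attentionIndex(float u, float v, int width, int height)
{
    int x = min((int)(u * width),  width  - 1);
    int y = min((int)(v * height), height - 1);
    return y * width + x;
}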

Figure 2: The texture workflow. First, the attention values are updated in the Attention Texture. After the global scaling is determined, the attention values can be mapped to a defined color ramp and displayed on top of the model's surface.
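The final display step of this workflow can be sketched as follows. In the actual system it runs as a shader during multi-texturing; the CUDA kernel below is only an illustration, and the simple two-segment blue-green-red ramp is an arbitrary example of a color ramp.

#include <cuda_runtime.h>

// Maps raw per-texel attention to heat-map colors. 'globalMax' is the
// global attention maximum used for normalization (the global scaling).
__global__ void colorizeAttention(const float* attention, uchar4* heatMap,
                                  int width, int height, float globalMax)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    // Normalize by the global attention maximum.
    float t = globalMax > 0.0f ? attention[idx] / globalMax : 0.0f;

    // Example two-segment color ramp: blue -> green for t < 0.5,
    // green -> red above.
    float r = t < 0.5f ? 0.0f : (t - 0.5f) * 2.0f;
    float g = t < 0.5f ? t * 2.0f : 1.0f - (t - 0.5f) * 2.0f;
    float b = t < 0.5f ? 1.0f - t * 2.0f : 0.0f;

    heatMap[idx] = make_uchar4((unsigned char)(r * 255.0f),
                               (unsigned char)(g * 255.0f),
                               (unsigned char)(b * 255.0f), 255);
}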

Applying the attention values to the Attention Textures is the most demanding task. It is performed in a shader that computes and writes the added attention value for each pixel of the texture. First, we identify the 3D point of regard in the scene. By computing the 3D distance of each pixel to this point, we can restrict and weight the added attention values (see Figure 3).

Figure 3: Utilizing the distance of each pixel to the point of regard, we only colorize texture areas that are contained inside a sphere around the point of regard and weight the added attention values with a Gaussian distribution. The sphere is adjusted in size to approximate the attention spread that can be expected for a given distance: the closer the fixated point is to the observer, the smaller and more focused the area of attention; the more distant the fixated point, the more widespread the area of attention.
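As a rough illustration of this update step (the actual implementation is a shader working on the Attention Textures), the following CUDA kernel accumulates Gaussian-weighted attention for all texels inside the sphere. worldPos, por, radius and sigma are assumed inputs: radius models the distance-dependent sphere size and sigma the spread of the Gaussian.

#include <cuda_runtime.h>
#include <math.h>

// Each texel of the Attention Texture is assumed to have an associated
// world-space position (e.g. from a per-model position map).
__global__ void splatAttention(float* attention, const float3* worldPos,
                               int width, int height,
                               float3 por,   // 3D point of regard
                               float radius, // sphere size for this viewing distance
                               float sigma,  // spread of the Gaussian
                               float dt)     // frame time, so attention accumulates over time
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    float3 p = worldPos[idx];
    float dx = p.x - por.x, dy = p.y - por.y, dz = p.z - por.z;
    float d = sqrtf(dx * dx + dy * dy + dz * dz);

    // Only texels inside the sphere around the point of regard are touched.
    if (d > radius) return;

    // Gaussian falloff: texels close to the point of regard gain the most.
    float w = expf(-(d * d) / (2.0f * sigma * sigma));
    attention[idx] += w * dt;
}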

To support occlusions between objects, we added an intermediate step in which shadow maps are computed for each tracked eye. This way, each pixel of an Attention Texture can look up whether it is occluded or visible before attention values are added. However, since our example demo uses a basic shadow mapping approach, it suffers from typical shadow mapping issues.
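The visibility lookup itself is the classic shadow-map depth comparison. The device function below sketches it under assumed inputs (eyeViewProj, shadowMap, bias); the bias term also hints at why a basic approach exhibits typical artifacts such as shadow acne.

#include <cuda_runtime.h>

// Row-major 4x4 matrix times point (w = 1); helper for the sketch.
__device__ float4 mulPoint(const float* m, float3 p)
{
    return make_float4(
        m[0]  * p.x + m[1]  * p.y + m[2]  * p.z + m[3],
        m[4]  * p.x + m[5]  * p.y + m[6]  * p.z + m[7],
        m[8]  * p.x + m[9]  * p.y + m[10] * p.z + m[11],
        m[12] * p.x + m[13] * p.y + m[14] * p.z + m[15]);
}

// Returns true if the world-space point is visible from the eye according
// to the shadow map (a depth buffer rendered from the eye's position).
__device__ bool isVisible(float3 worldPos, const float* eyeViewProj,
                          const float* shadowMap, int smWidth, int smHeight,
                          float bias)
{
    float4 clip = mulPoint(eyeViewProj, worldPos);
    if (clip.w <= 0.0f) return false; // behind the eye

    // Perspective divide and mapping from [-1, 1] to texture coordinates.
    float u = (clip.x / clip.w) * 0.5f + 0.5f;
    float v = (clip.y / clip.w) * 0.5f + 0.5f;
    float depth = (clip.z / clip.w) * 0.5f + 0.5f;
    if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f) return false;

    int sx = min((int)(u * smWidth),  smWidth  - 1);
    int sy = min((int)(v * smHeight), smHeight - 1);

    // Classic depth comparison; the bias reduces self-shadowing ("acne"),
    // one of the typical shadow mapping issues mentioned above.
    return depth - bias <= shadowMap[sy * smWidth + sx];
}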

Figure 4: The more complex scene of the children’s bedroom gives an impression of the realism of the achieved heat maps. The model has been created by GLGraffixx and released under the Creative Commons Attribution 3.0 license.

Displaying the colorized scene while attention data is being projected would usually double the computational cost. This is because the global attention maximum has to be determined for normalization, which would enforce a second iteration over all Attention Textures to update them to the normalized values. To solve this issue, we use one additional texture, called the Max Attention Texture, which is bound simultaneously while each of the Attention Textures is updated. Whenever the attention value of an updated pixel in an Attention Texture is higher than the value of the same pixel in the Max Attention Texture, the new local maximum is written there as well. The Max Attention Texture therefore always contains the global attention maximum. With this technique and a parallel reduction, the global attention maximum can be computed efficiently.
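The parallel reduction for the maximum can be sketched as a standard tree-based CUDA kernel; names and launch details are illustrative. It is invoked repeatedly on its own output until a single value, the global attention maximum, remains.

#include <cuda_runtime.h>

// One step of a parallel max reduction over the Max Attention Texture.
// Each block reduces its chunk in shared memory and writes one partial
// maximum; launch with sharedMem = blockDim.x * sizeof(float).
__global__ void reduceMax(const float* in, float* out, int n)
{
    extern __shared__ float sdata[];
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x * blockDim.x * 2 + threadIdx.x;

    // Each thread loads two elements to halve the number of blocks needed;
    // 0 is a valid identity here since attention values are non-negative.
    float v = (i < n) ? in[i] : 0.0f;
    if (i + blockDim.x < n) v = fmaxf(v, in[i + blockDim.x]);
    sdata[tid] = v;
    __syncthreads();

    // Tree-based reduction in shared memory.
    for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) sdata[tid] = fmaxf(sdata[tid], sdata[tid + s]);
        __syncthreads();
    }

    if (tid == 0) out[blockIdx.x] = sdata[0];
}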

Further Reading: IEEE VR 2015 Poster

Software

Here you can download our software demo:

Involved People

Postdoctoral Researcher Dr. Thies Pfeiffer
Student Researcher Cem Memili