Specification - GSoC Shader Engine

GSoC Project: Shader-Based Graph Visualization Engine

Student: Antonio Patriarca

Mentor: Mathieu Bastian

Gephi is open-source software for graph and network analysis, so it should be able to efficiently visualize and manipulate large graphs. The current engine is highly portable, but it cannot make the best use of the hardware because it is built on legacy features and was not designed with modern GPU architectures in mind. The aim of this project is to redesign the visualization engine to improve both performance and quality. Several additional features will be included, such as the ability to set a different shape for each node. The visualization engine will first be implemented in a standalone application and then integrated into Gephi (probably after the final term of the Google Summer of Code). When completed, it will generate higher-quality representations of the graph in less time and generally improve the experience of working with Gephi.

Design Principles

This section will describe the basic ideas behind the design of the new Visualization Engine.

Batches

Current GPUs are stream processors: they contain many small cores, divided into groups, which concurrently execute the same program (the shader program) on a large array of homogeneous data (for example, one element per fragment or vertex). When the data layout or the program changes, the entire pipeline stalls (the GPU has to wait for all the cores to finish their work). To achieve the best performance, nodes and edges therefore have to be rendered in batches.
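
As a rough sketch of this idea (class and method names are purely illustrative, not the final engine API), per-node data can be accumulated into a single array so that a whole batch is drawn with one call instead of one call per node:

    // Illustrative sketch only: accumulate per-node data so the renderer can
    // draw the whole batch at once instead of issuing one call per node.
    public final class NodeBatch {
        private final float[] data;   // x, y, z, size, r, g, b, a per node
        private int count;

        public NodeBatch(int capacity) {
            this.data = new float[capacity * 8];
        }

        public boolean add(float x, float y, float z, float size,
                           float r, float g, float b, float a) {
            if (count * 8 == data.length) {
                return false;         // batch full: the caller flushes and retries
            }
            int i = count * 8;
            data[i] = x; data[i + 1] = y; data[i + 2] = z; data[i + 3] = size;
            data[i + 4] = r; data[i + 5] = g; data[i + 6] = b; data[i + 7] = a;
            count++;
            return true;
        }

        public int size() { return count; }
        public float[] data() { return data; }
        public void clear() { count = 0; }
    }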

Shader programs

Recent GPUs can execute very complex programs efficiently, and I will use these capabilities to render node shapes while minimizing the amount of geometry needed for each node. Using impostors in 3D, it is possible, for example, to render a perfect sphere (with curved and anti-aliased edges) using a single quad. A more traditional technique will obviously be used when shaders aren't supported.

Multi-threading

The new engine will use three different threads: the rendering thread, the controller thread and the graph data thread. The rendering thread is controlled by JOGL and will render the batches of the current render queue. The controller thread manages all the events generated by the other two parts of the engine (and by the other Gephi modules). The graph data thread will update the internal graph data structure used by the engine and generate the batches to be rendered. To simplify the communication between these systems, the majority of the classes will be immutable.
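
The following sketch (purely illustrative names) shows the intended style of communication: the graph data thread builds an immutable snapshot and hands it to the rendering thread through a bounded queue, so no mutable state has to be locked across threads.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Illustrative sketch only: an immutable snapshot handed from the graph
    // data thread to the rendering thread through a bounded queue.
    public final class FrameSnapshot {
        private final float[] nodePositions;   // defensive copy, never mutated

        public FrameSnapshot(float[] nodePositions) {
            this.nodePositions = nodePositions.clone();
        }

        public float[] nodePositions() {
            return nodePositions.clone();
        }

        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<FrameSnapshot> toRenderer = new ArrayBlockingQueue<>(2);
            // Graph data thread: build a snapshot and publish it.
            toRenderer.put(new FrameSnapshot(new float[] {0f, 0f, 0f}));
            // Rendering thread: take the latest snapshot and render it.
            FrameSnapshot frame = toRenderer.take();
            System.out.println(frame.nodePositions().length + " floats received");
        }
    }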

New Features

The engine will also implement some new features not currently available in Gephi. These features are:

  • Better image quality with the same multi-sample settings.
  • Better performance.
  • Images used to visualize nodes.
  • Background image.
  • Different node models at the same time.
  • Wider choice of 2D models for nodes (Circle, Square, Diamond, Triangle, Hexagon, Image...).
  • Halos or depth-aware borders around nodes to visualize clusters of nodes.

Graph rendering

In this section I will discuss how graphs will be rendered in the new engine. Nodes and edges will be stored in batches and rendered separately. The rendering system is composed of several renderers, each specialized in the rendering of a single kind of object (node, edge or self loop) and targeting a particular version of OpenGL. Two versions of each renderer will be implemented: one based on OpenGL 1.2 (for compatibility with older hardware) and one based on OpenGL 2.0 which uses newer features (shaders and VBOs in particular).
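
A rough sketch of this structure, with purely illustrative names, might look as follows:

    // Illustrative sketch only: one renderer interface with two back-ends,
    // a fixed-function OpenGL 1.2 path and a shader/VBO OpenGL 2.0 path.
    public interface NodeRendererSketch {
        void begin();                              // bind state once per batch
        void render(float[] nodeData, int count);  // draw all nodes of one batch
        void end();                                // restore state
    }

    final class FixedFunctionNodeRenderer implements NodeRendererSketch {
        public void begin() { /* set fixed-function state (OpenGL 1.2 path) */ }
        public void render(float[] nodeData, int count) { /* vertex arrays, one quad per node */ }
        public void end() { }
    }

    final class ShaderNodeRenderer implements NodeRendererSketch {
        public void begin() { /* bind shader program and VBO (OpenGL 2.0 path) */ }
        public void render(float[] nodeData, int count) { /* upload once, single draw call */ }
        public void end() { /* unbind program and buffers */ }
    }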

Node rendering

Nodes will be rendered as quads in the new engine and the various shapes will be created procedurally in the fragment shader. Since projection is a linear transformation, the bilinear interpolation used to define a point of the quad through its texture coordinates is preserved in the projected quad. Therefore, every shape which can be defined using texture coordinates can easily be recreated in the fragment shader. This method also has the advantage of supporting several different shapes at the same time (using a few branches) and of providing anti-aliasing that is independent of the multi-sample settings.
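
As an illustration, the following fragment shader sketch (stored as a Java string, as is common with JOGL) carves a circle, a diamond or a square out of the quad using the interpolated texture coordinates; the uniform and varying names are only illustrative. fwidth() gives the per-pixel scale used to anti-alias the border independently of the multi-sample settings.

    // Illustrative sketch only: procedural node shapes in the fragment shader.
    public final class NodeShapeShaderSketch {
        public static final String FRAGMENT_SOURCE =
              "uniform int shape;               // 0 = circle, 1 = diamond, 2 = square\n"
            + "varying vec2 uv;                 // in [-1, 1] across the quad\n"
            + "varying vec4 color;\n"
            + "void main() {\n"
            + "    float d;                     // signed distance to the shape border\n"
            + "    if (shape == 0)      d = length(uv) - 1.0;\n"
            + "    else if (shape == 1) d = abs(uv.x) + abs(uv.y) - 1.0;\n"
            + "    else                 d = max(abs(uv.x), abs(uv.y)) - 1.0;\n"
            + "    float w = fwidth(d);         // pixel footprint of the distance\n"
            + "    float alpha = 1.0 - smoothstep(-w, w, d);\n"
            + "    if (alpha <= 0.0) discard;\n"
            + "    gl_FragColor = vec4(color.rgb, color.a * alpha);\n"
            + "}\n";
    }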

2D shapes are defined by screen-aligned quads. A screen-aligned quad does not change its shape when projected onto the image plane; only its scale changes, and the scale depends only on the distance between the image plane and the quad. In this case it is therefore possible to compute the image of the quad without computing the model-view-projection matrix, but this shortcut will probably not be used in this engine.

Spheres, the only 3D shape which will be implemented, are a little harder. In [1] they are created using screen-aligned quads, but that approach is not mathematically correct. Since GPU capabilities have increased since QuteMol was designed, I will implement a more mathematically correct projection. In particular, the quad which defines the sphere will be perpendicular to the vector from the camera to the sphere center, and a small parallax correction will be applied to compute the distance to the sphere at each point.
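
The following plain-Java sketch shows the per-fragment math behind the impostor in its simplest (orthographic) form; in the engine this runs in the fragment shader, and the perspective version adds the small parallax correction mentioned above.

    // Illustrative sketch only: for a point (u, v) in [-1, 1] on the
    // camera-facing quad, the sphere surface is lifted towards the camera by
    // sqrt(1 - u^2 - v^2), which yields a correct depth and normal per pixel.
    public final class SphereImpostorMath {
        /** Height of the sphere surface above the quad, or -1 outside the silhouette. */
        public static double surfaceHeight(double u, double v) {
            double d2 = u * u + v * v;
            if (d2 > 1.0) {
                return -1.0;      // outside the sphere: the fragment is discarded
            }
            return Math.sqrt(1.0 - d2);
        }

        public static void main(String[] args) {
            System.out.println(surfaceHeight(0.0, 0.0));  // 1.0 at the center
            System.out.println(surfaceHeight(1.0, 0.0));  // 0.0 at the silhouette
        }
    }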

Halos will be implemented like the other node shapes. A halo has the same shape as the basic node, but it is larger, colored with a flat color and alpha blended. Depth writing will be disabled in this case.
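
In terms of OpenGL state, the halo pass would roughly look like the following sketch, written against JOGL's GL interface (the exact package name depends on the JOGL version):

    import javax.media.opengl.GL;

    // Illustrative sketch only: halos are alpha blended and must not write
    // depth, so they never occlude the node shapes themselves.
    public final class HaloStateSketch {
        public static void beginHaloPass(GL gl) {
            gl.glDepthMask(false);   // keep the depth buffer untouched
            gl.glEnable(GL.GL_BLEND);
            gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
        }

        public static void endHaloPass(GL gl) {
            gl.glDepthMask(true);    // restore depth writes
            gl.glDisable(GL.GL_BLEND);
        }
    }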

Edge rendering

Edges are usually visualized as (curved) lines or arrows between nodes. They may have different sizes and colors. To simplify engine implementation and improve performance, only straight edges will be supported.

Undirected edges are rendered as camera-aligned rectangles. The positions of their vertices are computed by the renderer on the CPU in both the OpenGL 1.2 and OpenGL 2.0 versions. In the future, if a renderer based on geometry shaders is implemented, the positions will be computed on the GPU; doing it with the current versions would require too much overhead.
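
A sketch of the CPU-side corner computation for the 2D case follows (illustrative only); in 3D the same idea uses a cross product with the view direction to keep the rectangle camera aligned.

    // Illustrative sketch only: the four corners of the rectangle representing
    // a straight edge are the two endpoints offset along the normalized
    // perpendicular of the edge direction, by half the edge thickness.
    public final class EdgeQuadSketch {
        /** Returns {x0,y0, x1,y1, x2,y2, x3,y3}, the corners of the edge rectangle. */
        public static float[] edgeCorners(float ax, float ay, float bx, float by, float thickness) {
            float dx = bx - ax, dy = by - ay;
            float len = (float) Math.sqrt(dx * dx + dy * dy);
            float nx = -dy / len * thickness * 0.5f;   // perpendicular, half thickness
            float ny =  dx / len * thickness * 0.5f;
            return new float[] {
                ax + nx, ay + ny,   ax - nx, ay - ny,   // corners around the source node
                bx - nx, by - ny,   bx + nx, by + ny    // corners around the target node
            };
        }
    }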

When using shaders, directed edges are rendered as camera-aligned rectangles as well, and the tip of the arrow is drawn procedurally in the fragment shader. In the OpenGL 1.2 version, one or two additional triangles are sent to the GPU instead.

Self loop rendering

A self loop is rendered either as a symmetric annulus or as the difference of two tangent circles. In both cases a single quad is rendered and the shape is created procedurally, either in the fragment shader or using multitexturing with a texture generated procedurally on the CPU.
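
The per-fragment test behind the annulus variant is simple; the following plain-Java sketch shows the same test that the fragment shader (or the procedural texture generator) would perform on the quad's texture coordinates.

    // Illustrative sketch only: a point belongs to the loop when its distance
    // to the quad center falls between the inner and the outer radius.
    public final class SelfLoopMath {
        public static boolean insideAnnulus(double u, double v,
                                            double innerRadius, double outerRadius) {
            double d = Math.sqrt(u * u + v * v);
            return d >= innerRadius && d <= outerRadius;
        }
    }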

Label rendering

In Gephi, each node or edge may have an associated label. Writing code to render text efficiently is difficult, so I will use the TextRenderer class already available in the JOGL library. It is already well optimized and tested, and it would be hard to write something better.
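
Typical use of TextRenderer looks roughly like the following sketch; the package name and the integration with the render loop are assumptions and depend on the JOGL version used.

    import java.awt.Font;
    import com.jogamp.opengl.util.awt.TextRenderer;   // package name in recent JOGL 2 releases

    // Illustrative sketch only: labels drawn in a 2D orthographic pass.
    public final class LabelRenderingSketch {
        private final TextRenderer textRenderer =
                new TextRenderer(new Font("SansSerif", Font.PLAIN, 12));

        public void drawLabel(String label, int x, int y, int viewportWidth, int viewportHeight) {
            textRenderer.beginRendering(viewportWidth, viewportHeight);
            textRenderer.draw(label, x, y);
            textRenderer.endRendering();
        }
    }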

Post-processing effects

One of the advantages of the programmable pipeline is the ability to easily implement several post-processing effects. The most important effect I will implement in the new engine is screen space ambient occlusion (SSAO), a screen space approximation of the more expensive ambient occlusion algorithm. Ambient occlusion greatly improves the perception of 3D shape and can be very useful to visualize huge networks. Indeed, it has been successfully used in QuteMol to visualize large molecules ([1]).
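
The core of the technique is a simple depth comparison per pixel, shown in isolation in the following plain-Java sketch (sample placement, range checks and the final blur are omitted).

    // Illustrative sketch only: a pixel is darkened according to the fraction
    // of nearby depth-buffer samples that are clearly closer to the camera.
    public final class SsaoSketch {
        public static float occlusion(float pixelDepth, float[] neighbourDepths, float bias) {
            int occluded = 0;
            for (float d : neighbourDepths) {
                if (d < pixelDepth - bias) {   // a sample in front of this pixel occludes it
                    occluded++;
                }
            }
            return (float) occluded / neighbourDepths.length;   // 0 = fully lit, 1 = fully occluded
        }
    }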

Engine design

The engine will follow an MVC pattern: the GraphViewer is responsible for managing the rendering loop and the effects, the GraphController manages the events generated by the drawable (some are also handled by the GraphViewer), and the GraphScheduler manages updates of the "scenegraph" (not a real scenegraph but a tree-like data structure containing the nodes and edges) and decides which nodes to render.

Render queues are used to send the nodes and edges to the renderers. When the scenegraph is traversed, the buffer contained in each node is inserted into a render queue using the enqueue() method of the RenderQueue interface. The renderQueue() method is then used to send the queue to the renderer. Rendering happens asynchronously, once all the render queues have been marked as renderable or once a large time step has passed (in the latter case the queues that are not yet renderable are disabled and a warning is shown). RenderQueue instances are obtained from the Viewer through its createRenderQueue() method and references to them are stored in the scenegraph. When the Viewer's display method is called, the viewer iterates over all the queues and renders them using the Renderers.
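
A sketch of these interfaces, reusing the method names above but otherwise purely illustrative, might look as follows:

    // Illustrative sketch only: the scheduler fills the queues while traversing
    // the scenegraph, and the viewer drains them when a new frame is requested.
    public interface RenderQueueSketch<T> {
        void enqueue(T buffer);       // called during scenegraph traversal
        void markAsRenderable();      // the queue is complete and may be drawn
        boolean isRenderable();
        void renderQueue();           // hand the queue over to the renderer
    }

    interface GraphViewerSketch {
        <T> RenderQueueSketch<T> createRenderQueue();   // queues are owned by the viewer
    }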

Geometry module

The matrix stack and the Picking API do not exist anymore in modern OpenGL. These features were used by the old engine and greatly simplified the implementation of node selection and camera management. In the new engine I will therefore implement a new Geometry module to simplify matrix operations and ray/frustum/AABB/sphere intersections.
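
As an example of the kind of helper this module will provide, the following sketch shows a ray/sphere intersection test usable for node picking (illustrative only; vectors are plain arrays to keep the example self-contained):

    // Illustrative sketch only: a ray (origin, normalized direction) hits the
    // sphere when the sphere center lies in front of the origin and its squared
    // distance to the ray is at most the squared radius.
    public final class GeometrySketch {
        public static boolean raySphere(double[] origin, double[] dir,
                                        double[] center, double radius) {
            double ox = center[0] - origin[0];
            double oy = center[1] - origin[1];
            double oz = center[2] - origin[2];
            double t = ox * dir[0] + oy * dir[1] + oz * dir[2];   // projection on the ray
            double d2 = ox * ox + oy * oy + oz * oz - t * t;      // squared distance to the ray
            return t >= 0.0 && d2 <= radius * radius;
        }
    }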

Future schedule

August 2nd - August 12th. Rendering system implementation.
August 13th - August 20th. Scheduler and controller basic implementation.
August 20th. Final evaluations deadline.

References

[1] QuteMol paper: http://vcg.isti.cnr.it/Publications/2006/TCM06/Tarini_FinalVersionElec.pdf
[2] GPU Gems 2, Chapter 22: Fast Prefiltered Lines: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter22.html