Proceedings of the Postgraduate Annual Research Seminar 2005

Appearance-Preserving Out-of-Core Simplification in 3D Real-Time Game
Engine Development
Tan Kim Heok
Daut Daman
Abdullah Bade
[email protected]
[email protected]
[email protected]
Department of Graphics and Multimedia, Universiti Teknologi Malaysia
UTM Skudai, 81300 Johor, Malaysia.
Abstract
Modeling and visualization applications demand ever higher realism. The rapid growth of 3D scanning technology and the complexity of computer simulations have boosted the size of geometric data sets to the point where even the most powerful graphics workstations cannot render these extremely massive data smoothly. This matters most in real-time and interactive applications. Conventional simplification approaches are insufficient to handle such data, so the out-of-core approach has been introduced. Moreover, in many applications, preserving surface attributes is essential to retain the attractive details of the mesh. We therefore present an automatic simplification method for massive datasets that preserves normal, color and texture attributes in addition to the geometry. Our approach organizes the dataset in an octree structure and then simplifies the model using a new variation of the vertex clustering technique. At run-time, the visible portion of the mesh is rendered view-dependently. The approach is fairly simple and fast, and the system is demonstrated in a real-time game environment.
Keywords--- Out-of-Core Simplification, Vertex
Clustering, Appearance Preservation, Game Engine.
1. Introduction
Since the mid-1970s, the Level of Detail (LOD) technique has been used to improve the performance and quality of graphics applications. The LOD approach retains a set of representations of each polygonal object, each with a different triangle resolution.
Due to the increasing size of datasets, meshes that are considerably larger than main memory have become common. Such data can only be rendered on high-end computer systems, since these massive datasets are practically impossible to fit into the main memory available on a desktop personal computer. As a result, rendering them is cost-ineffective and far from user friendly.
Even if this problem is to be solved by simplification, conventional simplification techniques still need to load the full-resolution mesh into main memory in order to perform the task. Consequently, the out-of-core approach was proposed to overcome this deficiency [7].
In many computer graphics applications, the realism of the virtual environment is very important. Therefore, preserving surface attributes during the geometric simplification process is essential. Surface attributes such as normal, curvature, color and texture convey the details of an object; without them, the rendered scene looks dull and unattractive to the user.
In this paper, we present an alternative simplification technique that extends the vertex clustering algorithm [6] to run in real-time according to view-dependent criteria. The overall idea is inspired by Lindstrom [7]; however, our data partitioning and error metric differ from his approach. We use adaptive partitioning instead of uniform partitioning, and the generalized quadric error metric [5] is used to simplify the surfaces with respect to geometry, normal, color and texture attributes. At run-time, rendering is view-dependent: the mesh resolution is chosen according to viewing distance.
Section 2 gives the overall framework. The following sections discuss the details of the algorithm, including data processing, octree construction, the simplification scheme and view-dependent rendering. Finally, the results and conclusion are presented in the last sections.
2. Algorithm Framework
This paper introduces an approach for end-to-end, out-of-core simplification and view-dependent visualization of large surfaces, together with appearance preservation. Arbitrarily large datasets, larger than memory, can now be visualized given a sufficient amount of disk space (i.e. a constant multiple of the input mesh size). Preprocessing starts with data processing, after which an octree is constructed to partition the space efficiently. Next, a new variation of the vertex clustering technique, inspired by the previous works of [1] and [7], is applied. Finally, view-dependent rendering of the output mesh is carried out at run-time. The off-line phases are performed on secondary memory, while the run-time system only pages in the needed parts of the mesh, in a cache-coherent manner, for rendering. The framework overview is shown in Figure 1.
Figure 1. Framework overview: Preprocessing Step 1 (Data Processing), Step 2 (Octree Construction), Step 3 (Simplification), then Run-time (View-Dependent Rendering and Refinement)
The algorithm starts with data processing. This involves loading the data into our system and dereferencing triangle indices to their corresponding vertices. The experimental data is in the PLY format, one of the common file formats for storing large datasets. These raw data form an indexed mesh; although the format is compact, processing it is slow. It therefore needs further processing before simplification: we dereference the triangle list to its vertices so that a triangle-soup mesh is generated, as sketched below.
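As a minimal illustration of this dereferencing step (the names and types below are our own, not from the paper), an indexed mesh can be expanded into a triangle soup as follows; Section 3 describes the out-of-core variant that is actually required for massive data.

#include <array>
#include <cstdint>
#include <vector>

struct Vertex { float x, y, z; };

// One triangle of the soup: three vertices stored by value, so the
// triangle no longer references the shared vertex list.
using SoupTriangle = std::array<Vertex, 3>;

// Dereference an indexed mesh (vertex list + index triples) into a
// triangle soup. This in-core version assumes everything fits in RAM.
std::vector<SoupTriangle> dereference(const std::vector<Vertex>& vertices,
                                      const std::vector<std::uint32_t>& indices) {
    std::vector<SoupTriangle> soup;
    soup.reserve(indices.size() / 3);
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        soup.push_back({vertices[indices[i]],
                        vertices[indices[i + 1]],
                        vertices[indices[i + 2]]});
    }
    return soup;
}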
Secondly, an octree is constructed to partition the loaded data spatially, which makes the subsequent data processing steps easier and neater. Each triangle of the mesh is assigned to its appropriate location. Because the dataset is too large for the main memory available on a commodity computer, the data in each octree leaf is kept in its own end-node file. These end-node files are small enough to fit in main memory and are stored in an organized directory layout, which makes file lookup easy.
At this stage, the end-node files can be simplified independently. This step modifies the existing vertex clustering technique of Rossignac and Borrel [1]. As the mesh has already been partitioned in the previous stage, the input mesh needs no further space partitioning. Because simplification is done separately in every node, cracks and hole artifacts may be produced. To ensure that the whole input mesh still looks good after the simplified meshes of all nodes are joined back together, each portion of the mesh must retain its boundary edges, while all vertices in the interior of the node are collapsed to a single optimal vertex. The generalized quadric error metric [5] is used to find the optimal representative vertex and to compute the simplified color or texture attributes as well. This simplification operator introduces non-manifold vertices and edges; nevertheless, it is fast, and hence suitable for our interactive application. The output of this stage is a set of files with multiple resolutions for each end node.
In the run-time phase, the visible nodes are extracted from the octree. Each of them is treated as an active node, and its vertex information is loaded into a dynamic data structure. The active nodes are expanded and collapsed based on view-dependent criteria.
3. Data Processing
A PLY file consists of a header, followed by its vertex list and finally its face list. The header starts with ply and ends with end_header. The general header structure is as follows:
ply
format ascii 1.0
comment …
element vertex num_of_vertices
property float32 x
property float32 y
property float32 z
element face num_of_faces
property list uint8 int32 vertex_index
end_header
Subsequently, the file lists num_of_vertices vertices followed by num_of_faces triangles, in indexed format. Although this format is extremely space efficient, it slows down processing. Therefore, the indexed triangle list has to be converted to triangle-soup form, even though that form needs more storage space.

After reading in the PLY file, the header is discarded, as its information is no longer needed; only the number of vertices and the number of triangles are kept. We then create two files, one storing the vertex values and the other storing the triangle indices.

Since the dataset cannot fit into main memory, turning the indexed mesh into a triangle-soup mesh requires external sorting: the massive dataset cannot be loaded into main memory directly due to resource limitations. External sorting loads a portion of the data that fits into main memory, sorts it, and writes it out to a file again. Here, merge sort is used for this purpose, while the triangle indices within each portion are sorted with quicksort because of its speed. As mentioned, the data is read part by part, so merging is needed to unite the sorted portions; the merging scheme used here is the two-way merge.

Using this merge sort, we first sort the triangles by their first index. Each index is then read sequentially, and the vertex values are read in sequentially as well, so that each triangle index can be dereferenced to its corresponding vertex value. These steps are repeated until the first indices of all triangles have been dereferenced.

Since each face is a triangle with three vertices, the merge sort and dereferencing processes must be run three times in order to create the complete triangle-soup mesh. A sketch of one such external pass is given below.
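The following is a minimal sketch of such a pass, under our own simplifying assumptions (the record layouts, run-file names and chunk size are hypothetical, not taken from the paper): triangle records are sorted by one index column using chunked quicksort plus an external two-way merge, and the sorted stream is then dereferenced against the sequentially read vertex file.

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical on-disk records; the paper does not specify exact layouts.
struct Vertex { float x, y, z; };
struct IndexedTri { std::uint32_t v[3]; std::uint64_t triId; };

// Phase 1: sort triangle records by their k-th vertex index, one
// memory-sized chunk at a time (quicksort via std::sort), writing each
// sorted chunk out as a run file. The external two-way merge of the run
// files is omitted for brevity: it repeatedly merges pairs of sorted
// runs until a single fully sorted file remains.
void sortChunksByIndex(std::FILE* in, int k, std::size_t chunkTris) {
    std::vector<IndexedTri> buf(chunkTris);
    int run = 0;
    std::size_t n;
    while ((n = std::fread(buf.data(), sizeof(IndexedTri), chunkTris, in)) > 0) {
        std::sort(buf.begin(), buf.begin() + n,
                  [k](const IndexedTri& a, const IndexedTri& b) {
                      return a.v[k] < b.v[k];
                  });
        char name[32];
        std::snprintf(name, sizeof name, "run_%d.bin", run++);
        if (std::FILE* out = std::fopen(name, "wb")) {
            std::fwrite(buf.data(), sizeof(IndexedTri), n, out);
            std::fclose(out);
        }
    }
}

// Phase 2: with the triangles sorted by v[k], stream the vertex file
// once; each triangle is paired with the vertex at position v[k]. The
// emitted (triId, corner, vertex) records are later re-sorted by triId
// to assemble complete triangle-soup records.
void dereferencePass(std::FILE* sortedTris, std::FILE* vertexFile,
                     std::FILE* out, int k) {
    Vertex vert{};
    std::uint32_t pos = 0;  // index of the vertex currently in `vert`
    bool have = std::fread(&vert, sizeof vert, 1, vertexFile) == 1;
    IndexedTri t;
    while (have && std::fread(&t, sizeof t, 1, sortedTris) == 1) {
        while (have && pos < t.v[k]) {       // advance to the needed vertex
            have = std::fread(&vert, sizeof vert, 1, vertexFile) == 1;
            ++pos;
        }
        std::fwrite(&t.triId, sizeof t.triId, 1, out);
        std::fwrite(&k, sizeof k, 1, out);
        std::fwrite(&vert, sizeof vert, 1, out);
    }
}

Running this for k = 0, 1 and 2 fills in all three corners of every soup triangle.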
4. Octree Construction

Instead of performing uniform vertex clustering [1, 3, 6, 7, 8], we adopt a spatial octree structure for data organization, because the uniform clustering technique causes undesirable artifacts in the approximation even though it offers great efficiency [9]. At the same time, the octree structure is vital for handling massive datasets.

From the preceding data processing step, the entire triangle-soup file is loaded portion by portion into our octree. An octree is chosen because it eliminates computation time spent processing the empty space in a data model. The space is recursively divided into eight cubes until the tree is fully subdivided. The partitioning is adaptive: a node is only broken down into child nodes when it contains too many triangles. Each internal node stores its directory path, so that locating any node's file is easy; only the leaf nodes hold the filename of the partitioned triangle list.

Since the dataset cannot fit into main memory, the vertices in each leaf node are written into its end-node file. Each file is small and is kept in an organized directory structure, with each child node's directory contained in its parent's directory. Hence, tracking every end-node file is simpler and better organized.

Unlike other approaches [3], our octree structure does not replicate triangles: once a triangle has been stored in a previously visited node, it is not stored in any other node, so each boundary triangle is kept exactly once. One may object that this could create artifacts during later rendering; however, in our experiments such artifacts rarely appear.

Using the constructed octree as the spatial data structure makes our simplification easier and view-dependent rendering generally faster. The octree contains the whole world's information, and every detailed piece of the mesh is small and stored in an end-node file, so simplification can be performed easily. However, the octree needs to be revisited during the simplification process if more than one level of detail is demanded. A sketch of the node structure we assume is given below.
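A minimal sketch of such an adaptive octree node and its construction (the field names, triangle budget and directory convention are our assumptions, not taken from the paper):

#include <array>
#include <cstddef>
#include <memory>
#include <string>
#include <utility>
#include <vector>

struct Triangle { float v[3][3]; };           // three corners, by value
struct AABB { float min[3], max[3]; };        // axis-aligned node bounds

struct OctreeNode {
    AABB bounds{};
    std::string dirPath;    // internal node: its directory on disk
    std::string leafFile;   // leaf node: end-node file holding its triangles
    std::array<std::unique_ptr<OctreeNode>, 8> child;
    bool isLeaf() const { return !child[0]; }
};

// Octant of `b` that contains the triangle's centroid; each triangle is
// assigned to exactly one child, so boundary triangles are never replicated.
int octantOf(const AABB& b, const Triangle& t) {
    int oct = 0;
    for (int a = 0; a < 3; ++a) {
        float c = (t.v[0][a] + t.v[1][a] + t.v[2][a]) / 3.0f;
        if (c > 0.5f * (b.min[a] + b.max[a])) oct |= 1 << a;
    }
    return oct;
}

AABB childBounds(const AABB& b, int oct) {
    AABB r;
    for (int a = 0; a < 3; ++a) {
        float mid = 0.5f * (b.min[a] + b.max[a]);
        r.min[a] = (oct >> a & 1) ? mid : b.min[a];
        r.max[a] = (oct >> a & 1) ? b.max[a] : mid;
    }
    return r;
}

// Adaptive construction: a node is split only while it holds more than
// maxTris triangles. Leaves would flush their triangles to an end-node
// file inside the parent's directory; the disk write itself is elided.
void build(OctreeNode& node, std::vector<Triangle> tris, std::size_t maxTris) {
    if (tris.size() <= maxTris) {
        node.leafFile = node.dirPath + "/node.tri";   // hypothetical name
        return;
    }
    std::array<std::vector<Triangle>, 8> parts;
    for (const Triangle& t : tris) parts[octantOf(node.bounds, t)].push_back(t);
    for (int i = 0; i < 8; ++i) {
        node.child[i] = std::make_unique<OctreeNode>();
        node.child[i]->bounds = childBounds(node.bounds, i);
        node.child[i]->dirPath = node.dirPath + "/" + std::to_string(i);
        build(*node.child[i], std::move(parts[i]), maxTris);
    }
}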
5. Simplification

The main flow of the simplification process is similar to previous vertex clustering techniques; our idea is mostly inspired by the algorithm proposed by Lindstrom [7]. Like him, we collapse all the vertices in a cell into one optimal vertex using a quadric error metric [4]. However, we do not use the "triangle cluster" idea [6], in which a triangle is kept only if its three vertices fall in different regions. Instead, we preserve the boundary edges of each node, which makes stitching [2] unnecessary. In addition, we use the generalized quadric error metric [5] instead of the original quadric metric [4] to compute the representative vertex. Figure 2 illustrates the idea.

Figure 2. Vertex clustering performed on every leaf node (before and after)

As the data has been completely subdivided, every end node's triangle data is ready to be simplified. The steps are as below (a code sketch of this pass follows the list):
• Load in the data.
• For each input triangle:
  - Calculate the triangle's quadric Qt and add it into the node's quadric Qn.
  - For each edge: if it is not yet stored in the edge list, add it and initially mark it as a boundary edge; otherwise, mark it as a non-boundary edge.
• For each edge in the edge list: if it is a boundary edge, save it into the boundary edge list.
• Calculate the node's optimal vertex using the node's quadric Qn.
• For each input triangle: check every edge; if the edge is a boundary edge, add a triangle to the output triangle list, consisting of the two vertices of the boundary edge and the calculated optimal vertex.
• Write the output triangle list into a file named after the node's directory path and its level-of-detail number.
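A compact sketch of this per-node pass, under our own assumptions (geometry-only quadrics, and edges keyed by their sorted endpoint indices; none of the identifiers below come from the paper):

#include <array>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { std::uint32_t a, b, c; };    // indices into the node's vertices

using EdgeKey = std::pair<std::uint32_t, std::uint32_t>;
static EdgeKey edgeKey(std::uint32_t u, std::uint32_t v) {
    return u < v ? EdgeKey{u, v} : EdgeKey{v, u};
}

// Hypothetical: minimizes the node's accumulated quadric Qn (Section 5.1).
Vec3 solveOptimal();

// Simplify one end node: all interior vertices collapse to one optimal
// vertex, while boundary edges are preserved.
std::vector<std::array<Vec3, 3>> simplifyNode(const std::vector<Vec3>& verts,
                                              const std::vector<Tri>& tris) {
    std::map<EdgeKey, int> uses;           // edge -> number of incident tris
    for (const Tri& t : tris) {
        ++uses[edgeKey(t.a, t.b)];
        ++uses[edgeKey(t.b, t.c)];
        ++uses[edgeKey(t.c, t.a)];
        // (Each triangle's quadric Qt would be accumulated into Qn here.)
    }
    const Vec3 opt = solveOptimal();
    std::vector<std::array<Vec3, 3>> out;
    for (const auto& [e, count] : uses)
        if (count == 1)                    // boundary edge: keep it, fan to opt
            out.push_back({verts[e.first], verts[e.second], opt});
    return out;
}

An edge used by exactly one triangle of the node is treated as a boundary edge here, which matches the rule that boundary edges must survive the collapse.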
Repeating this simplification for every end-node file generates the first level of simplification (0.LOD) of the original input mesh. This level of detail is the finest mesh. To obtain other resolutions of the mesh, some additional work (Section 5.2) is required.
5.1. Generalized Quadric Error Metrics
The generalized quadric [5] improves on the original quadric error metric [4], which handles only geometric primitives (vertex positions) in mesh simplification. Unlike the original quadric, the generalized quadric cannot be built solely from the normal of a triangle's plane; instead, it needs two orthonormal unit vectors e1 and e2 to compute the error metric. Figure 3 shows where these unit vectors come from.
Figure 3. Orthonormal vectors e1 and e2 define the local frame with origin p for triangle T [5]
Consider the triangle T = (p, q, r) and assume that all properties are linearly interpolated over triangles. If the mesh has color attributes, then p = (px, py, pz, pr, pg, pb); if it has texture, then p = (px, py, pz, ps, pt). We compute e1 and e2 as:

e1 = (q - p) / ||q - p||

e2 = (r - p - (e1 · (r - p)) e1) / ||r - p - (e1 · (r - p)) e1||

The squared distance D^2 of an arbitrary point x from T is D^2(x) = x^T A x + 2 b^T x + c, where:

A = I - e1 e1^T - e2 e2^T

b = (p · e1) e1 + (p · e2) e2 - p

c = p · p - (p · e1)^2 - (p · e2)^2
Here A is a symmetric matrix (3x3 for geometry data; 5x5 for geometry with texture coordinates; 6x6 for geometry with normal or color data), b is a vector (a 3-vector for geometry data; a 5-vector for geometry with texture coordinates; a 6-vector for geometry with normal or color data), and c is a scalar. Solving Ax = -b gives the optimal vertex x together with its simplified surface attributes.
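As a minimal sketch of this computation for the geometry-only (3x3) case, assuming per-triangle quadrics are summed over a node before solving (the class and helper names are ours):

#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    double dot(Vec3 o) const { return x * o.x + y * o.y + z * o.z; }
    Vec3 normalized() const { double n = std::sqrt(dot(*this)); return {x / n, y / n, z / n}; }
};

// Quadric Q(x) = x^T A x + 2 b^T x + c, accumulated over triangles.
struct Quadric {
    double A[3][3] = {};
    Vec3 b{};
    double c = 0;
    void add(Vec3 p, Vec3 q, Vec3 r) {
        Vec3 e1 = (q - p).normalized();
        Vec3 d  = r - p;
        Vec3 e2 = (d - e1 * e1.dot(d)).normalized();
        double E1[3] = {e1.x, e1.y, e1.z}, E2[3] = {e2.x, e2.y, e2.z};
        for (int i = 0; i < 3; ++i)            // A += I - e1 e1^T - e2 e2^T
            for (int j = 0; j < 3; ++j)
                A[i][j] += (i == j ? 1.0 : 0.0) - E1[i] * E1[j] - E2[i] * E2[j];
        double pe1 = p.dot(e1), pe2 = p.dot(e2);
        // b += (p.e1) e1 + (p.e2) e2 - p
        b = {b.x + e1.x * pe1 + e2.x * pe2 - p.x,
             b.y + e1.y * pe1 + e2.y * pe2 - p.y,
             b.z + e1.z * pe1 + e2.z * pe2 - p.z};
        c += p.dot(p) - pe1 * pe1 - pe2 * pe2; // c += p.p - (p.e1)^2 - (p.e2)^2
    }
    // Solve A x = -b by Cramer's rule; fall back (e.g. to the cell
    // centroid) when A is near-singular.
    Vec3 optimal(Vec3 fallback) const {
        double det = A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
                   - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
                   + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]);
        if (std::fabs(det) < 1e-12) return fallback;
        double r[3] = {-b.x, -b.y, -b.z}, x[3];
        for (int k = 0; k < 3; ++k) {          // replace column k with r
            double M[3][3];
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                    M[i][j] = (j == k) ? r[i] : A[i][j];
            x[k] = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                  - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                  + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0])) / det;
        }
        return {x[0], x[1], x[2]};
    }
};

For attribute-preserving simplification the same pattern applies with 5- or 6-dimensional vectors and a general linear solver in place of Cramer's rule.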
5.2. Simplification on Simplified Mesh
After obtaining the first LOD (level 0), higher levels of detail (level > 0), which are coarser meshes, are produced by repeating the following steps until the desired level of detail is achieved. Simplification for every level of detail starts by examining the root node (see the sketch after the list):
• Initially set the node's simplified[level] status to true and examine all children of the current node:
  - If every child's simplified[level-1] status is true, get the simplified data from the children.
  - Else, set the node's simplified[level] status to false and stop checking the remaining children.
• If simplified[level] is true, simplify the data obtained from the children using our simplification algorithm, then save the result to a file (level.LOD).
• Else, repeat this algorithm recursively on the children.
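A recursive sketch of this bottom-up pass under our assumptions (the node layout and helper functions are hypothetical):

#include <cstddef>
#include <string>
#include <vector>

struct Node {
    std::vector<Node> children;     // empty for leaf (end) nodes
    std::vector<bool> simplified;   // simplified[level] completion flags
    std::string dirPath;
    bool isLeaf() const { return children.empty(); }
};

// Hypothetical helpers: load each child's level-(level-1) output, run the
// vertex clustering pass of Section 5 on it, and write "<dirPath>/<level>.LOD".
std::vector<float> gatherChildMeshes(const Node& n, int level);
void simplifyAndSave(Node& n, const std::vector<float>& mesh, int level);

// Build LOD `level` (> 0): a node may be simplified at this level only
// once every child carries a level-1 result; otherwise recurse deeper.
void buildLevel(Node& node, int level) {
    if (node.isLeaf()) return;      // leaves got 0.LOD in the first pass
    bool ready = true;
    for (const Node& c : node.children)
        if (c.simplified.size() <= std::size_t(level - 1) ||
            !c.simplified[level - 1]) { ready = false; break; }
    if (ready) {
        simplifyAndSave(node, gatherChildMeshes(node, level - 1), level);
        if (node.simplified.size() <= std::size_t(level))
            node.simplified.resize(std::size_t(level) + 1, false);
        node.simplified[level] = true;
    } else {
        for (Node& c : node.children) buildLevel(c, level);
    }
}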
6. View-Dependent Rendering and Refinement

The refinement and rendering processes run in two parallel threads. The refinement process uses a best-first search to find the active nodes; best-first search is a breadth-first search with heuristics added. This search lets the mesh be updated progressively, so popping effects are avoided, and it ensures that detail is paged in and added evenly over the visible mesh. We also control the frame rate using heuristics.

The refinement process begins by making the root node active. When more detail is needed, we use the higher-resolution mesh obtained from the end-node directory; conversely, we collapse a node when its detail is no longer required. Collapse and expand actions are applied only to the active nodes stored in the dynamic data structure in main memory.

During rendering, we test each node's bounds against the view-frustum planes; if a node is not visible, it is collapsed. If a node is visible, we compare a distance-based error threshold with the node's quadric error: if the quadric error is smaller, the node is collapsed; otherwise, it is expanded. A sketch of this decision is given below.
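A sketch of this per-node decision, with the frustum test and the distance-based threshold left as assumptions:

// Per-node refinement decision. The frustum test result, the stored
// quadric error and the distance-based threshold are assumptions that
// stand in for the engine's own implementations.
enum class Action { Expand, Collapse };

struct ActiveNode {
    float quadricError;   // error committed by drawing this node's LOD
    float distance;       // viewpoint-to-node distance
    bool  inFrustum;      // bounds vs. view-frustum test result
};

Action refine(const ActiveNode& n, float errorPerUnitDistance) {
    if (!n.inFrustum) return Action::Collapse;            // invisible: coarsen
    float threshold = errorPerUnitDistance * n.distance;  // farther => laxer
    return n.quadricError < threshold ? Action::Collapse : Action::Expand;
}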
7. Results
This framework successfully simplifies datasets larger than the main memory available on a low-cost personal computer, while preserving surface attributes, namely normal, color and texture details. It runs well in a real-time game application. Figures 4 and 5 show examples of the output from near and far viewing distances.
The simplification process is relatively fast, mainly because each level of the simplified mesh is obtained from the previously simplified level. The quality, however, is affected by the number of times the octree is subdivided: as simplification approaches the root node, the result becomes poorer, although it still preserves the shape of the original mesh (for example, the 642-triangle bunny still looks fine).
Figure 4. Simplified Bunny at different distances (a, b, c)
Figure 5. Near viewing of the Bunny: 69451 triangles (original); a) 18799 triangles, b) 1822 triangles, c) 642 triangles
8. Conclusion
Our work manages to simplify massive datasets of millions of polygons while preserving their surface attributes. To handle out-of-core datasets, we have devised a new vertex clustering algorithm for mesh simplification. Our approach is sufficient for running in a real-time, interactive game environment, where accuracy is not as crucial as in, for example, medical visualization. We have adopted the generalized quadric error metric [5], since the original quadric error metric cannot handle surface attributes; this error metric is robust and fairly accurate. To keep the data organized and easy to retrieve at run-time, an octree is used.
Future work includes prefetching to accelerate data paging and improving the on-disk data handling. Beyond that, we can explore generating better-quality simplified meshes to make the method usable in applications that need high accuracy, such as medical visualization. We can also extend this application to run in networked games.
9. Acknowledgements
This research work has been supported by IRPA grant 04-02-06-0048EA240. Many thanks to Dr. Peter Lindstrom and Dr. Martin Reddy for their valuable opinions on the latest issues in the level-of-detail field.
References
[1] Rossignac, J. and Borrel, P. 1993. Multi-Resolution 3D Approximations for Rendering Complex Scenes. In Modeling in Computer Graphics, Falcidieno, B. and Kunii, T. L., eds., Springer-Verlag, 455-465.
[2] Cignoni, P., Montani, C., Rocchini, C. and Scopigno, R. 2002. External Memory Management and Simplification of Huge Meshes. IEEE Transactions on Visualization and Computer Graphics, 9(4), 525-537.
[3] Correa, W. T. 2003. New Techniques for Out-of-Core Visualization of Large Datasets. Ph.D. Thesis, Princeton University.
[4] Garland, M. and Heckbert, P. S. 1997. Surface Simplification Using Quadric Error Metrics. In Proceedings of SIGGRAPH 97, Los Angeles, California, Whitted, T., ed., ACM Press, 209-216.
[5] Garland, M. and Heckbert, P. S. 1998. Simplifying Surfaces with Color and Texture Using Quadric Error Metrics. In IEEE Visualization '98, Ebert, D., Hagen, H. and Rushmeier, H., eds., 263-270.
[6] Lindstrom, P. and Silva, C. 2001. A Memory Insensitive Technique for Large Model Simplification. In IEEE Visualization 2001, San Diego, CA, 121-126.
[7] Lindstrom, P. 2000. Model Simplification using Image and Geometry-Based Metrics. Ph.D. Thesis, Georgia Institute of Technology.
[8] Low, K. L. and Tan, T. S. 1997. Model Simplification Using Vertex-Clustering. In 1997 ACM Symposium on Interactive 3D Graphics, Providence, Rhode Island, Cohen, M. and Zeltzer, D., eds., ACM SIGGRAPH, 75-82.
[9] Shaffer, E. and Garland, M. 2001. Efficient Adaptive Simplification of Massive Meshes. In 12th IEEE Visualization 2001 Conference (VIS 2001), San Diego, CA, 127-134.