
Informatica | Universiteit van Amsterdam
Bachelor Informatica
Volumetric Comparison of Simplified 3D
Models
Steven Klein
June 12, 2012
Supervisor: Robert Belleman (UvA)
Abstract—In the field of computer graphics, performance is often an issue when working with highly
detailed 3D models. Simplification methods have been developed to increase performance when
detail is not important, for example when models are far away or moving at high speed. The results
of these simplification methods depend on the types of input models, and many of these methods do
not have proper support for texture mapping. They also usually require manual tuning, taking up
valuable time and leaving room for human error. This thesis presents a method that can compare
different simplifications of a model automatically, making it possible to select optimal simplification
methods and settings for any model, without the need for any user input. This work also includes a
method that can handle texture mapping for any combination of simplification methods.
Table of Contents

TITLE PAGE
ABSTRACT
1. INTRODUCTION
   1.1 3D Models and Performance
   1.2 Level of Detail
   1.3 Simplification
   1.4 Voxels
   1.5 Research Goals
2. RELATED WORK
   2.1 Contribution of this thesis
3. METHODS AND IMPLEMENTATION
   3.1 Step one: Voxelization
   3.2 Step two: Morphological Operations
   3.3 Step three: Isosurface Extraction
   3.4 Step four: Simplification
   3.5 Step five: Comparison
   3.6 Step six: Texture Mapping
4. RESULTS
   4.1 Model simplification results
   4.2 Texture Simplification Results
5. DISCUSSION
   5.1 Texture mapping limitations
   5.2 Comparison algorithm limitations
   5.3 Simplification algorithm limitations
6. CONCLUSION
REFERENCES
APPENDIX - A - MODEL SIMPLIFICATION SCORES
1. Introduction
1.1 3D Models and Performance
Visualization of 3D objects and environments (rendering) is becoming increasingly important in
modern computer graphics. 3D rendering is used in many types of applications such as video games,
software used for CGI in movies, and 3D modeling applications such as architectural or industrial
design software.
In general, 3D objects used for rendering are represented by a polygonal model that usually
consists of two main components.
The first component is a mesh that describes the model's morphology through a collection of vertices
(points in 3D space) and edges between those vertices. Three edges together make up a triangle, and
these triangles form the polygons that describe the surface of the 3D model (see figure 1).
Figure 1: Example of a triangle mesh representing a dolphin (Source:
http://en.wikipedia.org/wiki/Polygon_mesh).
The second component is an image (often called a “texture”) that is mapped onto the model’s
polygons to provide color for its surface.
Many of the applications that use 3D rendering are interactive, and for optimal use they often
require visual continuity by means of a high frame rate, measured in frames per second (FPS): the
number of consecutive images a graphics device produces each second. Because graphics hardware can
only perform a limited number of calculations per second, the frame rate will generally be lower for
more complex scenes. Conversely, one way to increase the frame rate of an application is to reduce
the complexity of the scene to be rendered.
1.2 Level of Detail
When talking about reduction of complexity of features of a 3D scene, the term Level of Detail is
often used. Techniques to reduce complexity are often applied when the gain in performance is
worth the loss of detail, especially in cases where the loss of detail is less noticeable, for example
when the object is far away or moving at high speed. There are two main categories of Level of
Detail: Continuous Level of Detail and Discrete Level of Detail¹.
¹ http://en.wikipedia.org/wiki/Level_of_detail
In Continuous Level of Detail implementations, the simplifications happen gradually. This prevents
visual ‘popping’ of objects or effects, but usually at the cost of increased processing and/or memory
requirements. In Continuous Level of Detail it is important to be able to efficiently calculate any level
of simplification of the target during the execution of the application. One example of a solution to
this problem is Progressive Meshes [4], where the original model and a simplified version of that
model are loaded into memory and the algorithm interpolates between the two.
In Discrete Level of Detail implementations, switching between a more complex object and a more
simplified object will take place at specific intervals. For example, at a distance of 50 or more from
the camera, an original model could be replaced by a simplified model with half the original triangles
while at a distance of 100 or more that simplified model could be replaced by an even more
simplified model with only a quarter of the original triangles. The main advantage of this method is
that when the simplifications are generated beforehand, no extra processing is required during the
actual execution of the application.
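The distance-based switching described above can be sketched as a small lookup function. The thresholds and model names below are illustrative only, taken from the example distances of 50 and 100 in the text:

```python
def select_lod(distance, lods):
    """Return the least detailed model whose switch distance does
    not exceed the camera distance.

    lods: list of (switch_distance, model) pairs sorted by ascending
    switch distance; the entry at distance 0 is the original model.
    """
    chosen = lods[0][1]
    for switch_distance, model in lods:
        if distance >= switch_distance:
            chosen = model
        else:
            break
    return chosen

# Example from the text: full model up close, half the triangles
# beyond distance 50, a quarter beyond distance 100.
lods = [(0, "full"), (50, "half"), (100, "quarter")]
```

Because the simplified models are generated beforehand, this lookup is the only work done per frame.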
1.3 Simplification
A 3D scene can have many components, each requiring more processing to render as it becomes more
complex. As a result, there are several targets for simplification.
One target for simplification is textures, through a technique called mipmapping [11] (see figure 2). In
mipmapping, multiple versions of the same texture are loaded into memory, each with a different
resolution (often in powers of two). For example, a 64² texture could have 7 images loaded into
memory with resolutions of 64², 32², 16², 8², 4², 2² and 1².
Figure 2: Example of mipmapping. Full resolution image (left) with corresponding lower resolution images
(right) (Source: http://en.wikipedia.org/wiki/Mipmap).
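The mip chain from the example above (a 64² texture with 7 levels) can be generated in a few lines; the function name is our own, for illustration:

```python
def mip_resolutions(size):
    """List the resolutions of a full mipmap chain for a square
    power-of-two texture, halving each level down to 1x1."""
    levels = []
    while size >= 1:
        levels.append((size, size))
        size //= 2
    return levels
```

For a 64² texture this yields the 7 levels listed in the text.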
Another common target for simplification is 3D models. The goal of these simplifications is
generally to reduce the number of triangles in a model, thus reducing the amount of processing
required to render it, while minimizing topological changes to the model. However,
simplifying a 3D model is not a trivial task; finding a good simplification of a model can be very hard,
and what simplification is optimal can often be subjective and heavily dependent on context.
Another problem in 3D model simplification is textures. With the topology changes caused by
model simplification, some triangles will be removed, while others may be stretched, shrunk or
deformed. What happens to the textures on those triangles? With single colors attached to
vertices, it is relatively easy to get a new color by interpolating the colors of merged vertices.
However, with more complex textures and texture mappings it gets more complicated to preserve
textures during the simplification process.
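The simple per-vertex case mentioned above can be sketched as a plain interpolation of the merged vertices' colors; the function and the RGB tuples are illustrative, not part of any particular simplification method:

```python
def merge_vertex_colors(c1, c2, t=0.5):
    """Interpolate the colors of two merged vertices.
    t is the weight toward the second vertex (0.5 = plain average)."""
    return tuple((1 - t) * a + t * b for a, b in zip(c1, c2))
```

Merging a red and a blue vertex with equal weights, for instance, yields purple. No comparably simple rule exists for complex texture mappings, which is what motivates step six below.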
1.4 Voxels
Volumetric pixels (voxels) are the three-dimensional equivalent of pixels: cells that have volume,
usually arranged in a uniform grid (see figure 3).
Figure 3: Uniform grid of volumetric pixels (voxels) (Source:
http://developer.nvidia.com/book/export/html/179).
Working with voxels has a number of advantages: they are relatively easy to manipulate; they have
volume, meaning it is always known what is inside and what is outside of a model; and they are always
topologically consistent, with no holes, double walls or overlapping parts. Because of these
advantages, they are sometimes used as an intermediate representation of a model, for example to
more easily perform topology-changing operations.
There are downsides to using voxels as well. They generally use more memory than equivalent
polygonal models, most current rendering hardware and software are not designed to visualize
voxels, and voxels are very hard to animate. Because of these downsides they are rarely rendered
directly but instead converted to a polygonal model for rendering using an isosurface extraction
method, for example the Marching Cubes [5] or Marching Tetrahedra [9] algorithms.
1.5 Research Goals
There are already many existing methods that are able to simplify 3D models, each with their own
advantages and disadvantages. Some, for example, are good at simplifying objects with many small
details, while others are better suited for models consisting of many unconnected parts. Because of
this, it can often be a problem to decide which methods and settings are best suited for a model.
We believe the simplification process can be improved by implementing multiple simplification
methods side by side and running them with multiple settings. The results can then be compared and
the best simplification can be chosen. This however requires a method to automatically compare the
results. Therefore, the main goal of this thesis is to research objective measures to compare different
simplifications of a model.
Most of the existing simplification methods only have support for simple colors assigned to vertices
or no support for colors or textures at all. Therefore, the second goal of this thesis is to look into
possibilities to add support for texture mapping to the simplification process, preferably independent
of the simplification method and settings used.
2. Related work
Schroeder’s method [8] was one of the first 3D model simplification algorithms. It uses vertex
removal and re-triangulation to simplify a model. It classifies vertices into different categories based
on their features. It is a solid method and provided the basis for many later model simplification
methods. As van Kaick and Pedrini [6] showed, however, the metric it uses for selecting vertices to
remove (vertex distance to average plane) performs relatively poorly compared to newer metrics.
Garland and Heckbert's method [3] makes use of pair contractions. A pair contraction consists of
contracting two vertices into one new vertex, thus reducing the vertex and triangle count of the
model. They iteratively contract the vertex pair that has the lowest ‘cost’, meaning it introduces the
smallest change to the model. They calculate this cost by using quadric error matrices based on the
planes of the triangles that share these two vertices. When the target triangle count is reached, the
algorithm will stop and the resulting model will be a simplified version of the original model with a
specific number of triangles. Among the advantages of this method are that it does not have to
divide vertices into different categories and that it can modify topology by merging non-edge pairs
when necessary. According to van Kaick and Pedrini [6], the quadric metric it uses performs
relatively well compared to many other simplification metrics. It is, however, relatively complicated
to understand and implement, and when using non-edge contractions it can result in a mesh with
certain degeneracies, such as holes.
Nooruddin and Turk’s method [7] is basically a pre-processing step for Garland and Heckbert’s
method. First, they voxelize the original model using a parity count or ray stabbing algorithm.
Voxelizing the model in this way also repairs many degeneracies often present in polygonal models
such as holes, double walls or intersecting parts. After the voxelization step, one or more
morphological operations are performed (zero if the only goal is to repair the model or eliminate
interior details). One of the two morphological operations they use is the closing operation (dilation
followed by erosion) that can close small holes, tunnels or openings between two surfaces. The other
operation is an opening operation (erosion followed by dilation) that can remove small and/or thin
details. After these morphological operations, an isosurface is generated using the Marching Cubes
algorithm [5] resulting in a modified version of the original model. As a final step, Garland and
Heckbert’s method is then applied to this new model to reduce the amount of triangles of the model,
but without non-edge contraction (contracting two vertices that do not share an edge) to prevent
non-manifold meshes. The advantage of these added steps is that they can filter out many small
details before the actual simplification, making it possible to create an acceptable simplification
with fewer total triangles than many other methods. These extra steps, however, come at the price of
lower initial accuracy and a higher processing cost because of the two conversions and topological
changes, making the method less useful for models that need a lower degree of simplification.
Cignoni, Rocchini and Scopigno [1] designed a method to compare simplifications using the Hausdorff
Distance. They take sample points from the original model and for each of those points they find the
closest point to any surface of the simplified model. The distance between these two points is then
used as the error. The advantage of this method is that it measures the error of surfaces, which is
often what you want for Level of Detail purposes. However, when significant topological changes
occur during simplification this method has the potential to become unstable, for example when
multiple surfaces are close together and the algorithm finds a surface close to a point that belongs to
a different part of the model. The score it calculates is also hard to interpret in an absolute
sense, as the error it reports is a distance value that depends on the size of the model and on the
distances between different parts of the model.
2.1 Contribution of this thesis
The comparison method described above is based on the Hausdorff Distance. The results are
dependent on model geometry and features, causing the algorithm to become unstable in some
situations. The results are also difficult to interpret in an absolute context. The comparison method
presented in this thesis is based on volume, and the results are independent of model geometry and
features and are easily interpreted in both absolute and relative contexts.
The simplification methods described above only have support for simple colors assigned to vertices
or no support for colors or textures at all. This thesis presents a method to automatically re-sample
and re-map any colors or textures of the original model to the simplified model that is independent
of the type of texture mapping and simplification method used.
3. Methods and Implementation
As a testing framework for this thesis, two simplification algorithms were implemented: one from
Garland and Heckbert, the other from Nooruddin and Turk. These methods were chosen because their
techniques overlap substantially, which made them easier to implement side by side: for example, the
distance map from step two can also be used for the texture mapping in step six, and the
voxelization from step one can also be used for the comparison in step five.
The implementation of this testing framework is based on OGRE (Object-oriented Graphics
Rendering Engine²).
This section describes the different steps of the simplification pipeline (see figure 4).
Steps one through three consist of a modified version of Nooruddin and Turk’s algorithm and are
skipped when using the direct simplification setting.
Step two can be partially or fully skipped depending on the opening and closing settings.
Step four is a modified version of Garland and Heckbert’s algorithm.
Step five is the comparison algorithm designed to compare the different simplifications resulting
from the previous steps, by going through those steps multiple times with different settings and
comparing the simplified models to the original.
Step six is the algorithm designed to handle models with mapped textures.
Figure 4: Flowchart representation of the simplification pipeline: 1. voxelization, 2. morphological
operations, 3. isosurface extraction, 4. simplification (with a direct simplification path that
bypasses steps one through three), 5. comparison, 6. texture mapping.
² http://www.ogre3d.org
3.1 Step one: Voxelization
3.1.1 Method
The first step is to go from a polygonal representation of a model to a voxel representation (see
figure 5). Nooruddin and Turk used either a parity-count or a ray-stabbing approach, both from 13
different directions. They chose these methods because they can repair degeneracies in models.
However, when model repair is not required, a parity count algorithm with rays along the 3 main
axes is sufficient.
Figure 5: Polygonal representation of a teapot (background) and its corresponding voxel representation
(foreground) (Source: http://classes.dma.ucla.edu/Winter03/102/week3b.html).
The parity-count algorithm sends rays in three directions, parallel to the main axes. It then finds
and stores all intersections between the rays and the faces of the model. Any ray with an odd total
number of intersections is discarded. Any voxel that has an odd number of intersections along both
directions of any of its three corresponding rays is classified as internal; all other voxels are
classified as external (see figure 6).
Figure 6: For rays with an even number of intersections, all voxels between each pair of
intersections are set to filled (left); rays with an odd number of intersections are discarded
(right) (Source: [7]).
The quality of the resulting voxel model is mostly dependent on the resolution of the grid used in this
step. When high quality is desired, a voxel grid with a high resolution should be used; otherwise a
lower resolution can be used for faster simplification.
3.1.2 Pseudocode
Pseudocode for the z direction of the voxelization method:
Voxelize the polygonal model in the z direction
for x = 0 to xsize
for y = 0 to ysize
cast ray in z direction
for every relevant triangle
if ray intersects triangle
add position of intersection to sorted list
save intersection location for use in isosurface extraction step
if number of intersections is even
for every set of two intersections in the list (1 and 2, 3 and 4, etc)
set all voxels between intersections to filled
else
discard ray
This loop is repeated multiple times in different directions.
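The per-ray fill logic of the pseudocode above can be sketched as follows. This is a simplified illustration that assumes the ray's sorted intersection depths are already known from the ray-triangle tests, and that a voxel's index equals its center coordinate along the ray:

```python
def fill_ray(intersections, zsize):
    """Fill the voxels along a single ray using parity counting.
    intersections: depths along the ray where it crossed a face
    (from the ray-triangle tests); zsize: number of voxels along
    the ray. Returns a per-voxel filled flag, or None when the ray
    has an odd number of intersections and must be discarded."""
    if len(intersections) % 2 != 0:
        return None  # odd parity: the ray hit a degeneracy
    filled = [False] * zsize
    depths = sorted(intersections)
    # Pair up entry and exit points (1 and 2, 3 and 4, etc).
    for z_in, z_out in zip(depths[0::2], depths[1::2]):
        for z in range(zsize):
            if z_in <= z <= z_out:
                filled[z] = True
    return filled
```

A full voxelizer would run this for every (x, y) ray in each of the three axis directions.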
3.1.3 Implementation Remarks
A simple brute force approach (checking all triangles of the model for intersection for every ray,
regardless of proximity) is sufficient for smaller models if processing speed isn’t an issue. When
voxelizing larger models or when processing speed is important, optimizations can and should be
made. One possibility is to sort triangles with a tree structure (BSP, octree, etc) and only check
triangles in sections that the ray passes through for intersections. Using an optimized version of the
ray-triangle intersection function can give significant performance boosts as well.
3.2 Step two: Morphological Operations
3.2.1 Method
The next step is to perform morphological operations on the voxel data produced in step one. The
same method is used as in Nooruddin and Turk's work: a combination of erosion and dilation
operations. For these operations, a distance map (see figure 7) is calculated.
Nooruddin and Turk chose a 3D version of Danielsson's algorithm [2] to generate the distance
maps because it is well suited for this task. The general idea of this algorithm is to propagate
distances along the six main directions by iteratively checking, for each neighboring point, whether
that neighbor's closest known point is closer to the current point than the current point's own
closest known point. The results of this algorithm are not entirely precise, but the error margins
are acceptable for this task.
Figure 7: 2D example of a simple distance map. The white squares represent internal voxels. A dilation
operation with a distance of 1 changes all black squares (empty voxels) with the number 1 into white squares
(filled voxels) (Source: http://bytewrangler.blogspot.nl/2011/10/signed-distance-fields.html).
Erosion operations decrease the volume of a model while dilation operations increase the volume. A
close operation is a dilation operation followed by an erosion operation and is capable of closing
small holes, tunnels or open spaces between two surfaces. An open operation is an erosion
operation followed by a dilation operation and is capable of removing small and/or thin details. Both
of these operations use a distance threshold setting.
These open and close operations are used to get rid of certain features of a model, making it easier
to get a good simplification with fewer triangles in step four.
3.2.2 Pseudocode
Pseudocode for the initialization step and the loop in one direction for generating a distance map for
dilation:
Set initial values
for every voxel
storedvector = vector(0,0,0)
if voxel is filled
storedmagnitude = 0
else
storedmagnitude = inf
Danielsson's loop in positive and negative z direction
for x = 0 to xsize
for y = 0 to ysize
for z = 0 to (zsize - 1)
if magnitude(storedvector(x,y,z+1) + vector(0,0,1)) < storedmagnitude(x,y,z)
storedvector(x, y, z) = storedvector(x, y, z+1) + vector(0, 0, 1)
storedmagnitude(x,y,z) = magnitude(storedvector(x,y,z+1) + vector(0, 0, 1))
for z = zsize to 1
if magnitude(storedvector(x,y,z-1) + vector(0,0,-1)) < storedmagnitude(x,y,z)
storedvector(x, y, z) = storedvector(x, y, z-1) + vector(0, 0, -1)
storedmagnitude(x,y,z) = magnitude(storedvector(x,y,z-1) + vector(0, 0, -1))
In the full algorithm, this loop is repeated in all three directions and also examines voxels to the
sides. When generating the distance map for erosion, internal and external voxels are reversed.
Pseudocode for applying dilation and erosion operations with a certain threshold:
Dilation
for every voxel
if voxel is empty and magnitude stored in voxel < squared threshold
set voxel to filled
else
do nothing
Erosion
for every voxel
if voxel is filled and magnitude stored in voxel < squared threshold
set voxel to empty
else
do nothing
The erosion operation uses a distance map calculated with an inverted voxel grid.
3.2.3 Implementation Remarks
In the distance map, each voxel stores a vector and a magnitude. The vector points either to the
closest filled voxel (for an empty voxel) or to the closest empty voxel (for a filled voxel). The
magnitude is the squared length of the vector, to save unnecessary square root operations.
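As an illustration of how the stored squared magnitudes drive the open and close operations, the following sketch computes a brute-force squared-distance map on a small 2D grid. A real implementation would use Danielsson's propagation on 3D grids; the function names are our own, and the grid is assumed to contain both filled and empty cells:

```python
def squared_distance_map(grid, to_filled=True):
    """Brute-force squared distance from every cell to the nearest
    filled (or empty) cell. Illustrative only: Danielsson's
    propagation computes the same values far more efficiently."""
    targets = [(tx, ty) for tx, row in enumerate(grid)
               for ty, v in enumerate(row) if v == to_filled]
    return [[min((x - tx) ** 2 + (y - ty) ** 2 for tx, ty in targets)
             for y in range(len(grid[0]))]
            for x in range(len(grid))]

def dilate(grid, threshold):
    """Fill every empty cell whose squared distance to a filled
    cell is below the squared threshold."""
    d = squared_distance_map(grid, to_filled=True)
    return [[v or d[x][y] < threshold ** 2 for y, v in enumerate(row)]
            for x, row in enumerate(grid)]

def erode(grid, threshold):
    """Empty every filled cell whose squared distance to an empty
    cell is below the squared threshold (the distance map of the
    inverted grid, as in the pseudocode above)."""
    d = squared_distance_map(grid, to_filled=False)
    return [[v and not d[x][y] < threshold ** 2 for y, v in enumerate(row)]
            for x, row in enumerate(grid)]

def close_op(grid, threshold):
    """Close operation: dilation followed by erosion."""
    return erode(dilate(grid, threshold), threshold)

def open_op(grid, threshold):
    """Open operation: erosion followed by dilation."""
    return dilate(erode(grid, threshold), threshold)
```

Note how dilation followed by erosion with the same threshold returns an isolated voxel to its original extent, while features thinner than the threshold are removed by the open operation.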
3.3 Step three: Isosurface Extraction
3.3.1 Method
The third step is to obtain a polygonal representation of the volumetric model resulting from step
one and two. Nooruddin and Turk chose to use an extended version [10] of the original marching
cubes algorithm, using supersampling to obtain density values for the voxels.
Another option is to place the vertices produced by the Marching Cubes algorithm at the exact
intersection points with the surface of the original model, which can be gathered during the
voxelization in step one. This should provide similar, if not better, quality in the resulting model,
especially in areas where internal voxels have more than one adjacent external voxel or vice versa,
such as corners or thin areas. A third option is to use the Marching Tetrahedra algorithm [9]. It can
produce better results, but at the cost of more triangles per voxel. In many cases, however, the
Marching Cubes algorithm, when used correctly, produces results of sufficient quality (see figure 8).
Figure 8: Original untextured model (left) and the voxelized model's isosurface extracted with an
extended Marching Cubes algorithm on a 128³ voxel grid, with recalculated normal vectors (right).
One problem in this step is that any voxels changed in step two will not have positioning data,
which means the faces generated from these voxels will be placed at the default positions (the
center point between two voxels) and are thus subject to the aliasing often seen in standard
marching cubes implementations (see figure 9). Nooruddin and Turk suggested filtering out these
artifacts with a smoothing operation. However, we found that smoothing is not strictly necessary,
because these aliasing artifacts are mostly filtered out by the simplification performed in step four
(see figure 9).
Figure 9: Open operation with a distance threshold of 2 performed on the ninja model (left), and the
same model simplified to 25% of the original triangle count after the open operation (right). The
noise resulting from the default positions of voxels changed by the open operation is clearly
visible (left), but was filtered out by the simplification step without any smoothing operations (right).
3.3.2 Pseudocode
Pseudocode for generating a redundant grid (see Implementation Remarks) and for generating
triangles using the marching cubes algorithm:
Generate redundant grid
for every voxel
if voxel is filled
set corresponding bit of 8 surrounding points in redundant grid to 1
else
set corresponding bit of 8 surrounding points in redundant grid to 0
Generate triangles
for every point in redundant grid
look up triangles to generate in marching cubes lookup table using the bitmask
generate triangles based on looked up values
position vertices at exact intersection points gathered in the voxelization step
3.3.3 Implementation Remarks
It can be helpful to first generate a grid consisting of a bitmask of one byte for each center point
between eight voxels, containing the filled status of the eight surrounding voxels. A zero layer is also
added around the grid to make sure the resulting model will be closed on all sides. This grid makes it
easier to implement a straightforward algorithm to generate the triangles using the marching cubes
algorithm; enter the value of the bitmask into the Marching Cubes lookup table to directly retrieve
the triangles to add for that point.
Any voxels changed by the morphological operations in step two might not have intersection data for
all their vertices; in those cases the default halfway point is used for positioning.
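The redundant-grid construction described above can be sketched as follows. The bit ordering chosen here is arbitrary and would have to match the Marching Cubes lookup table used; the dictionary-based grid is purely for illustration:

```python
def corner_bitmasks(voxels):
    """Build the redundant grid described above: for every corner
    point between voxels, a one-byte bitmask whose bits record
    which of the eight surrounding voxels are filled. Voxels
    outside the grid count as empty, which provides the zero layer
    that keeps the extracted surface closed on all sides."""
    nx, ny, nz = len(voxels), len(voxels[0]), len(voxels[0][0])
    offsets = [(dx, dy, dz) for dx in (0, -1)
               for dy in (0, -1) for dz in (0, -1)]
    masks = {}
    for x in range(nx + 1):
        for y in range(ny + 1):
            for z in range(nz + 1):
                mask = 0
                for bit, (dx, dy, dz) in enumerate(offsets):
                    vx, vy, vz = x + dx, y + dy, z + dz
                    if (0 <= vx < nx and 0 <= vy < ny and 0 <= vz < nz
                            and voxels[vx][vy][vz]):
                        mask |= 1 << bit
                masks[(x, y, z)] = mask
    return masks
```

Each mask would then be entered directly into the Marching Cubes lookup table to retrieve the triangles to generate for that point.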
3.4 Step four: Simplification
3.4.1 Method
The fourth step is simplification of the polygonal model.
The input model is either the original model when using the direct simplification setting or the model
resulting from steps one through three. The algorithm used is that of Garland and Heckbert with the
same modification Nooruddin and Turk used: no non-edge contractions. The main reason to disallow
non-edge contraction is to ensure an output model without any degeneracies.
This algorithm uses pair contractions (see figure 10) that contract two vertices into one new vertex,
thus reducing the vertex count and triangle count of the model. It iteratively contracts the vertex
pair with the lowest 'cost', meaning the pair whose contraction introduces the smallest change to the
model. This cost is calculated using quadric error matrices based on the planes of the triangles that
contain one or both vertices. When the target triangle count is reached, the algorithm stops, and the
resulting model is a simplified version of the original with the specified number of triangles.
Figure 10: Model simplification using vertex contraction (Source: [3]).
3.4.2 Pseudocode
Pseudocode for calculating the error matrices, creating the pairs and removing pairs (thus merging
two vertices):
Calculate error matrices
for every vertex
calculate error matrix based on planes of adjacent triangles
Create pairs
for every triangle with vertices v1,v2,v3
create vertex pair (v1, v2), (v2, v3) and (v1, v3) and calculate costs and position
Remove pairs
while currentTriangles < targetTriangles
find vertex pair with lowest cost and remove it
function removepair(v1, v2)
reposition v1 to new position
remove triangles with edge v1, v2
transfer indices from v2 to v1
transfer pairs from v2 to v1
delete v2
update error matrices of the vertices of triangles containing v1
update matrices, positions and costs of pairs involving updated vertices
end function
3.4.3 Implementation Remarks
Garland and Heckbert recommend using a heap for selecting the lowest cost pairs to remove. It is not
strictly necessary to implement a heap, but it does significantly reduce the processing time for larger
models by making the selection of the vertex pair with the lowest cost more efficient.
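One way to combine a heap with the cost updates performed by removepair is lazy invalidation: updated pairs are pushed again with their new cost, and stale entries are skipped on pop. This is a sketch of that idea, not Garland and Heckbert's own data structure:

```python
import heapq

class PairQueue:
    """Min-heap of candidate contractions with lazy invalidation.
    Keeps selection of the lowest-cost pair at O(log n) instead of
    a linear scan over all pairs."""

    def __init__(self):
        self._heap = []
        self._current_cost = {}

    def push(self, cost, pair):
        # Pushing an existing pair again records its new cost; the
        # old heap entry becomes stale and is skipped on pop.
        self._current_cost[pair] = cost
        heapq.heappush(self._heap, (cost, pair))

    def pop(self):
        while self._heap:
            cost, pair = heapq.heappop(self._heap)
            # Only return entries whose cost is still current.
            if self._current_cost.get(pair) == cost:
                del self._current_cost[pair]
                return cost, pair
        raise IndexError("no pairs left")
```

The simplification loop would pop the cheapest pair, contract it, and re-push every pair whose cost changed as a result.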
3.5 Step five: Comparison
3.5.1 Method
The fifth step is comparing the different simplifications of a model and selecting the best one. This
method compares the differences in volume, thus giving a relatively high weight to the overall shape
of the model and less to smaller details. This works well for simplifying models that are far away from
the viewer or moving at high speed, but might be less suited for stationary models close to the
viewer where small details might be important. If small details are important, it should be possible to
assign different weights to different parts of the model depending on their features and position.
First, the original model and the different simplified models are all voxelized using the same
voxelization method used in step one.
The algorithm will then perform a 3D image subtraction with the original model and one of the
simplified models as the input. It will compare each voxel of the original model to the corresponding
voxel of the simplified model. If both voxels are filled, they will be classified as correct. If one of the
two voxels is filled while the other is empty, they will be classified as incorrect (see figure 11). Voxels
that are empty in both the original and the simplified model are ignored. The score is the number of
correct voxels divided by the total number of correct and incorrect voxels.
Figure 11: Illustration of the method used to compare models. In this example, the score would be 3 out of 5 or
60%.
After the scores are calculated for all the simplified models, the scores will be compared and the best
simplification will be selected.
3.5.2 Pseudocode
Pseudocode for voxelization of a model in one direction and for scoring a model:
Voxelize the polygonal model in the z direction
    for x = 0 to xsize
        for y = 0 to ysize
            cast ray in z direction
            for every relevant triangle
                if ray intersects triangle
                    add position of intersection to sorted list
            if number of intersections is even
                for every set of two intersections
                    set all voxels between intersections to filled
            else
                discard ray

Score a model
    for every voxel
        if original model voxel == 0 and simplified model voxel == 0
            do nothing
        if original model voxel == 1 and simplified model voxel == 0
            incorrect += 1
        if original model voxel == 0 and simplified model voxel == 1
            incorrect += 1
        if original model voxel == 1 and simplified model voxel == 1
            correct += 1
    score = correct / (correct + incorrect)
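On boolean voxel grids the scoring loop collapses to a few bitwise operations. A sketch in Python with NumPy, assuming both models have already been voxelized into grids of identical shape:

```python
import numpy as np

def volume_score(original, simplified):
    """Fraction of filled voxels that agree between the two grids.

    Voxels empty in both grids are ignored; the score is
    correct / (correct + incorrect), as defined in section 3.5.1.
    """
    original = np.asarray(original, dtype=bool)
    simplified = np.asarray(simplified, dtype=bool)
    correct = np.count_nonzero(original & simplified)    # filled in both
    incorrect = np.count_nonzero(original ^ simplified)  # filled in exactly one
    return correct / (correct + incorrect)

# The five-voxel example of figure 11: three matches, two mismatches.
a = np.array([1, 1, 1, 1, 0], dtype=bool)
b = np.array([1, 1, 1, 0, 1], dtype=bool)
print(volume_score(a, b))  # 0.6
```

The same function works unchanged on full 3D grids, since the bitwise operators and `count_nonzero` are shape-agnostic.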
3.5.3 Implementation Remarks
To increase the precision of the comparison, the resolution of the grid into which the models are
voxelized can be increased, at the cost of longer processing time. See the results section for more
details on the relation between grid size and precision.
Care should be taken that both models have the same orientation and scale.
3.6 Step six: Texture Mapping
3.6.1 Method
After selecting the optimal simplification, the last step is re-sampling the original textures and
mapping them to the simplified model.
A three-dimensional grid is generated using a method similar to the voxelization method in step one,
with two differences: only the voxel closest to each intersection is filled, instead of all voxels between
two intersections; and each voxel is assigned a color value sampled from the texture at the
intersection, instead of a simple empty-or-filled value.
A distance map is then calculated for the resulting color grid using Danielsson’s algorithm. For every
point in the grid without a color value assigned to it, the closest point with a color value assigned to it
is calculated. The empty points are then filled with the color of their closest filled point.
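The nearest-color fill can be sketched as follows. This brute-force version is O(n·m) and only suitable for small grids; Danielsson's algorithm [2] computes the same nearest-site mapping in time roughly linear in the number of voxels. The function and array layout are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def fill_nearest(colors, filled):
    """Fill empty grid points with the color of their nearest colored point.

    colors: (X, Y, Z, 3) RGB grid; filled: boolean mask of colored voxels.
    """
    coords = np.argwhere(filled)                # positions of all colored voxels
    out = colors.copy()
    for p in np.argwhere(~filled):              # every empty voxel
        d2 = ((coords - p) ** 2).sum(axis=1)    # squared distances to all sites
        out[tuple(p)] = colors[tuple(coords[d2.argmin()])]
    return out

grid = np.zeros((4, 4, 4, 3))
mask = np.zeros((4, 4, 4), dtype=bool)
grid[0, 0, 0] = (255, 0, 0)                     # a single red voxel
mask[0, 0, 0] = True
out = fill_nearest(grid, mask)
# With one colored voxel, every grid point inherits its color.
```

Replacing the inner loop with a proper distance transform is what makes the method practical at 128³ resolution.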
The texture image is then filled by sampling the colors from the color grid for each pixel of each
triangle in the new texture. Finally the texture coordinates are added to the simplified model and
vertices are duplicated where necessary.
3.6.2 Pseudocode
Pseudocode for generating the color grid in the z direction and generating the texture for the
simplified model:
Generate the color grid in the z direction
    for x = 0 to xsize
        for y = 0 to ysize
            cast ray in z direction
            for every relevant triangle
                if ray intersects triangle
                    sample surface color at intersection point from texture using bi-linear filtering
                    store it in the nearest voxel in the color grid

Generate texture
    calculate texture size and triangle positions
    for each triangle in texture file
        for each pixel in triangle
            calculate position of corresponding 3D point
            sample color value from color grid using calculated position and tri-linear filtering
            assign color to pixel in the texture
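The tri-linear lookup in the last step blends the eight voxels surrounding a continuous 3D position. A minimal sketch (illustrative names; positions are assumed to be in voxel units):

```python
import numpy as np

def trilinear_sample(grid, pos):
    """Sample a (X, Y, Z, 3) color grid at continuous position (x, y, z)."""
    p0 = np.floor(pos).astype(int)
    p0 = np.clip(p0, 0, np.array(grid.shape[:3]) - 2)  # keep p0 + 1 in bounds
    f = pos - p0                                       # fractional part per axis
    color = 0.0
    for dx in (0, 1):                                  # blend the 8 corner voxels
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                color = color + w * grid[p0[0] + dx, p0[1] + dy, p0[2] + dz]
    return color

g = np.zeros((2, 2, 2, 3))
g[1, :, :] = (255, 0, 0)   # red on the x = 1 face, black on x = 0
print(trilinear_sample(g, np.array([0.5, 0.0, 0.0])))  # halfway: [127.5 0. 0.]
```

The bi-linear sampling of the source texture in the first step is the same idea restricted to two axes.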
3.6.3 Implementation Remarks
The positioning of the triangles in the texture image can be done in different ways. An easy and
straightforward solution is to position the triangles from the top left to the bottom right, looping
back to the left before the maximum width is reached. However, using this method means that the
triangles in the texture file are unconnected. Depending on the rendering method used, it is likely
that most or all vertices have to be duplicated, resulting in a drop in performance. Using a more
sophisticated unwrapping method will likely give better results.
4. Results
This section describes the results of a number of test cases designed to test the methods described in
this thesis. For all these test cases, 128³ grids were used for both the voxelization in step one and the
texturing in step six, while a 256³ grid was used for the comparison in step five, unless noted
otherwise.
4.1 Model simplification results
Three models, Ninja (see figure 16), Robot (see figure 17) and Razor (see figure 18), were simplified
using a variety of settings and degrees of simplification. Each model was simplified with five different
settings:

- Direct simplification ("Direct")
- Simplification after voxelization and isosurface extraction without open and close operations
  ("Open 0, Close 0")
- Simplification after voxelization and isosurface extraction with a close distance of 1 ("Open 0,
  Close 1")
- Simplification after voxelization and isosurface extraction with an open distance of 1 ("Open 1,
  Close 0")
- Simplification after voxelization and isosurface extraction with both an open and a close distance
  of 1 ("Open 1, Close 1")

For each of these five settings, 100 simplifications were made with a triangle count ranging from 1% to
100% of the original number of triangles. These simplifications were then compared to the original
model and a score was calculated for each of them. The scores of these 15 series of simplifications (3
models with 5 settings each) are shown in figures 12 through 14. Visual results are shown in figures 16
through 18.
Figure 12: Scores of the Ninja simplifications. Original Ninja model consists of 1008 triangles.
Figure 13: Scores of the Robot simplifications. Original Robot model consists of 308 triangles.
Figure 14: Scores of the Razor simplifications. Original Razor model consists of 468 triangles.
For the direct simplifications of the Ninja model the scores were also calculated with different grid
sizes in the comparison step to test the accuracy of the comparison algorithm. Grid sizes of 64³, 128³,
256³ and 512³ were used, and the resulting differences between the three smaller grids and the 512³
grid are shown in figure 15. As can be seen in the graph, the difference in score between a 64³ grid
and a 512³ grid is generally under 1% (average absolute difference of 0.43%). The average difference
between the 512³ scores and the 256³ scores was 0.07%. The chaotic behavior of the graph is mostly
caused by the inherent randomness of the error of the comparison.
Figure 15: Error of the comparison algorithm on the Ninja model with smaller grid sizes, using the 512³
grid comparison as the 'true' score. The expression used here is 'ABS(((true score - calculated score) /
true score) * 100%)'.
A more elaborate score comparison is shown in tables 1 through 3 in Appendix A. These tables show
that when simplifying only to a small degree, Garland and Heckbert's algorithm obtains better results.
When simplifying to a larger degree, however, Nooruddin and Turk's algorithm becomes more
competitive and, for these three models, wins with one or more of the distance threshold settings.
These differences are possibly due to the errors introduced by voxelizing the model and extracting an
isosurface, which often yields a model with over 30,000 triangles, before simplifying it back to the
original triangle count. As both algorithms simplify the model further, these initial errors gradually
become less important compared to the newly introduced errors, while Nooruddin and Turk's
algorithm may gain an edge from its morphological operations, or simply from luck, since it is run
more than once on models that are slightly different each time.
4.2 Texture Simplification Results
For all three models, the original model, the 50%, 25% and 10% direct simplifications, and the 10%
simplification with the highest score among all settings were retextured and visualized. Screenshots
of these models and the original model are shown in figures 16 through 18.
The differences between the scores of the two algorithms and the different distance thresholds for
the open and close operations are relatively small, most likely because both algorithms use the same
method for triangle count reduction and the test models do not have many small details.
As can be seen in figures 16 through 18, retexturing the model comes with a moderate loss of texture
quality, because the grid used for the retexturing has a finite resolution. Increasing the resolution of
the grid can increase the quality of the resulting textures, but doing so is often not necessary or even
desired: when the model is far away from the observer or moving at high speed, which is often the
case when employing level of detail, the quality should in most cases be sufficient. Moreover, when
employing level of detail, mipmapping is usually used in combination with geometric simplification,
resulting in a similar drop in texture resolution and quality anyway.
Figure 16: Visual simplification results of the Ninja model.
Figure 17: Visual simplification results of the Robot model.
Figure 18: Visual simplification results of the Razor model.
5. Discussion
This section discusses a number of shortcomings of the current versions of these techniques and
identifies areas for future research.
5.1 Texture mapping limitations
The current method for sampling and mapping the textures onto the simplified models is simple and
straightforward, but has its limitations. One problem can be bleeding or disappearance of textures
near heavily simplified locations, especially where multiple surfaces meet. The method can likely be
improved beyond the simple nearest-neighbor function, for example by considering the surface
normals or by using different filtering methods.
The layout of the texture file used, while simple to implement, has its disadvantages. Because more
than 50% of the space is empty, the texture image is larger than it needs to be. Also, because the
triangles are all disjoint, all the vertices of the model need to be duplicated instead of just those at
the seams, resulting in a possible performance drop and an increase in data size. A proper
unwrapping algorithm will likely give better results.
5.2 Comparison algorithm limitations
The current comparison algorithm, while effective, also has its limitations. The current version
weighs all voxels equally. This works well for solid models far away from the viewer, but is less
effective when the model is near the observer and small details might be just as important as the
general shape. Also, when working with models that have very thin surfaces, these surfaces might
get a relatively small weight compared to their importance for the visual shape of the model. This
method might be improved by assigning different weights to voxels, for example by weighing thin
surfaces more heavily.
5.3 Simplification algorithm limitations
Currently the algorithm uses a set of user-defined settings for the different simplifications. It might
be possible to make the search for optimal settings and methods more efficient by intelligently
choosing which simplification to try next based on statistical methods. For example, a regression or
cluster analysis might find a better order in which to perform the simplifications, avoiding the need to
run every possible combination of settings, while a technique such as least squares fitting might
predict optimal values for numerical settings by fitting a function to a small number of previous
results and finding its optimum. It might also be worth researching whether a statistical link can be
found between features of a model and the algorithm and settings that are most successful in
simplifying that model.
6. Conclusion
The main goal of this thesis was to research and develop a new method that can compare different
simplifications of the same model.
The comparison method presented in this thesis can be used to objectively and automatically
compare simplifications based on their volume and is easy to implement and independent of the
simplification method used. The resulting score is stable (independent of model geometry) and easily
interpreted in both absolute and relative contexts.
The second goal of this thesis was to research and develop a method that can preserve textures of
the original model during the simplification process and map them to the simplified version of the
model.
The texture mapping method presented in this thesis can be used to transfer the textures of any
original model to any simplified version of that model, regardless of the method used for the
simplification and without requiring user input. This method can produce results of high quality
depending on the degree of simplification and the features of the model.
References
[1] P. Cignoni, C. Rocchini, and R. Scopigno, "Metro: measuring error on simplified surfaces,"
    Computer Graphics Forum, vol. 17, no. 2, pp. 167-174, 1998.
[2] P. E. Danielsson, "Euclidean distance mapping," Computer Graphics and Image Processing,
    vol. 14, no. 3, pp. 227-248, 1980.
[3] M. Garland and P. S. Heckbert, "Surface simplification using quadric error metrics,"
    Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques,
    pp. 209-216, 1997.
[4] H. Hoppe, "Progressive meshes," Proceedings of the 23rd Annual Conference on Computer
    Graphics and Interactive Techniques, pp. 99-108, 1996.
[5] W. E. Lorensen and H. E. Cline, "Marching cubes: A high resolution 3D surface construction
    algorithm," ACM SIGGRAPH Computer Graphics, vol. 21, no. 4, pp. 163-169, 1987.
[6] O. Matias van Kaick and H. Pedrini, "A comparative evaluation of metrics for fast mesh
    simplification," Computer Graphics Forum, vol. 25, no. 2, pp. 197-210, 2006.
[7] F. S. Nooruddin and G. Turk, "Simplification and repair of polygonal models using volumetric
    techniques," IEEE Transactions on Visualization and Computer Graphics, vol. 9, no. 2,
    pp. 191-205, 2003.
[8] W. J. Schroeder, J. A. Zarge, and W. E. Lorensen, "Decimation of triangle meshes," Computer
    Graphics (New York: Association for Computing Machinery), vol. 26, p. 65, 1992.
[9] G. M. Treece, R. W. Prager, and A. H. Gee, "Regularised marching tetrahedra: improved
    iso-surface extraction," Computers & Graphics, vol. 23, no. 4, pp. 583-598, 1999.
[10] A. Van Gelder and J. Wilhelms, "Topological considerations in isosurface generation," ACM
    Transactions on Graphics (TOG), vol. 13, no. 4, pp. 337-375, 1994.
[11] L. Williams, "Pyramidal parametrics," ACM SIGGRAPH Computer Graphics, vol. 17, no. 3,
    pp. 1-11, 1983.
Appendix - A - Model simplification scores
This appendix contains the tables with the scores of the different simplifications of the different
models. The percentages in the top row are the triangle counts of the simplified models compared to
the original model. The descriptions in the left column give the simplification method and settings
used: "Direct" for Garland and Heckbert's algorithm; the others give the open and close distance
thresholds for Nooruddin and Turk's algorithm. For each column, the best score is highlighted.
Table 1. Simplification scores for the Ninja model.

Ninja              100%        50%         25%         10%
Direct             0.998266*   0.969490*   0.920938*   0.771582
Open 0, Close 0    0.977192    0.957653    0.907604    0.796240*
Open 1, Close 0    0.947508    0.931011    0.896444    0.795085
Open 0, Close 1    0.962614    0.940487    0.895553    0.780021
Open 2, Close 0    0.904944    0.891585    0.856948    0.749882
Open 0, Close 2    0.940116    0.919106    0.880777    0.783427
Open 3, Close 0    0.850237    0.839216    0.811178    0.739597
Open 0, Close 3    0.919807    0.902564    0.869413    0.788805
Open 1, Close 1    0.939558    0.922346    0.883450    0.769438
Open 2, Close 1    0.904736    0.889928    0.854787    0.767663
Open 1, Close 2    0.928973    0.912196    0.873522    0.770111
Open 3, Close 1    0.862398    0.848983    0.822891    0.747031
Open 1, Close 3    0.908380    0.894682    0.864446    0.782789
Open 2, Close 2    0.906710    0.889387    0.844503    0.760602
Open 3, Close 2    0.861841    0.850820    0.831854    0.752946
Open 2, Close 3    0.892511    0.875070    0.845534    0.749785
Open 3, Close 3    0.857747    0.846262    0.818445    0.736552
Table 2. Simplification scores for the Razor model.

Razor              100%        50%         25%         10%
Direct             0.997386*   0.966300*   0.874210*   0.696183
Open 0, Close 0    0.974746    0.947681    0.851841    0.726145
Open 1, Close 0    0.933205    0.905207    0.837880    0.676573
Open 0, Close 1    0.966693    0.943135    0.856541    0.702109
Open 2, Close 0    0.880782    0.856688    0.782611    0.655220
Open 0, Close 2    0.947667    0.920644    0.866642    0.729863
Open 3, Close 0    0.821891    0.803136    0.750939    0.603007
Open 0, Close 3    0.927377    0.900295    0.853632    0.714423
Open 1, Close 1    0.930478    0.903350    0.830923    0.698532
Open 2, Close 1    0.881650    0.860150    0.788799    0.655792
Open 1, Close 2    0.918578    0.898764    0.839448    0.689871
Open 3, Close 1    0.829196    0.808534    0.766450    0.647023
Open 1, Close 3    0.905886    0.882435    0.817666    0.734177*
Open 2, Close 2    0.874099    0.854024    0.801202    0.712591
Open 3, Close 2    0.824348    0.811742    0.775383    0.643503
Open 2, Close 3    0.864570    0.844283    0.804154    0.705691
Open 3, Close 3    0.825565    0.809076    0.764786    0.640510
Table 3. Simplification scores for the Robot model.

Robot              100%        50%         25%         10%
Direct             0.997390*   0.912476*   0.665824    0.303000
Open 0, Close 0    0.962506    0.886549    0.661981    0.306989
Open 1, Close 0    0.924605    0.832638    0.623137    0.282970
Open 0, Close 1    0.959690    0.882289    0.697403*   0.339778
Open 2, Close 0    0.868548    0.779822    0.597258    0.299524
Open 0, Close 2    0.945208    0.869428    0.655908    0.312924
Open 3, Close 0    0.763477    0.687659    0.533567    0.298234
Open 0, Close 3    0.920699    0.846546    0.664482    0.304314
Open 1, Close 1    0.921031    0.841168    0.634067    0.327397
Open 2, Close 1    0.870706    0.789585    0.613656    0.303543
Open 1, Close 2    0.912442    0.833261    0.644835    0.343016*
Open 3, Close 1    0.763164    0.688356    0.558334    0.325843
Open 1, Close 3    0.884730    0.819142    0.642711    0.299923
Open 2, Close 2    0.858810    0.779254    0.588262    0.313930
Open 3, Close 2    0.766728    0.694659    0.565088    0.265261
Open 2, Close 3    0.850240    0.770956    0.586613    0.327949
Open 3, Close 3    0.768121    0.694568    0.571247    0.309442