Chinese Journal of Electronics
Vol.18, No.2, Apr. 2009
MATE: A Visual Based 3D Shape Descriptor∗
LENG Biao1 , QIN Zheng1,2 , CAO Xiaoman2 , WEI Tao1 and ZHANG Zhuxi3
(1.Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China)
(2.School of Software, Tsinghua University, Beijing 100084, China)
(3.General Logistics Research Institute, Beijing 100071, China)
Abstract — Since 3D models have been widely applied
in many research areas, techniques for content-based
3D model retrieval have become necessary. In this paper, a
novel visual based 3D shape descriptor called MATE is
proposed. A modified Principal component analysis (PCA)
method for model normalization is presented first. Secondly, a new Adjacent angle distance Fourier (AADF) algorithm is proposed. Then an original two-viewed Dbuffer
method is presented to extract characteristics of projected
images. Finally, based on the modified PCA method, the
shape descriptor MATE is proposed by combining AADF,
Tchebichef and the two-viewed Dbuffer. Experimental results
show that the MATE descriptor provides better retrieval
performance than the best current descriptors.
Key words — 3D model retrieval, Shape descriptor,
Visual similarity.
I. Introduction
As a hot research area in computer graphics and multimedia, content-based 3D model retrieval has attracted a large
amount of research in recent years, and this promising technique has been applied in many fields, such as computer-aided
design[1] and bioinformatics[2] .
Until now, numerous shape descriptors have been proposed
to capture different characteristics of 3D models, and they
are generally classified into geometry-based descriptors and
shape-based descriptors. Geometry-based descriptors match
3D models according to the geometric information and distribution. Osada[3] proposed a method called shape distribution
for computing shape signatures of 3D models. Funkhouser[4]
utilized spherical harmonics to compute discriminating similarity measures. Kazhdan[5] presented a novel algorithm for
matching 3D models that factors the shape matching equation as the disjoint outer product of anisotropy and geometric
comparisons. Shape-based descriptors distinguish 3D models
by taking the projected images into account. Chen[6] proposed
a novel approach that matches 3D models using their visual
similarities, which are measured with image difference in light
fields. Pu[7] introduced an approach to retrieve the desired 3D
models by measuring the similarity between the user’s sketches
and 2D orthogonal views. For the state-of-the-art reviews in
content-based 3D model retrieval, please refer to Refs.[8–11].
Early research in 3D model retrieval focused on exploring various shape descriptors, hoping to find the "best"
one to represent 3D objects. Shilane[12] compared 12 different
shape descriptors on the publicly available, well-recognized 3D model database, the Princeton shape benchmark
(PSB), and the Light field descriptor (LFD)[6] was declared the
best. Then Vranic[13] proposed a composite shape descriptor,
DESIRE, and the experimental results showed that the proposed hybrid descriptor outperformed LFD. Therefore, LFD
and DESIRE are regarded as the best shape descriptors.
In this paper, a new shape descriptor for 3D model retrieval
is proposed, with the essential idea that if two models belong
to the same class, they should look similar from the main perspectives. In order to reduce the number of projected images,
a modified Principal component analysis (PCA) method is first presented for model normalization. Secondly, an original
Adjacent angle distance Fourier (AADF) algorithm is proposed, which is more appropriate for contour feature extraction from the projected images. Then an original two-viewed
Dbuffer method is investigated to obtain attributes of grey-scale depth images covering both contour and region aspects. Finally, based on the modified PCA method, a novel composite
visual based 3D shape descriptor MATE is presented by concatenating the contour-based descriptor AADF, the region-based descriptor Tchebichef and the two-viewed Dbuffer descriptor. Compared with several shape descriptors, the experimental results show that the proposed descriptor MATE
achieves better retrieval effectiveness than the others.
II. Modified PCA
Because the scale, rotation and orientation of original models are quite different from each other, the procedure of model
normalization is a must for shape-based 3D model retrieval.
In Ref.[7], Pu proposed the Maximum normal distribution
(MND) method for model normalization. However, it is suitable only for CAD models, since CAD objects consist mainly of
several hundred meshes. On the contrary, multimedia models
are composed of thousands of meshes and their structure is unpredictable.
∗ Manuscript Received Nov. 2007; Accepted Nov. 2008. This work is supported by the National Grand Fundamental Research 973
Program of China (No.2004CB719401), and the National Research Foundation for the Doctoral Program of Higher Education of China
(No.20060003060).
In order to find an appropriate canonical coordinate system, the PCA method[8] is regarded as a prominent approach.
It estimates the pose of a 3D model and produces an affine
transformation. However, this method is not effective at solving the model normalization problem, as it only takes the vertex spatial positions into account. 3D models are composed
of thousands of meshes, and any two of them are quite different from each other. Vertex spatial position is just one of
the essential elements; other factors also affect the model normalization, such as the linear length between neighboring vertices, the triangular area associated with three adjacent
vertices, and so forth. Furthermore, these factors are also related
to the vertex spatial position. Therefore, the triangular area
and linear length should be applied in the process of model
normalization.
Each model is given by a set of vertices vi in 3D space. V
stands for vertex set, and n is the total number of vertices.
Let mv be the mean of V .
The weighting factor $w_{a_i}$ is proportional to the triangle area associated with vertex $v_i$:

$$w_{a_i} = \frac{n A_i'}{3A}, \qquad \text{and obviously} \quad \sum_{i=1}^{n} w_{a_i} = n \tag{1}$$

where $A_i'$ is the sum of the triangular areas associated with
vertex $v_i$, and $A$ is the sum of all the triangular areas in the
3D model.

The weighting factor $w_{l_i}$ is proportional to the linear length associated with vertex $v_i$:

$$w_{l_i} = \frac{n L_i'}{2L}, \qquad \text{and obviously} \quad \sum_{i=1}^{n} w_{l_i} = n \tag{2}$$

where $L_i'$ is the sum of the linear lengths associated with vertex
$v_i$, and $L$ is the sum of all the linear lengths in the 3D model.
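As a concrete sketch of the two weighting factors (a minimal Python illustration with an assumed array-based mesh layout; shared edges are counted once per incident triangle here, which still satisfies the sum constraints of Eqs.(1)-(2) by construction):

```python
import numpy as np

def vertex_weights(vertices, triangles):
    """Per-vertex weighting factors of Eqs.(1)-(2): w_a is proportional
    to the triangle area incident to a vertex, w_l to the incident edge
    length.  `vertices` is an (n, 3) float array, `triangles` an (m, 3)
    array of vertex indices (assumed mesh layout)."""
    v = np.asarray(vertices, dtype=float)
    n = len(v)
    area_i = np.zeros(n)  # A'_i: summed areas of triangles touching vertex i
    len_i = np.zeros(n)   # L'_i: summed lengths of edges touching vertex i
    for a, b, c in triangles:
        area = 0.5 * np.linalg.norm(np.cross(v[b] - v[a], v[c] - v[a]))
        for i in (a, b, c):
            area_i[i] += area
        for i, j in ((a, b), (b, c), (c, a)):
            edge = np.linalg.norm(v[i] - v[j])
            len_i[i] += edge
            len_i[j] += edge
    A = area_i.sum() / 3.0  # each triangle area is counted at 3 vertices
    L = len_i.sum() / 2.0   # each edge length is counted at 2 endpoints
    wa = n * area_i / (3.0 * A)
    wl = n * len_i / (2.0 * L)
    return wa, wl
```

By construction both weight vectors sum to n; for a perfectly symmetric mesh such as a regular tetrahedron every weight is 1.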
The covariance matrix $C_{cov} \in \mathbb{R}^{3\times 3}$ is defined below:

$$C_{cov} = \frac{1}{n}\sum_{i=1}^{n} w_{a_i}\, w_{l_i}\,(v_i - m_v)(v_i - m_v)^{T} \tag{3}$$

The rest of the procedure of the modified PCA method is the
same as in the standard PCA method. The transformation matrix $C_{tra}$ is
calculated from the eigenvectors of $C_{cov}$. Then each original
vertex $v_i$ is transformed to a new one $v_i'$:

$$v_i' = C_{tra}(v_i - m_v) \tag{4}$$

After the modified PCA method, 3D models, originally
with arbitrary rotation and orientation, are invariant to
translation and rotation.

III. "MATE" Descriptor

Based on the modified PCA method, each 3D model is first projected into three black-white images from the lateral view, front
view and top view. The AADF and Tchebichef descriptors are utilized to extract contour and region features from
the black-white images respectively. Then the two-viewed Dbuffer
descriptor is proposed to obtain attributes of six grey-scale
depth images. The dimensions of the black-white images and grey-scale depth images are $N \times N$, where $N$ is 256 in our experiments. Finally, the novel composite shape descriptor MATE is
presented.

1. AADF descriptor

In Content-based image retrieval (CBIR) research, Zhang
studied different Fourier descriptors and showed that the centroid distance is the best shape signature, with detailed analysis and explanation presented in Ref.[14]. The feature extraction of projected black-white images in 3D model retrieval is
quite different from CBIR, because it processes 2D images to
retrieve 3D models, while CBIR deals only with 2D images.
Even for two models in the same class, the projected images
from the same perspective differ because of several factors,
such as model normalization, model geometric distribution,
etc. Therefore, the centroid distance Fourier is not directly suitable
for feature extraction here. Nevertheless, the concept and processing of the centroid distance Fourier are useful, and can be
utilized for feature extraction in 3D model retrieval.

Based on the similarity and translation invariance of the Fourier descriptor,
the center of the regional content is translated and the regional content is scaled for image preprocessing. Then shape contours are
captured, and the longest contour is regarded as the contour of
the regional content. All contour vertices are taken into a cyclic
sequence $L$ with a total of $M$ vertices. The center point of
contour $L$ is $O$, and each vertex is described as:

$$L_i = (X_i, Y_i) = O + \rho_i(\cos\theta_i, \sin\theta_i), \qquad 0 \le i < M \tag{5}$$

where $i$ stands for the sequence number in the contour $L$, $\rho_i$
is the distance between vertex $L_i$ and $O$, and $\theta_i$ is the angle
of vertex $L_i$ around $O$ in polar coordinates.

Based on $L$, contour vertices are sampled at adjacent
angles, exactly one vertex every two degrees. To avoid vacancies,
the contour is extended as introduced before. The adjacent
angle sequence $S$ with 180 vertices is defined below:
$$S_j = L_i \quad \text{if } \rho_i = \max P_j, \qquad P_j = \left\{\, \rho_i \ \middle|\ \frac{2j\pi}{180} \le \theta_i < \frac{2(j+1)\pi}{180} \right\} \tag{6}$$
where 0 ≤ j < 180. The sequence S may seriously lose contour
information if many sequential vertices of L are not sampled into S. Thus, to avoid severe loss of contour
information while allowing a slight loss of contour features, some vertices that are not sampled by S are added into the sequence.
For every two sequential vertices v_u and v_{u+1} in S, if more
than two vertices between them in L are not sampled,
adjacent distance vertices are inserted into S. Then the
adjacent angle distance sequence T is formed, described as:
$$T = \begin{cases} T \cup S_j, & \tau(S_{j+1}) - \tau(S_j) \le 2 \\ T \cup \psi_j, & \tau(S_{j+1}) - \tau(S_j) > 2 \end{cases} \quad \text{with} \quad \psi_j = \{\, L_i \mid i = \tau(S_j) + 2n,\ \tau(S_j) \le i < \tau(S_{j+1}) \,\} \tag{7}$$
where $\tau(S_j)$ is the sequence number of vertex $S_j$ in $L$. The
next step is a centroid distance Fourier transformation based on
the contour sequence $T$, and the novel algorithm is called adjacent angle distance Fourier. Fig.1 displays an ant with three
different contours. For feature extraction of each projected
black-white image, the first 30 coefficients are exploited, so
the dimension of AADF descriptor is 90 for a 3D model.
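As a sketch of the sampling idea in Eqs.(5)-(6) (our own minimal Python illustration; the vacancy-filling step of Eq.(7) is omitted here, so empty 2-degree bins simply stay zero), the adjacent-angle centroid-distance signature and its Fourier magnitudes might look like:

```python
import numpy as np

def aadf_descriptor(contour, n_bins=180, n_coeffs=30):
    """Sample a closed contour at adjacent angles (one vertex per
    2-degree bin, keeping the farthest point, cf. Eq. (6)), then take
    the magnitudes of the first Fourier coefficients of the resulting
    centroid-distance signature."""
    pts = np.asarray(contour, dtype=float)
    center = pts.mean(axis=0)                       # center point O
    rel = pts - center
    rho = np.hypot(rel[:, 0], rel[:, 1])            # distance to O
    theta = np.mod(np.arctan2(rel[:, 1], rel[:, 0]), 2 * np.pi)

    # adjacent-angle sampling: keep the max centroid distance per bin
    bins = np.minimum((theta / (2 * np.pi) * n_bins).astype(int),
                      n_bins - 1)
    sampled = np.zeros(n_bins)
    for b, r in zip(bins, rho):
        sampled[b] = max(sampled[b], r)

    # Fourier magnitudes are rotation invariant; dividing by the DC
    # term adds scale invariance
    spectrum = np.abs(np.fft.fft(sampled))
    return spectrum[1:n_coeffs + 1] / (spectrum[0] + 1e-12)
```

For a circle the signature is constant, so all 30 normalized magnitudes are numerically zero; elongated contours spread energy into the low-frequency terms.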
Fig. 1. An ant with different contours. (a) basic contour; (b)
adjacent angle contour; (c) adjacent angle distance
contour
2. Tchebichef descriptor
Shape feature descriptors are generally classified into two
primary categories: region-based and contour-based descriptors. The AADF extracts only contour shape information
and fails to capture the shape's interior content. On
the contrary, in region-based approaches all vertices within the
shape region are taken into account to obtain the shape representation. Mukundan[15] proposed the Tchebichef (a.k.a. Chebyshev) moments, which can be effectively used as pattern features
in the analysis of two-dimensional images, and the experiments
showed that Tchebichef moments were superior to the conventional orthogonal moments.
In Ref.[15] Mukundan introduced Tchebichef moments
based on the discrete orthogonal Tchebichef polynomial. The
scaled orthogonal Tchebichef polynomials for an image of size
N ×N are defined according to the following recursive relation:
$$t_0(x) = 1, \qquad t_1(x) = \frac{2x - N + 1}{N},$$

$$t_p(x) = \left[ (2p-1)\, t_1(x)\, t_{p-1}(x) - (p-1)\left(1 - \frac{(p-1)^2}{N^2}\right) t_{p-2}(x) \right] \Big/\, p, \qquad p > 1 \tag{8}$$

and the squared-norm $\rho(p, N)$ is given by

$$\rho(p, N) = \frac{N\,(1 - 1/N^2)(1 - 2^2/N^2)\cdots(1 - p^2/N^2)}{2p + 1} \tag{9}$$

where $p = 0, 1, \cdots, N - 1$.
The radial Tchebichef moment of order p and repetition q
is defined as:
$$S_{pq} = \frac{1}{2\pi\,\rho(p, m)} \sum_{r=0}^{m-1} \sum_{\theta=0}^{2\pi} t_p(r)\, e^{-jq\theta} f(r, \theta) \tag{10}$$

where $m$ denotes $(N/2) + 1$. In the above equation, both $r$ and $\theta$ take integer values.
The mapping between $(r, \theta)$ and image coordinates $(x, y)$ is
given by:

$$x = \frac{rN}{2(m-1)}\cos\theta + \frac{N}{2}, \qquad y = \frac{rN}{2(m-1)}\sin\theta + \frac{N}{2} \tag{11}$$
Tchebichef moments consist of several different $|S_{pq}|$,
which are regarded as the feature vector; for the details, please
refer to Refs.[15, 16].

In our case, we focus on testing the retrieval performance of
Tchebichef moments with different combinations of $q$ and $p$ for 3D
model retrieval, and the experimental results show that with
$p = 10$ and $q = 1$ the retrieval performance of Tchebichef
moments is better than the others. Thus, in this paper, only 10
Tchebichef moment coefficients with $q = 1$ and $1 \le p \le 10$
describe the region shape in a projected black-white image,
and the dimension of Tchebichef is 30 in all.
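Eqs.(8)-(9) can be checked numerically; the sketch below (our own illustration, not the authors' code) builds the scaled polynomials by the recursion and verifies their discrete orthogonality:

```python
import numpy as np

def tchebichef_polys(N, p_max):
    """Scaled discrete Tchebichef polynomials t_0 .. t_p_max evaluated
    at x = 0 .. N-1 via the recursion of Eq. (8)."""
    x = np.arange(N, dtype=float)
    t = np.zeros((p_max + 1, N))
    t[0] = 1.0
    if p_max >= 1:
        t[1] = (2 * x - N + 1) / N
    for p in range(2, p_max + 1):
        t[p] = ((2 * p - 1) * t[1] * t[p - 1]
                - (p - 1) * (1 - (p - 1) ** 2 / N ** 2) * t[p - 2]) / p
    return t

def squared_norm(p, N):
    """rho(p, N) of Eq. (9); the empty product for p = 0 gives N."""
    k = np.arange(1, p + 1, dtype=float)
    return N * np.prod(1 - k ** 2 / N ** 2) / (2 * p + 1)
```

Summing $t_p(x)\,t_q(x)$ over $x$ yields $\rho(p, N)$ when $p = q$ and zero otherwise; this orthogonality is what makes the moments numerically well behaved.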
3. Two-viewed Dbuffer descriptor
After the model normalization, a novel two-viewed Dbuffer
descriptor is proposed to obtain the visual features of 3D models. Based on the recognition mechanism of the human eye, the essential concept of this method is that if shape information is
attained synchronously from two view planes of the bounding
box, the probability of recognition and the ability to distinguish are greater than from only one view plane.
In order to acquire more visual information using
two view planes synchronously, the model normalized by the
modified PCA method should be transformed. At first, the
normalized model is rotated counter-clockwise by 45 degrees about the y axis,
and each vertex $v_i$ is transformed to a new one $v_i'$:

$$v_i' = M v_i \tag{12}$$

where the transformation matrix $M$ about the y axis is defined
as:

$$M = \begin{pmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{pmatrix}, \qquad \theta = \frac{\pi}{4} \tag{13}$$
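A quick numerical check of Eqs.(12)-(13) (the sample vertex is our own illustration):

```python
import numpy as np

theta = np.pi / 4  # 45-degree rotation about the y axis, Eq. (13)
M = np.array([[np.cos(theta), 0.0, -np.sin(theta)],
              [0.0,           1.0,  0.0          ],
              [np.sin(theta), 0.0,  np.cos(theta)]])

v = np.array([1.0, 0.0, 0.0])  # example vertex on the x axis
v_new = M @ v                  # Eq. (12): rotated vertex
```

Since M is orthogonal, vertex norms (and hence the model's scale) are preserved; only the bounding box changes.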
The new bounding box is calculated after the transformation on the y axis, and the front view can obtain shape
information both from the front view plane and lateral view
plane in the original bounding box.
Then the object is projected into two grey-scale depth images along the y axis from both directions, and the attribute of
the pixel $(a, b)$ in grey-scale depth images of dimension $N \times N$
pixels is calculated as:

$$V_{a,b} = \sqrt{D_{vp}^2 + D_{po}^2} \tag{14}$$

where $D_{vp}$ is the vertical distance between pixel $(a, b)$ and the
view plane in the new bounding box, and $D_{po}$ stands for the
horizontal distance between pixel $(a, b)$ and the center of the
image. For the details of the projection, please refer to Ref.[8].
The object is also transformed and projected on the x axis
and z axis respectively, therefore, each model is projected into
6 different grey-scale depth images.
Finally, the grey-scale depth image is transformed from
the spatial domain into the spectral domain by the two-dimensional Discrete Fourier transform (DFT):

$$\hat{f}_{pq} = \frac{1}{\sqrt{MN}} \sum_{a=0}^{M-1} \sum_{b=0}^{N-1} f_{ab}\, e^{-j2\pi(pa/M + qb/N)} \tag{15}$$

where $M = N$, $j$ is the imaginary unit, $p = 0, \cdots, M - 1$,
$q = 0, \cdots, N - 1$, and $f_{ab}$ is the gray-scale value of pixel $(a, b)$.
Because of its separability, and to reduce the computational complexity, the
DFT is reduced from one 2-dimensional operation to two 1-dimensional operations:

$$\hat{f}_{pq} = \frac{1}{\sqrt{M}} \sum_{a=0}^{M-1} \left( \frac{1}{\sqrt{N}} \sum_{b=0}^{N-1} f_{ab}\, e^{-j2\pi qb/N} \right) e^{-j2\pi pa/M} \tag{16}$$
For the detailed computing procedure of the Fourier coefficients, including cyclically shifting the original pixels of the
gray-scale image and utilizing the symmetry property to calculate
the coefficients, please refer to Section IV.3 in Ref.[8].

The number of coefficients for a gray-scale image is $k^2 + k + 1$; in our case $k = 8$, so 73 Fourier coefficient components
are obtained to represent each grey-scale depth image. Thus,
the dimension of the two-viewed Dbuffer descriptor is 438, since
each model is described from 6 different perspectives.
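One plausible reading of the coefficient count, sketched below (our own illustration: we assume the retained low frequencies are those with |p| + |q| ≤ k, keeping one magnitude per conjugate pair since the depth image is real-valued, which gives exactly k² + k + 1 values):

```python
import numpy as np

def dbuffer_features(depth_img, k=8):
    """Normalized 2D DFT of a depth image (Eq. (15) scaling), keeping
    the magnitudes of the low-frequency coefficients with
    |p| + |q| <= k.  For a real image f_hat(-p,-q) is the conjugate of
    f_hat(p,q), so one magnitude per conjugate pair suffices:
    k^2 + k + 1 values (73 for k = 8)."""
    img = np.asarray(depth_img, dtype=float)
    M, N = img.shape
    F = np.fft.fft2(img) / np.sqrt(M * N)   # Eq. (15) normalization
    feats = []
    for p in range(-k, k + 1):
        for q in range(-k, k + 1):
            if abs(p) + abs(q) > k:
                continue
            # keep one representative of each conjugate pair (p,q)/(-p,-q)
            if p < 0 or (p == 0 and q < 0):
                continue
            feats.append(abs(F[p % M, q % N]))
    return np.array(feats)
```

The first retained value is the DC term; for a constant depth image all other magnitudes vanish, as expected.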
4. Composite descriptor
Finally, based on the modified PCA(M) method, a composite shape descriptor, combining the AADF(A) descriptor,
the Tchebichef(T) descriptor and the two-viewed Dbuffer(E)
descriptor, is presented, called "MATE".

Before the combination, each feature vector must be normalized as discussed in Ref.[8]. Then the feature vector of the
MATE descriptor is formed by concatenating the basic feature vectors with equal weight; hence the dimension of the MATE
descriptor is 558.
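The combination step can be sketched as follows (Python; unit-L1 normalization of each sub-vector is our own stand-in for the normalization discussed in Ref.[8]):

```python
import numpy as np

def mate_concat(aadf, tcheb, dbuffer):
    """Normalize each sub-descriptor (here to unit L1 norm, one
    plausible choice) and concatenate with equal weight."""
    parts = [np.asarray(p, dtype=float) for p in (aadf, tcheb, dbuffer)]
    parts = [p / (np.abs(p).sum() + 1e-12) for p in parts]
    return np.concatenate(parts)

def l1_distance(f1, f2):
    """L1 (Manhattan) distance used to rank models at query time."""
    return np.abs(f1 - f2).sum()
```

With sub-vectors of dimension 90 (AADF), 30 (Tchebichef) and 438 (two-viewed Dbuffer), the concatenation has the stated 558 components, and models are ranked by the L1 distance between their MATE vectors.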
IV. Experiments

In this section, we introduce the 3D model repository
and several standard measures. Then experimental results
compared with several descriptors, together with some discussion, are
presented in detail.

1. Retrieval evaluation

The experiments are based on the publicly available benchmark 3D model repository PSB[12], which contains 1814
objects in general categories like human, building, vehicle, etc.
In the experiments, each model is used as a query object, and
the models belonging to the same class are considered relevant.

The L1 (Manhattan) distance is used as the similarity measure, because the experiments in Ref.[8] show that L1 acquires
the best retrieval results compared with other similarity measures.

To estimate the retrieval effectiveness, several measures
such as precision vs. recall, nearest neighbor, first-tier, second-tier, DCG and normalized DCG are applied; these are well-known evaluation methods for 3D model retrieval and their
explanations are described in Ref.[12].

2. Effectiveness comparison

To evaluate the proposed shape descriptor MATE, it
is compared to Ray[17], Silhouette[8], GEDT[4], Dbuffer[8],
LFD[6] and DESIRE[13] using the benchmark PSB. Fig.2 shows
the average precision vs. recall results for MATE and the other
shape descriptors, and it is obvious that the composite descriptors MATE, DESIRE and LFD achieve better retrieval performance than the single descriptors. Among the three composite
ones, the precision of MATE at different recall levels is almost
the same as that of DESIRE, while LFD is the worst.

Fig. 2. Average precision vs. recall figures for several shape descriptors

Table 1. The comparison among seven descriptors with several evaluation measures

Descriptor   Dimension   Nearest neighbor   First-tier   Second-tier   E-measure   DCG      Normalized DCG
Ray          136         51.49%             25.56%       34.58%        19.79%      53.68%   -12.85%
Silhouette   300         54.69%             30.51%       40.86%        23.45%      57.96%   -5.91%
GEDT         544         57.55%             32.29%       43.24%        24.35%      59.62%   -3.21%
Dbuffer      438         60.86%             33.18%       42.12%        24.43%      59.94%   -2.68%
LFD          4700        61.77%             33.01%       46.54%        27.11%      65.52%   6.37%
DESIRE       472         65.82%             40.45%       51.33%        28.16%      66.31%   7.65%
MATE         558         70.56%             42.27%       54.81%        29.49%      68.14%   10.62%
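The evaluation measures just listed can be sketched for a single query as follows (our own minimal illustration; PSB's published evaluation code is the authoritative definition — note that PSB's "normalized DCG" column further rescales per-query DCG against the average over all descriptors, which is why Table 1 contains negative values, whereas the `ndcg` below is the plain DCG/ideal-DCG ratio in [0, 1]):

```python
import numpy as np

def retrieval_measures(ranked_labels, query_label, class_size):
    """Nearest neighbor, first-tier, second-tier and normalized DCG for
    one query.  `ranked_labels` holds the class label of every retrieved
    model, best match first (the query itself excluded); `class_size`
    counts the query's class including the query."""
    rel = np.array([lab == query_label for lab in ranked_labels], float)
    c = class_size - 1                   # relevant models besides the query
    nearest = rel[0]                     # is the top match relevant?
    first_tier = rel[:c].sum() / c       # fraction found in the top C-1
    second_tier = rel[:2 * c].sum() / c  # fraction found in the top 2(C-1)
    ranks = np.arange(1, len(rel) + 1)
    disc = np.ones(len(rel))
    disc[1:] = 1.0 / np.log2(ranks[1:])  # DCG discount: 1/log2(rank)
    ideal = np.zeros(len(rel))
    ideal[:c] = 1.0                      # all relevant models ranked first
    ndcg = (rel * disc).sum() / (ideal * disc).sum()
    return nearest, first_tier, second_tier, ndcg
```

Averaging these per-query numbers over every model in the benchmark yields figures of the kind reported in Table 1.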
Table 1 shows the comparison of the seven shape descriptors
by dimension and several evaluation measures on the test subset of PSB. Regarding the dimension column, the LFD descriptor has as
many as 4700 components, which requires more storage space and execution
time, while the dimensions of the other six descriptors are in the
several hundreds. From the table, it is evident that the retrieval effectiveness of MATE under the different measures is the
best among the seven descriptors. Compared with the best current
shape descriptor DESIRE, MATE provides a relative improvement
of 7.20%, 4.50%, 6.78%, 4.72% and 2.76% in terms of nearest
neighbor, first-tier, second-tier, E-measure and DCG respectively. Therefore, the proposed descriptor MATE is more suitable for 3D model retrieval than DESIRE.
Fig.3 displays the results using various evaluation criteria;
the 2 × 10 bars show the average performance of DESIRE and MATE for the 10
largest model classes in the train subset.
Table 2 enumerates the class name and class size for each class number. These results indicate that the MATE
descriptor achieves better retrieval performance than the DESIRE descriptor.
As discussed above, the proposed MATE descriptor
outperforms DESIRE not only on average but also on the
largest model classes. Finally, the two are compared on the
smallest model classes, the pig quadruped class and the table-and-chair
furniture class, with only 4 models in each class. Fig.4 demonstrates the retrieval results using the DESIRE descriptor. For
the query model of a pig, only the best match is relevant, while
the second and third matches are completely irrelevant. When
retrieving a table-and-chair furniture model, none of the three
most similar retrieved models belongs to the same class as the
query model. On the contrary, the proposed MATE descriptor achieves perfect retrieval performance, as all the relevant
models are retrieved within the first three similar models for both
query models, as illustrated in Fig.5.
Fig. 3. The comparison of retrieval performance between the DESIRE and MATE shape descriptors with the largest model classes using various criteria

Table 2. The largest model classes in train subset

Class number   Class name             Class size
1              fighter-jet airplane   50
2              human biped            50
3              rectangular table      26
4              potted-plant           25
5              human-arms-out         21
6              sports-car             19
7              rifle gun              19
8              helicopter aircraft    17
9              face body-part         17
10             chess-piece            17

Fig. 4. Retrieval using the DESIRE descriptor

Fig. 5. Retrieval using the MATE descriptor

The retrieval performance discussed above suggests that
the MATE descriptor is more effective than the LFD and
DESIRE descriptors. However, effectiveness is only one
marked advantage of MATE. Table 3 summarizes the storage
size, average generation time and comparison time, executed
on a PC with a Pentium 4 2.0GHz CPU and 1GB RAM. In comparison with LFD, MATE has several benefits in terms of storage
and computation costs. Table 3 shows only a slight difference
between MATE and DESIRE in terms of storage size, average extraction time and query time, which is negligible
in practice. Therefore, the proposed MATE descriptor is a
significantly better technique.

Table 3. Space and time complexity

Descriptor   Size (bytes)   Generate time (second)   Compare time (second)
LFD          4700           3.8                      0.046
DESIRE       472            0.7                      0.005
MATE         558            0.6                      0.005

V. Conclusion
In this paper, there are four contributions to content-based 3D model retrieval. The first is the proposal of the
modified PCA model normalization method, which reduces
the number of projected images, acquires image features with
more characteristics, and makes 3D models, originally
with arbitrary rotation and orientation, invariant to
translation and rotation. Secondly, a new adjacent angle distance Fourier descriptor is presented, which is based on the
concept of the centroid distance Fourier. The AADF is able to
capture more precise contour features of black-white images
and is more appropriate for contour feature extraction in 3D
model retrieval. Then, an original two-viewed Dbuffer descriptor is investigated to acquire characteristics of grey-scale depth
images covering both contour and region aspects. Finally, based
on the modified PCA method, a novel composite 3D shape
descriptor MATE is proposed by concatenating the contour-based
descriptor AADF, the region-based descriptor Tchebichef and the
two-viewed Dbuffer descriptor. The experimental results on
the 3D model database PSB show that the proposed descriptor MATE attains the best retrieval effectiveness compared with four single descriptors and the two composite descriptors LFD and DESIRE using several standard
measures.
Results from Section IV suggest that the MATE descriptor is more effective than LFD and DESIRE. However,
effectiveness is only one marked advantage of MATE. Compared with LFD, the advantage in feature vector dimension is
obvious, as the feature vector components of MATE are only
11.87% of those in LFD. The composite descriptor DESIRE is
composed of the Dbuffer, Silhouette and Ray descriptors. Among
them, the Dbuffer and Silhouette descriptors belong to the
shape-based approaches, while the Ray descriptor is a member of the geometry-based approaches; therefore, the process
of feature extraction in DESIRE is more complex than that
in MATE. In summary, the proposed composite MATE descriptor is a significantly better technique.
References
[1] W.C. Regli and V.A. Cicirello, “Managing digital libraries for
computer aided design”, Computer-Aided Design, Vol.32, No.2,
pp.119–132, 2000.
[2] J.S. Yeh et al., “A web-based three dimensional protein retrieval
system by matching visual similarity”, Bioinformatics, Vol.21,
No.13, pp.3056–3057, 2005.
[3] R. Osada et al., “Shape distributions”, ACM Transactions on
Graphics, Vol.21, No.4, pp.807–832, 2002.
[4] T. Funkhouser et al., “A search engine for 3D models”, ACM
Transactions on Graphics, Vol.22, No.1, pp.83–105, 2003.
[5] M. Kazhdan, T. Funkhouser, “Shape matching and anisotropy”,
ACM Transactions on Graphics, Vol.23, No.3, pp.623–629,
2004.
[6] D.Y. Chen et al., “On visual similarity based 3D model retrieval”, Computer Graphics Forum, Vol.22, No.3, pp.223–232,
2003.
[7] J.T. Pu, K. Ramani, "On visual similarity based 2D drawing
retrieval", Computer-Aided Design, Vol.38, No.3, pp.249–259,
2006.
[8] D.V. Vranic, “3D model retrieval”, Ph.D. Thesis, University of
Leipzig, Leipzig, Germany, 2004.
[9] B. Bustos et al., “Feature-based similarity search in 3D object
databases”, ACM Computing Surveys, Vol.37, No.4, pp.345–
387, 2005.
[10] T. Funkhouser et al., “Shape-based retrieval and analysis of 3d
models”, Communications of the ACM, Vol.48, No.6, pp.58–64,
2005.
[11] N. Iyer et al., “Three-dimensional shape searching: state-of-theart review and future trends”, Computer-Aided Design, Vol.37,
No.7, pp.509–530, 2005.
[12] P. Shilane et al., “The Princeton shape benchmark”, Proc. of
Shape Modeling and Applications, Italy, pp.167–178, 2004.
[13] D.V. Vranic, "DESIRE: a composite 3D-shape descriptor", Proc.
of IEEE Conference on Multimedia and Expo, Amsterdam,
Holland, pp.962–965, 2005.
[14] D. Zhang, G. Lu, "Generic Fourier descriptor for shape-based
image retrieval", Proc. of IEEE Conference on Multimedia and
Expo, Lausanne, Switzerland, pp.425–428, 2002.
[15] R. Mukundan, S. Ong and P. Lee, "Image analysis by Tchebichef
moments", IEEE Transactions on Image Processing, Vol.10,
No.9, pp.1357–1364, 2001.
[16] M. Celebi, Y. Aslandogan, "A comparative study of three
moment-based shape descriptors", Proc. of IEEE Conference
on Information Technology: Coding and Computing, Las Vegas, USA, pp.788–793, 2005.
[17] D.V. Vranic, "An improvement of rotation invariant 3D shape
descriptor based on functions on concentric spheres", Proc.
of IEEE Conference on Image Processing, Barcelona, Spain,
pp.757–760, 2003.
LENG Biao
received B.S. degree
in computer science and technology from
the National University of Defense Technology, Changsha, China, in 2004. He is
currently a Ph.D. candidate at Department
of Computer Science and Technology, Tsinghua University, Beijing, China. His research interests include 3D model retrieval,
relevance feedback and pattern recognition.
QIN Zheng professor and Ph.D. supervisor at both Department of Computer
Science and Technology and School of Software, Tsinghua University, Beijing, China.
His major research interests include software architecture, data synthesis and 3D
model retrieval.
CAO Xiaoman
received B.S. degree in automation from Xi’an Jiaotong
University, Xi’an, China, in 2006. She
is currently a M.S. candidate at School
of Software, Tsinghua University, Beijing,
China. Her research interests include 3D
model retrieval and relevance feedback.
WEI Tao
received B.S. degree in
computer software from Tsinghua University, Beijing, China, in 2007. He is currently a M.S. candidate at Department
of Computer Science and Technology, Tsinghua University, Beijing, China. His research interests include data mining, machine learning and relevance feedback.
ZHANG Zhuxi
received B.S. degree and M.S. degree in
computer science and technology from the National University of
Defense Technology, Changsha, China, in 2004 and 2007. His research interests include software engineering, distributed computing
and information retrieval.