Article
An Optical Sensor for Measuring the Position and Slanting Direction of Flat Surfaces
Yu-Ta Chen 1, Yen-Sheng Huang 1 and Chien-Sheng Liu 1,2,*
1 Department of Mechanical Engineering and Advanced Institute of Manufacturing with High-Tech Innovations, National Chung Cheng University, Chiayi County 62102, Taiwan; [email protected] (Y.-T.C.); [email protected] (Y.-S.H.)
2 Graduate Institute of Opto-Mechatronics, National Chung Cheng University, Chiayi County 62102, Taiwan
* Correspondence: [email protected]; Tel.: +886-5-272-9306; Fax: +886-5-272-0589
Academic Editor: Changzhi Li
Received: 23 May 2016; Accepted: 5 July 2016; Published: 9 July 2016
Abstract: Automated optical inspection is an important technique in modern production lines. For this reason, this study proposes an optical non-contact slanting surface measuring system. The essential features of the measurement system are obtained through simulations using the optical design software Zemax. The actual propagation of laser beams within the measurement system is traced by using a homogeneous transformation matrix (HTM), the skew-ray tracing method, and a first-order Taylor series expansion. Additionally, a complete mathematical model that describes the variations in light spots on photoelectric sensors and the corresponding changes in the sample orientation and distance was established. Finally, a laboratory prototype system was constructed on an optical bench to verify the proposed system experimentally. This measurement system can simultaneously detect the slanting angles (ωx, ωz) in the x and z directions of the sample and the distance (δy) between the biconvex lens and the flat sample surface.
Keywords: flat surfaces; autofocus; autocollimator
1. Introduction
Among the measurement methods used in present-day production lines, the non-contacting measurement technique is widely applied [1,2]. This technique is mainly divided into two types: image-based measurement and optical measurement. Image-based measurement relies on image processing of captured images; its biggest limitations are its long system response time and low accuracy. For these reasons, the technique has been gradually replaced in recent years by optical measurement. In the field of optical measurements, the positioning technique is of
years by optical measurement. In the field of optical measurements, the positioning technique is of
central importance [3,4]. For the laser positioning measurement technique, our research team has
proposed a series of innovative system frameworks and combined them with optical diffusers to
suppress the geometrical fluctuations of the laser beam and to achieve long-range and high-accuracy
measurements [5–7]. Taken together, it is clear that optical measurement systems have the advantages
of high measurement speed and accuracy.
In recent years, the positioning and slanting of a plane specular sample to be measured or processed has become an important requirement in many fields of research and industry [8]. In applications of
flat surface measurement techniques, Lee and Chang developed a laser scanning sensor with multiple
position-sensitive device (PSD) detectors that can digitize a freeform surface through mathematical
calculations using the Lambert model and geometric triangulation measurements [9]. The unique
characteristic of this system is the use of multiple PSD detectors to resolve the dead zone of the
measured angle and to broaden its range. Shiou developed a detector for distance and slanting angle
measurements that can obtain the flat surface distance and slanting angle of the sample after the
data are processed by triangulation operations [10]. Hirata and Haraguchi irradiated a sample with an annular laser beam and found that the shape of the laser beam changes with different slanting angles of the sample; by analyzing these changes, information on the slanting angles of the sample could be obtained. Finally, Gao developed a two-dimensional angle probe to measure the flatness of large silicon wafers [11]. Moreover, many methods are available to measure the parameters of slanting flat surfaces. However, to the best of our knowledge, these systems have complicated structures and high manufacturing costs. At present, there is no available measurement system with a simple architecture and straightforward assembly that can simultaneously measure the tilt angle and position information of a flat surface. Therefore, the aim of this study is to propose a new measurement system with a simple architecture, straightforward assembly, and lower manufacturing cost that can simultaneously detect the tilt angle and position information of the sample.
2. Architecture and Measurement Method of Proposed Measurement System

The architecture of the proposed measurement system in this study consists of a He-Ne laser as the light source, two beam splitters (BS) accompanied by two biconvex lenses, and two charge-coupled devices (CCD) that are used as the photoelectric sensors to receive the optical signals. As shown in Figure 1, this design is an adaptation of an auto-focusing technique, namely, the triangulation method. The laser beam is directed off the optical axis by a small distance (s) and is then reflected by BS2 to become incident on the sample through the biconvex lens. The laser beam is then reflected by the sample to pass back through the biconvex lens to BS2, where the beam is split into two beams. One of the split beams goes directly through BS2 and is incident on CCD2. Notably, the laser spots on CCD2 will change as a function of the different sample orientations and distance. The other beam is reflected by beam splitter BS1 and travels through the biconvex lens before proceeding to CCD1. The information of the laser spots on CCD1 will depend not only on the sample surface orientation but also on the change in the distance between the biconvex lens and the sample.

When using this proposed measurement system to measure the distance (δy) between the biconvex lens and the flat sample surface and the slanting angles in the x and z directions (ωx, ωz), the shape and center position of the laser beam received by the two CCDs will change as a function of both changes in the distance (δy) and slanting angles (ωx, ωz). In addition, the received photoelectric signal contains factors that are influenced by the slanting angles (ωx, ωz) of the sample when the sample distance (δy) resolved by the two CCDs is measured. Therefore, light beam tracing equations must be established via a HTM and skew-ray tracing to discuss the relationship between the sample distance (δy), the slanting angles (ωx, ωz), and the corresponding change in the center position of the laser spot. In this regard, a detailed mathematical derivation will be presented in Section 4.

Figure 1. Architecture of proposed measurement system.
3. Optical Simulation of Proposed Measurement System
The ray trace function of the commercially available optical simulation software Zemax (Radiant Zemax LLC, Redmond, WA, USA) was used to simulate the actual ray propagation in the proposed measurement system. The Matlab software (The MathWorks, Inc., Natick, MA, USA) was then used to process the images of the laser spot position map, to calculate the location of the spot center, and to plot graphs of the spot position versus sample surface orientation and distance. In doing so, the feasibility of the proposed measurement system and its measurement trend are determined.
Figure 2 shows the changes in the laser spots on CCD1 and CCD2 when the sample distance (δy) is within the range of −1 mm~+1 mm and the slanting angles (ωx, ωz) are within the range of −2°~+2°. The figure indicates that the laser spots on CCD1 and CCD2 show a changing trend when the sample experiences a small change in angle and distance. However, the relationship is not linear, and thus, the sample distance and orientation cannot be determined directly from the simulation results. Therefore, it is necessary to derive a mathematical measurement equation to analyze the relation between the sample distance/orientation and the two CCDs to obtain a quantitative mathematical analysis of the proposed measurement system.
Figure 2. Optical simulation results: (a) changes in laser spots on CCD1 and (b) changes in laser spots on CCD2.
4. Mathematical Modeling of Proposed Measurement System and Derivation of Actual Ray Path
Mathematical modeling is established by using a HTM and the skew-ray tracing method proposed by Lin to model and perform ray tracing for the proposed measurement system architecture in this paper [12,13]. The HTM corresponding to the coordinate frame of each optical boundary relative to a reference coordinate system (xyz)0 is defined sequentially. A mathematical model is proposed to develop a systematic forward and reverse mathematical derivation. Then, a linear equation of the measurement system is obtained by using a first-order Taylor series expansion to evaluate the first-order optical properties of the proposed measurement system.
4.1. Establishing Optical Boundaries for Proposed Measurement System
Based on the modeling steps in the skew-ray tracing method proposed by Lin [12,13], the optical boundaries r_i of the system are first defined. The proposed measurement system contains two parts: ray path I, which contains a total of 14 optical boundaries, and ray path II, which contains a total of eight optical boundaries, as shown in Figure 3:

\mathbf{A}_i =
\begin{bmatrix}
I_{ix} & J_{ix} & K_{ix} & t_{ix} \\
I_{iy} & J_{iy} & K_{iy} & t_{iy} \\
I_{iz} & J_{iz} & K_{iz} & t_{iz} \\
0 & 0 & 0 & 1
\end{bmatrix}
\qquad (1)
The transformation matrices of the coordinate frames (xyz)_i of each optical boundary r_i relative to the reference coordinate frame (xyz)_0 are established as Equation (1) using the HTM. The elements in the HTM for each optical boundary are shown in Tables 1 and 2, where the notations C and S denote cosine and sine, respectively. The tables give the HTM of the coordinate frame (xyz)_i for each optical boundary r_i relative to the reference coordinate system of ray paths I and II, respectively, where ω_x, ω_z, and δ_y represent the rotation angles in the x and z directions and the distance between the biconvex lens and the flat sample surface. The initial position of the laser light source is defined as P_0 = [0 0 −2 1]^T, and its unit vector is defined as ℓ_0 = [0 1 0 0]^T.
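To make the bookkeeping of Equation (1) concrete, the sketch below assembles a 4 × 4 HTM from the orientation vectors I, J, K and the translation t of one optical boundary. It is a generic helper written only for illustration, assuming (as Table 1 suggests) that the frame origin of a spherical boundary lies at its center of curvature; it is not code supplied by the authors.

```python
import numpy as np

def htm(I, J, K, t):
    """Homogeneous transformation matrix A_i = [I J K t; 0 0 0 1] of Equation (1)."""
    A = np.eye(4)
    A[:3, 0] = I   # x-axis of the boundary frame, expressed in frame (xyz)0
    A[:3, 1] = J   # y-axis
    A[:3, 2] = K   # z-axis
    A[:3, 3] = t   # frame origin (t_ix, t_iy, t_iz)
    return A

# Example: frame of boundary i = 1 (identity rotation, origin at y = 30.747, Table 1)
A1 = htm(I=[1, 0, 0], J=[0, 1, 0], K=[0, 0, 1], t=[0.0, 30.747, 0.0])

# If the origin is the center of curvature, the surface vertex sits at [0, -R, 0]
# in its own frame; mapping it to the reference frame gives its absolute position.
vertex = A1 @ np.array([0.0, -24.397, 0.0, 1.0])
print(vertex[:3])   # -> [0.0, 6.35, 0.0]
```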
Table 1. Parameters of the optical boundary coordinate transformation matrix for ray path I. Each entry lists N_i; R_i; I = (I_ix, I_iy, I_iz); J = (J_ix, J_iy, J_iz); K = (K_ix, K_iy, K_iz); t = (t_ix, t_iy, t_iz).

i = 1 (Convex Spherical): N = 1/1.515; R = 24.397; I = (1, 0, 0); J = (0, 1, 0); K = (0, 0, 1); t = (0, 30.747, 0)
i = 2 (Concave Spherical): N = 1.515; R = 24.397; I = (1, 0, 0); J = (0, 1, 0); K = (0, 0, 1); t = (0, −7.915, 0)
i = 3 (Flat Mirror, Sample): N = Reflected; I = (1, Sωz, 0); J = (−CωxSωz, 1, CωxSωz); K = (SωxSωz, SωxCωz, 1); t = (0, 38.202 + δy, 0)
i = 4 (Convex Spherical): N = 1/1.515; R = 24.397; I = (1, 0, 0); J = (0, 1, 0); K = (0, 0, 1); t = (0, −7.915, 0)
i = 5 (Concave Spherical): N = 1.515; R = 24.397; I = (1, 0, 0); J = (0, 1, 0); K = (0, 0, 1); t = (0, 30.747, 0)
i = 6 (Flat Refracting): N = 1/1.515; I = (1, 0, 0); J = (0, 1, 0); K = (0, 0, 1); t = (0, 0, 0)
i = 7 (Flat Mirror): N = Reflected; I = (−1, 0, 0); J = (0, −√2/2, √2/2); K = (0, √2/2, √2/2); t = (0, −12.7, 0)
i = 8 (Flat Refracting): N = 1.515; I = (−1, 0, 0); J = (0, 0, −1); K = (0, −1, 0); t = (0, −12.7, −12.7)
i = 9 (Flat Refracting): N = 1/1.515; I = (−1, 0, 0); J = (0, 0, −1); K = (0, −1, 0); t = (0, −12.7, −31.9)
i = 10 (Flat Mirror): N = Reflected; I = (−1, 0, 0); J = (0, √2/2, −√2/2); K = (0, −√2/2, −√2/2); t = (0, −12.7, −38.1)
i = 11 (Flat Refracting): N = 1.515; I = (−1, 0, 0); J = (0, −1, 0); K = (0, 0, 1); t = (0, −25.4, −44.6)
i = 12 (Convex Spherical): N = 1.515; R = 24.397; I = (−1, 0, 0); J = (0, −1, 0); K = (0, 0, 1); t = (0, −56.147, −44.6)
i = 13 (Concave Spherical): N = 1/1.515; R = 24.397; I = (−1, 0, 0); J = (0, −1, 0); K = (0, 0, 1); t = (0, −17.485, −44.6)
i = 14 (CCD1): I = (−1, 0, 0); J = (0, −1, 0); K = (0, 0, 1); t = (0, −63.602, −44.6)
Table 2. Parameters of the optical boundary coordinate transformation matrix for ray path II. Each entry lists N_i; R_i; I = (I_ix, I_iy, I_iz); J = (J_ix, J_iy, J_iz); K = (K_ix, K_iy, K_iz); t = (t_ix, t_iy, t_iz).

i = 1 (Convex Spherical): N = 1/1.515; R = 24.397; I = (1, 0, 0); J = (0, 1, 0); K = (0, 0, 1); t = (0, 30.747, 0)
i = 2 (Concave Spherical): N = 1.515; R = 24.397; I = (1, 0, 0); J = (0, 1, 0); K = (0, 0, 1); t = (0, −7.915, 0)
i = 3 (Flat Mirror, Sample): N = Reflected; I = (1, Sωz, 0); J = (−CωxSωz, 1, CωxSωz); K = (SωxSωz, SωxCωz, 1); t = (0, 38.202 + δy, 0)
i = 4 (Convex Spherical): N = 1/1.515; R = 24.397; I = (1, 0, 0); J = (0, 1, 0); K = (0, 0, 1); t = (0, −7.915, 0)
i = 5 (Concave Spherical): N = 1.515; R = 24.397; I = (1, 0, 0); J = (0, 1, 0); K = (0, 0, 1); t = (0, 30.747, 0)
i = 6 (Flat Refracting): N = 1/1.515; I = (1, 0, 0); J = (0, 1, 0); K = (0, 0, 1); t = (0, 0, 0)
i = 15 (Flat Refracting): N = 1.515; I = (−1, 0, 0); J = (0, −1, 0); K = (0, 0, 1); t = (0, −25.4, 0)
i = 16 (CCD2): I = (−1, 0, 0); J = (0, −1, 0); K = (0, 0, 1); t = (0, −49.426, 0)
Figure 3. Definition of each optical boundary of the proposed measurement system.
4.2. Incident Point and Unit Vector of Incident Laser Ray
To trace the propagation of the laser ray in the proposed measurement system, the initial position of the laser light source is set as P_0 = [0 0 −2 1]^T and its unit vector is set as ℓ_0 = [0 1 0 0]^T. The laser ray begins at this initial position and travels along the unit vector until it is incident on the proposed measurement system. For the actual ray propagation in the proposed measurement system, the ray incident point on each optical boundary, the unit vectors of the reflected rays [14], and the unit vectors of the refracted rays are shown in Figure 4 and can be obtained from Equations (2) to (8). Equation (2) gives the expressions of the incident point on the optical boundary r_i and its unit normal vector n_i with parameters α_i and β_i. Note that the non-negative parameter λ_i represents the geometrical length from point P_{i−1} to P_i. The ± sign in Equation (3) indicates that there may be two possible intersection points of the ray and a complete sphere. Clearly, only one of these points is useful, and thus the appropriate sign must be chosen. The spherical coordinates α_i and β_i at the incidence point P_i can be determined from Equation (6). Equations (4) and (5) define the parameters D_i and E_i for the spherical and plane surface boundary situations, respectively.

The results of these calculations are described in Tables 3 and 4, which list the calculated parameters of the ray incident points and ray unit vectors of ray paths I and II, respectively.
Figure 4. Definition of incident point and unit vectors of reflected and refracted rays.
Incident point:

\mathbf{P}_i =
\begin{bmatrix} P_{ix} \\ P_{iy} \\ P_{iz} \\ 1 \end{bmatrix}
=
\begin{bmatrix}
P_{i-1x} + \ell_{i-1x}\lambda_i \\
P_{i-1y} + \ell_{i-1y}\lambda_i \\
P_{i-1z} + \ell_{i-1z}\lambda_i \\
1
\end{bmatrix}
=
\begin{bmatrix}
\sigma_i + |R_i|\,C\beta_i C\alpha_i \\
\rho_i + |R_i|\,C\beta_i S\alpha_i \\
\tau_i + |R_i|\,S\beta_i \\
1
\end{bmatrix}
\qquad (2)
\lambda_i = -D_i \pm \sqrt{D_i^{\,2} - E_i} \qquad (3)
Boundary equations:

Spherical surface boundary:
D_i = \ell_{i-1x}P_{i-1x} + \ell_{i-1y}\left(P_{i-1y} - \rho_i\right) + \ell_{i-1z}\left(P_{i-1z} - \tau_i\right)
E_i = P_{i-1x}^{\,2} + P_{i-1y}^{\,2} + P_{i-1z}^{\,2} + \rho_i^{\,2} + \tau_i^{\,2} - R_i^{\,2} - 2\left(\rho_i P_{i-1y} + \tau_i P_{i-1z}\right)
\qquad (4)

Plane surface boundary:
D_i = n_{ix}P_{i-1x} + n_{iy}P_{i-1y} + n_{iz}P_{i-1z} + e_i
E_i = n_{ix}\ell_{i-1x} + n_{iy}\ell_{i-1y} + n_{iz}\ell_{i-1z}
\qquad (5)
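A minimal numerical reading of Equations (3)–(5) is sketched below. It computes the path length λi to a spherical or plane boundary, taking the sphere center at (0, ρi, τi) and choosing the nearest forward intersection; both conventions are assumptions made here only to keep the example self-contained.

```python
import numpy as np

def lambda_spherical(P, L, rho, tau, R):
    """Path length to a spherical boundary, Equations (3) and (4).

    P, L: previous incident point P_{i-1} and unit direction l_{i-1} (3-vectors).
    rho, tau: boundary parameters rho_i, tau_i; R: surface radius R_i.
    """
    P = np.asarray(P, float)
    L = np.asarray(L, float)
    D = L[0] * P[0] + L[1] * (P[1] - rho) + L[2] * (P[2] - tau)
    E = P @ P + rho**2 + tau**2 - R**2 - 2.0 * (rho * P[1] + tau * P[2])
    disc = D * D - E
    if disc < 0:
        return None                      # the ray misses the sphere
    roots = np.array([-D + np.sqrt(disc), -D - np.sqrt(disc)])
    forward = roots[roots > 0]
    return forward.min() if forward.size else None   # nearest forward hit (assumed rule)

def lambda_plane(P, L, n, e):
    """Path length to a plane boundary n . x + e = 0, using D_i and E_i of Equation (5)."""
    P, L, n = (np.asarray(v, float) for v in (P, L, n))
    D = n @ P + e
    E = n @ L
    return -D / E if E != 0 else None
```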
For the purpose of tracing the reflected or refracted ray incident at the boundary surface, one needs the incidence angle θ_i (0° ≤ θ_i ≤ 90°), which is determined by the dot product of ℓ_{i−1} and the active unit normal vector n_i. Then the ray unit directional vector ℓ_i can be obtained from the laws of reflection and refraction, and Equations (7) and (8) show the calculated and simplified results. Note that N_i is the index of refraction or reflection defined by Snell's law. More details about the skew-ray tracing method derivation process are given in publications by Lin [14,15].
C\theta_i = \left|\boldsymbol{\ell}_{i-1}\cdot\mathbf{n}_i\right| = -\boldsymbol{\ell}_{i-1}\cdot\mathbf{n}_i
\alpha_i = \mathrm{atan2}\!\left(P_{i-1y} + \ell_{i-1y}\lambda_i + \rho_i,\; P_{i-1x} + \ell_{i-1x}\lambda_i\right)
\beta_i = \mathrm{atan2}\!\left(P_{i-1z} + \ell_{i-1z}\lambda_i + \tau_i,\; \sqrt{\left(P_{i-1x} + \ell_{i-1x}\lambda_i\right)^2 + \left(P_{i-1y} + \ell_{i-1y}\lambda_i + \rho_i\right)^2}\right)
\qquad (6)
Table 3. Ray incident points and ray unit vector parameters of ray path I. Each entry lists N_i; φ_i; n_i = (n_ix, n_iy, n_iz); ρ_i; τ_i; e_i.

i = 1 (Convex Spherical): N = 1/1.515; φ = 0; n = (Cβ1Cα1, Cβ1Sα1, Sβ1); ρ = −(d1 + R); τ = 0; e: Sphere
i = 2 (Concave Spherical): N = 1.515; φ = 0; n = (Cβ2Cα2, Cβ2Sα2, Sβ2); ρ = R − (d1 + d2); τ = 0; e: Sphere
i = 3 (Flat Mirror, Sample): N = Reflected; φ = 0; n = (−CωxSωz, CωxCωz, Sωx); ρ = 0; τ = 0; e = −(d1 + d2 + d3)
i = 4 (Convex Spherical): N = 1/1.515; φ = 0; n = (Cβ4Cα4, Cβ4Sα4, Sβ4); ρ = R − (d1 + d2); τ = 0; e: Sphere
i = 5 (Concave Spherical): N = 1.515; φ = 0; n = (Cβ5Cα5, Cβ5Sα5, Sβ5); ρ = −(d1 + R); τ = 0; e: Sphere
i = 6 (Flat Refracting): N = 1/1.515; φ = 0; n = (0, 1, 0); ρ = 0; τ = 0; e = 0
i = 7 (Flat Mirror): N = Reflected; φ = π/4; n = (0, Sφ7, −Cφ7); ρ = 0; τ = 0; e = −d7 Sφ7
i = 8 (Flat Refracting): N = 1.515; φ = 0; n = (0, 0, 1); ρ = 0; τ = 0; e = −d8
i = 9 (Flat Refracting): N = 1.515; φ = 0; n = (0, 0, 1); ρ = 0; τ = 0; e = −(d8 + d9)
i = 10 (Flat Mirror): N = Reflected; φ = 3π/4; n = (0, Cφ10, Sφ10); ρ = 0; τ = 0; e = (d8 + d9 + d10)Sφ10 − d7 Cφ10
i = 11 (Flat Refracting): N = 1.515; φ = 0; n = (0, 1, 0); ρ = 0; τ = 0; e = d8 C2φ10 − d7 S2φ10 − d11
i = 12 (Convex Spherical): N = 1/1.515; φ = 0; n = (Cβ12Cα12, Cβ12Sα12, Sβ12); ρ = d7 S2φ10 − d8 C2φ10 + d11 + d12 + R; τ = d7 C2φ10 + d8 S2φ10 + d9 + d10; e: Sphere
i = 13 (Concave Spherical): N = 1.515; φ = 0; n = (Cβ13Cα13, Cβ13Sα13, Sβ13); ρ = d7 + d11 + d12 + d13 − R; τ = d8 + d9 + d10; e: Sphere
i = 14 (CCD1): n = (0, 1, 0); ρ = 0; τ = d8 + d9 + d10; e = −d7 − d11 − d12 − d13 − d14
The unit vector of the reflected ray:
\boldsymbol{\ell}_i =
\begin{bmatrix} \ell_{ix} \\ \ell_{iy} \\ \ell_{iz} \\ 0 \end{bmatrix}
=
\begin{bmatrix}
\ell_{i-1x} + 2C\theta_i\, n_{ix} \\
\ell_{i-1y} + 2C\theta_i\, n_{iy} \\
\ell_{i-1z} + 2C\theta_i\, n_{iz} \\
0
\end{bmatrix}
= \boldsymbol{\ell}_{i-1} + 2C\theta_i\,\mathbf{n}_i
\qquad (7)
The unit vector of the refracted ray:
\boldsymbol{\ell}_i =
\begin{bmatrix} \ell_{ix} \\ \ell_{iy} \\ \ell_{iz} \\ 0 \end{bmatrix}
=
\begin{bmatrix}
-n_{ix}\sqrt{1 - N_i^2 + (N_i C\theta_i)^2} + N_i\left(\ell_{i-1x} + n_{ix} C\theta_i\right) \\
-n_{iy}\sqrt{1 - N_i^2 + (N_i C\theta_i)^2} + N_i\left(\ell_{i-1y} + n_{iy} C\theta_i\right) \\
-n_{iz}\sqrt{1 - N_i^2 + (N_i C\theta_i)^2} + N_i\left(\ell_{i-1z} + n_{iz} C\theta_i\right) \\
0
\end{bmatrix}
= \left(N_i C\theta_i - \sqrt{1 - N_i^2 + (N_i C\theta_i)^2}\right)\mathbf{n}_i + N_i\,\boldsymbol{\ell}_{i-1}
\qquad (8)
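Equations (6)–(8) translate directly into code. The sketch below is a minimal illustration (not the authors' implementation) of how the reflected and refracted ray unit vectors are obtained from the incidence cosine Cθi.

```python
import numpy as np

def reflect(l_prev, n):
    """Equation (7): unit vector of the reflected ray."""
    c = -float(l_prev @ n)          # C(theta_i) = -l_{i-1} . n_i, Equation (6)
    return l_prev + 2.0 * c * n

def refract(l_prev, n, N):
    """Equation (8): unit vector of the refracted ray, with N_i the index ratio."""
    c = -float(l_prev @ n)
    root = np.sqrt(1.0 - N**2 + (N * c)**2)    # assumes no total internal reflection
    return (N * c - root) * n + N * l_prev

# Example: a ray travelling along +y entering glass (N = 1/1.515) at a surface
# whose active normal points back toward the source.
l0 = np.array([0.0, 1.0, 0.0])
n1 = np.array([0.0, -1.0, 0.0])
print(refract(l0, n1, 1 / 1.515))   # -> [0, 1, 0] at normal incidence
print(reflect(l0, n1))              # -> [0, -1, 0]
```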
Sequentially tracing the ray incident points on each optical boundary gives the coordinates of
the spots on the two CCDs. Because ω x , ω z and δy are generally very small, a first-order Taylor series
expanded about ω x = ω z = δy = 0 can be used to obtain a first-order equation of the proposed system
measurement. Then, substituting into the equation the parameters for the actual sensor position and
measurement system, such as the system internal unit distance di and the lens surface radius Ri , the
system measurement Equation (9) of the spot coordinates on the two CCDs can be obtained:
P_{14x} = 215.147\,(3.44801 - 0.155909\,\delta_y + 0.780777\,\omega_x)
P_{14z} = 214.876\,(2.69362 + 0.0402449\,\omega_z)
P_{16x} = 215.147\,(3.47192 + 0.464506\,\delta_y + 48.5221\,\omega_x)
P_{16z} = 214.876\,(2.43596 - 50.5223\,\omega_z)
\qquad (9)

Equation (10) is the result of a reverse derivation, i.e., of solving Equation (9). It gives a set of linear equations that yields the sample slanting angles (ωx, ωz) and distance (δy) from the spot center locations on the CCDs:

\omega_x = -0.27030 + 0.00027\,P_{14x} + 0.00009\,P_{16x}
\omega_z = 0.04822 - 0.00009\,P_{16z}
\delta_y = 20.76186 - 0.02845\,P_{14x} + 0.00046\,P_{16x}
\qquad (10)
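To show how Equations (9) and (10) are used in practice, the short sketch below evaluates the forward spot-position model and then recovers (ωx, ωz, δy) from the spot centers. The coefficients are those printed above; the example pose is invented purely for illustration.

```python
def forward_model(delta_y, omega_x, omega_z):
    """Equation (9): spot coordinates on CCD1 (P14) and CCD2 (P16)."""
    P14x = 215.147 * (3.44801 - 0.155909 * delta_y + 0.780777 * omega_x)
    P14z = 214.876 * (2.69362 + 0.0402449 * omega_z)
    P16x = 215.147 * (3.47192 + 0.464506 * delta_y + 48.5221 * omega_x)
    P16z = 214.876 * (2.43596 - 50.5223 * omega_z)
    return P14x, P14z, P16x, P16z

def inverse_model(P14x, P16x, P16z):
    """Equation (10): sample pose recovered from the measured spot centers."""
    omega_x = -0.27030 + 0.00027 * P14x + 0.00009 * P16x
    omega_z = 0.04822 - 0.00009 * P16z
    delta_y = 20.76186 - 0.02845 * P14x + 0.00046 * P16x
    return omega_x, omega_z, delta_y

# Round-trip check with an arbitrary, illustrative pose
P14x, P14z, P16x, P16z = forward_model(delta_y=0.5, omega_x=0.1, omega_z=0.2)
print(inverse_model(P14x, P16x, P16z))
# close to (0.1, 0.2, 0.5); Equation (10) uses rounded coefficients
```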
Table 4. Ray incident points and ray unit vector parameters of ray path II. Each entry lists N_i; φ_i; n_i = (n_ix, n_iy, n_iz); ρ_i; τ_i; e_i.

i = 1 (Convex Spherical): N = 1/1.515; φ = 0; n = (Cβ1Cα1, Cβ1Sα1, Sβ1); ρ = −(d1 + R); τ = 0; e: Sphere
i = 2 (Concave Spherical): N = 1.515; φ = 0; n = (Cβ2Cα2, Cβ2Sα2, Sβ2); ρ = R − (d1 + d2); τ = 0; e: Sphere
i = 3 (Flat Mirror, Sample): N = Reflected; φ = 0; n = (−CωxSωz, CωxCωz, Sωx); ρ = 0; τ = 0; e = −(d1 + d2 + d3)
i = 4 (Convex Spherical): N = 1/1.515; φ = 0; n = (Cβ4Cα4, Cβ4Sα4, Sβ4); ρ = R − (d1 + d2); τ = 0; e: Sphere
i = 5 (Concave Spherical): N = 1.515; φ = 0; n = (Cβ5Cα5, Cβ5Sα5, Sβ5); ρ = −(d1 + R); τ = 0; e: Sphere
i = 6 (Flat Refracting): N = 1/1.515; φ = 0; n = (0, 1, 0); ρ = 0; τ = 0; e = 0
i = 15 (Flat Refracting): N = 1.515; φ = 0; n = (0, 1, 0); ρ = 0; τ = 0; e = −d15
i = 16 (CCD2): n = (0, 1, 0); ρ = 0; τ = 0; e = −d15 − d16
0
´d15
´d15 ´ d16
5.
5. Experimental
ExperimentalResults
Resultsand
andDiscussion
Discussion
Figure
Figure 55 shows
shows the
the actual
actualdesign
designof
ofthe
theproposed
proposedmeasurement
measurementsystem
systemon
onananoptical
opticalbench.
bench.
Photograph of the laboratory-built prototype.
Figure 5. Photograph
The
experimental set-up
set-up includes
includes placing
placing the
the sample
sample on
on aa dual
dual (α-β)
(α-β) axis
The experimental
axis goniometer
goniometer that
that rotates
rotates
˝
about
the
α
and
β
axes
with
a
resolution
of
0.1°.
LabVIEW
(National
Instruments
Co.,
Mopac
Expwy
about the α and β axes with a resolution of 0.1 . LabVIEW (National Instruments Co., Mopac Expwy
Austin, TX, USA ) is used to control the linear stage (one step of 1 lm, HF-KP053-B series, Chiuan Yan
Tech, Changhua, Taiwan) to induce mono-dimensional movements of the sample (i.e., distance (δy )),
and a He-Ne Laser is used as the system light source (λ = 632 nm, 2 mW).
5.1. Part 1
The measurement results of the sample distance (δy ) and slanting angles (ω x , ω z ) are verified
individually. When verifying the measurement results of the distance (δy ), the sample is driven by
Austin, TX, USA ) is used to control the linear stage (one step of 1 lm, HF-KP053-B series, Chiuan Yan
Tech, Changhua, Taiwan) to induce mono-dimensional movements of the sample (i.e., distance (y)),
and a He-Ne Laser is used as the system light source ( = 632 nm, 2 mW).
5.1. 2016,
Part 16,
1 1061
Sensors
9 of 13
The measurement results of the sample distance (y) and slanting angles (x, z) are verified
individually. When verifying the measurement results of the distance (y), the
sample is driven by a
a linear stage along the y-direction with both slanting angles (ω x , ω z ) set to 0˝ . The measurement range
linear stage along the y-direction with both slanting angles (x, z) set to 0°. The measurement range
of the distance is ´1~+1 mm. When verifying the measurement results of the flat sample slanting
of the distance is −1~+1 mm. When verifying the measurement results of the flat sample slanting angle
angle
ω x using
a dual
goniometer,
the sample
distance
to 0 mm,
andslanting
the slanting
y ) is fixed
x using
a dual
axis axis
goniometer,
the sample
distance
(y) is(δfixed
to 0 mm,
and the
angleangle
z
˝ ~+1˝ . The verification method of
ω z isisset
setto
to0°.
0˝ .The
Themeasurement
measurement
range
of
the
slanting
angle
is
´1
range of the slanting angle is −1°~+1°. The verification method of the
themeasurement
measurement
results
of the
sample
slanting
ω z is similar
that
of ω x . The verification
results
of the
flatflat
sample
slanting
angleangle
z is similar
to that to
of 
x. The verification
results
results
are shown
in Figure
6. All
measurement
points
in Figure
6 originate
from
simultaneous
images
are shown
in Figure
6. All
measurement
points
in Figure
6 originate
from
simultaneous
images
captured
by
two
CCDs.
The
sampling
frequency
is
one
image
per
second,
and
20
images
are
captured
captured by two CCDs. The sampling frequency is
image per second, and 20 images are captured
perper
measured
point.
Then,
andslanting
slantingangles
angles((ω
measured
point.
Then,the
theaverage
averagevalues
valuesof
ofthe
thesample
sample distance
distance ((δ
yy)) and
x, x
, zω
) z)
areare
calculated
using
the
inthe
theprevious
previoussection.
section.It It
observed
calculated
using
theresults
resultsofofthe
themathematical
mathematical derivations
derivations in
is is
observed
˝ and
that
measured
errorsofofthe
theslanting
slantingangles
angles (ω
(xx, 
y) yare
0.009°
6 µm,
that
thethe
measured
errors
ωzz))and
andthe
thedistance
distance((δ
) are
0.009and
6 µm,
respectively.
In
other
words,
the
measured
linearity
of
the
slanting
angles
(

x
,

z
)
and
the
distance
respectively. In other words, the measured linearity of the slanting angles (ω x , ω z ) and the distance
y) 0.45%
is 0.45%
and
0.3%,respectively.
respectively.
(δy()is
and
0.3%,
Figure
Verification
experimentalresults:
results:(a)
(a)distance
distance (δ
(yy);
x); and (c) slanting
Figure
6. 6.
Verification
ofof
experimental
); (b)
(b) slanting
slantingangle
angle(
(ω
x ); and (c) slanting
angle
angle
(ω(
).z).
z
5.2. Part 2
5.2. Part 2
This part would show the complex measurement results of the sample distance (y) and slanting
This part would show the complex measurement results of the sample distance (δy ) and slanting
angles (x, z). However the possible permutations of the sample distance (y) and slanting
angles
angles
(ω
,
ω
).
However
the
possible
permutations
of
the
sample
distance
(δ
)
and
slanting
x
z
y
(x, z) multiply towards infinity. This part would verify the measurement results of the distanceangles
(y)
(ω x , ω z ) multiply towards infinity. This part would verify the measurement results of the distance (δy )
under the setting of slanting angles (ω x , ω z ) that are equal to (2˝ , 2˝ ), (´2˝ , ´2˝ ), (´2˝ , 2˝ ), (2˝ , ´2˝ ).
The purpose of this experiment is to find the limit of the laboratory-built prototype.
The maximum measurement range of the distance (δy ) is ´4~+4 mm and each measured point
is 1 mm apart. The sampling frequency is one image per second, and 20 images are captured per
measured point. All measurement points in Figure 7 originate from simultaneous images captured
by two CCDs. Figure 7 shows the experimental results are very similar to the simulation results on
two CCDs.
Sensors 2016, 16, 1061
10 of 13
under the setting of slanting angles (x, z) that are equal to (2°, 2°), (−2°, −2°), (−2°, 2°), (2°, −2°).10The
of 13
purpose of this experiment is to find the limit of the laboratory-built prototype.
Sensors 2016, 16, 1061
(a)
(b)
(c)
(d)
Figure 7. Verification of the distance (y) results with the slanting angles (x, z): (a) (2°, 2°), (b) (−2°,
Figure 7. Verification of the distance (δy ) results with the slanting angles (ω x , ω z ): (a) (2˝ , 2˝ ),
−2°), (c)
(2°, −2°), and (d) (−2°, 2°).
(b) (´2˝ , ´2˝ ), (c) (2˝ , ´2˝ ), and (d) (´2˝ , 2˝ ).
The maximum measurement range of the distance (y) is −4~+4 mm and each measured point is
1 mm apart. The sampling frequency is one image per second, and 20 images are captured per
measured point. All measurement points in Figure 7 originate from simultaneous images captured
by two CCDs. Figure 7 shows the experimental results are very similar to the simulation results
on
Sensors 2016, 16, 1061
11 of 13
two CCDs.
6. System
SystemStability
StabilityTest
Test
stabilitytest
test
involves
placing
the proposed
measurement
in a stable
The stability
involves
first first
placing
the proposed
measurement
system in asystem
stable environment
environment
15 min
to regulate the
temperature
of the
Then,
the sample
is
for
at least 15 for
minattoleast
regulate
the temperature
of the
system. Then,
the system.
sample is
adjusted
to a fixed
˝
adjusted
to
a
fixed
orientation
and
distance
(e.g.,
distance

y
=
0
mm;
slanting
angles

x
=

z
=
0°),
and
orientation and distance (e.g., distance δy = 0 mm; slanting angles ω x = ω z = 0 ), and the system is
the system
is sampledfor
continuously
for 6 min.
The
results
of the test
system
stability
are8.shown
in
sampled
continuously
6 min. The results
of the
system
stability
are shown
in test
Figure
As seen,
˝ , of
Figure 8.
As seen,
stability
achieved
within 6and
mininline
at a distance
inline
angle
3 μm and
system
stability
is system
achieved
within is
6 min
at a distance
angle ofand
3 µm
and 0.01
respectively.
0.01°, respectively.
However
the measured
accuracy
of the proposed
measurement
is not
However
the measured
accuracy
of the proposed
measurement
system
is not only system
influenced
byonly
the
influenced
by the laser
beam
instability
but alsoaberrations,
by misalignments,
aberrations,
sensitivity,
etc.
laser
beam instability
but
also by
misalignments,
CCD sensitivity,
etc. CCD
and can
be enhanced
andusing
can be
enhanced
by using
high quality
laser
and close-loop
with a feedback
methods
by
high
quality laser
and close-loop
with
a feedback
signal methods
[16,17]. Assignal
a result,
in the
[16,17]. these
As a result,
theneed
future,
these
factors will
need to be to
considered
and optimized
reduce
future,
factors in
will
to be
considered
and optimized
reduce system
errors andtoimprove
system errors
and improve
accuracy.
Alsophysics
we willmodel
produce
modified
physicsinmodel
for
measured
accuracy.
Also wemeasured
will produce
modified
for real
applications
industry
real
applications
in
industry
and
we
must
overcome
some
additional
challenges.
In
our
plan,
we
will
and we must overcome some additional challenges. In our plan, we will combine our previous study
combine
our
previous toward
study togeometrical
improve robustness
toward
fluctuations
of the
laser beam
to
improve
robustness
fluctuations
of thegeometrical
laser beam of
the proposed
measurement
of the proposed
measurement
system and
analyze
accuracy
influences
the components
position
system
and analyze
accuracy influences
by the
components
position
error,by
misalignments,
aberrations,
error, misalignments,
aberrations,
errors,effect,
vibrations,
quantization
errors, vibrations,
andquantization
laser non-stability
etc. and laser non-stability effect, etc.
(yy)) and (b) slanting
Figure 8. Results of proposed measurement system stability test: (a) distance (δ
angles (ω
(xx,, ωz).
z ).
7. Comparison
7.
Comparisonof
ofProposed
ProposedMeasurement
MeasurementSystem
Systemand
andAutomatic
AutomaticCollimator
Collimator
We performed
performedaacomparison
comparison
test
using
a commercially
available
automatic
collimator
(H400We
test
using
a commercially
available
automatic
collimator
(H400-C050,
C050, measurement
Suruga
SeikiShizuoka,
Co., Shizuoka,
the proposed
measurement
measurement
range: range:
˘0.5˝ , ±0.5°,
Suruga
Seiki Co.,
Japan) Japan)
and theand
proposed
measurement
system
system
to
measure
the
slanting
angles
of
the
sample
(

x
,

z
).
As
shown
in
Figure
9,
the sample
is
to measure the slanting angles of the sample (ω x , ω z ). As shown in Figure 9, the sample
is placed
placed
onaxis
a dual
axis goniometer
that is on
installed
a linear
stage. Themeasurement
proposed measurement
on
a dual
goniometer
that is installed
a linearon
stage.
The proposed
system and
system
and
the
commercially
available
automatic
collimator
are
configured
alongside
each
otherthe
to
the commercially available automatic collimator are configured alongside each other to monitor
monitor
the
slanting
angles
(

x, z) of the sample on the linear stage simultaneously. The linear stage
slanting angles (ω x , ω z ) of the sample on the linear stage simultaneously. The linear stage is driven
is driven
that the automatic
and the proposed
measurement
system
can simultaneously
so
that thesoautomatic
collimatorcollimator
and the proposed
measurement
system can
simultaneously
measure
measure
the
sample
slanting
angle
information
in
scanning
mode.
Figure
10
shows
the
results
of
the sample slanting angle information in scanning mode. Figure 10 shows the results of comparing
comparing
the
proposed
measurement
system
and
the
conventional
automatic
collimator.
Using
the
the proposed measurement system and the conventional automatic collimator. Using the dual axis
˝ x = 0.5° and
˝
dual axis goniometer
to set the
sampleangles
slanting
as
z = 0°, one measurement is
goniometer
to set the sample
slanting
as ωangles
x = 0.5 and ω z = 0 , one measurement is recorded
recorded
for
every
1
mm
movement
of
the
linear
stage.
Each
recorded
point
is theresult
averaged
of
for every 1 mm movement of the linear stage. Each recorded point is the averaged
of 10 result
separate
10 separate measurements.
The experimental
that themeasurement
proposed measurement
measurements.
The experimental
results haveresults
shownhave
that shown
the proposed
system has
system
has
excellent
performance
and
its
practicability,
compared
to
the
conventional
automatic
excellent performance and its practicability, compared to the conventional automatic collimator.
collimator.
Sensors 2016, 16, 1061
Sensors 2016,
2016,16,
16,1061
1061
Sensors
12 of 13
of 13
12 12
of 13
Figure
automatic
collimator
comparison
test.
Figure9.
9.Set-up
Set-upof
ofthe
theproposed
proposedmeasurement
measurementsystem
systemand
and
automatic
collimator
comparison
test.
Figure
9.
Set-up
of
the
proposed
measurement
system
and
automatic
collimator
comparison
test.
Figure
measurement system
system and
andautomatic
automaticcollimator:
collimator:(a)
(a)ω x
Figure 10.
10. Comparative
Comparative test
test results
results of proposed measurement
Figure 10. Comparative test results of proposed measurement system and automatic collimator: (a)
experimental
x experimental
results
and
(b)

z
experimental
results.
results and (b) ω z experimental results.
x experimental results and (b) z experimental results.
8.
8. Conclusions
Conclusions
8.
Conclusions
An
has
been
successfully
developed
in this
study.
An optical
opticalnon-contacting
non-contactingmeasurement
measurementsystem
system
has
been
successfully
developed
inthis
this
study.
An
optical
non-contacting
measurement
system
has
been
successfully
developed
in
study.
This
system
can
measure
workpiece
samples
with
a
slanting
angle
and
simultaneously
measure
thethe
This system
system can
can measure
measure workpiece
workpiece samples
samples with
with aa slanting
slanting angle
angle and
and simultaneously
simultaneously measure
measure
This
sample
distance and
its orientation. The
optical simulation
software Zemax
has been used to performthe
sample distance
distance and
and its
its orientation.
orientation. The
The optical
optical simulation
simulation software
software Zemax
Zemax has
has been
been used
used to
to perform
perform
asample
feasibility analysis
of this
system. Additionally,
a model of the proposed
measurement
system
has
a feasibility
feasibility analysis
analysis of
of this
this system.
system. Additionally,
Additionally, aa model
model of
of the
the proposed
proposed measurement
measurement system
system has
has
a
been established using a HTM. The skew-ray tracing method is employed to trace the actual ray
been
established
using
a
HTM.
The
skew-ray
tracing
method
is
employed
to
trace
the
actual
ray
been established
using
a HTM.
The skew-ray
tracing
methodFinally,
is employed
to trace theprototype
actual ray
propagation
path to
obtain
the system
measurement
equations.
a laboratory-built
propagation
path
to
obtain
the
system
measurement
equations.
Finally,
a
laboratory-built
prototype
propagation
to optical
obtain the
system
measurement
equations.
Finally,
a laboratory-built
is
constructedpath
on an
bench
to verify
the performance
of the
proposed
measurement prototype
system.
is
constructed
on
an
optical
bench
to
verify
the
performance
of
the
proposed
measurement system.
system.
is
constructed
on
an
optical
bench
to
verify
the
performance
of
the
proposed
measurement
The orientation and distance measurement experiments show that the measuring ranges for the
The
orientation
and
distance
measurement
experiments
show
that
the
measuring
ranges
for
the
angles
The orientation
andare
distance
experiments
show
that the
measuring
ranges
angles
and distance
±1° andmeasurement
±1 mm, respectively,
and the
measured
errors
of the angles
andfor
thethe
˝ and
and
distance
are
˘1
˘1
mm,
respectively,
and
the
measured
errors
of
the
angles
and
the
distance
angles and
are 6±1°
and
±1 mm, respectively,
and the
measured
errors
of the
and the
distance
aredistance
0.009° and
µm,
respectively.
In other words,
the
measured
linearity
of angles
the slanting
are 0.009˝are
and 6 µm,and
respectively.
In other words,
the measured
linearity
of thelinearity
slanting of
angles
(ω x , ω z )
distance
6 µm, (respectively.
In 0.3%,
other
words, the
measured
the slanting
angles
(x, z)0.009°
and the distance
y) is 0.45% and
respectively.
and the(distance
(δ the
) isdistance
0.45% and
respectively.
angles
x, z) and y
(y)0.3%,
is 0.45%
and 0.3%, respectively.
Sensors 2016, 16, 1061
13 of 13
Acknowledgments: The authors gratefully acknowledge the financial support provided to this study by the Ministry
of Science and Technology of Taiwan under Grant Nos. MOST 103-2221-E-194-006-MY3 and 105-2218-E-194-004.
Author Contributions: Yen-Sheng Huang proposed the original idea, carried out the simulation and performed
acquisition of the experimental data. Yu-Ta Chen implemented the mathematical model and analyzed the
experiment results. Chien-Sheng Liu proposed the research direction and gave the technical support and
conceptual advice. Yu-Ta Chen and Chien-Sheng Liu drafted, organized, revised, and proofed the paper.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Du, Z.; Wu, Z.; Yang, J. Error ellipsoid analysis for the diameter measurement of cylindroid components using a laser radar measurement system. Sensors 2016, 16, 714. [CrossRef] [PubMed]
2. Kaloshin, G.; Lukin, I. Interferometric laser scanner for direction determination. Sensors 2016, 16, 130. [CrossRef] [PubMed]
3. Liu, C.S.; Jiang, S.H. A novel laser displacement sensor with improved robustness toward geometrical fluctuations of the laser beam. Meas. Sci. Technol. 2013, 24, 105101. [CrossRef]
4. Lu, S.H.; Hua, H. Imaging properties of extended depth of field microscopy through single-shot focus scanning. Opt. Express 2015, 23, 10714–10731. [CrossRef] [PubMed]
5. Liu, C.S.; Jiang, S.H. Precise autofocusing microscope with rapid response. Opt. Lasers Eng. 2015, 66, 294–300. [CrossRef]
6. Liu, C.S.; Jiang, S.H. Design and experimental validation of novel enhanced-performance autofocusing microscope. Appl. Phys. B 2014, 117, 1161–1171. [CrossRef]
7. Liu, C.S.; Wang, Z.Y.; Chang, Y.C. Design and characterization of high-performance autofocusing microscope with zoom in/out functions. Appl. Phys. B 2015, 121, 69–80. [CrossRef]
8. Makai, J.P. Simultaneous spatial and angular positioning of plane specular samples by a novel double beam triangulation probe with full auto-compensation. Opt. Lasers Eng. 2016, 77, 137–142. [CrossRef]
9. Lee, S.J.; Chang, D.Y. A laser sensor with multiple detectors for freeform surface digitization. Int. J. Adv. Manuf. Technol. 2007, 31, 1181–1190. [CrossRef]
10. Shiou, F.J.; Lee, R.T. Opto-Electronic Detector for Distance and Slanting Direction Measurement of a Surface. TW Patent I246585, 1 January 2006.
11. Gao, W.; Huang, P.S.; Yamada, T.; Kiyono, S. A compact and sensitive two-dimensional angle probe for flatness measurement of large silicon wafers. Precis. Eng. 2002, 26, 396–404. [CrossRef]
12. Lin, P.D.; Sung, C.K. Matrix-based paraxial skew ray-tracing in 3D systems with non-coplanar optical axis. Opt. Int. J. Light Electron Opt. 2006, 117, 329–340. [CrossRef]
13. Chen, C.J.; Lin, P.D.; Jywe, W.Y. An optoelectronic measurement system for measuring 6-degree-of-freedom motion error of rotary parts. Opt. Express 2007, 15, 14601–14617. [CrossRef] [PubMed]
14. Lin, P.D. Derivative matrices of a skew ray for spherical boundary surfaces and their applications in system analysis and design. Appl. Opt. 2014, 53, 3085–3100. [CrossRef] [PubMed]
15. Lin, P.D. New Computation Methods for Geometrical Optics; Springer: Singapore, 2013.
16. Grover, G.; Mohrman, W.; Piestun, R. Real-time adaptive drift correction for super-resolution localization microscopy. Opt. Express 2015, 23, 23887–23898. [CrossRef] [PubMed]
17. Rodríguez-Navarro, D.; Lázaro-Galilea, J.L.; Bravo-Muñoz, I.; Gardel-Vicente, A.; Tsirigotis, G. Analysis and calibration of sources of electronic error in PSD sensor response. Sensors 2016, 16, 619. [CrossRef] [PubMed]
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC-BY) license (http://creativecommons.org/licenses/by/4.0/).