
MODELLING THE 3D POSE OF A HUMAN ARM AND THE SHOULDER COMPLEX
UTILISING ONLY TWO PARAMETERS
Thomas B. Moeslund, Claus B. Madsen and Erik Granum
[email protected]
Laboratory of Computer Vision and Media Technology
Aalborg University, Niels Jernes Vej 14
DK-9220 Aalborg East, Denmark
ABSTRACT
In model-based computer vision it is necessary to have a geometric model of the object the pose of which is being estimated.
In this paper a very compact model of the shoulder complex and
arm is presented. First an investigation of the anatomy of the arm
and the shoulder is conducted to identify the primary joints and
degrees of freedom. To model the primary joints we apply image
features and clinical data. Our model only requires two parameters to describe the configuration of the arm. It is denoted the
local screw axis model since a new representation is produced for
each image. In the light of this model we have a closer look at
the parameters in the shoulder complex. We show how to eliminate the effects of these parameters by estimating their values
in each image. This is done based on experimental data found
in the literature together with an investigation of the movement
of the bones in the shoulder - the “shoulder rhythm” - as a function of the position of the hand in the image. Finally we justify
our approach by comparing the model with real data. Similar
tendencies are reported, suggesting a valid modelling scheme.
1. INTRODUCTION
Capturing the motion of humans has in recent years become a focus of much attention. An entire motion capture
(MoCap) industry has developed due to the commercial
interest, especially driven by the use of animated computer graphics in motion pictures and commercials, but
also due to the use of MoCap data in clinical studies e.g.,
diagnostics of orthopedic patients. Furthermore, a more
long term interest has emerged where the goal is to develop better human-computer interfaces (HCI).
The need for a non-invasive MoCap technology is obvious
and has led to many research initiatives into marker-free
computer vision MoCap systems. In [1] a comprehensive
survey is presented covering this topic. From the survey a
clear tendency towards the use of model-based approaches
can be seen. The typical model-based approach is to have
a 3D geometric model of the parts of the human that one
tries to capture the motion of. Different configurations of
the model are projected into the image for comparison with
the image data. The best match determines the current
pose of the human parts. A vast amount of different implementations of this idea can be found in the literature
with the one common issue that the results are highly dependent on the accuracy of the geometric model. In fact,
the captured motion can never be more accurate than the
geometric model. A precise geometric model is therefore
a necessity for the long term goal of capturing the exact
motion of a human body. However, as model-based vision in general requires much processing even for today's
hardware, simple models are usually applied.
In computer graphics the need for an accurate model is
more profound as this is crucial to make animations look
convincing. Unlike computer vision, computer graphics
algorithms do not have to run in real time and complex
models can therefore be applied. In this paper we try to
bridge the gap between the two domains by introducing a
complex model into computer vision but only using a few
parameters to model it.
Our work focuses on capturing the motion of a human
arm as this is the primary body part in general HCI applications. Furthermore we are interested in solving this
problem utilising a monocular approach as this results in the simplest hardware setup.
In computer vision the arm is most often assumed to be
connected to the shoulder which is fixed to the torso, hence
the position (not pose!) of the wrist is defined by four rotation parameters with respect to the torso (given the length
of the arm is known), where in fact 12 degrees of freedom
(DoF) exist [2].
The problem this paper deals with is then stated as: ”Given
the pose of the torso, how can the motion of the human
shoulder and arm be modelled with as few parameters as
possible?”.
We address the problem by building a model not based
on the anatomic joints but rather on the idea of applying
informative parameters to model the motion caused by
the anatomic joints. Concretely we apply clinical data together with image features to derive a very compact representation of the motion requiring few parameters compared to the DoF of the anatomic joints.
In fact we will show how to model the 12 anatomic DoF
in the shoulder and arm utilising just two parameters!
The structure of the paper is the following. In section 2 the
DoF in the shoulder complex and the arm are described. In
section 3 a compact model of the arm is presented. In section 4 a compact model of the DoF in the shoulder complex are presented and linked to the model developed for
the arm. In section 5 the modelling approach is evaluated,
and finally section 6 concludes the paper.
2. THE DOF OF THE SHOULDER AND ARM
When modelling the pose of the arm and shoulder it is necessary to understand the anatomic features which control
the movements, hence the bones and the joints connecting
them. A complete description of the interaction between
the joints and bones and the DoF, leads to a very comprehensive model. However, from a visualisation point
of view many of these DoF are insignificant and we will
therefore only focus on the large scale motion of the shoulder and arm, meaning that only the primary DoF are modelled. Furthermore, we assume that each anatomic joint
can be modelled by a geometrically ideal joint. Another
consequence of the focus on large scale motion is that we
assume the hand to be a part of the lower arm.
Overall the arm consists of the upper and lower arm, the lengths of which are assumed known and denoted l_u and l_l, respectively. The length of the lower arm is defined as the
distance from the elbow to the centre of the hand. As for
the rest of the movable rigid entities in the human body,
the arm consists of bones which are moved by muscles following the design of the joints connecting the bones. The
limits of the movements are imposed by the ligaments,
muscles, joints, and other bones, but in this text we simply refer to the limits of a joint independent of the origin.
The lower arm consists of two bones; the ulna and the
radius [3]. They are connected to each other with a pivot
joint at the elbow and with a pivot joint at the wrist. These
two joints allow the hand to rotate around the long axis of
the lower arm by rotating the radius around the ulna. The
pivot joint in the elbow is part of the elbow joint which
also connects the two bones with the upper arm bone, the
humerus. The ulna is connected in a hinge joint and the
radius in a ball-and-socket joint. Overall the primary DoF
at the elbow is modelled very well using one hinge joint
and one pivot joint even though the elbow motion is more
complex [4]. Since we ignore the motion of the hand we
can ignore the pivot rotations of the radius around the ulna,
hence the DoF in the elbow is modelled by one hinge joint.
The upper arm consists of one bone, the humerus, and
is connected to the shoulder in the gleno-humeral joint
(GH-joint), see figure 2.A. Even though the GH-joint contains two sliding DoF [5] it is well modelled as a ball-and-socket joint since the surfaces of the joint are close to spherical [6].
The ”socket” part of the joint is a part of the shoulder and
called the glenoid. Its motion with respect to the torso,
or more precisely the thorax, originates from the complex
structure of the shoulder, known as the shoulder complex
[7]. The shoulder complex provides mobility beyond any
other joint in the entire human body. The shoulder complex contains up to 11 DoF [2]. However, since the mobility of the shoulder complex can be described as a closed
kinematic chain these 11 DoF are not independent and in
fact the relative pose of the glenoid is well described by
only four independent parameters [7].
To model the shoulder complex requires a comprehensive
biomechanical model based on knowledge of the bones,
joints, ligament, muscles, and their interactions. Such
models have been developed, see e.g. [2, 8, 9, 10, 11].
Obviously these models contain a high number of parameters usually found by solving the inverse kinematic problem. That is, given the location of the hand, how is the
different joints and muscles configured?
We are not interested in such models since they contain
too many details, hence too many parameters. However,
by analysing the outcome of advanced biomechanical models we observe that the primary motion of the glenoid with
respect to the torso, hence the four parameters, is: rotations around the z- and y-axes, and vertical and horizontal displacements along the y- and x-axes, see 1.A for a
definition of the axes. The rotations can be governed by
the GH-joint by increasing its ranges accordingly whereas
the two displacements can be modelled by two prismatic
joints (allowing 1D translation), each having one DoF.
Altogether six DoF are needed to model the primary DoF
of the arm and the shoulder complex. These are listed in
table 1.
Table 1: The different joints utilised to model the primary DoF of the shoulder complex and arm. Displacement = disp.

Description of the joint           Type of joint (DoF)
Bending of the elbow               Hinge (1)
The GH-joint                       Ball-and-socket (3)
Vertical disp. of the glenoid      Prismatic (1)
Horizontal disp. of the glenoid    Prismatic (1)
3. MODELLING THE DOF OF THE ARM
The four DoF of the arm, see table 1, are virtually always
modelled by four Euler angles as they very convincingly
represent the anatomic DoF of the arm. In earlier work
[12] we have suggested the idea of applying image measurements to derive a more compact representation of the arm.

Figure 1: A: The concept utilised in the local screw axis model. B: The possible configurations for one image in a 3D space. The line indicates the possible configurations of the hand. The surface illustrates the possible configurations for 80 discrete positions of the hand on the line, hence 80 circles. The '*' is the position of the shoulder. C: The possible configurations in the local screw axis model.

Concretely we estimate the position of the hand in
the image (using colour vision) and, given a calibrated
camera, represent the position of the hand in 3D by just
one parameter. That is, the hand will be located somewhere on a line from the calibrated camera through the image position of the hand, and a single parameter describes the position of the hand along this line. Assuming the GH-joint is fixed, for the actual 3D position of the hand the possible elbow positions will describe a circle in space represented by one parameter, the angle around this circle, see figure 1.A. Combining the two parameters we have a model that can represent all possible configurations of the arm utilising only two parameters. We have, so to speak, used the image measurements to collapse the standard four DoF (Euler angles) into a two DoF model.
We denote this novel model the local screw axis model. For each new image a unique instance of the representation exists, hence the name ”local”. In figure 1.B the possible configurations are illustrated in 3D for one image where the hand is located somewhere on the line. In figure 1.C the local representation is illustrated for the parameters of the local screw axis model. Clearly this model has a very simple and compact representation of the possible configurations compared to the complex shape of the possible configurations in figure 1.B (not to mention that of the four Euler angles!). In this compact representation one parameter is bounded by one circle-sweep (2π) while the other is bounded by the total length of the arm.
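To make the two-parameter representation concrete, the following Python sketch builds one arm configuration from the two parameters: the depth of the hand along the camera ray and the rotation of the elbow around the shoulder-hand axis. It is only an illustration of the idea above, not an implementation taken from this work; the function and variable names (lam, alpha) and the example numbers are placeholders.

import numpy as np

def arm_configuration(cam_centre, ray_dir, shoulder, lam, alpha, l_u, l_l):
    """Local screw axis model (sketch): one configuration of the arm.

    cam_centre : 3D camera centre (from calibration).
    ray_dir    : unit vector of the camera ray through the hand's image position.
    shoulder   : assumed 3D position of the GH-joint.
    lam        : depth of the hand along the camera ray (first parameter).
    alpha      : rotation of the elbow around the shoulder-hand axis (second parameter).
    l_u, l_l   : known lengths of the upper and lower arm.
    """
    hand = cam_centre + lam * ray_dir             # hand constrained to the camera ray

    # Distance shoulder-hand and the elbow circle it induces.
    sh = hand - shoulder
    d = np.linalg.norm(sh)
    if not abs(l_u - l_l) <= d <= l_u + l_l:
        raise ValueError("no legal elbow position for this hand depth")

    # Circle of possible elbow positions: centre on the shoulder-hand axis,
    # radius from the two triangle sides l_u and l_l.
    a = (l_u ** 2 - l_l ** 2 + d ** 2) / (2.0 * d)   # distance from shoulder to circle centre
    rho = np.sqrt(max(l_u ** 2 - a ** 2, 0.0))       # circle radius
    axis = sh / d

    # Two unit vectors spanning the plane perpendicular to the axis.
    tmp = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(axis, tmp)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(axis, e1)

    elbow = shoulder + a * axis + rho * (np.cos(alpha) * e1 + np.sin(alpha) * e2)
    return hand, elbow

# Example with made-up numbers: camera at the origin looking along +Z.
hand, elbow = arm_configuration(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                                shoulder=np.array([0.2, 0.4, 2.0]),
                                lam=2.1, alpha=0.5, l_u=0.3, l_l=0.45)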
4. MODELLING THE DOF OF THE SHOULDER
In every community where a model of the arm and shoulder is required, the model either consists of four DoF or
seven+ DoF, depending on the level of details required
and the processing power available. The former number
is seen in computer vision and many computer graphic
applications whereas the latter number is seen in clinical studies, and advanced computer graphics. However,
Badler et al. [14] have shown that the primary DoF in the
shoulder can be modelled without increasing the number
of DoF in the model, hence the primary DoF in the shoulder are related to the DoF in the GH-joint.
We will utilise the same idea as [14] proposed but in the
context of our modelling approach utilising two prismatic
joints. Since [14] is in the context of computer graphics
they have to prepare their model for the inverse kinematic
problem, hence they need an analytic model. As is often the case when modelling a biomechanical phenomenon,
information is lost when fitting an analytic model to the
measurements. We do not have to solve the inverse kinematic problem but rather the forward kinematic problem.
Therefore we can allow ourself to use clinical data without
forcing an analytic solution upon the data. Furthermore,
we will apply in-depth knowledge of the functionality of
the shoulder complex in motion to enhance the clinical
data.
Concretely we seek to eliminate the effect of the prismatic
joints altogether by estimating the displacements of the
glenoid in each image and correcting the shoulder position
accordingly.
The idea is to find a relationship between the displacements of the GH-joint² (denoted Δy and Δx) and the angle between the torso and the upper arm, φ. For each image φ is estimated based on the position of the hand in the image, and Δy and Δx are estimated as Δy = Δy1 + Δy2 and Δx = Δx1 + Δx2, respectively, see figure 2.C. In the following we describe this approach in more detail.
2 The exact location of the GH-joint is defined as the centre of the
humerus head which is the point the humerus rotates about, hence its
position is fixed with respect to the glenoid. This means that finding the
displacement of the GH-joint is equivalent to finding the displacements
of the glenoid. The reason for choosing the GH-joint over the glenoid is
that the former has a more clear anatomic definition, see figure 2.A.
Figure 2: A: The different bones and joints in the shoulder complex. Figure after [13]. B: The initial configuration of the
SC- and AC-joints. C: The configuration of the SC- and AC-joints during elevation of the arm, see text.
4.1. Relating φ and Δy
To understand the relationship between φ and Δy the shoulder
complex consists of two bones (the shoulder girdle): the
scapula and the clavicle, see figure 2.A. The former is the
large bone on the back of the shoulder and the latter is
the one from the breast-bone to the top of the shoulder tip
[15]. Both bones are usually visible through the skin. The
clavicle is a long thin bone connected to the breast-bone
in the sterno-clavicular joint. It functions as a ball-and-socket joint and is denoted the SC-joint, see figure 2.A.
In the other end the clavicle is connected to the scapula
in the acromio-clavicular joint which also functions as a
ball-and-socket joint. This joint is denoted the AC-joint.
The scapula is a large triangular bone which contains the
glenoid below the AC-joint. The scapula is not connected
directly to the thorax through a joint but instead via muscles. This results in a very flexible ”joint” which can both
rotate and translate with respect to the thorax.
The value of Δy is the same as the vertical displacement of the GH-joint with respect to the resting position where φ = 0°, and it originates from elevation in the SC-joint and the AC-joint. Actually, without the rotation in the SC-joint and AC-joint the elevation of the arm would be limited. When elevating the arm to 180° only about two thirds of the rotation comes from the GH-joint and the rest comes from the rotation of the scapula [18], hence from the SC-joint and the AC-joint.
The primary displacement of the GH-joint comes from the elevation of the AC-joint, hence from the upwards rotation in the SC-joint. In the work by Dvir and Berme [5] measurements describing the vertical displacement of the AC-joint as a function of φ are presented, see the dashed graphs in figure 3.A. This gives us Δy1, but to obtain a complete relationship we need to add Δy2, which expresses the vertical displacement of the GH-joint based on the rotation in the AC-joint. In figure 2.C the additional displacement is illustrated. In figure 2.B the initial configurations of the SC-joint, the AC-joint, and the GH-joint are shown, i.e. when φ = 0°. We assume that the clavicle is horizontal when φ = 0° and that the GH-joint is located at a distance r directly below the AC-joint, see figure 2.A.
In figure 2.C GH’ is the position of the GH-joint if we only apply Δy1. The additional displacement of the GH-joint when the rotation in the AC-joint is taken into account, Δy2, can be calculated as

Δy2 = r (1 − cos τ)    (1)

where τ is the angle between the line from the AC-joint to the GH-joint and the vertical, derived from β (see figure 2.C), and r is approximately equal to 18.5% of the length of the upper arm [19]. That is, r ≈ 0.185 l_u.
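As a minimal numeric illustration of equation 1, the following snippet evaluates Δy2 for a few AC-joint rotations; the upper-arm length of 0.30 m is a placeholder value, not a measurement from this work.

import numpy as np

l_u = 0.30                   # assumed upper-arm length in metres (placeholder)
r = 0.185 * l_u              # distance from the AC-joint down to the GH-joint, r = 0.185 l_u
for tau_deg in (0, 15, 30, 45):
    dy2 = r * (1.0 - np.cos(np.radians(tau_deg)))   # equation 1
    print(f"tau = {tau_deg:2d} deg -> dy2 = {dy2 * 1000:.1f} mm")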
To be able to calculate Δy2 as a function of φ we need to know how α and β depend on φ. The relationship goes via the rotation of the scapula, which is therefore investigated further. To structure our investigation we utilise the concept of the ”shoulder rhythm” [7] or scapulohumeral rhythm [18]. The shoulder rhythm is defined as the ratio, ρ, of the upwards rotation in the GH-joint, γ, to the angle between the scapula and the torso, σ, hence ρ = γ/σ. Since φ = γ + σ we can apply the ratios reported in the literature to find the relationship between the rotation of the scapula and φ as σ = φ/(ρ + 1). The question is now how to relate the rotation of the scapula, σ, to α and β.
On average the ratio over a 180° elevation of φ is 2:1 [20]. However, the ratio varies a lot during such an elevation. Usually the elevation is divided into four phases [18]. In table 2 the ratios and the actual rotation changes of γ and σ in each phase are listed, hence Δγ and Δσ.
In the first phase the primary rotation of the arm is done
in the GH-joint, hence a high ratio. The small rotation of
the scapula is assumed to originate from rotation in the
SC-joint. The clavicle and scapula therefore move as one
object meaning that β is constant in this phase.
In the second phase the SC-joint is rotated alongside the rotation in the GH-joint and a smaller ratio is present.
Table 2: The four phases in the shoulder rhythm. For each phase the relative change in the gleno-humeral angle, Δγ, the relative change in the scapula angle, Δσ, and the ratio between the two are shown. The source of each ratio is indicated by a reference after the ratio. Furthermore the table shows the relationship between φ and the angles α and β. The interval of α in the first phase is denoted [α_1,min ; α_1,max] etc. Where an interval reported in [16] does not coincide with a phase boundary, its ratio is assumed to hold over the entire phase.

Phase  Interval of φ   Ratio ρ    Δγ     Δσ     Interval for α               Interval for β
1      [0° ; φ_1]      ρ_1 [17]   Δγ_1   Δσ_1   [α_1,min ; α_1,min + Δσ_1]   [β_1,min ; β_1,min]
2      [φ_1 ; φ_2]     ρ_2 [16]   Δγ_2   Δσ_2   [α_1,max ; α_1,max + Δσ_2]   [β_1,max ; β_1,max]
3      [φ_2 ; φ_3]     ρ_3 [16]   Δγ_3   Δσ_3   [α_2,max ; α_2,max]          [β_2,max ; β_2,max + Δσ_3]
4      [φ_3 ; 180°]    ρ_4 [16]   Δγ_4   Δσ_4   [α_3,max ; α_3,max + Δσ_4]   [β_3,max ; β_3,max]
Figure 3: The vertical (A) and horizontal (B) displacements (in mm), respectively, as a function of φ (in degrees). The dashed graphs are the displacements of the AC-joint due to the rotation in the SC-joint. The circles are the actual measurements from [5] while the graphs are obtained by spline interpolation. The dotted graphs are the additional displacements of the GH-joint due to rotation in the AC-joint. The solid graphs are the total displacements of the GH-joint with respect to the SC-joint, normalised to zero at φ = 0°.
In this phase the assumption applied above holds [5] and β can be assumed to be constant.
In the third phase the primary rotation of the scapula comes
from a rotation in the AC-joint. For simplicity we ignore
the small rotation in the SC-joint and can therefore assume
α to be constant.
In the last phase little elevation is done in both the SC-joint
and AC-joint. However, in this phase the clavicle starts rotating about its long axis and since it is crank-shaped it displaces the AC-joint and therefore rotates the scapula (and
the GH-joint) further [18]. Again the assumption about
the clavicle and scapula moving as one object holds [5]
and we can therefore assume β to be constant.
In the last two columns of table 2 the relationships between φ and the angles α and β are summarised. From these results, within phase j the angles α_j(φ) and β_j(φ) are given as α_j(φ) = Δα_j(φ) + α_j,min and β_j(φ) = Δβ_j(φ) + β_j,min, respectively, where

Δα_j(φ) = (α_j,max − α_j,min) / (φ_j,max − φ_j,min) · (φ − φ_j,min)    (2)

Δβ_j(φ) = (β_j,max − β_j,min) / (φ_j,max − φ_j,min) · (φ − φ_j,min)    (3)
where j indicates the number of the phase. Combining these equations with equation 1 and the results summarised in table 2 allows us to calculate the relationship between Δy2 and φ as

Δy2(φ) = r (1 − cos τ_j(φ)),    j = 1, ..., 4    (4)

where τ_j(φ) is linear in φ within phase j, with coefficients following from equations 2 and 3 and the values in table 2. In figure 3.A equation 4 is illustrated as the dotted graph. The sum of Δy1(φ) and Δy2(φ), hence Δy(φ), is illustrated as the solid graph in figure 3.A.
The estimate of Δx(φ) = Δx1(φ) + Δx2(φ) is obtained in a similar way to the above procedure and is therefore not shown here. The results are shown in figure 3.B [21].
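Since the displacement curves of figure 3 are only used as a look-up table (see section 4.3), they can be implemented as simple interpolators over sampled values. The sketch below is not the data of figure 3: the sample arrays are placeholders standing in for values read off the solid graphs.

import numpy as np

# Placeholder samples (phi in degrees, displacements in mm) standing in for the
# solid graphs of figure 3; the real values would be tabulated from [5] and eq. 4.
phi_samples = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 180.0])
dy_samples  = np.array([0.0,  2.0,  8.0, 18.0,  28.0,  35.0,  38.0])
dx_samples  = np.array([0.0, -3.0, -10.0, -22.0, -38.0, -50.0, -55.0])

def delta_y(phi):
    """Vertical displacement of the GH-joint, linearly interpolated (mm)."""
    return np.interp(phi, phi_samples, dy_samples)

def delta_x(phi):
    """Horizontal displacement of the GH-joint, linearly interpolated (mm)."""
    return np.interp(phi, phi_samples, dx_samples)

print(delta_y(75.0), delta_x(75.0))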
4.2. Relating the Position of the Hand and φ
To utilise the results derived above the angle, φ, needs to be estimated. In this section we present two methods to find the relationship between the position of the hand and φ. Which method to use depends on whether predictions of the kinematics of the arm are available or not.
4.2.1. Without Predictions
We assume a constant rotation angle of the shoulder-elbow-hand triangle around the line spanned by the shoulder and the hand. This assumption allows for a calculation of the position of the elbow. Given the position of the elbow we can calculate the relationship as φ = 180° − arccos(E_y / l_u), where E_y is the y-coordinate of the elbow in the shoulder coordinate system, see figure 4.
Unfortunately we do not know the distance from the camera to the hand, and finding the 3D position of the hand given its 2D projection in the image is an ill-posed problem. However, in the context of estimating the angle, φ, the image position of the hand is the primary parameter if we assume the camera plane to be approximately parallel to the torso and to have approximately the same y-coordinate as the shoulder. These reasonable assumptions allow a realistic estimation of the average position of the hand in a particular image. We define this estimate as the centre position between the two points where the camera ray through the hand intersects a sphere with centre in the shoulder and radius equal to the total length of the arm, l_u + l_l.
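A sketch of this prediction-free estimate in Python: it computes the average hand position as the midpoint of the ray-sphere intersection and then one elbow candidate from an assumed constant swivel angle, as described above. The function and argument names and the choice of swivel value are placeholders for illustration.

import numpy as np

def estimate_phi_without_predictions(p0, u, shoulder, l_u, l_l, swivel=0.0):
    """Estimate phi without predictions (sketch).

    p0, u    : a point on the camera ray through the hand and its unit direction
               (both obtained from the camera calibration and the hand's image position).
    shoulder : 3D position of the GH-joint, origin of the shoulder coordinate
               system, whose y-axis points upwards.
    swivel   : assumed constant rotation of the shoulder-elbow-hand triangle
               around the shoulder-hand line (placeholder value).
    """
    # Average hand position: midpoint of the two intersections between the ray
    # and the sphere of radius l_u + l_l centred at the shoulder.
    R = l_u + l_l
    w = p0 - shoulder
    b = np.dot(w, u)
    disc = b * b - (np.dot(w, w) - R * R)
    if disc < 0.0:
        raise ValueError("camera ray does not reach the sphere of arm length")
    hand = p0 - b * u                 # midpoint of the chord through the sphere

    # Elbow candidate on the circle of legal elbow positions, at the assumed swivel angle.
    sh = hand - shoulder
    d = np.linalg.norm(sh)
    a = (l_u ** 2 - l_l ** 2 + d ** 2) / (2.0 * d)
    rho = np.sqrt(max(l_u ** 2 - a ** 2, 0.0))
    axis = sh / d
    tmp = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(axis, tmp)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(axis, e1)
    elbow = shoulder + a * axis + rho * (np.cos(swivel) * e1 + np.sin(swivel) * e2)

    # phi = 180 deg minus the angle between the upper arm and the +y axis (section 4.2.1).
    e_y = (elbow - shoulder)[1]
    return 180.0 - np.degrees(np.arccos(np.clip(e_y / l_u, -1.0, 1.0)))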
4.2.2. With Prediction
The predicted values of the positions of the hand and the elbow are calculated from the estimated values in the previous images and a motion model, using a standard Kalman filter framework. The idea behind this method is to combine the predictions and the image measurements to obtain a better estimate of φ. In figure 4 the predictions are illustrated using subscript ’p’ while the corrected predictions are illustrated using subscript ’c’.
Since we know the camera ray through the hand in the current image, l, we can correct the prediction by projecting the predicted position of the hand, H_p, onto the line l. The corrected prediction is denoted H_c and is calculated as H_c = P + ((H_p − P) · u) u, where P is a point on l and u is the unit direction vector of this line. Both are found through camera calibration and the position of the hand in the image.
Figure 4: The shoulder coordinate system seen from
above. The circle illustrates the limits of the elbow position. The dashed line indicates the camera ray through
the hand. See text for a definition of the parameters.
The difference between the predicted and corrected positions yields a measure of the prediction error, denoted e_H, and calculated as e_H = H_c − H_p.
We have no measurements of the position of the elbow in the current image which can be used to correct the prediction of the elbow. However, we know it is likely to have an error vector closely related to that of the hand, as the hand and elbow are part of the same open-loop kinematic chain. We therefore calculate the corrected position, E_c, by first adding the prediction error of the hand to the predicted value of the elbow, yielding E_m = E_p + e_H, and then finding the point closest to E_m that results in a legal configuration of the arm. In mathematical terms E_c = arg min_E ||E − E_m|| subject to the constraints ||E|| = l_u and ||E − H_c|| = l_l, i.e. the elbow must be consistent with the known lengths of the upper and lower arm. A solution to this problem can be found in [21]. Having found the corrected position of the elbow we can now calculate the desired angle between the torso and upper arm as φ = 180° − arccos(E_c,y / l_u).
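A sketch of the correction step with predictions, expressed in the shoulder coordinate system (shoulder at the origin). The projection of the predicted hand onto the camera ray and the transfer of the error to the elbow follow the text above; the final step, finding the closest legal elbow to E_m, is solved here as the closest point on the circle where the two constraint spheres intersect, which is one possible solution and not necessarily the one given in [21].

import numpy as np

def correct_predictions(h_p, e_p, p0, u, l_u, l_l):
    """Correct the Kalman predictions of hand (h_p) and elbow (e_p) - a sketch.

    p0, u : a point on the camera ray through the hand and its unit direction.
    All positions are given in the shoulder coordinate system (shoulder at the origin).
    """
    # Corrected hand: orthogonal projection of the predicted hand onto the ray.
    h_c = p0 + np.dot(h_p - p0, u) * u
    err_h = h_c - h_p                        # prediction error of the hand

    # Transfer the hand error to the elbow prediction.
    e_m = e_p + err_h

    # Legal elbows satisfy ||E|| = l_u and ||E - h_c|| = l_l; they form a circle
    # with centre a*n and radius rho in the plane perpendicular to n = h_c / ||h_c||.
    d = np.linalg.norm(h_c)
    n = h_c / d
    a = (l_u ** 2 - l_l ** 2 + d ** 2) / (2.0 * d)
    rho = np.sqrt(max(l_u ** 2 - a ** 2, 0.0))

    # Closest point on that circle to e_m: project e_m into the circle's plane,
    # then move it radially onto the circle.
    w = e_m - np.dot(e_m, n) * n
    if np.linalg.norm(w) < 1e-9:             # degenerate: e_m lies on the axis
        tmp = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        w = np.cross(n, tmp)
    e_c = a * n + rho * w / np.linalg.norm(w)

    # Angle between the torso (-y direction) and the upper arm, as in section 4.2.
    phi = 180.0 - np.degrees(np.arccos(np.clip(e_c[1] / l_u, -1.0, 1.0)))
    return h_c, e_c, phi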
4.3. Eliminating the Effect of Δy and Δx
Figure 5: Three frames from the test sequence.

After having estimated the angle between the torso and the upper arm, φ, with or without the use of predictions, we can apply the result and thereby eliminate the effect of the prismatic joints, Δy and Δx, altogether. That is, φ is used to estimate the vertical and the horizontal displacements of the GH-joint simply by looking up the value of φ in the results shown as the solid graphs in figure 3, hence no additional parameters are required. For each image we can now estimate the position of the shoulder, or rather the GH-joint, and therefore achieve a more accurate model of the arm for synthesising into the image.
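Correcting the shoulder position then amounts to a single shift per image. The sketch below assumes look-up functions like the ones sketched at the end of section 4.1 and a resting GH-joint position gh_rest for φ = 0°; all values are placeholders.

import numpy as np

def correct_gh_position(gh_rest, phi, delta_x, delta_y):
    """Shift the resting GH-joint position by the displacements looked up for phi (sketch)."""
    return gh_rest + np.array([delta_x(phi), delta_y(phi), 0.0])

# Usage with trivial stand-ins for the two look-up functions (units: mm).
gh = correct_gh_position(np.array([0.0, 0.0, 0.0]), 90.0,
                         delta_x=lambda p: -0.25 * p, delta_y=lambda p: 0.2 * p)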
5. EVALUATING THE MODELLING APPROACH
Modelling the pose of the arm, hence the GH-joint and elbow joints, by only two parameters is a novel approach. Through
geometric reasoning it can easily be shown that there exists a one-to-one mapping between our representation and
the standard representation via four Euler angles. Since
the Euler representation is sound, the same must be true
for the local screw axis model. Hence we do not test this
modelling approach.
Modelling the motion of the GH-joint with respect to the
torso by two prismatic joints, on the other hand, needs to
be justified through tests. In this section we therefore conduct a qualitative test in order to evaluate this modelling
approach. In the test, model data are compared with real
measurements. A sequence of 16 images wherein a subject is moving his arm is recorded, see figure 5.
The subject’s torso is parallel with respect to the image
plane and it remains fixed throughout the entire sequence.
The arm of the subject is completely outstretched and the
hand describes a circle in a plane parallel to the camera.
Given a distance of 4 metres between the camera and the subject we assume an orthographic projection. So, from the
circle described by the hand the position of the GH-joint
and the angle φ can be measured directly in the image in
the following manner.
The angle φ is measured in each image as the angle between the upper arm and the vertical border of the image.
To find the position of the GH-joint we first manually go
through the entire sequence and determine the distance in
pixels from the fingertips to the assumed position of the
GH-joint, hence the length of the arm in pixels. Then we go through the sequence again and in each image we define a vector starting at the fingertips and with this length. The vector is positioned so that it passes through the
upper arm 1/3 from the ’upper edge’ of the arm and 2/3
from the ’lower edge’ of the arm. The end position of the
vector defines the position of the GH-joint in the current
image. In figure 6.A the displacements of the GH-joint
in the image are shown. The solid line illustrates the horizontal displacement while the dashed line illustrates the
vertical displacement.
The ratio between the maximum vertical change and the maximum horizontal change for the measurements is close to the corresponding ratio for the model data in figure 3, indicating a relatively high similarity between the model and
the measurements. In figure 6.B and 6.C the measurements are illustrated as dashed graphs scaled to the range
of the model data. Even though the graphs are not in complete alignment it is clear that they have similar tendencies. Furthermore, it is obvious that the model data fit
the measurement data far better than any horizontal line
(fixed shoulder position) that might be applied instead of
our model.
In conclusion it needs to be added that even though the above suggests a high similarity, hence a good model,
the test is merely qualitative. To be able to make a more
solid conclusion a better test setup with fewer assumptions and quantitative tests is required. However, we stick
to our qualitative tests and accept a less definite conclusion, hence we can only conclude that a clear tendency is
observable.
Figure 6: A: The displacements of the GH-joint measured in the image sequence. The solid line illustrates the horizontal displacement and the dashed line illustrates the vertical displacement. The displacements, Δp, are measured in pixels with respect to the position in the first frame. B: The measured data of the vertical displacement scaled to the range of the model data. The solid line and the dashed line illustrate the model data and the measurements, respectively. C: The same as B, but for the horizontal displacement.
6. CONCLUSION
In this paper we have presented an efficient way of modelling the DoF of the shoulder complex and the arm. In
fact we have shown how to model the 12 primary DoF
using only two parameters.
We initially suggested modelling the motion of the GH-joint with respect to the torso using two prismatic joints
and the motion of the arm by four rotational DoF. We then
presented the local screw axis model which through the
image measurements of the position of the hand can represent the configurations of the arm by only two parameters. Next we showed how to eliminate the need
for parameters to model the prismatic joints by estimating their values in each image based on the position of the
hand in the image. This estimation is achieved by investigating the movement of the bones in the shoulder - the
clavicle and scapula - as a function of the position of the
hand. The movement of the clavicle is defined from experimental data found in the literature. The movement of
the scapula is found by analysing the different phases of
the ”shoulder rhythm”.
The concrete usage of our results in model-based computer vision can be summarised as the following steps, carried out for each image (see the sketch after this list):

- Find the position of the hand in the image using colour vision.

- Estimate the angle between the torso and the upper arm, φ, with or without the use of predictions. Section 4.2.

- Estimate the displacements of the GH-joint utilising the two solid graphs in figure 3 as a look-up table. This results in the 3D position of the shoulder in the current image. Section 4.1.

- Using the 3D position of the shoulder, the position of the hand in the image, and a camera calibration, define the local screw axis model, hence all possible configurations of the arm are represented with just two variables. Section 3.

- Synthesise a configuration of the arm and compare it with image data, e.g. edges. This is done for a number of different configurations until the 'best' match is found. To decide which configurations to synthesise we could use e.g. all possible configurations, hence an exhaustive search, all configurations within a predicted region, an iterative search, or a particle filter, hence a sequential Monte Carlo search.
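These steps can be strung together per image roughly as in the sketch below. Everything passed in (the hand detector, the φ estimator, the configuration scoring against edge data, and the parameter grids) is a placeholder supplied by the caller; nothing beyond the description above is prescribed here.

import numpy as np
from itertools import product

def track_arm_in_image(image, detect_hand, ray_from_pixel, estimate_phi,
                       delta_x, delta_y, gh_rest, arm_configuration, score,
                       lam_values, alpha_values):
    """One iteration of the model-based matching loop (sketch, exhaustive search)."""
    # 1) Hand position in the image (colour-based detector supplied by the caller).
    hand_px = detect_hand(image)

    # 2) Angle between the torso and the upper arm (section 4.2).
    phi = estimate_phi(hand_px)

    # 3) Shoulder (GH-joint) position for this image via the look-up table (section 4.1).
    shoulder = gh_rest + np.array([delta_x(phi), delta_y(phi), 0.0])

    # 4) + 5) Local screw axis model: search its two parameters and keep the best match.
    p0, u = ray_from_pixel(hand_px)
    best = None
    for lam, alpha in product(lam_values, alpha_values):
        hand, elbow = arm_configuration(p0, u, shoulder, lam, alpha)
        s = score(image, shoulder, elbow, hand)          # e.g. an edge-based comparison
        if best is None or s > best[0]:
            best = (s, shoulder, elbow, hand)
    return best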
From a computer vision point of view the model developed in this paper is believed to be exactly what one desires, namely a model of the primary DoF where the states of the prismatic joints are directly coupled to the angle between the upper arm and the torso, hence the extra two DoF come for free. If the position of the hand cannot be found in the image our modelling approach for the arm fails. The modelling approach for the shoulder complex is, however, still useful. That is, the primary eight DoF in the shoulder complex are modelled with only two prismatic joints, hence two DoF.
The main contributions of this paper are first of all the
local screw axis model which allows for a two DoF model
of the arm. Secondly, the idea of modelling the DoF of the
shoulder complex by a ball-and-socket joint together with
two prismatic joints is also believed to be new. Thirdly, the
modelling of the displacements of the GH-joint without
introducing additional parameters can hopefully provide
others with a more detailed model without increasing the
number of DoF.
7. REFERENCES
[1] T.B. Moeslund and E. Granum, “A Survey of Computer Vision-Based Human Motion Capture,” Computer Vision and Image Understanding, vol. 81, no.
3, 2001.
[2] W. Maurel, 3D Modeling of the Human Upper Limb
including the Biomechanics of Joints, Muscles and
Soft Tissues, Ph.D. thesis, Laboratoire d’Infographie
- Ecole Polytechnique Federale de Lausanne, 1998.
[3] B.F. Morrey, “Anatomy of the Elbow Joint,” in The
Elbow and its Disorders, chapter 2. W.B. Saunders
Company, 1985.
[4] K.N. An and B.F. Morrey, “Biomechanics of the Elbow,” in The Elbow and its Disorders, chapter 3.
W.B. Saunders Company, 1985.
[5] Z. Dvir and N. Berme, “The Shoulder Complex
in Elevation of the Arm: A Mechanism Approach,”
Journal of Biomechanics, vol. 11, 1978.
[6] L.J. Soslowsky, E.L. Flatow, L.U. Bigliani, and V.C.
Mow, “Articular geometry of the glenohumeral
joint,” Clinical Orthopaedics and Related Research,
vol. 285, 1992.
[7] V.M. Zatsiorsky, Kinematics of Human Motion,
Champaign, IL: Human Kinetics, 1998.
[8] “The AnyBody Project,” 2002, http://anybody.auc.dk/.
[9] A.E. Engin and S.T. Tümer, “Three-Dimensional
Kinematic Modelling of the Human Shoulder Complex - Part 1: Physical Model and Determination of
Joint Sinus Cones,” Journal of Biomechanical Engineering, vol. 111, 1989.
[10] S.T. Tümer and A.E. Engin, “Three-Dimensional
Kinematic Modelling of the Human Shoulder Complex - Part 2: Mathematical Modelling and Solution
Via Optimization,” Journal of Biomechanical Engineering, vol. 111, 1989.
[11] C. Högfors, B. Peterson, G. Sigholm, and P. Herberts, “Biomechanical Model of the Human Shoulder Joint - 2. The Shoulder Rhythm,” Journal of Biomechanics, vol. 24, no. 8, 1991.
[12] T.B. Moeslund and E. Granum, “3D Human Pose
Estimation using 2D-Data and an Alternative Phase
Space Representation,” in Workshop on Human
Modeling, Analysis and Synthesis at CVPR, Hilton
Head Island, South Carolina, June 16 2000.
[13] K. Breteler, C.W. Spoor, and F.C.T. Van der Helm,
“Measuring Muscle and Joint Geometry Parameters
of a Shoulder for Modelling Purposes,” Journal of
Biomechanics, vol. 32, no. 11, 1999.
[14] N.I. Badler, C.B. Phillips, and B.L. Webber, Simulating Humans - Computer Graphics Animation and
Control, Oxford University Press, 1999.
[15] E.A. Codman, The Shoulder, Boston: Thomas Todd
Co., 1934.
[16] S.D. Bagg and W.J. Forrest, “A Biomechanical
Analysis of Scapula Rotation during Arm Abduction
in the Scapula Plane,” Am J Phys Med Rehabil, vol.
67, 1988.
[17] S.G. Doody, L. Freedman, and C.J. Waterland,
“Shoulder Movements during abduction in the
Scapular Plane,” Arch Phys Med Rehabil, vol. 51,
1970.
[18] E. Culham and M. Peat, “Functional Anatomy of
the Shoulder Complex,” Journal of Orthopaedic and
Sports Physical Therapy, vol. 18, no. 1, 1993.
[19] P. de Leva, “Joint Center Longitudinal Positions
Computed from a Selected Subset of Chandler’s
Data,” Journal of Biomechanics, vol. 29, 1996.
[20] V.T. Inman, J.B. Saunders, and L.C. Abbott, “Observations on the Function of the Shoulder Joint,” Bone
and Joint Surgery, vol. 26, 1944.
[21] T.B. Moeslund, “Modelling the Human Arm,” Tech.
Rep. CVMT 02-01, Laboratory of Computer Vision
and Media Technology, Aalborg University, Denmark, 2002.