CT Image Based 3D Reconstruction for Computer Aided
Orthopaedic Surgery
F. Blanco
IDMEC-IST, Technical University of Lisbon, Av Rovisco Pais, 1049-001 Lisboa, Portugal
P.J.S. Gonçalves
Polytechnic Institute of Castelo Branco, Av Empresario, 6000-767 Castelo Branco, Portugal
IDMEC-IST, Technical University of Lisbon, Av Rovisco Pais, 1049-001 Lisboa, Portugal
J.M.M. Martins & J.R. Caldas Pinto
IDMEC-IST, Technical University of Lisbon, Av Rovisco Pais, 1049-001 Lisboa, Portugal
Abstract
This paper presents an integrated system of 3D medical image visualization and
robotics to assist orthopaedic surgery, namely by positioning a manipulator robot end-effector
using a CT image based femur model.
The 3D model is generated from CT images acquired in a preoperative exam, using VTK applied to the free medical visualization software InVesalius. Add-ons were developed for communication with the robot, which is then moved to the designated position to perform surgical tasks.
A stereo vision 3D reconstruction algorithm is used to determine the coordinate
transform between the virtual and the real femur by means of a fiducial marker attached to the
bone.
Communication between the robot and 3D model is bi-directional since visualization is
updated in real time from the end-effector movement.
1. Introduction
In recent years, robotic applications for surgical assistance have appeared more often. Their precision, accuracy, repeatability and the ability to use specialized manipulators are granting them a place in the operating theatre.
By introducing robotic technology in surgery, the goal is not to replace the surgeon but,
instead, to develop new tools to assist in accurate surgical tasks.
Precise robotic execution from image-based models is showing promising results in orthopaedic surgery. Applications such as total hip replacement require an accurate broaching of the femur to match the implant shape, as it may influence biomechanics.
Medical imagery is widely used in Computer Aided Orthopaedic Surgery (CAOS) [DGN02] to obtain data for pre-operative planning, simulation and intra-operative robotic navigation, using sensory systems to track fiducial markers attached to the patient’s bone.
This paper presents an experimental robotic system that integrates CT based imaging
data and fiducial marker tracking for robot navigation.
Figures 1 and 2 summarize the stages
for intra-operative robot navigation.
Figure 1 – Operational diagram
Figure 2 – Intra-operative system
The 3D model is built from CT scans acquired pre-operatively, and through it the surgeon can visually position the end-effector. The robotic arm navigates by tracking the position of fiducial markers and updates the position in the virtual model in real time. Fiducial marker tracking is performed using stereo vision.
Section 2 presents the 3D medical visualization software developed to interact with the robot. Computer vision is described in Section 3, and Section 4 presents the robotic system. Section 5 describes the overall system and some of the results obtained. Section 6 presents some conclusions.
2. 3D Medical Visualization Software
2.1 Introduction
Medical image acquisition is an area that has developed considerably in recent decades. Computed tomography (CT), magnetic resonance imaging (MRI) and ultrasound imaging revolutionized diagnosis in medicine. These techniques made possible an internal view of practically every organ and structure of the human anatomy without resorting to invasive techniques, leading to more precise diagnoses.
This section presents tools for the visualization and segmentation of three-dimensional data from a set of medical images. The developed software is able to merge 2D CT images into a single 3D volume and, in a visual and interactive way, position the robot's end-effector in a real-time environment.
2.2 Software Development
The open-source medical visualization software InVesalius was used as the platform basis, and add-on modules were then created on top of it.
InVesalius is a three-dimensional medical imaging software created by a team from
Centro de Pesquisas Renato Archer (CenPRA) [MBS07], Brazil, targeted to Brazilian public
hospitals.
Regarding the system architecture, InVesalius uses the Python and C++ programming languages and employs the Visualization Toolkit (VTK) library.
The developed software allows the creation of a volumetric rendering based on a set of CT images. The user may input the desired position and orientation of the end-effector, which is sent to the robot; it is then visually displayed over the volumetric rendering, which is updated based on the actual position of the robot’s end-effector. An intersection plane over the end-effector reference frame cuts the volumetric object, and the resulting section is shown in another rendering frame.
The software, shown in Figure 3, provides two rendering frames: one with the volumetric rendering and the intersection plane, and another showing the intersection cut.
Figure 3 – 3D Visualization Software
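The plane cut described above is performed with VTK in the actual software; as a self-contained illustration, the same idea can be sketched in NumPy by sampling voxel intensities on a grid spanned by two in-plane axes. The volume, plane placement and sizes below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def plane_section(volume, origin, u, v, size=64):
    """Sample a planar section of a voxel volume.

    volume : 3D array of CT intensities, indexed (z, y, x)
    origin : 3D point on the cutting plane, in voxel coordinates
    u, v   : orthogonal unit vectors spanning the plane
    Returns a (size, size) image; samples outside the volume are 0.
    """
    section = np.zeros((size, size))
    half = size // 2
    for i in range(size):
        for j in range(size):
            p = origin + (i - half) * u + (j - half) * v
            idx = np.round(p).astype(int)   # nearest-neighbour lookup
            if np.all(idx >= 0) and np.all(idx < np.array(volume.shape)):
                section[i, j] = volume[tuple(idx)]
    return section

# Toy volume: a bright "bone" column along z in a 32^3 grid
vol = np.zeros((32, 32, 32))
vol[:, 14:18, 14:18] = 1000.0

# Axial cut at z = 16 (plane normal along z, spanned by the y and x axes)
cut = plane_section(vol, origin=np.array([16.0, 16.0, 16.0]),
                    u=np.array([0.0, 1.0, 0.0]),
                    v=np.array([0.0, 0.0, 1.0]), size=32)
```

In the real pipeline the equivalent of `cut` is what the second rendering frame displays, with the plane's origin and axes taken from the end-effector reference frame.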
3. Computer Vision
3.1 Introduction
Computer Vision is used to locate the position of the fiducial marker to bridge the virtual
model and the surgical environment.
A 3D reconstruction algorithm is applied using two uEye high-speed cameras in an Eye-to-Hand configuration, giving them a broader view of the robot’s working area and allowing some freedom of movement for the fiducial marker. Even though higher capture rates were available, image acquisition was set at 25 frames per second, since the object to be tracked is not a fast-moving target; this provides more time for image processing.
3.2 Fiducial Marker
Many commercially available optical tracking systems use infrared light-emitting diodes or reflective targets to determine the fiducial marker location. However, these types of targets were inadequate for the available environment conditions, so small cold cathode lights were used instead. A minor change was necessary: the main vein was too bright and the transparent acrylic housing did not convey a homogeneous light distribution. This problem was resolved by giving the housing a matte surface finish, which produced a homogeneous colour and light intensity distribution both radially and lengthwise.
Figure 4 – Fiducial Marker Prototype
The fiducial marker prototype appears in Figure 4. It is square shaped and has four different colours. The square shape was chosen for its easily measurable configuration and because it offers a computationally fast way to determine orientation. The colours chosen were Red, Green, Blue and Yellow, for their distinct histogram traits, which facilitate recognition. Black tape was used to cover the extremities of the lights so that the centre of each light is confined to a small area, enhancing precision.
The cold cathode lights proved so bright under normal acquisition parameters that the cameras’ exposure time had to be reduced to 1.2 ms. This made only the target visible and blackened all the surrounding environment, which turned out to be a quite efficient filter.
3.3 Software Development
As a development basis, software by Silva [S08] was used, which could compute the spatial position of a single coloured object. On top of that platform, methods were developed to detect the various colours of the marker, compute the marker’s centroid for spatial positioning and, through vector algebra, determine its orientation.
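The centroid and orientation computation can be sketched as follows. This is not Silva's C++/OpenCV code; it is a NumPy illustration that assumes the four colour targets have already been located in 3D, and that the red-green and red-blue pairs form the two marker edges (the actual colour layout is an assumption):

```python
import numpy as np

def marker_pose(points):
    """Centroid and in-plane axes of a square 4-point marker.

    points : dict of colour -> 3D corner position in the camera frame.
    The assumed layout: red-green along one edge, red-blue along the other.
    """
    centroid = np.mean(list(points.values()), axis=0)
    # Edge vectors give the marker's two in-plane axes
    x_axis = points["green"] - points["red"]
    y_axis = points["blue"] - points["red"]
    x_axis /= np.linalg.norm(x_axis)
    y_axis /= np.linalg.norm(y_axis)
    normal = np.cross(x_axis, y_axis)   # marker plane normal
    return centroid, x_axis, y_axis, normal

# Illustrative square marker, 4 cm side, facing the camera at ~1.16 m
corners = {
    "red":    np.array([0.00, 0.00, 1.16]),
    "green":  np.array([0.04, 0.00, 1.16]),
    "blue":   np.array([0.00, 0.04, 1.16]),
    "yellow": np.array([0.04, 0.04, 1.16]),
}
c, ex, ey, n = marker_pose(corners)
```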
The software is written in C++ with the OpenCV library, which provides fast processing capabilities for real-time calculations.
4. Robotics
In the present work a PUMA 560 manipulator robot was used. Kinematic equations had previously been implemented [CFS05][S08], but only for the first 3 joints, which only ensures correct positioning. Full inverse kinematic equations were implemented using Kang’s solutions [K07]. For trajectory planning, a fifth-order polynomial was also implemented to guarantee a stable motion.
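A rest-to-rest fifth-order polynomial prescribes zero velocity and acceleration at both ends of the motion, which is what makes it attractive for stable joint trajectories. A minimal sketch (the time scaling is the textbook quintic; the joint values and duration are illustrative):

```python
import numpy as np

def quintic(q0, qf, T, t):
    """Fifth-order polynomial from q0 to qf over duration T, with zero
    velocity and acceleration at both endpoints (rest-to-rest motion)."""
    s = np.clip(t / T, 0.0, 1.0)
    # q(s) = q0 + (qf - q0) * (10 s^3 - 15 s^4 + 6 s^5)
    return q0 + (qf - q0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

# One joint moving 0 -> 90 degrees in 2 s, sampled at five instants
t = np.linspace(0.0, 2.0, 5)
q = quintic(0.0, 90.0, 2.0, t)
```

At the midpoint the joint is exactly halfway, and the boundary values are hit exactly, which is why the motion starts and stops smoothly.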
5. System Integration and Results
5.1 System Integration
Figure 5 illustrates how the individual systems are interlinked. The core of the system network is the PC Target, which runs the PUMA controller. All the other systems communicate with it, providing and receiving data for its calculations.
The PC Host is used to manage the kinematic and control algorithms of the PC Target, communicating through xPC Target via UDP. The computer in charge of computer vision is the PC Vision; it calculates the spatial location of the fiducial marker and sends the data to the PC Target via UDP. Three-dimensional bone modelling and visualization are processed on the PC 3D, and the relative position and orientation are sent to the PC Target via UDP.
With the information provided by both PC Vision and PC 3D, the PC Target is able to determine the joint coordinates for the desired position and pass them to the robot’s controller. Additionally, the PC Target computes the current relative position between the marker and the end-effector and sends it back to PC 3D as a real-time update.
Figure 5 – System integration
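The UDP links between the PCs can be illustrated with a minimal loopback exchange using Python's standard socket module. The packet layout (six little-endian doubles for position and orientation) and the port number are assumptions for illustration; the actual target side runs under xPC Target:

```python
import socket
import struct

# Pose packet: x, y, z (m) and three orientation angles (rad) as six
# little-endian doubles. Layout and port number are illustrative assumptions.
PORT = 50007
pose = (0.05, -0.02, 1.16, 0.0, 0.0, 0.873)

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", PORT))
receiver.settimeout(2.0)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(struct.pack("<6d", *pose), ("127.0.0.1", PORT))

data, _ = receiver.recvfrom(1024)      # blocks until the datagram arrives
received = struct.unpack("<6d", data)

sender.close()
receiver.close()
```

UDP keeps the per-packet latency low, which suits the real-time update loop, at the cost of no delivery guarantee; each pose packet is self-contained, so a lost packet is simply superseded by the next one.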
5.2 Results
5.2.1 - 3D Medical Visualization Software
For this work, a set of 150 CT images from the pelvis to the knee was used, taken from the Biomed Town repository [Biomed]. The images are formatted as DICOM files with 430×430 pixels, 2 mm spacing between images and 12-bit resolution, which represents a scale of 4096 gray levels. Figure 6 shows the femoral structure rendered from the CT images.
Figure 6 – Femur 3D Model
5.2.2 – Computer Vision
Proper camera calibration is crucial for correct 3D reconstruction. The Camera Calibration Toolbox for Matlab [B08] was used to calibrate the cameras. Once the extrinsic and intrinsic parameters were determined, precision and accuracy tests were applied to determine whether the fiducial marker was properly recognized and located.
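Calibration yields, for each camera, the intrinsic matrix K and the extrinsic rotation and translation; one basic consistency check is to reproject a known 3D point through the pinhole model. A sketch with illustrative parameters, not the toolbox's actual output:

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D point X into pixel coordinates using intrinsics K and
    extrinsics R, t (pinhole model: x = K [R|t] X)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# Illustrative parameters: focal length in pixels, principal point (320, 240),
# camera aligned with the world frame (identity rotation, zero translation)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = np.eye(3)
t = np.zeros(3)

# A point straight ahead on the optical axis lands on the principal point
uv = project(K, R, t, np.array([0.0, 0.0, 2.0]))
```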
Figure 7 shows graphically the displacement of the marker’s centroid in a fixed position.
Figure 7 – Centroid displacement
Positioning tends to oscillate at a high frequency which affects the 3D reconstruction.
Figures 8, 9 and 10 show the various positions of the fiducial marker along each axis.
The coordinates refer to the left camera coordinate frame.
Figure 8 – Displacement along the vertical axis
Figure 9 – Displacement along the horizontal axis
Figure 10 – Displacement along the optical axis
This test revealed that horizontal and vertical displacement was quite accurate, with a maximum error of around 0.005 m at a distance of 1.16 m. Depth, however, was being miscalculated, reaching errors of up to 0.05 m. To attenuate this error, a corrective parameter obtained from depth data interpolation was added to the calculation. This reduced the displacement error to about 0.003 m within the calibration area.
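The corrective parameter from depth-data interpolation can be sketched as a lookup of the systematic depth error over a table of calibration samples. The sample values below are illustrative, not the measured data:

```python
import numpy as np

# Calibration samples: depth measured by the stereo system vs. ground truth (m).
# These values are illustrative assumptions, not the paper's actual data.
measured = np.array([0.90, 1.00, 1.10, 1.20, 1.30])
true     = np.array([0.92, 1.03, 1.14, 1.25, 1.36])
error    = true - measured          # systematic depth error at each sample

def corrected_depth(z):
    """Add an error term linearly interpolated from the calibration table."""
    return z + np.interp(z, measured, error)

z_corr = corrected_depth(1.05)
```

Outside the calibrated range `np.interp` clamps to the end values, which matches the observation that the correction is only reliable within the calibration area.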
5.2.3 – Simulated Integrated System
The integrated system was tested in a virtual environment implemented in Simulink, in which all the information on the fiducial marker location, the robot kinematic equations and the end-effector positioning was processed.
Figure 11 – Virtual Environment
Figure 12 – Physical Environment
With the fiducial marker in the position shown above (Figure 12), the end-effector was sent to a given position, 5 cm from the femur head along the femoral axis and with a rotation of 50º about the coronal axis.
Figure 13 – End-effector orientation on the virtual model
Figure 14 – Desired section cut
Figure 15 – Obtained section cut
The intersection plane was mostly accurate but tended to move slightly around the desired position due to the marker precision error. The snapshots of the section cut in Figures 14 and 15 reveal the slight displacement mentioned above.
It is important to note that if the actual bone is far from the working area of the robot, or if it is oriented in such a way that the end-effector cannot reach it with the desired orientation, the section cuts do not match at all.
6. Conclusions and Future Work
Firstly, tools were developed for volume rendering from CT imagery and for its use in real-time manipulation. This tool provided a fair visual structure that enables an in-depth perception of the end-effector positioning. The 3D model reacts rapidly in real-time processing, but better image segmentation may be needed to bring out the full femur structure, such as the upper and lower extremities.
Computer vision was used to determine the spatial location of a fiducial marker created for this work. Tests show that the stereo vision system computes the position of the marker with precision, but its accuracy depends on a rigorous calibration. In the region where the calibration took place, the system accurately computes horizontal and vertical positioning. However, depth perception was not satisfactory for the purpose of this work. A corrective parameter was applied, diminishing the error but not eliminating it completely.
The marker’s orientation is influenced by the spatial position of the marker’s elements and, due to the depth perception error, the orientation varies by up to 5 degrees in some areas, which is not acceptable when positioning the end-effector at a large distance from the fiducial marker.
The inverse kinematic equations were properly implemented, as well as the home configuration conversion. For trajectory planning, a fifth-order polynomial was implemented.
7. References
[Biomed] Biomed Town Repository 2009 – LHDL Building,
http://www.biomedtown.org/biomed_town/LHDL/Reception/datarepository
[MBS07] Martins, T. A. C. P., Barbara, A. S., Silva, G. B. C., Faria, T. V., Cassaro, B. & Silva, J. V. L., 2007, “InVesalius: Three-dimensional medical reconstruction software”, 3rd International Conference on Advanced Research in Virtual and Rapid Prototyping, Leiria, Portugal. Virtual and Rapid Manufacturing, Taylor & Francis, pp. 135-142.
[DGN02] DiGioia III, A. M. & Nolte, L. P., 2002, “The challenges for CAOS: what is the role of CAOS in orthopaedics?”, Computer Aided Surgery, vol. 7, pp. 127–128.
[T94] Taylor, R. H., et al., June 1994, “An image-directed robotic system for precise orthopaedic surgery”, IEEE Transactions on Robotics and Automation, Vol. 10, No. 3, pp. 261-275.
[PMM+88] Paul, H. A., Mittlestadt, B. D., Musits, B. L., Bargar, W. L., Hayes, D. E., 1988, “Application of CT and robotic technology to hip replacement surgery”, Proc. First International Symposium on Custom
[S98] Simon, D. A., 1998, “What is ‘registration’ and why is it so important in CAOS?”, Proc. Computer Aided Orthopaedic Conference, pp. 57-60.
[S98a] Simon, D. A., 1998, “Intra-operative position sensing and tracking devices”, Proc. Comp. Aided Orthop. Surg. Conf., Pittsburgh, pp. 62–64.
[S00] Schroeder, W., Avila, L. S. and Hoffman, W., 2000, “Visualizing with VTK: A Tutorial”, IEEE Computer Graphics & Applications, Vol. 20, No. 5, pp. 20-27.
[GW00] Gonzalez, R.C., Woods, R.E. (2000), “Processamento de Imagens Digitais” Edgard
Blücher, São Paulo, SP.
[S08] Silva, P., 2008, “Controlo Visual de Robôs Manipuladores Utilizando um Sistema de Visão Estéreo”, MSc Thesis, Instituto Superior Técnico, Portugal.
[K07] Chul-Goo Kang, October 2007, “Online Trajectory Planning for a PUMA Robot”,
International Journal of Precision Engineering and Manufacturing Vol. 8, No.4,
pp.16-21
[CGL95] Cote, J., Gosselin, C. M. and Laurendeau, D., 1995. “Generalized inverse kinematic
functions for the Puma manipulators,” IEEE Transactions on Robotics and Automation,
Vol. 11, No. 3, pp 404–408,
[CFS05] Carvalho, A. R. D., Faria, J. M. P. S., Silva, P. A. M., February 2005, Controlo por Visão do Puma 560 – Robótica de Manipulação, Instituto Superior Técnico, Lisboa, Portugal.