Rehabilitation Robot in Intelligent Home Environment –
Software Architecture and Implementation of a Distributed System
Oliver Prenzel, Johannes Feuser, Axel Gräser
Institute of Automation, University of Bremen, 28359 Bremen, Germany
e-mail: {prenzel,feuser,ag}@iat.uni-bremen.de
Abstract – Rehabilitation robots (e.g. FRIEND, an intelligent wheelchair-mounted manipulator) are being developed to increase their users' autonomy in daily life. To avoid a high cognitive load on the user, task input on a high level of abstraction is mandatory. State-of-the-art rehabilitation robots are still not capable of integrating fragments of intelligent behavior into an overall context and of solving complex tasks. A basic problem is how to cope with the system complexity as well as the computational complexity that arise during task planning. A compromise towards feasibility is to equip the system's environment with smart components that provide their own intelligence and thus reduce the complexity of the robotic system. However, a structured approach is necessary to fuse this distributed intelligence. This paper presents the concept and realization of a software framework that is able to execute autonomous system operations together with information-retrieval capabilities and user interactions within a distributed system. Key development aspects have been to provide robust run-time behavior, to include and resolve redundant sensor information, and to reduce the effort of system programming to a minimum. The application of the developed framework is demonstrated with sample steps of its integration into the FRIEND II rehabilitation robotic system within an intelligent home environment.
I. INTRODUCTION
Rehabilitation robotic systems are being developed
with the intention to support disabled persons during
daily life activities as well as in the working environment.
Currently available systems that are designed for flexible
use, i.e. they are not pre-programmed, merely offer support
on a relatively low task level. This puts a high cognitive load
on the user, so that the control of these systems becomes
tiresome. Communication on a higher level of task abstraction is therefore desirable [1]. A possible solution is to develop
a fully autonomous system that acts in an unstructured
environment. This is the main objective of projects like
MOVAR [1], MOVAID [2] or Care-O-bot [3]. Even though the results of these projects are promising, it has to be stated that the implementation of such systems is a long-term goal that will not be accomplished within the next few years [2]. The
core problem of systems that are designed according to the
paradigm of complete autonomy is the resulting high
technical complexity and poor efficiency.
II. TASK EXECUTION IN INTELLIGENT HOME ENVIRONMENT
To reduce the overall system complexity, distributed smart devices are taken into account. During the development of the first-generation FRIEND system [4], experience was gained in equipping the system with smart components [5], which decisively increases the rehabilitation robot's autonomous capabilities. The FRIEND II system is enhanced on the hardware level [6], is equipped with a new software architecture and will be used as development and test platform for the interaction with intelligent home environment components. To illustrate the application of the software framework, a rehabilitation robotic sample scenario is introduced in the following.
A. Sample Application Scenario
The scenario considered as a sample in this paper is to support a disabled user with meal preparation. This kind of scenario would provide autonomy to the user for a considerable period of time, and various scenarios may subsequently be derived from it. The scenario description is as follows: A suitable meal-box (i.e. graspable by the robot's gripper) containing some dish has been placed on the worktop by care personnel. The task is to open a microwave oven, grasp the meal-box and transfer it to the oven. Then the oven's door is closed and the warming-up process is started (e.g. initiated via remote control). Finally the meal is removed from the microwave oven and served to the user on the tray of the wheelchair. This description seems straightforward and hardly justifies the consideration of large software systems for the automation of this sequence. But as soon as we consider the variability in the scenario and all the sensors needed to supply the robot with the necessary information, the complexity becomes obvious. Smart components introduce redundant information in addition to the information already available from the system's sensors and are the basis for a robust system. By equipping the scenario's environment with smart devices (Fig. 1), the rehabilitation robot is partially relieved of the intelligence required for information retrieval. The detection of an object's position can be supported e.g. by tactile skins [5] or static cameras, whereas RFID tags (radio frequency identification) attached to the objects provide specific object information (e.g. cooking instructions in this case).
Fig. 1: FRIEND II in intelligent home environment
B. Methods for Intuitive Task Knowledge Specification
To support the intuitive specification of a scenario, description methods that have been developed within basic research projects are used [7]. These formal task descriptions prescribe possible operation sequences on a high and a low level of abstraction and are called process-structures. The process-structures on the abstract level are suitable for task knowledge input even by non-technical personnel (e.g. care personnel). The ergonomic input of task knowledge on this level results from the consideration of object constellations and of the operations that interconnect these object constellations. Each operation in the abstract process-structure is then decomposed into a low-level process-structure, where the availability of the operation's boundary conditions is modeled. This a priori planning knowledge is automatically verified offline to ensure error-free task execution. Verification is done with first-order predicate logic [8] and mathematically by means of Petri nets [9] and temporal logic.
During run-time, a planning module within the control architecture first plans on the abstract level of process-structures and subsequently on the lower level. The result of this planning procedure is a sequence of operations that are directly executable within the reactive layer of the system (see Fig. 2). The suitability of the described methods has been investigated for several different support scenarios and tested thoroughly by extensive simulations [7].
III. CONCEPTUAL BASE
To be able to design a feasible system, a software framework has been developed at the IAT that subsumes several basic premises. Instead of fully autonomous task execution, the framework supports the autonomous generation and adaptation of operation sequences within the context of semi-autonomous system control [7]. Scenario descriptions are introduced that flexibly adapt to the current environmental situation as well as to dynamic changes. These descriptions enable the activation of sequences of autonomous system operations and coordinate user interactions, direct control of the manipulator, and deliberation on redundant sensor information. At the same time, all necessary boundary conditions, e.g. the availability of required data to parameterize the operations or the availability of system resources, are maintained. Another result of this approach is robust run-time execution achieved by offline verification of the a priori descriptions. Furthermore, the finite descriptions restrict the search space for task planning algorithms and thus avoid state-space explosion.
Fig. 2: Control architecture
The central component of the framework is a hybrid multi-layer control architecture ([10] - [12]) that has been modified with respect to the requirements of semi-autonomy. The deliberative layer has been replaced by a human-machine interface, and the sequencer includes the deliberator and thus becomes the central control instance (Fig. 2, see also [13], [14]).
IV. SOFTWARE ARCHITECTURE FOR SEMI-AUTONOMOUS
DISTRIBUTED SYSTEMS
Extensions of the basic concept are required to replace the simulator by a connection of the sequencing layer to the real system hardware, as well as to introduce smart equipment with its own sensors and processing power. These extensions are described in the following sections A - C.
A. Requirements
The requirements for these extensions, which are especially intensified by smart devices and distributed intelligence, are:
- Hardware independence,
- Reusability of software,
- Generic and modular setup,
- Scalability of computing power and
- Encapsulation of communication capabilities and data management.
The fulfilment of these requirements leads to the following advantages:
- Modifications of the hardware setup with minimal re-implementation,
- Extensibility of the system,
- Quick and simple distribution of system components onto several machines (location transparency of the modules) and
- Reduction of the programming effort with respect to the implementation of basic system skills.
B. Enhancements within the Sequencer
According to Fig. 2, the control of the robotic system and the access to the system's environment are established by the reactive layer. However, to realize communication between the sequencer and the operation-executing elements in the reactive layer, the sequencer needs to be adapted.
The sequencer consists of two modules that are designed as active objects: the Task Planner and the Skill Executer (Fig. 2). (The operations initiated by the sequencer are referred to as skills here, since they represent basic system functionality, e.g. a skill to grasp an object or to pour a beverage.) Active objects are a software design pattern that separates the execution of a method from its calling context with the help of threads, while the method's implementation remains independent of any threading details [15]. Thus, the planner and the executer are able to act independently.
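To illustrate the pattern, the following is a minimal sketch of an active object using standard C++ threads; the actual system builds on ACE [15], and the names (ActiveObject, enqueue) are illustrative, not taken from the FRIEND II sources.

```cpp
// Minimal active object sketch: method execution is decoupled from the
// calling context by a worker thread and a job queue (cf. [15]).
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class ActiveObject {
public:
    ActiveObject() : worker_(&ActiveObject::run, this) {}
    ~ActiveObject() {
        enqueue([this] { done_ = true; });   // poison pill stops the worker
        worker_.join();
    }
    // Called from any thread; returns immediately. The job itself runs
    // later in the worker thread, independent of the caller.
    void enqueue(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }
private:
    void run() {
        while (!done_) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return !jobs_.empty(); });
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();   // the method body knows nothing about threading
        }
    }
    std::queue<std::function<void()>> jobs_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool done_ = false;
    std::thread worker_;
};
```

Both the Task Planner and the Skill Executer can be modeled this way, so that planning and execution proceed concurrently.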
The communication between the task planner and the executer is established via message queues. The task planner enqueues sets of operations to be executed in parallel onto the message queue of the execution module. It then waits for return values from the executer. If a return value corresponds to the expected value, the planner waits for the other skills executed in parallel and then proceeds with the next planned step, which it sends to the executer. If the value is not equal to the expected one, re-planning is necessary. In addition to these modes, an abort command may occur at any time. It can be initiated by the user or by some system skill if continuing the execution is not possible or no longer reasonable. The system then returns to a well-defined state and is thus ready to receive new task input.
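A hedged sketch of this planner-side protocol is given below; SkillRequest, executeAndWait() and abortRequested() are illustrative stand-ins for the executer's message-queue interface, not the actual FRIEND II types.

```cpp
// Sketch of one planning step: launch the parallel skill set, collect
// return values, then proceed, re-plan, or abort to a defined state.
#include <cstddef>
#include <future>
#include <string>
#include <vector>

enum class SkillResult { Success, Failure, Abort };
enum class PlannerAction { Proceed, Replan, AbortToSafeState };

struct SkillRequest {
    std::string name;       // e.g. "MoveAndAdjustGripperToObject"
    SkillResult expected;   // return value the planner expects
};

// Stub: a real implementation would enqueue the request onto the
// executer's message queue and block on the corresponding return value.
SkillResult executeAndWait(SkillRequest request) { return request.expected; }
bool abortRequested() { return false; }   // stub for user/system abort

PlannerAction executeStep(const std::vector<SkillRequest>& parallelStep) {
    if (abortRequested())
        return PlannerAction::AbortToSafeState;
    // Launch all skills of this step in parallel.
    std::vector<std::future<SkillResult>> pending;
    for (const SkillRequest& request : parallelStep)
        pending.push_back(std::async(std::launch::async, executeAndWait, request));
    // Wait for every skill; an abort dominates, and one unexpected
    // return value forces re-planning of the remaining task.
    PlannerAction action = PlannerAction::Proceed;
    for (std::size_t i = 0; i < parallelStep.size(); ++i) {
        SkillResult actual = pending[i].get();
        if (actual == SkillResult::Abort)
            action = PlannerAction::AbortToSafeState;
        else if (actual != parallelStep[i].expected &&
                 action == PlannerAction::Proceed)
            action = PlannerAction::Replan;
    }
    return action;
}
```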
To be able to execute several skills simultaneously, asynchronous calls of the skill methods are necessary. Furthermore, the skills may run on different processors, e.g. because the system hardware is physically distributed, as is the case for remote smart devices. However, the distribution of skill execution capabilities should be adaptable in a flexible manner (without changing the system structure or extensive re-implementation) in order to be able to scale the computing power available for one skill. All these demands are fulfilled with the help of standardized and platform-independent communication infrastructures based on CORBA (Common Object Request Broker Architecture, [16]). The following section describes the design and realization of the reactive layer, which extensively benefits from CORBA.
C. Reactive Layer
The name reactive layer derives from its purpose of providing reactive behavior. This means directly coupling sensor input with the control of an actuator (i.e. designing a control loop) to establish autonomous behavior that is robust against dynamic environmental changes. As depicted in Fig. 2, the reactive layer is furthermore responsible for offering monitoring operations (based on input from the sensors) as well as direct control of the actuator (manipulative skills). The latter aspect is important, for example, when user interaction in the form of direct actuator control becomes necessary (semi-autonomy).
Therefore, several skill servers provide the necessary basic operations (skills) of the robotic system by accessing the sensors and actuators of the system or of remote smart devices. This means the skill layer has access to a hardware layer, in which different hardware servers encapsulate basic hardware functionality (see e.g. Fig. 3).
Fig. 3: Reactive layer for FRIEND II
Furthermore, skills have to operate on sub-symbolic information, i.e. on specific environmental information such as geometry, position, color, etc. (discussed in detail below). As shown in Fig. 2, the sequencer, including the symbolic planning engine, accesses the symbolic layer of the world model. Thus, the sequencer (on the basis of high-level process-structures and symbolic descriptions) is responsible for the correct abstract modeling of the segment of the environment that is relevant to the current task execution. To administrate all sub-symbolic information in a structured manner, a sub-symbolic world model server is introduced in the reactive layer. Sub-symbolic information is stored there with reference to symbolic information from the upper layer of the world model; consequently, a connection between both layers of the world model is established.
An event logging server collects all information about events within the reactive layer (successful and erroneous ones), so that the course of actions can be analyzed at any time (e.g. offline). In the following, the essential parts of the reactive layer are explained in detail.
1) Skill Server
The criterion for the separation into several skill servers is derived from the functional entities of the system. Thus, one skill server offers all the system operations that essentially belong to one particular entity. In the case of the FRIEND II system [6], this results, for example, in a manipulator skill server, a tray skill server and an image-processing skill server, as depicted in Fig. 3. If remote intelligent devices are introduced, e.g. an intelligent microwave oven as in the sample scenario in Section II.A, a microwave-oven skill server would provide skills to control this remote device or to extract data from it.
From the software point of view, skills are methods of a skill server that are executed asynchronously. This means skill methods are non-blocking and return immediately after being started. A consequence of asynchronous execution is that no values or parameters can be returned directly. Therefore, sub-symbolic data that is generated during skill execution is stored in the world model. Information on the status of skill execution (e.g. successful execution) has to be transmitted via another communication path. For this purpose, call-back objects are used, which can be accessed both by the skill caller and by the skill method itself. Call-backs are also used for sending information from the skill caller to the skill while it is executing, for example the command to stop the skill. Fig. 4 illustrates the communication mechanisms between the sequencer layer and the skill layer that have been realized by means of CORBA.
Fig. 4: CORBA-based asynchronous communication
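The following sketch mirrors this mechanism with plain C++ threads standing in for the CORBA remote objects of Fig. 4; apart from IsUpdated(), which is mentioned in Section V, the names are our assumptions.

```cpp
// Call-back object shared by skill caller and skill method: the skill
// reports its status, the caller can request a stop while it runs.
#include <atomic>
#include <string>
#include <thread>

class Callback {
public:
    void SetResult(const std::string& status) {   // skill -> caller
        result_ = status;
        updated_ = true;
    }
    bool IsUpdated() const { return updated_; }
    std::string GetResult() const { return result_; }
    void RequestStop() { stop_ = true; }          // caller -> skill
    bool StopRequested() const { return stop_; }
private:
    std::string result_;
    std::atomic<bool> updated_{false};
    std::atomic<bool> stop_{false};
};

// A skill method is non-blocking: it starts its work in a separate
// thread and returns immediately. Sub-symbolic results would go to the
// world model; only the execution status travels through the call-back.
void GraspObjectSkill(Callback* callback) {
    std::thread([callback] {
        // ... access hardware servers, store data in the world model ...
        callback->SetResult(callback->StopRequested() ? "Abort" : "Success");
    }).detach();
}
```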
To be able to simulate skill execution (e.g. when no hardware is coupled to the sequencer, or to test new process-structures), it is also possible to execute all skills in a simulative mode. In this mode no hardware operations are processed; return values and call-back messages are determined randomly according to probabilities from a database.
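A possible realization of this simulative mode is sketched below; the hard-coded probability table stands in for the database mentioned above, and the assumed values are purely illustrative.

```cpp
// Simulative skill execution: no hardware is touched, the outcome is
// drawn at random according to a per-skill success probability.
#include <map>
#include <random>
#include <string>

std::string simulateSkill(const std::string& skillName) {
    static const std::map<std::string, double> successProbability = {
        {"SearchObjPosOnSmartPlatform", 0.90},    // assumed values; in the
        {"MoveAndAdjustGripperToObject", 0.95}};  // system read from a database
    static std::mt19937 rng{std::random_device{}()};
    std::bernoulli_distribution success(successProbability.at(skillName));
    return success(rng) ? "Success" : "Failure";
}
```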
2) Hardware Server
For every hardware component (e.g. a robot arm) a dedicated hardware server is implemented (see Fig. 3). In the software, a hardware server is described by a CORBA servant [16], i.e. a class as data type.
If a hardware component or server requires very fast communication, for which even the normally very low communication overhead of CORBA is problematic, a local instance of the hardware server implementation class [16] can be used and the communication can be done via shared memory. Both ways of using a hardware server are fully exchangeable, and no re-design or new implementation is necessary.
The encapsulation by hardware servers makes the upper layers of the software structure hardware-independent: if a hardware component is replaced, only the implementations of the hardware server methods have to be adapted, whereas the signatures of the methods remain unchanged.
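The effect of this encapsulation can be sketched as follows; the interface is an assumption modeled on the manipulator example, not the actual FRIEND II hardware server.

```cpp
// Upper layers program against a stable interface; whether the calls
// go to a remote CORBA servant or to a local (e.g. shared-memory
// coupled) instance is decided at construction time only.
#include <array>
#include <memory>

class IManipulatorHardware {
public:
    virtual ~IManipulatorHardware() = default;
    // Method signatures stay fixed even if the hardware is replaced.
    virtual void moveJoints(const std::array<double, 6>& target) = 0;
    virtual std::array<double, 6> jointPositions() const = 0;
};

// Local in-process variant, used when CORBA's (already low) call
// overhead is still too high for the component.
class LocalManipulator : public IManipulatorHardware {
public:
    void moveJoints(const std::array<double, 6>& target) override {
        joints_ = target;   // placeholder for the real drive commands
    }
    std::array<double, 6> jointPositions() const override { return joints_; }
private:
    std::array<double, 6> joints_{};
};

// A remote variant would implement the same interface by forwarding
// each call to the CORBA servant; skill code cannot tell the difference.
std::unique_ptr<IManipulatorHardware> makeManipulator() {
    return std::make_unique<LocalManipulator>();
}
```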
Every hardware server is connected to exactly one skill server, and only methods of this skill server can access the hardware directly. Due to this server organization according to functional togetherness, a hardware server connected to another skill server can only be accessed indirectly, via skills of the corresponding skill server.
3) Sub-Symbolic World Model
The process of acquiring and working on concrete environmental information (sub-symbolic information) is distributed over the reactive layer network. Specific data types are used according to the type of information to be stored. For instance, the following data types are available (to be extended if necessary):
TABLE 1: TYPES OF SUB-SYMBOLIC INFORMATION
Color: RGB (or HSV) values
SizeOfCylinder: Radius (r), Height (h)
SizeOfCuboid: Length (l), Width (w), Height (h)
SizeOfSphere: Radius (r)
Position: x, y, z
Orientation: Rotx, Roty, Rotz
Location: Position, Orientation
SmartPlatformCircle: Plane position (x, y) and object radius (r)
SmartPlatformRect: Plane position (x, y), lengths (a, b), rotation (α)
Frame: Matrix to describe translation and rotation, [17]
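As an illustration, possible C++ counterparts of some Table 1 entries are given below; the field names follow the table, but the concrete definitions are assumptions rather than the FRIEND II types.

```cpp
// Illustrative data types for sub-symbolic information (cf. Table 1).
struct Color             { unsigned char r, g, b; };    // RGB values
struct SizeOfCylinder    { double r, h; };              // radius, height
struct SizeOfCuboid      { double l, w, h; };           // length, width, height
struct SizeOfSphere      { double r; };                 // radius
struct Position          { double x, y, z; };
struct Orientation       { double rotx, roty, rotz; };
struct Location          { Position pos; Orientation ori; };
struct SmartPlatformRect { double x, y, a, b, alpha; }; // plane pos., lengths, rotation
using Frame = double[4][4];   // homogeneous translation/rotation matrix, see [17]
```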
Again, the realization of the sub-symbolic world model profits from several useful CORBA mechanisms. First, the world model is realized as a CORBA servant so that it is accessible anywhere within the reactive layer. The data to be put into the world model is administered within a map that assigns the data's symbolic identifier to a container able to store any data type (CORBA::any). Thus, a lot of complexity is kept away from the world model: the skills co-operating with the world model simply send any sub-symbolic data to it via a single interface. The specific type of sub-symbolic information is determined by a descriptor in the identifier string that is used to access the data in the world model. Thus the correctness of a parameter's data type is verifiable in advance (in the context of the offline verification of the low-level process-structures). To give an example, a skill may operate on the dimensions of a cylinder. In this case, the identifier string of the respective skill parameter must contain the descriptor SizeOfCylinder according to Table 1.
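A sketch of such a world model store is given below, with std::any standing in for CORBA::any; the class and method names are illustrative, and the identifiers in the usage comment are hypothetical.

```cpp
// Sub-symbolic world model: a map from symbolic identifier to a
// type-erased container, reachable through a single interface.
#include <any>
#include <map>
#include <stdexcept>
#include <string>

class SubSymbolicWorldModel {
public:
    void set(const std::string& id, std::any value) {
        data_[id] = std::move(value);
    }
    template <typename T>
    T get(const std::string& id) const {
        auto it = data_.find(id);
        if (it == data_.end())
            throw std::runtime_error("unknown identifier: " + id);
        return std::any_cast<T>(it->second);   // throws on type mismatch
    }
private:
    std::map<std::string, std::any> data_;
};

// Usage: the descriptor in the identifier string names the stored
// type, so it can be checked during offline verification.
//   wm.set("MealBox.SizeOfCylinder", SizeOfCylinder{0.05, 0.12});
//   auto size = wm.get<SizeOfCylinder>("MealBox.SizeOfCylinder");
```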
V. DEVELOPMENT OF FUNCTIONALITY BY SKILL PROGRAMMING
With the help of the infrastructure described so far, it is possible to concentrate on the development of basic system functionality, i.e. the programming of skill methods. This will be illustrated on the basis of sample skills executed within the sample scenario from Section II.A. First, the decomposition of an abstract operation into a low-level process-structure is exemplified. Afterwards the respective skills are discussed in detail.
Fig. 5 provides a description of a low-level process-structure on the basis of a function block network to perform a "GraspObjectOnSmartPlatform" operation.
Fig. 5: Low-level process structure for GraspObjectOnSmartPlatform
The function entities within the function block network represent skills of different skill servers or describe user interaction. Essentially two processes take place during the operation: the determination of the gripper's target position to grasp the object, and the execution of the grasping task itself. According to the concept of semi-autonomy, the first skill will be solved with the help of user interaction if the autonomous retrieval is not successful. Owing to the determined data flow, the resolution of redundant sensor information can be modeled within the low-level process-structures. A formal specification is introduced for each skill. Those of the necessary main skills are as follows:
TABLE 2: SPECIFICATION OF SAMPLE SKILLS

Skill: SearchObjPosOnSmartPlatform
SkillServer: Worktop / USER
Resource(s): SmartPlatform
Parameters: SizeObj {in}, Matrix_Platform2World {in}, PosObj {out}
ReturnVal: Success, Failure / Abort

Skill: MoveAndAdjustGripperToObject
SkillServer: Manipulator
Resource(s): RobotArm, Gripper
Parameters: Manipulator {in}, SizeObj {in}, PosObj {in}, GripRelFrame {in}
ReturnVal: Success, Abort
To grasp the object, we are interested in a gripping position in world coordinates. The skill SearchObjPosOnSmartPlatform() (Fig. 5) first has to generate the information about the position of the object to grasp. A helper skill GetSkinInfo() detects the footprint of the object on the platform relative to the PlatformFrame P (see Fig. 6). This information is stored within a SmartPlatformRect struct (Table 1) in the sub-symbolic world model.
The matrix to transform from platform coordinates P to world coordinates W is the purely translational homogeneous transform [17]

$$T_{PW} = \begin{pmatrix} 1 & 0 & 0 & PW_x \\ 0 & 1 & 0 & PW_y \\ 0 & 0 & 1 & PW_z \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

With

$$\mathit{SizeOfCuboid} = \{S_l, S_w, S_h\} \quad \text{and} \quad \mathit{SmartPlatformRect} = \{SPR_x, SPR_y, SPR_a, SPR_b, SPR_\alpha\}$$

we get, in homogeneous coordinates:

$$\mathit{PosObj} = \{P_x, P_y, P_z\}^T = T_{PW} \cdot \{SPR_x, SPR_y, S_h/2, 1\}^T.$$
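Written out in code, and assuming the purely translational platform-to-world transform given above, the computation reads as follows; the function name is ours.

```cpp
// Worked computation of PosObj: the object's footprint centre on the
// platform, lifted by half the object height, shifted by the platform
// origin PW expressed in world coordinates.
#include <array>

struct SmartPlatformRect { double x, y, a, b, alpha; };
struct SizeOfCuboid      { double l, w, h; };

std::array<double, 3> posObjInWorld(const SmartPlatformRect& spr,
                                    const SizeOfCuboid& size,
                                    const std::array<double, 3>& pW) {
    // PosObj = T_PW * (SPRx, SPRy, Sh/2, 1)^T with T_PW translational only.
    return {spr.x + pW[0], spr.y + pW[1], size.h / 2.0 + pW[2]};
}
```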
In the next step, the MoveAndAdjustGripperToObject skill is executed. This includes the calculation of the gripping position with the help of the GripRelFrame (see the parameter in Table 2), which describes the position to grasp the object relative to its center position. The data flow chart in Fig. 7 shows the execution of the skill in detail; the respective explanations are listed in Table 3.
By means of low-level process-structures, offline verification of the system's runtime behavior becomes possible with respect to the following categories:
- Compliance of operation sequences with their formal description,
- Correctness of the data flow (e.g. the PosObj parameter being an out- and in-parameter respectively, Table 2),
- Usage of proper data types of sub-symbolic information (see Table 1) as well as
- Exclusion of resource conflicts.
After a priori determination and verification of possible
operation sequences within low-level process-structures,
skill execution may take place.
Fig. 6: Sample scenario to grasp object on smart platform
Fig. 7: Data flow chart of the skill MoveAndAdjustGripperToObject
TABLE 3: STEPS OF THE SKILL MOVEANDADJUSTGRIPPERTOOBJECT
(1) Sequencer stores the needed data (size, position and relative frame to grasp the object) in the world model.
(2) Sequencer starts asynchronous execution of the skill in the manipulator skill server.
(3) Skill retrieves the data (set by the sequencer in (1)) from the world model.
(4) Skill logs its start.
(5) After computing the configuration for the gripping pose and the trajectory to this configuration, the skill starts the movement of the joints.
(6) Hardware server logs the start of the joint movement.
(7) Skill retrieves the joint positions from the hardware server to determine whether the movement is finished.
(8) Hardware server logs the request for joint positions.
(9) Skill reports to the call-back that it has been executed successfully.
(10) Skill logs its termination.
(11) Sequencer retrieves the skill message from the call-back (sent in (9)).
During the asynchronous execution of the skill, the sequencer polls the call-back object (IsUpdated()) for a message with a certain sample time.
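Reusing the illustrative Callback class from the sketch in Section IV.C, this polling could look as follows; the sample time is an assumed value.

```cpp
// Sequencer-side polling: check the call-back with a fixed sample time
// until the skill has reported its status.
#include <chrono>
#include <string>
#include <thread>

std::string waitForSkill(Callback& callback,
                         std::chrono::milliseconds sampleTime =
                             std::chrono::milliseconds(50)) {
    while (!callback.IsUpdated())
        std::this_thread::sleep_for(sampleTime);
    return callback.GetResult();
}
```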
In summary, based on the skills SearchObjPosOnSmartPlatform and MoveAndAdjustGripperToObject it has been demonstrated how to program skills within the given framework. On the one hand, the usage of sub-symbolic data types (Table 1) has been discussed; on the other hand, the atomic steps of a skill execution were given (Fig. 7 and Table 3). The integration of new skills into the whole framework (i.e. making them available for the programming of process-structures) is discussed in [7]. Special ergonomic programming interfaces have been designed to enter the skills' specifications (according to Table 2) as well as to compose process-structures on the two different layers of abstraction.
VI. CONCLUSION
With the framework presented in this paper, methods are at hand to focus on the development of functionality for a rehabilitation robotic system. Based on implementation details of an abstract operation, it has been shown how the programming of system skills is essentially simplified. This results from the encapsulation of the communication mechanisms between distributed modules, and from providing methods to handle environmental data and to establish concurrent skill execution. The ability to verify process-structures offline and the systematic inclusion of redundant sensor information as well as of user interactions guarantee robust runtime behavior of the evolving system. The modular and generic distributed system setup supports the extensibility of the robotic system, the scalability of computing power and the access to remote smart devices in an intelligent home environment. A separate hardware layer enables the adaptation to new components with a minimal amount of re-implementation. The intuitive method for the specification of abstract task planning knowledge provides an ergonomic introduction of new scenarios even by non-technical personnel.
A next step of development is the implementation of the human-machine interface, whose design with respect to semi-autonomous task execution is already fixed [13]. Moreover, a software agent will be introduced that is able to monitor the skill server networks established within the reactive layer. This global mechanism will be able to check the availability of the servers and to detect timeouts of communication requests more flexibly than has been realized up to now.
VII. REFERENCES
[1] K. Kawamura, S. Bagchi, M. Iskarous, and M. Bishay, "Intelligent robotic systems in service of the disabled," IEEE Transactions on Rehabilitation Engineering, vol. 3, no. 1, pp. 14-21, Mar. 1995.
[2] P. Dario et al., "EURON Research Roadmaps 2002," Research Roadmaps of the European Robotics Research Network, 2002, http://www.euron.org.
[3] M. Hans, B. Graf, and R. D. Schraft, "Robotic home assistant Care-O-bot: past - present - future," in Proc. of the 11th IEEE Int. Workshop on Robot and Human Interactive Communication (ROMAN 2002), Berlin, Germany, September 25-27, 2002, pp. 380-385.
[4] C. Martens, O. Lang, N. Ruchel, O. Ivlev, and A. Gräser, "A FRIEND for assisting handicapped people," IEEE Robotics and Automation Magazine, March 2001.
[5] I. Volosyak, O. Radchenko, A. Pape, C. Martens, H. She, E. Wendland, and A. Gräser, "Smart tray for the support of a wheelchair mounted manipulator," in Proceedings of the International Conference on Economic, Engineering and Manufacturing Systems (ICEEMS 2003), Brasov, Romania, 2003.
[6] O. Ivlev, C. Martens, and A. Gräser, "Rehabilitation robots FRIEND-I and FRIEND-II with the dexterous lightweight manipulator," in Proceedings of the 3rd International Congress: Restoration of (Wheeled) Mobility in SCI Rehabilitation, Vrije Universiteit, Amsterdam, The Netherlands, April 19-21, 2004.
[7] C. Martens, "Teilautonome Aufgabenbearbeitung bei Rehabilitationsrobotern mit Manipulator," Ph.D. thesis, University of Bremen, 2003.
[8] C. Martens, J. Schüttler, and A. Gräser, "Logical verification of AND/OR-net structures for task-knowledge representation in service robotic scenarios," in Proceedings of the 8th International Conference on Rehabilitation Robotics (ICORR 2003), Taejon, Korea, April 23-25, 2003, http://icorr2003.rehabrobotics.org.
[9] C. Martens, "Generation of parallel executable control sequences for rehabilitation robotic systems on the basis of hierarchical Petri-net based task representations," in B. Lohmann and A. Gräser (eds.), Methoden und Anwendungen der Automatisierungstechnik, Shaker Verlag, 2003.
[10] R. P. Bonasso, D. Kortenkamp, and D. Schreckenghost, "Three tier architecture for controlling space life support systems," in Proceedings of IEEE SIS'98, Washington DC, USA, May 21-23, 1998.
[11] C. Schlegel and R. Wörz, "Interfacing different layers of a multilayer architecture for sensorimotor systems using the object-oriented framework SmartSoft," in Proceedings of the 3rd European Workshop on Advanced Mobile Robots (EUROBOT), Zürich, Switzerland, September 1999.
[12] R. Simmons, "Architecture, the backbone of robotic systems," in Proceedings of the 2000 IEEE International Conference on Robotics and Automation, San Francisco, CA, April 2000.
[13] C. Martens, D. J. Kim, J. S. Han, A. Gräser, and Z. Bien, "Concept for a modified hybrid multi-layer control-architecture for rehabilitation robots," in Proceedings of the 3rd International Workshop on Human-friendly Robotic Systems, Taejon, Korea, January 21-22, 2002.
[14] C. Martens and A. Gräser, "Design and implementation of a discrete event controller for high-level command control of rehabilitation robotic systems," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, September 30 - October 4, 2002.
[15] S. D. Huston, J. C. E. Johnson, and U. Syyid, "The ACE Programmer's Guide: Practical Design Patterns for Network and Systems Programming," Addison-Wesley, 2004, ISBN 0-201-69971-0.
[16] M. Henning and S. Vinoski, "Advanced CORBA Programming with C++," Addison-Wesley Professional, 1999, ISBN 0-201-37927-9.
[17] R. P. Paul, "Robot Manipulators: Mathematics, Programming, and Control," Cambridge, MA: MIT Press, 1992, ISBN 0-262-16082-X.