
HSI 2011
Yokohama, Japan, May 19-21, 2011
Human-Robot Haptic Joint Actions
Is an Equal Control-Sharing Approach Possible?
Abderrahmane Kheddar
A. Kheddar is with the CNRS-AIST Joint Robotics Laboratory (JRL), UMI3218/CRT, Tsukuba, Japan, and with the CNRS-UM2 LIRMM, Interactive Digital Human group, Montpellier, France. [email protected]

Abstract— This paper proposes a speculative model and control framework for human-robot haptic joint (i.e. collaborative) actions in which the roles the human and robot partners play as leaders or followers are not predetermined prior to task execution. In the proposed scheme, role switching is achieved through a homotopy: that is, a time-variant, continuous or set-valued, weighting interpolation function between the two extreme (leader or follower) behaviors of each individual. The role of each individual can change at any time, with any plausible dynamics, all along task execution. The formalism allows arbitrary functions for each partner's leader and follower control laws. We discuss the strengths, weaknesses and perspectives of such an approach.

I. INTRODUCTION

Haptic or physical human-robot joint actions are tasks performed by a human-robot partnership through brief or sustained contacts. The term haptic or physical emphasizes the dominance of concurrent contacts in the tasks to be performed; it distinguishes them from other, more general joint actions in human-robot teaming. Recently, this problem has been termed pHRI, which stands for physical Human-Robot Interaction; physical has been adjoined to the well-established general HRI field to emphasize the haptic nature of the tasks.

There are two situations which in principle do not call for a particular separation, although possible variants in the requirements and the allowed hypotheses can be specific to each of them:

∙ direct contact tasks, meaning tasks where the human and the robot are in direct contact. These are situations where, for example, a robot (resp. a human) uses close contact to guide a human (resp. a robot) in achieving a task, e.g. a dancing robot partner, or a nurse robot assisting an impaired person. Human operators hand-guiding the walking of a humanoid robot have already been exemplified with Honda's ASIMO or Kawada's HRP-2. A humanoid dancing partner robot has not yet been demonstrated, but bi-manual wheeled robots have been demonstrated by Kosuge and Hirata's group at Tohoku University and, more recently, by Buss's group at the Technical University of Munich;

∙ indirect (through an object intermediary) tasks, meaning that the robot and the human jointly manipulate an object of interest for the task. In humanoid robotics, this was demonstrated early on by Hirukawa's HRG group [1], and several other examples exist for wheeled robots, most of which were demonstrated by Kosuge and Hirata's group [2].

Performing physical joint actions within a team of robots or a team of virtual avatars is not that hard compared to situations where a human operator is in the loop. Indeed, when a human operator is not in the loop, an engineering approach can define a communication protocol and a programmable behavior to automate the joint tasks among robots or virtual avatars. Several papers, even very recent ones, deal with this issue using advanced control techniques.

Recently, pHRI has gained renewed interest in view of various needs in terms of close-contact human-robot teaming or partnership in applications ranging from domotics to industry. Already in the past, pHRI has been demonstrated where a human and a robot, or a group of robots, collaborate in performing tasks. Among the best-known examples, we note Khatib's group's work on a human manipulating an object with multi-effector mobile robots [3] and the DR Helper robots from Kosuge and Hirata's group [2]. There are also several other examples. However, these impressive demonstrators share a common approach: the human leads the task; s/he always acts as an instructor for the robot(s), which must follow his/her lead as best they can.

Our motivation is to step toward new control strategies in which the robot partner is not assigned a fixed role all along the task. We aim at enhancing human-robot physical interaction toward an equal sharing of responsibility and initiative, and at designing pro-active robot behaviors in haptic joint actions. This is very challenging when the human is in the loop because it presumes that the human's intentions and actions can be guessed by the partner robot. We have a particular interest in using humanoid robots as human partners. This is because a humanoid robot requires carefully addressing the problem of synchronizing discrete states (i.e. contact transitions [4]) with continuous motions: the problem is subsequently a hybrid pHRI. In Fig. 1, a human operator is transporting a wooden beam with the HRP-2 humanoid robot [5]: the joint action induces a whole-body motion which consists in a blending of the motion necessary to move the body of the robot, the motion necessary to move and manipulate the object, and the motion that coordinates the task with the human partner. The walking motion calls for a discrete choice of footsteps, whereas the manipulation and coordination motions are continuous. Prior to tackling this problem as a whole, a new control framework is needed; the following describes a possible one.
Fig. 1. Joint action in transporting a beam inside a building.
This is again a generalized controller that needs to be designed properly for each follower agent. However, here u_i is obviously more difficult to conceive because ε_c implies knowing, or quickly estimating, the collaborator's intentions. This is straightforward when both members of the dyad are robots or simulated avatars, but more challenging when one agent is a human and the other is a robot or a simulated avatar, see for instance [7]. In fact, it is nearly as difficult for the robot to properly estimate the human's intentions as it is to ask the human to properly guess the robot's, should the human operator decide to behave as a 'pure' follower.
But do we really need to guess the other partner's intention to behave in a follower mode? The answer is generally no. For instance, one could implement a follower controller that continuously reduces the interaction forces. In this case we obtain a purely passive follower behavior (practically, by adding an artificial damping), as noted in [8].
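As a minimal sketch (the admittance form and the damping value below are illustrative assumptions, not the controller of [8]), such a passive follower can be written as an admittance law that converts the measured interaction force into a damped velocity command, so that yielding to the force progressively drives it toward zero:

import numpy as np

def passive_follower_velocity(f_meas: np.ndarray, damping: float = 40.0) -> np.ndarray:
    # Admittance-style passive follower: v = f / d.
    # The robot yields along the measured interaction force f_meas (N);
    # the damping gain d (N.s/m) only dissipates energy supplied by the
    # human, so the behavior is passive and contributes nothing to the task.
    return f_meas / damping

# Example: a 10 N pull along x yields a 0.25 m/s compliant motion.
v_cmd = passive_follower_velocity(np.array([10.0, 0.0, 0.0]))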
However, the main drawback of such an approach is its performance, since the human operator provides all the power to realize the task in the space orthogonal (or complementary) to the automated one. Indeed, it has recently been demonstrated by several researchers, e.g. [9][10], that a controller can be tuned so as to behave in a pro-active way, i.e. with equal initiative and task sharing in the non-automated space. In this case the follower robot/human needs to estimate the human's/robot's intentions respectively. We will come back to this issue later on; namely, what happens if the intention estimation is wrong? Note, however, that a perfect estimation simply results in an improved follower controller.
Up to this point, one may say that I have simply reformulated, in a compact abstract form, what is well established and known in the domain of human/robot cooperation, and merely reasoned about and applied it to individuals: this is true, because the main problem is to establish the coupling scheme and how to correctly shape the mutual influence of these extreme and ideal controllers.
All the control strategies that have been proposed so far
assign a fixed role to the partners: that is, for a given task,
it is a priori decided who will be leading and who will be
following. Reed [11] showed that this may even emerge from
a specialization after several trials. In general, the robot is always assigned a follower behavior, and recent work focuses on how to make this follower behavior pro-active; see examples from dancing [12][13][14]. In practice, however, roles are not fixed: partners in perfect synergy exchange roles as a function of the situation and the task execution context, and this exchange is implicit, through mutual understanding, negotiation and, finally, agreement.
Now another question arises: if a switch in role assignment can occur during task execution, is this switching set-valued or functional? Does the switching obey a certain model or dynamics?
Because it is very hard to answer these questions given our challenging option, we propose to augment the system by introducing a switching function whose definition is flexible. We chose this function to be a homotopy variable between the two extreme behaviors. To
II. A HOMOTOPY SWITCHING MODEL FOR DYAD HAPTIC INTERACTION
In order to have a cybernetic or machine agent collaborating with a human, it is important to devise a model abstraction which encompasses most situations encountered in collaborative haptic tasks, i.e. from the very conflicting
to the ideal synergistic ones. With Evrard [6], we devised
an abstraction model which we believe reflects most haptic
collaborative tasks and may even be extended to other
modalities. Let us recall its basis and provide a more detailed
motivation for it.
A. Preliminary basis and intuition
First, for a model to be valid for our purpose, it must be totally symmetric: the controller or strategy devised for one agent must not jeopardize or reduce the potential for equal initiative or pro-activity of the other agent.
Let us assume that ongoing collaborative tasks involve at least two behaviors (that we consider as extreme): a leader behavior and a follower behavior.
The leader behavior maps the agent's own intentions, plans or strategies, chosen to achieve a given task, into the agent's control space u. This mapping L operates on the error ε_t in the task space, i.e. the error between the desired and actual task state, such that, for a given agent/human i:
u_i = L_i(ε_t)    (1)
This is no more than a generalized controller that needs to be defined properly for each leader agent.
The follower behavior maps the other agent's (i.e. the collaborator's) intentions into the agent's control space u, in such a way that the plans and strategies of this agent are devised exclusively from the collaborator's intentions. This mapping F operates on the error ε_c in the collaborator agent j's intention space, such that, for the follower agent/human i:

u_i = F_i(ε_c)    (2)
keep both the symmetry and the autonomy of the dyad, we assign a similar and independent homotopy switching function to each agent i, and hence propose a template model that realizes an implicit bilateral coupling in the following form:

u_i = α_i L_i + (1 − α_i) F_i    (3)
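As a minimal sketch of template (3), the following blends an illustrative task-space leader law (PD tracking of a desired object pose) with a force-compliant follower law; both laws and the schedule for α_i are stand-ins introduced here for illustration, not the controllers used in our experiments:

import numpy as np

def leader_law(x, x_des, v, kp=200.0, kd=30.0):
    # L_i: acts on the task-space error between desired and actual state.
    return kp * (x_des - x) - kd * v

def follower_law(f_meas, kf=5.0):
    # F_i: acts on the collaborator's intention, here read from the
    # measured interaction force, with which the follower complies.
    return kf * f_meas

def homotopy_control(alpha, x, x_des, v, f_meas):
    # u_i = alpha * L_i + (1 - alpha) * F_i, i.e. equation (3).
    return alpha * leader_law(x, x_des, v) + (1.0 - alpha) * follower_law(f_meas)

# alpha_i may evolve in time for each agent; here, a slow hand-over to leadership.
for t in np.linspace(0.0, 1.0, 5):
    u = homotopy_control(alpha=t,
                         x=np.zeros(3), x_des=np.array([0.5, 0.0, 0.0]),
                         v=np.zeros(3), f_meas=np.array([2.0, 0.0, 0.0]))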
time-weight adjuster that acts as a continuous slider balancing between the two extreme theoretical behaviors. Note that X represents the closed set of the robot's state measurements, whereas Y is the closed set of its control signals. Care should be taken to define these two sets properly (while Y is usually reduced to motor currents or torques, X should gather the same signals for both controllers).
If, in a particular case, the controller functions f and g have the same analytical form but different parameters, then the reasoning simply means that we are dealing with a homotopy in the parameter (gain) space. For example, if a human uses a similar impedance control function when s/he is active or passive, but with different impedance gains for each behavior, the homotopy (e.g. an interpolation in its simplest form) occurs simply at the level of the gains (i.e. the impedance parameters).
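For illustration only (the impedance values below are invented and not tied to any particular robot), such a gain-space homotopy amounts to interpolating the (M, D, K) triplets of the two behaviors:

import numpy as np

def interpolate_impedance(gains_leader, gains_follower, alpha):
    # Homotopy in gain space: blend the (M, D, K) matrices of the two
    # behaviors. alpha = 1 returns the leader gains, alpha = 0 the follower's.
    return tuple(alpha * gl + (1.0 - alpha) * gf
                 for gl, gf in zip(gains_leader, gains_follower))

# Illustrative parameter sets: a stiff "leader" and a compliant "follower".
leader   = (np.diag([6.0] * 3), np.diag([80.0] * 3), np.diag([400.0] * 3))  # M, D, K
follower = (np.diag([6.0] * 3), np.diag([30.0] * 3), np.diag([0.0] * 3))

M, D, K = interpolate_impedance(leader, follower, alpha=0.3)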
The α_i are functions of time and evolve separately for each agent. Both L and F work as closed-loop controllers (as mentioned before). L and F may have similar analytical forms, or they may operate on different modalities and have different expressions. Note that α, in its simplest expression (a first-order interpolation), acts as a weighting factor providing a smooth transition between the two behaviors; see the illustration in Fig. 2. A simpler expression of α could also be a non-smooth, set-valued function, that is:

α_i ∈ {0, 1}    (4)
In this set-valued case, we believe that its practical implementation will result in a discontinuous control, which will however be filtered by the dynamics of the robot and its actuators. We assume, though, that it is very unlikely that the functional is set-valued.

C. Analysis of singular cases
Before discussing the general control, let us consider the singular cases and their meaning. As we will see, the extreme cases correspond to a set-valued h in which the homotopy variable is 0 or 1. The main reason for this discussion is also to show that such a model encompasses most situations encountered in joint collaboration. Let A be the set of partners.
∙ ∀ i ∈ A ∣ α_i = 0:
this case corresponds to all actors behaving as perfect followers. Hence, in the task space where these controllers are applied, the object and the agents won't move: everyone is expecting initiative from the other. For example, the internal forces should remain constant and in static equilibrium. Note that extra care should be taken in the way force-based controllers are implemented. For example, noise on the force sensor (on which the controllers may rely) would be interpreted as a direction of desired motion instructed by the other collaborator(s), resulting in a motion by this agent (it being a perfect follower). This motion would in turn be read as an instructed direction by the other agents, and so forth, which may then (theoretically) result in an unstable behavior.
∙ ∀ i, j ∈ A ∣ i ≠ j : ∃ i ∣ α_i = 1 and α_i ⋅ α_j = 0:
this case corresponds to one actor being a pure leader while all the remaining ones are necessarily perfect followers. All existing strategies in human-robot pHRI implement this control scheme, which somehow borrows from bilateral coupling in master-slave teleoperation. I already pointed out that the follower can be passive or proactive.
∙ ∃ i, j ∈ A ∣ i ≠ j and α_i = α_j = 1:
this case corresponds to having at least two actors being pure leaders (whereas the others are pure followers). In fact, what I wanted to highlight by having a pair of actors as pure leaders concerns:
Fig. 2. Illustration of the one-degree-of-freedom homotopy for each individual of the physical interaction task dyad (lifting a table). Each α_{i∈{1,2}} may evolve independently from the other, and their time functions result in a dynamic sliding between 0 and 1 during task execution.
Now, it is legitimate to ask whether having a homotopy as a mathematical basis is of any real/practical use. Let us recall its definition.
B. Homotopy
Definition 2.1: A homotopy between two continuous
function maps f : X → Y and g : X → Y , where X and Y
are topological spaces, is defined to be a continuous function
map h : X × [0, 1] → Y such that:
h(x, 0) = f(x) and h(x, 1) = g(x)    (5)
Assuming the second parameter of h to be time, h describes a "continuous deformation" of f into g: at time 0 we have the function f, and at time 1 we have the function g.
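For instance, the simplest (first-order) choice is the convex combination h(x, t) = (1 − t) f(x) + t g(x), with t ∈ [0, 1]; identifying f with the leader law and g with the follower law, this recovers template (3) with α_i = 1 − t. Nothing, however, forces h to be linear in t.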
Now, for a given operator (human, avatar, or robot) if
we think of f as the leader behavior controller and g as
the follower behavior controller, what remains is to define
the homotopy h. Here the homotopy h can be seen as a
1) the perfect, ideal case where the task is executed with (theoretically) optimal internal forces, which corresponds to the impossible ideal situation where the task and motion trajectory plans are identical for all the agents involved in the pHRI. In other words, all the agents have exactly the same trajectory plan,
capabilities and sensory-motor behavior in realizing a
given task.
2) the opposite scheme, where the agents all have totally contradictory motion plans, e.g. one wants to go in some direction and the other wants to go in the exact opposite one. Such conflicting situations are resolved through either (i) the strongest-wins paradigm, or (ii) negotiation.
Negotiation results either in a number of agents dropping the leadership (meaning that an agent changes her/his behavior to that of a follower), or in a compromise plan, which is (in terms of trajectories) different from any agent's intended plan and lies somewhere in between all intentions.
and asked another operator to perform the task. Surprisingly, when reproducing the task in response to the human operator, the robot switched its role for this particular task. This preliminary study is being investigated further to assess the viability of our control approach. Hereafter, we report some problems that we encountered.
First, now that the control framework is flexible enough to perform any kind of switching (even a binary or set-valued one), the true problem is how the homotopy variable is regulated. Behind this question there are actually two challenging problems:
1) the first problem is inherent to human cognition: since we presume that this model also applies to humans, we must be able to identify, for a given task, how this homotopy variable evolves and why it evolves that way;
2) the second is on what basis, and with what strategy, the robot's homotopy variable is regulated.
Answering these two questions is necessary for the viability of the proposed paradigm. The problem, from the technical point of view, is that only the human's output, and possibly its variations, is observable by the robot, whereas the assumed model comprises, simply put, three unknowns: the leader and follower controllers, and the homotopy variable. Hence a regulation approach for the robot's homotopy based on the observation of the human's appears to be very difficult. In simple words, at each time step, the output signal is a sum of two outputs, assumed to come from black boxes, weighted by a time-varying variable.
Can the problem be reduced by assuming some expression for the follower controller, since its outcome might be predicted? Indeed, it is simply a follower (e.g. always track the forces to zero), whereas the leader controller would call for guessing the global and/or local plan of the human operator. In fact, even the follower controller, when it is proactive, requires predicting the local human intention to be tracked. This situation creates the following ambiguity: even when the robot is a pure but proactive follower, it relies on the operator's intentions; if these are not properly guessed, the resulting motion will conflict to some degree with that of the human (leader), which is also what happens if the robot switches to leadership with a motion different from the one intended by the operator. This means that the leader behavior and an improperly proactive follower behavior can hardly be distinguished.
Second, the physical interpretation of the weighting can be explained as a way to achieve a compromise. It is not the only way; besides, a compromise may also result in readjustments or redefinitions of plans and trajectories, which need to be considered.
Last but not least, when the two controllers are synthesized with proper gains ensuring stability and performance, the homotopy mapping does not guarantee stability. This problem must be carefully addressed [18].
D. Non-singular case
The non-singular case corresponds to a continuous change of the homotopy variable all along task execution, enabling a smooth hybridization of the leader and follower controllers. It also allows interpolating between the leader and the follower behaviors with any control-law functions for the leader and follower. Consequently, the controller is flexible enough to switch roles during tasks rather than between tasks. Intuitively, a continuous weighting between two behaviors not only provides the necessary smoothness of the change, it also induces a latency which may reflect what we usually call hesitation.
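As an illustration (this dynamics is an assumption made here, not the regulation used in the paper), a first-order relaxation of α toward a target role produces exactly this kind of latency, and its time constant can be read as a hesitation parameter:

def update_alpha(alpha, alpha_target, dt, tau=0.8):
    # First-order relaxation of the homotopy variable toward a target role.
    # tau (s) sets how quickly the agent commits to the new role; a larger
    # tau yields the latency that may be interpreted as hesitation.
    return alpha + (dt / tau) * (alpha_target - alpha)

# Hand-over from follower (0) to leader (1) over a few seconds.
alpha, dt = 0.0, 0.01
for _ in range(300):
    alpha = update_alpha(alpha, alpha_target=1.0, dt=dt)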
Since the abstraction level of the model is high, its practical implementation is certainly subject to several technical limitations, as already experienced in the pilot studies conducted in [6]. Nevertheless, assuming all of it is known, this paradigm enables encoding some pHRI characteristics observed in dyads, such as specialization [11], see [8]. This model can also be used to switch the behavior according to robot constraints/limitations. That is, consider a robot assigned a number of tasks: the homotopy is regulated according to robot limitations such as joint or torque limits. For example, in [8] the robot is programmed so that the homotopy variable is tuned progressively toward 1 (leadership) with dynamics driven by the proximity to a joint-limit violation. By doing so, near the joint limits, the robot progressively imposes its plan on the human collaborator so as to achieve the task without danger for the robot. As long as the robot's homotopy regulation is independent from the partner's, all that needs to be implemented is the leader and follower controllers for the robot and the homotopy function.
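A sketch of such a regulation, under the simple assumption (ours, for illustration; [8] may use a different law) that α is pushed toward 1 as any joint approaches its limit:

import numpy as np

def alpha_from_joint_limits(q, q_min, q_max, margin=0.2):
    # Drive the homotopy variable toward 1 (leadership) near joint limits.
    # 'margin' (rad) is the distance to a limit below which the robot starts
    # taking the lead; well inside the range it remains a pure follower.
    dist = np.minimum(q - q_min, q_max - q)   # distance to the nearest limit
    worst = np.min(dist)                      # most critical joint
    return float(np.clip(1.0 - worst / margin, 0.0, 1.0))

# Example: one joint 0.05 rad away from its limit gives alpha = 0.75.
alpha = alpha_from_joint_limits(q=np.array([0.0, 1.52]),
                                q_min=np.array([-1.57, -1.57]),
                                q_max=np.array([1.57, 1.57]))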
Together with A. Billard's group at EPFL, we illustrated the continuous switching by using learning techniques [15]. The idea was to teach the robot to perform a joint haptic task (lifting a beam) and instruct it to behave either as a follower or as a leader. In other words, we taught the robot to be either a perfect follower or a perfect leader, in response to a human operator instructed to take the opposite role. Data obtained from teaching with an instrumented haptic interface [16] suggest that the follower and the leader behaviors can be clearly distinguished in the force/velocity map. In [17] we programmed the robot to reproduce the task
E. Integration
Fig. 3 illustrates the components of the overall system and its integration for a haptic joint action involving a joint
Fig. 3. Components of the collaborative architecture (blocks: planner/motion generation fed by the a priori plan and producing the desired state, perception and interpretation of the task/robot state and of the operator's intentions, leader and follower controllers, homotopy weighting, haptic pattern of communication, desired robot state, and the haptic device).
object. Prior to any execution, the task is well defined and known by each individual. The human operator has his/her own idea or plan of the way the task is to be done. The robot may call for planning methods in which the human is considered as a digital actor [19]. A possible way to lower discrepancies between the robot's and the human's plans would be for the robot to display the outcome of the planner to its human partner (e.g. using 3D graphics animation on a screen). The human can then agree or interactively adjust the plan or the trajectories for the robot planner. In the end, the robot and the human agree on a common plan/trajectory and negotiate it all along the task. In Fig. 3, this is represented by the blue (continuous and dotted) curves sharing the same starting (red) and end (blue) points.
The leader and follower controllers are predefined. The leader law relies mainly on the individual plan of the robot and works similarly to a trajectory tracking controller, based on the discrepancy between the actual and desired state of the manipulated object. This state, as well as the robot's own, is provided by the embedded robot perception (sensor) capabilities. The follower law relies mainly on the human operator's intentions and can be either passive or (locally) proactive. When passive, it needs only to accompany the motion made by the human operator. When proactive, guessing the operator's intentions or predicting her/his motion is needed. This can be built from the aggregate of the ongoing task motions, read by the robot's perceptual capabilities, and most likely by using knowledge of the near-future plan when available.
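One possible minimal form of such a predictor (a stand-in for whatever estimator is actually deployed; names and gains are illustrative) smooths the observed object velocity and extrapolates it over a short horizon, with the measured interaction force adding a compliant bias:

import numpy as np

class IntentionEstimator:
    # Very simple local predictor of the operator's intended motion:
    # exponentially smooth the observed object velocity and extrapolate it
    # over a short horizon; the interaction force adds a compliant bias.
    # Any richer (learned or model-based) estimator could replace this.
    def __init__(self, smoothing=0.2, horizon=0.3, force_gain=0.01):
        self.v_hat = np.zeros(3)
        self.smoothing = smoothing
        self.horizon = horizon
        self.force_gain = force_gain

    def predict_displacement(self, v_obj, f_meas):
        self.v_hat = (1.0 - self.smoothing) * self.v_hat + self.smoothing * v_obj
        return self.horizon * (self.v_hat + self.force_gain * f_meas)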
The homotopy weighting is not yet clearly defined. I expect that if a strategy is devised, it will very likely make use of sophisticated human intention identification, ideally the actual value (or best local knowledge of an approximate function) of the human's homotopy function. This is an open problem. However, the homotopy, as we exemplify in [8], can also be tuned directly from the robot state or the desired task state, in addition to the interpretation of human intentions; see the two arrows (other than the leader/follower laws) entering the homotopy weighting block.
Finally, we added the capacity for the robot to acknowledge and/or transmit its own intention through haptic patterns of communication (in addition to body posture and speech). This is the reason for the haptic pattern communication block, whose output may or may not be blended into the control law.
III. HUMAN-HUMANOID JOINT ACTIONS
Physical joint actions with a human partner are among
the most challenging tasks a humanoid robot is expected to
achieve with robustness and safety. In order to successfully
achieve the scenario presented in Fig. 1, the following
‘ingredients’ are necessary:
∙ a legged robot moves by alternating and sequencing contact stances: therefore, the robot is generally either in a motion phase with contact switching or in a motion phase with fixed contact support. In biped walking, for example, these correspond to the double support and single support phases (respectively). When the human is in the loop, it is very likely that the motion of the jointly manipulated object is negotiated continuously and cannot be predetermined a priori. Hence, the robot needs to be reactive in the generation of the footprints, and during walking, whatever walking phase is in progress [20][21];
∙ whatever perturbation is applied to the robot, notably that coming from the forces applied by the human partner, the robot should be able to maintain its walking or standing equilibrium. This problem has been tackled in several ways: for example, by monitoring the difference between the desired ZMP and the actual one, plus the forces applied at the wrists;
∙ if one assumes that operator intentions can be identified and interpreted, e.g. from monitoring force patterns, the question remains of how these interpretations can be mapped into reactive footprint generation;
∙ finally, the most complex part is the synchronization/coordination of the humanoid arms' motion with walking, taking into account the human partner's intentions, the object motion, the robot's walking strategy, equilibrium and task constraints. The humanoid's arms must be controlled to play several roles: absorb excess forces, smooth the motion of the object and accompany it during walking, smooth changes in walking direction, etc.
For the time being, a very simplified version of these modules is integrated [22], and a human-HRP-2 joint transportation of a table has been successfully implemented. However, the current version relies on a singular implementation of the homotopy controller (the HRP-2 is set as a pure follower). The mapping from force to footprint generation is based on simple heuristics.
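The heuristics themselves are not detailed here; a purely illustrative version (not the one implemented) could threshold the sustained horizontal wrist force and step in its direction:

import numpy as np

def next_footstep_offset(f_wrists, f_threshold=15.0, step_length=0.2):
    # Illustrative force-to-footstep heuristic: if the horizontal force
    # exceeds a threshold, offset the next footstep in its direction with
    # a fixed length; otherwise keep stepping in place.
    f_xy = np.asarray(f_wrists[:2], dtype=float)
    norm = np.linalg.norm(f_xy)
    if norm < f_threshold:
        return np.zeros(2)
    return step_length * f_xy / norm

offset = next_footstep_offset(np.array([25.0, 5.0, -3.0]))  # mostly forward step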
IV. CONCLUSION

Service robots are paving the way toward working in closely collocated spaces and in close contact with humans. From a technological viewpoint, noticeable advances have been made, best exemplified by humanoid robots, which have reached remarkable levels of functionality such as walking and manipulation. One of the challenging scenarios is to make them achieve haptic joint actions with humans. Consequently, their functionalities, skills and performance must fulfill what human partners expect from them. To do so, we must move from the classical control scheme, where the robot is confined to a follower (passive or proactive) role, to an adjustable, equal sharing of responsibilities and initiatives in performing tasks. This will certainly call for advanced multimodal communication skills for mutual understanding and instruction of intentions. By multimodal, I mean the modalities that humans use in such scenarios, not limited to speech but also including haptic, body posture and visual languages. To accompany this, a flexible control framework and models are also necessary. This paper presented a possible one, and discussed how it has been implemented and what remains to be overcome to reach the target goal.

REFERENCES

[1] K. Yokoyama, H. Handa, T. Isozumi, Y. Fukase, K. Kaneko, F. Kanehiro, Y. Kawai, F. Tomita, and H. Hirukawa, "Cooperative works by a human and a humanoid robot," in IEEE Int. Conf. on Robotics and Automation, 2003.
[2] K. Kosuge, M. Sato, and N. Kazamura, "Mobile robot helper," in IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, USA, April 2000, pp. 583–588.
[3] O. Khatib, "Object manipulation in a multi-effector robot system," in Robotics Research 4. Cambridge: MIT Press, 1988, pp. 134–144.
[4] A. Kheddar and A. Escande, "Challenges in contact-support planning for acyclic motion of humanoids and androids," in 39th International Symposium on Robotics, Seoul, Korea, October 15-17 2008, pp. 740–745.
[5] K. Kaneko, F. Kanehiro, S. Kajita, H. Hirukawa, T. Kawasaki, M. Hirata, K. Akachi, and T. Isozumi, "Humanoid robot HRP-2," in IEEE International Conference on Robotics and Automation, New Orleans, LA, April 2004, pp. 1083–1090.
[6] P. Evrard and A. Kheddar, "Homotopy switching model for dyad haptic interaction in physical collaborative tasks," in Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Salt Lake City, USA, March 2009.
[7] H. Arai, T. Takubo, Y. Hayashibara, and K. Tanie, "Human-robot cooperative manipulation using a virtual nonholonomic constraint," in IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, April 2000, pp. 4064–4070.
[8] P. Evrard and A. Kheddar, "Homotopy-based controller for human-robot interaction," in IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, 27 September-2 October 2009.
[9] B. Corteville, E. Aertbelien, H. Bruyninckx, J. D. Schutter, and H. V. Brussel, "Human-inspired robot assistant for fast point-to-point movements," in IEEE Int. Conf. on Robotics and Automation, 2007, pp. 3639–3644.
[10] V. Duchaine and C. M. Gosselin, "General model of human-robot cooperation using a novel velocity based variable impedance control," in Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Washington, DC, USA, 2007, pp. 446–451.
[11] K. B. Reed and M. A. Peshkin, "Physical collaboration of human-human and human-robot teams," IEEE Transactions on Haptics, vol. 1, no. 2, pp. 108–120, August-December 2008.
[12] Y. Hirata, T. Hayashi, T. Takeda, K. Kosuge, and Z.-D. Wang, "Step estimation method for dance partner robot MS DanceR using neural network," in IEEE International Conference on Robotics and Biomimetics, Hong-Kong and Macau, June–July 2005, pp. 523–528.
[13] S. Gentry and E. Feron, "Musicality experiments in lead and follow dance," in IEEE International Conference on Systems, Man, and Cybernetics, vol. 1, 2004, pp. 984–988.
[14] D. Feth, R. Groten, A. Peer, S. Hirche, and M. Buss, "Performance related energy exchange in haptic human-human interaction in a shared virtual object manipulation task," in Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2009, pp. 338–343.
[15] S. Calinon, P. Evrard, E. Gribovskaya, A. Billard, and A. Kheddar, "Learning collaborative manipulation tasks by demonstration using a haptic interface," in IEEE Int. Conf. on Advanced Robotics, Munich, Germany, June 2009.
[16] A. Kheddar, V. Gourishankar, and P. Evrard, "A PHANTOM device with 6dof force feedback and sensing capabilities," in EuroHaptics, ser. Lecture Notes in Computer Science, M. Ferre, Ed., vol. 5024. Springer, 2008, pp. 146–150.
[17] P. Evrard, E. Gribovskaya, S. Calinon, A. Billard, and A. Kheddar, "Teaching physical collaborative tasks: object-lifting case study with a humanoid," in IEEE-RAS International Conference on Humanoid Robots, Paris, France, 7-10 December 2009.
[18] H. Niemann, J. Stoustrup, and R. B. Abrahamsen, "Switching between multivariable controllers," Optimal Control Applications and Methods, vol. 25, pp. 51–66, 2004.
[19] C. Esteves, G. Arechavaleta, J. Pettré, and J.-P. Laumond, "Animation planning for virtual characters cooperation," ACM Transactions on Graphics, vol. 25, no. 2, pp. 319–339, 2006.
[20] M. Morisawa, K. Harada, S. Kajita, S. Nakaoka, K. Fujiwara, F. Kanehiro, K. Kaneko, and H. Hirukawa, "Experimentation of humanoid walking allowing immediate modification of foot place based on analytical solution," in IEEE International Conference on Robotics and Automation, Roma, Italy, 2007, pp. 820–826.
[21] O. Stasse, P. Evrard, N. Perrin, N. Mansard, and A. Kheddar, "Fast foot prints re-planning and generation during walking in physical human-humanoid interaction," in IEEE-RAS International Conference on Humanoid Robots, Paris, France, 7-10 December 2009.
[22] A. Kheddar, O. Stasse, P. Evrard, N. Mansard, P. Gergondet, E. Yoshida, and K. Yokoi, "Pervading humanoid robots in collaborative working environments," in RSJ Domestic Robotic Conference, Yokohama, Japan, 15-17 September 2009.