
CHALLENGE 2: Cognitive Systems, Interaction, Robotics
Projects resulting from Call 1* – short abstracts
January 2008
* Call 1 of the Information and Communication Technologies (ICT) Theme of the European Commission’s 7th Framework Programme: http://cordis.europa.eu/fp7/dc/index.cfm?fuseaction=UserSite.CooperationDetailsCallPage&call_id=11
FP7-ICT-214856-ALEAR
Artificial Language Evolution on Autonomous Robots
Participants
Humboldt-Universität zu Berlin
Sony France S.A. CSL Paris
Universität Osnabrück
Universitat Autonoma de Barcelona
Vrije Universiteit Brussel
Universitatea Alexandru Ioan Cuza, Iasi
Abstract
ALEAR will build autonomous social humanoid robots that co-develop cognitive and language capabilities
through situated interaction. It adopts a whole system approach tackling the complete chain from
embodiment and sensori-motor action to conceptualisation and language. Work includes carefully controlled
experiments in which autonomous humanoid robots self-organise rich conceptual frameworks and
communication systems with features similar to those found in human languages. The machinery required for
these experiments will drive the state-of-the-art in all relevant technologies, particularly robotics, concept
formation, computational linguistics and AI.
Duration: 36 months (01/02/2008 - 31/01/2011)
Total EU funding: 3.399.223 EUR
Manfred Hild
Luc Steels
Frank Pasemann
Oscar Vilarroya
Ann Nowé
Dan Cristea
FP7-ICT-215370-ChiRoPing
Developing Versatile and Robust Perception using Sonar Systems that integrate Active Sensing, Morphology and Behaviour
Participants
Syddansk Universitet
Universiteit Antwerpen
The University of Edinburgh
Universität Ulm
Abstract
The principal objective of this project is to find ways of engineering versatile and robust systems able to
respond sensibly to challenges not precisely specified in their design. It focuses on embodied active sonar
perception systems that can serve as a complement to vision and facilitate the deployment of robotic systems
in situations where vision is infeasible. To achieve its objective the project will model the bat's coordination
of its acoustic, behavioural and morphological choices while hunting. Two bio-mimetic demonstrators will be
implemented, and evaluated on tasks analogous to the hunting tasks of their living prototypes. Roboticists and
ethologists will closely collaborate.
Duration: 36 months (01/02/2008 - 31/01/2011)
Total EU funding: 2.500.000 EUR
John Hallam
Herbert Peremans
Robert B. Fisher
Elisabeth Kalko
FP7-ICT-215805-CHRIS
Cooperative Human Robot Interaction Systems
Participants
University of the West of England, Bristol
Centre National de la Recherche Scientifique
Fondazione Istituto Italiano di Tecnologia
Max-Planck Gesellschaft zur Förderung der Wissenschaften EV
University of Bristol
Université Lumière Lyon 2
Abstract
CHRIS addresses fundamental issues related to the design of safe human robot interaction. Robots and humans
are assumed to share a given environment and to cooperate on tasks. The primary research question is: How
can interaction between a human and an intelligent autonomous agent be safe without being pre-scripted and
still achieve the desired goal? The key hypothesis is that safe interaction between humans and robots can be
engineered physically and cognitively for joint physical tasks requiring co-operative manipulation of real world
objects. Engineering principles for safe movement and dexterity will be explored on three robot platforms,
and developed with regard to language, communication and decisional action planning where the robot
reasons explicitly with its human partner. Integration of cognition for safe co-operation in the same physical
space will spawn significant advances in the area, and be a step towards genuine service robotics.
Duration: 48 months (01/03/2008 - 29/02/2012)
Total EU funding: 3.650.000 EUR
Christopher Melhuish
Rashid Alami
Giorgio Metta
Felix Warneken
Mike Fraser
Peter Ford Dominey
FP7-ICT-216594-CLASSiC
Computational Learning in Adaptive Systems for Spoken Conversation
Participants
The University of Edinburgh
Ecole Supérieure d' Electricité - Supélec
The Chancellor, Masters and Scholars of the University of Cambridge
Université de Genève
France Télécom SA
Abstract
The overall goal of the CLASSiC project is to facilitate the rapid deployment of accurate and robust spoken
dialogue systems that can learn from experience. The approach will be based on statistical learning methods
with unified uncertainty treatment across the entire system (speech recognition, natural language processing,
dialogue generation, speech synthesis). It will result in a modular processing framework with an explicit
representation of uncertainty connecting the various sources of uncertainty (understanding errors, ambiguity,
etc) to the constraints to be exploited (task, dialogue, and user contexts). The architecture supports a layered
hierarchy of supervised learning and reinforcement learning in order to facilitate mathematically principled
optimisation and adaptation techniques. It will be developed in close cooperation with the industrial partner
in order to ensure a practical deployment platform as well as a flexible research test-bed.
Duration: 36 months (01/03/2008 – 28/02/2011)
Total EU funding: 3.400.000 EUR
Oliver Lemon
Olivier Pietquin
Stephen Young
Paola Merlo
Philippe Bretier
FP7-ICT-214975-CoFRIEND
Cognitive and Flexible learning system operating Robust Interpretation of Extended real sceNes by multi-sensors Datafusion
Participants
Silogic SA
Universität Hamburg
University of Leeds
The University of Reading
Institut National de Recherche en Informatique et en Automatique
Aeroport Toulouse Blagnac SA
Abstract
The Co-FRIEND project aims to create a prototype system for the representation and recognition of human
activity and behaviour. This requires improving the performance of relevant cognitive functions such as
learning, dynamic context adaptation, perception, tracking, recognition, and reasoning, and their integration
in a complete artificial cognitive vision system. The project will develop a framework for understanding human
activities in real environments through the identification of objects and events. Feedback and multi-data
fusion will be exploited to achieve robust detection and efficient tracking of objects in complex scenes. The
cognitive capabilities of the system, implemented as a heterogeneous sensor network, will be demonstrated
by applying it to the monitoring of outdoor airport activities.
Duration: 36 months (01/02/2008 - 31/01/2011)
Total EU funding: 2.800.000 EUR
Luc Barthelemy
Bernd Neumann
Anthony G. Cohn
James Ferryman
François Bremond
Lionel Bousquet
FP7-ICT-215181-CogX
Cognitive Systems that Self-Understand and Self-Extend
Participants
The University of Birmingham
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH
Kungliga Tekniska Hogskolan
Univerza V Ljubljani
Albert-Ludwigs-Universität Freiburg
Technische Universität Wien
Abstract
CogX tackles the challenge of understanding the principles according to which cognitive systems should be
built if they are to handle novelty, situations unforeseen by their designers, and open-ended, challenging
environments with uncertainty and change. The aim is to meet this challenge by creating a theory — evaluated
in robots — of how a cognitive system can model its own knowledge; use this to cope with uncertainty and
novelty during task execution; extend its own abilities and knowledge; and extend its own understanding of
those abilities. Imagine a cognitive system that models not only the environment, but its own understanding of
the environment and how this understanding changes under action. It identifies gaps in its own understanding
and then plans how to fill those gaps so as to deal with novelty and uncertainty in task execution, gather
information necessary to complete its tasks, and to extend its abilities and knowledge so as to perform future
tasks more efficiently.
Duration: 50 months (01/05/2008 - 30/06/2012)
Total EU funding: 6.799.947 EUR
Jeremy Wyatt
Geert-Jan Kruijff
Patric Jensfelt
Aleš Leonardis
Bernhard Nebel
Markus Vincze
FP7-ICT-216239-DEXMART
DEXterous and autonomous dual-arm/hand robotic manipulation with sMART sensory-motor skills: A bridge from natural to artificial cognition
Participants
Università degli Studi di Napoli Federico II
Centre National de la Recherche Scientifique
Deutsches Zentrum für Luft und Raumfahrt E.V.
Forschungszentrum Informatik an der Universität Karlsruhe
OMG PLC
Seconda Università degli Studi di Napoli
Alma Mater Studiorum - Università di Bologna
Universität des Saarlandes
Abstract
DEXMART focuses on artificial systems reproducing smart sensory-motor human skills, which operate in
unstructured real-world environments. The emphasis is on manipulation capabilities achieved by dexterous,
autonomous and human-aware, dual-arm/hand robotic systems. The goal is to allow a dual-arm robot including
two multi-fingered redundant hands to grasp and manipulate the same objects used by human beings. Objects
will have different shapes, dimensions and weights and manipulation will take place in an unsupervised, robust
and dependable manner so as to allow the robot to safely cooperate with humans in the execution of given
tasks. The robotic system must autonomously decide between different manipulation options. It has to react
properly and quickly to unexpected situations and events, and understand changes in the behaviour of humans
cooperating with it. Moreover, in order to act in a changing scenario, the robot should be able to acquire
knowledge by learning new action sequences so as to create a consistent and comprehensive manipulation
knowledge base through an actual reasoning process. The possibility to exploit the high power-to-weight ratio
of smart materials and structures will be explored with a view to designing new hand components (finger,
thumb, wrist) and sensors that will pave the way for the next generation of dexterous robotic hands.
Duration: 48 months (01/02/2008 - 31/01/2012)
Total EU funding: 6.300.000 EUR
Bruno Siciliano
Daniel Sidobre
Gerhard Grunwald
Rüdiger Dillmann
Andrew Stoddart
Giuseppe De Maria
Claudio Melchiorri
Christopher May
FP7-ICT-215078-DIPLECS
Dynamic Interactive Perception-action LEarning in Cognitive Systems
Participants
Linkopings Universitet
Ceske Vysoke Uceni Technicke V Praze
The University of Surrey
Autoliv Development AB
Association pour la Recherche et le Développement des Méthodes et Processus Industriels
Michael Felsberg
Tomas Werner
Josef Kittler
Johan Karlsson
Erik Hollnagel
Abstract
The DIPLECS project aims at designing an Artificial Cognitive System architecture that allows for learning and
adapting hierarchical perception-action cycles in dynamic and interactive real-world scenarios. The
architectural progress will be evaluated within the scenario of a driver assistance system that continuously
improves its capabilities by observing the human driver, the car data, and the environment. The system is
expected to emulate and predict the behaviour of the driver, to extract and analyse relevant information from
the environment, and to predict the future state of the car in relation to its context in the world. Starting
from a rudimentary, pre-specified (i.e., manually modelled) system, the architecture is expected to successively
replace manually modelled knowledge with learned models, thus improving robustness and flexibility.
Bootstrapping and learning are applied at all levels, in a dynamic and interactive context.
Duration: 36 months (01/12/2007 - 30/11/2010)
Total EU funding: 2.600.000 EUR
FP7-ICT-213845-EMIME
Effective Multilingual Interaction in Mobile Environments
Participants
The University of Edinburgh
Fondation de l'Institut Dalle Molle d'Intelligence Artificielle Perceptive
Teknillinen Korkeakoulu
Nagoya Institute of Technology
Nokia OYJ
The Chancellor, Masters and Scholars of the University of Cambridge
Abstract
EMIME intends to personalise speech processing systems by learning individual characteristics of a user's speech
and reproducing them in synthesised speech, in a language not spoken by the user. It will transfer the
statistical modeling and adaptation approach from speech recognition to text-to-speech synthesis – possibly
resulting in a uniform model for both technologies. Research will foster a better understanding of the
relationship between speech recognition and synthesis. While focused on speech recognition and text-to-speech synthesis (not on machine translation), EMIME will ultimately help to overcome the language barrier
through mobile devices that perform personalized speech-to-speech translation, in the sense that a user's
spoken input in one language is used to produce spoken output in another language, while continuing to sound
like the user's voice. Results will be evaluated against state-of-the art techniques and in a practical mobile
application.
Duration: 36 months (01/03/2008 – 28/02/2011)
Total EU funding: 3.050.000 EUR
Simon King
John Dines
Mikko Kurimo
Keiichi Tokuda
Janne Vainio
William Byrne
FP7-ICT-217077-EYESHOTS
Heterogeneous 3-D Perception Across Visual Fragments
Participants
Università degli Studi di Genova
Westfälische Wilhelms-Universität Münster
Alma Mater Studiorum - Università di Bologna
Universitat Jaume I de Castellon
Katholieke Universiteit Leuven
Abstract
This project will investigate the interplay between vision and motion control, and study ways of exploiting this
interaction to achieve the knowledge of the surrounding environment that allows a robot to act correctly.
Crucial issues addressed are object recognition, dynamic shifts of attention, 3D space perception involving eye
and arm movements, and action selection in unstructured environments. Work will result in: a robotic
system for interactive visual stereopsis; a model of a multisensory egocentric representation of the 3D space;
a model of human-robot cooperative actions in a shared workspace.
Duration: 36 months (01/03/2008 - 28/02/2011)
Total EU funding: 2.400.000 EUR
Silvio P. Sabatini
Markus Lappe
Patrizia Fattori
Ángel Pasqual del Pobil
Marc Van Hulle
FP7-ICT-215821-GRASP
Emergence of Cognitive Grasping through Emulation, Introspection and Surprise
Participants
Kungliga Tekniska Hogskolan
Universität Karlsruhe
Technische Universität München
Lappeenrannan Teknillinen Yliopisto
Technische Universität Wien
Foundation for Research and Technology – Hellas
Universitat Jaume I de Castellon
Otto Bock Healthcare Products GmbH
Abstract
GRASP aims to design a cognitive system capable of performing object manipulation and grasping tasks in
open-ended environments, dealing with uncertainty and novel situations. The system exploits innate
knowledge and self-understanding and gradually develops cognitive capabilities. GRASP will provide means for
robotic systems to reason about graspable targets, to investigate and explore their physical properties and
finally to make artificial hands grasp any object. Underpinning the practical work are theoretical,
computational and experimental studies on modelling skilled sensorimotor behaviour based on known
principles governing grasping and manipulation tasks performed by humans.
Duration: 48 months (01/03/2008 - 29/02/2012)
Total EU funding: 5.950.000 EUR
Danica Kragic Jensfelt
Tamim Asfour
Darius Burschka
Ville Henrik Kyrki
Markus Vincze
Antonis Argyros
Antonio Morales
Hans Dietl
FP7-ICT-214668-ITALK
Integration and Transfer of Action and Language Knowledge in robots
Participants
University of Plymouth
Fondazione Istituto Italiano di Tecnologia
Universität Bielefeld
Consiglio Nazionale delle Ricerche
The University of Hertfordshire Higher Education Corporation
Syddansk Universitet
The Institute of Physical and Chemical Research
Abstract
ITALK will develop artificial embodied agents, based on the iCub humanoid platform, able to acquire complex
behavioural, cognitive, and linguistic skills through individual and social learning, and to adapt their abilities
to changing internal, environmental and social conditions. The project intends to corroborate the hypothesis
that the parallel development of action, conceptualisation and social interaction permits the bootstrapping of
language capabilities, which in turn enhance cognitive development. It will also lead to: (a) new models
and scientific explanations of the integration of action, social and linguistic skills; (b) new interdisciplinary
sets of methods for analysing the interaction of language, action and cognition in humans and artificial
cognitive agents; (c) new cognitively-plausible engineering principles and approaches for the design of robots
with behavioural, cognitive, social and linguistic skills.
Duration: 48 months (01/03/2008 - 29/02/2012)
Total EU funding: 6.250.000 EUR
Angelo Cangelosi
Giorgio Metta
Gerhard Sagerer
Stefano Nolfi
Chrystopher L. Nehaniv
Kerstin Fischer
Jun Tani
FP7-ICT-215554-LIREC
LIving with Robots and intEractive Companions
Participants
Queen Mary and Westfield College, University of London
SICS, Swedish Institute of Computer Science AB
Inesc ID - Instituto de Engenharia de Sistemas e Computadores: Investigacao e Desenvolvimento em Lisboa
The University of Hertfordshire Higher Education Corporation
Otto-Friedrich-Universität Bamberg
Heriot-Watt University
Politechnika Wroclawska
Eotvos Lorand Tudomanyegyetem
Foundation of Aperiodic Mesmerism
Cnotinfor - Centro de Novas Tecnologias da Informacao, Limitada
Peter William Mc Owan
Lars Erik Holmquist
Ana Paiva
Kerstin Dautenhahn
Harald Schaub
Ruth Aylett
Krzysztof Tchon
Ádám Miklósi
Nicholas Gaffney
Correia Secundino
Abstract
LIREC will establish a multi-faceted (memory, emotions, cognition, communication, learning, etc.) theory of
artificial long-term companions, embody it in innovative technology, verify the theory and technology
experimentally in real social environments, and provide guidelines for designing and using such companions.
The project draws on studies of human-pet interaction and builds upon existing robotics technologies such as
Pioneers, Peoplebots, & iCat in order to develop and evaluate experimentally the theoretical framework.
Companions will have different capabilities, based on a common cognitive-affective architecture, depending
on their intended use. This may involve the ability to respond sensitively to the user, to take his or her possible
motives and intentions into account, and to encompass several forms of communication. Different scenarios will be set up,
such as the "robot house", "spirit of the building" and "my mentor", and several activity types will be tested in
each. The scenarios will involve humans interacting with robots and/or graphical companions in their day-to-day lives over periods of weeks or months. The migration of companions to different “bodies”, for instance a
mobile phone, will also be explored.
Duration: 54 months (01/03/2008 – 31/08/2012)
Total EU funding: 8.200.000 EUR
FP7-ICT-215756-MIMICS
Multimodal Immersive Motion rehabilitation with Interactive Cognitive Systems
Participants
Eidgenössische Technische Hochschule Zürich
Hocoma AG
Univerza V Ljubljani
Universitat Politecnica de Catalunya
Neurologische Klinik Bad Aibling GmbH & Co Betriebs KG
Abstract
MIMICS will enhance a robot-assisted motion rehabilitation system with adaptive feedback based on
physiological and cognitive data (motion, forces, voice, muscle activity, heart rate, skin conductance etc.).
Data will be acquired in real-time, and the intention of the patient and the overall psycho-physiological state
will be inferred from them. This information will be used to drive the therapy robots, in combination with
immersive virtual reality systems including 3D graphics and 3D sound, in order to make rehabilitation training
more realistic and motivating. Progress is likely in, for instance, real-time sensing, fusion of multi-sensory
real-time data streams, and multi-modal immersive VR interaction. Much effort will be devoted to evaluation
with patients to assess the effects of using the system. It is expected that MIMICS technology will enter clinical
routine so that large patient populations (e.g. stroke, spinal cord injury patients) can benefit.
Duration: 36 months (01/01/2008 – 31/12/2010)
Total EU funding: 1.600.000 EUR
Robert Riener
Lars Lünenburger
Marko Munih
Mel Slater
Friedemann Müller
FP7-ICT-216886-PASCAL2
Pattern Analysis, Statistical Modelling and Computational Learning 2
Participants
University of Southampton
University College London
The University of Edinburgh
Centre National de la Recherche Scientifique
Xerox SAS
Jozef Stefan Institute
Università degli Studi di Milano
University of Bristol
The University of Manchester
Helsingin Yliopisto
Fondation de l'Institut Dalle Molle d'Intelligence Artificielle Perceptive
Stichting Centrum voor Wiskunde en Informatica
Fraunhofer Gesellschaft zur Förderung der Angewandten Forschung EV
Max-Planck Gesellschaft zur Förderung der Wissenschaften EV
Teknillinen Korkeakoulu
Bar Ilan University
Université Pierre et Marie Curie - Paris 6
Abstract
PASCAL2 builds on the FP6 PASCAL Network of Excellence, which has created a distributed institute pioneering principled methods of pattern analysis, statistical modeling, and computational learning (see http://www.pascal-network.org/). While retaining some of the structuring elements and mechanisms of its predecessor (such as the semi-annual Themes and the Pump-Priming and Challenges programmes), PASCAL2 refocuses the institute towards the emerging challenges created by the ever expanding applications of adaptive systems technology and their central role in the development of artificial cognitive systems of different scales. Learning technology is key to, for instance, making robots more versatile, effective and autonomous, and to endowing machines with advanced interaction capabilities. The PASCAL2 Joint Programme of Activities responds to these challenges not only through the research topics it addresses but also by engaging in technology transfer through an Industrial Club to effect rapid deployment of the developed technologies into a wide variety of applications. In addition, its Harvest sub-programme provides opportunities for close collaboration between academic and industry researchers. Other noteworthy outreach activities include curriculum development, brokerage of expertise, public outreach, and liaison with relevant R&D projects.
Steve Gunn
John Shawe-Taylor
Christopher Williams
William Triggs
Nicola Cancedda
Dunja Mladenic
Nicolò Cesa-Bianchi
Nello Cristianini
Neil Lawrence
Petri Myllymäki
José del R. Millán
Peter Grunwald
Gilles Blanchard
Koji Tsuda
Samuel Kaski
Ido Dagan
Patrick Gallinari
Duration: 60 months (01/03/2008 - 28/02/2013)
Total EU funding: 6.000.000 EUR
FP7-ICT-216529-PinView
Personal Information Navigator Adapting Through Viewing
Participants
Teknillinen Korkeakoulu
University of Southampton
University College London
Montanuniversitaet Leoben
Xerox SAS
Celumsolutions Software GmbH & Co KG
Abstract
PinView investigates novel approaches to adaptive, content-based, multi-media information retrieval. Implicit
information such as a user’s gaze patterns or murmured utterances will be integrated with collaborative
filtering techniques to provide less cumbersome but more robust feedback mechanisms. This will be achieved
by applying advanced machine learning methods to infer the implicit topic of a user's interest and the sense in
which it is interesting in the current context. In addition, the project will devise novel techniques for
presenting less biased database selections while interacting with a user. A prototype of the proactive
information navigator will be evaluated in a set of targeted application scenarios, including analysis of medical
images and utilisation of diverse media assets.
Duration: 36 months (01/01/2008 - 31/12/2010)
Total EU funding: 2.550.000 EUR
Samuel Kaski
Craig Saunders
John Shawe-Taylor
Peter Auer
Marco Bressan
Erich Mahringer
FP7-ICT-215843-POETICON
The “Poetics” of Everyday Life: Grounding Resources and Mechanisms for Artificial Agents
Participants
Institute for Language and Speech Processing – 'Athena' Research Centre
The University System of Maryland Foundation, Inc.
Univerza V Ljubljani
Max-Planck Gesellschaft zur Förderung der Wissenschaften EV
Fondazione Istituto Italiano di Tecnologia
Università degli Studi di Ferrara
Abstract
POETICON views a cognitive system as a set of different languages (the spoken, the motor, the vision
language) and provides a set of tools for parsing, generating and translating them. The objective is two-fold:
a) to create a PRAXICON, an extensible computational resource which associates symbolic representations with
corresponding sensorimotor representations and that is enriched with information on patterns among these
representations for forming conceptual structures; b) to explore the association of symbolic and sensorimotor
representations through cognitive and neurophysiological experiments and experimentation with a humanoid
robot as driving forces and implementation tools for the development of the PRAXICON, respectively. Work
will be guided by experiments in psychology and neuroscience, and employ cutting-edge equipment and
established cognitive protocols for collecting face and body movement measurements, visual object
information and associated linguistic descriptions from interacting human subjects.
Duration: 36 months (01/01/2008 - 31/12/2010)
Total EU funding: 3.250.000 EUR
Katerina Pastra
Yiannis Aloimonos
Aleš Leonardis
Heinrich Buelthoff
Giulio Sandini
Luciano Fadiga
FP7-ICT-214901-PROMETHEUS
PRediction and interpretatiOn of huMan behaviour based on probabilistic sTructures and HEterogeneoUs Sensors
Participants
Totalforsvarets Forskningsinstitut
University of Patras
Technische Universität München
Faculdade Ciencias e Tecnologia da Universidade de Coimbra
Probayes SAS
Marac Electronics, SA
Technological Educational Institute of Crete
Abstract
PROMETHEUS develops new ways for multimodal individual and collective person tracking and behavior
prediction in crowds, within complex indoor environments, using multiple heterogeneous sensors (cameras,
laser scanners, infrared sensors, microphone arrays, proximity detectors) and a statistical corpus-driven
approach. The proposed research will advance the state-of-the-art in sensor fusion as well as in computer
vision with respect to crowd density, occlusion, lighting, and movement speed. It is driven by various potential
applications, including unattended surveillance and intelligent space monitoring.
Duration: 36 months (01/01/2008 – 31/12/2010)
Total EU funding: 2.150.000 EUR
Jörgen Ahlberg
Nikolaos Fakotakis
Gerhard Rigoll
Jorge Manuel Miranda Dias
Emmanuel Mazer
Vasilios Leloudas
Ilias Potamitis
FP7-ICT-216240-REPLICATOR
Robotic Evolutionary Self-Programming and Self-Assembling Organisms
Participants
Universität Stuttgart
Universität Graz
Sheffield Hallam University
Universität Karlsruhe
Scuola Superiore di Studi Universitari e di Perfezionamento Sant'anna
Fraunhofer Gesellschaft zur Förderung der Angewandten Forschung EV
Institut Mikroelektronickych Aplikaci S.R.O.
Ubisense Limited
Almende BV
Ceske Vysoke Uceni Technicke V Praze
Abstract
The main goal of the REPLICATOR project is to develop novel principles underlying robotic systems that consist
of a super-large-scale swarm of small autonomous mobile micro-robots that are capable of self-assembling into
self-sustaining, self-adjusting and self-learning large artificial organisms. Ultimately, these adaptive, robust,
and scalable robotic organisms, endowed with rich sensing and actuating capabilities, will be used to build
sensor networks operating autonomously in open-ended environments. The overall approach draws on
evolutionary strategies for the development of appropriate functionalities and hardware structures.
Duration: 60 months (01/03/2008 - 28/02/2013)
Total EU funding: 5.414.052 EUR
Serge Kernbach
Thomas Schmickl
Fabio Caparrelli
Marc Szymanski
Paolo Dario
Thomas Velten
Tomas Trpisovsky
David Theriault
Alfons Hermanus Salden
Libor Preucil
FP7-ICT-215190-ROBOCAST
ROBOt and sensors integration as guidance for enhanced Computer Assisted Surgery and Therapy
Participants
Politecnico di Milano
Azienda Ospedaliera di Verona
Università degli Studi di Siena
Imperial College of Science, Technology and Medicine
Prosurgics Limited
The Hebrew University of Jerusalem
Technion - Israel Institute of Technology
Mazor Surgical Technologies Ltd
Technische Universität München
Universität Karlsruhe
Consulting Finanziamenti Unione Europea S.R.L
Abstract
Robocast aims to develop an innovative and cost-effective system for aiding surgeons in keyhole neurosurgery.
This modular system, allowing a reduced footprint, will be developed with two robots and one active biomimetic probe, able to cooperate among themselves in a biomimetic sensory-motor integrated framework. A
gross positioning 3-axes robot will support a miniature parallel robot holding the probe to be introduced
through a “keyhole” opening into the skull of the patient. Optical trackers, an imaging endoscope camera, and
electromagnetic position and force sensors will extend robot perception by providing the control system with
position and force feedback from the operating tools, and with visual information of the surgical field. It will
have an intuitive haptic interface allowing surgeons to receive maximum feedback data with minimum extra
effort on their side. The system will also be endowed with learning and interactive plan updating capabilities,
based on a “risk atlas” reproducing a fuzzy representation of a brain atlas, and on context-based
interpretation of surgeon commands.
Duration: 36 months (01/01/2008 – 31/12/2010)
Total EU funding: 3.450.000 EUR
Giancarlo Ferrigno
Roberto Israel Foroni
Domenico Prattichizzo
Ferdinando Rodriguez Y Baena
Patrick Finlay
Leo Joskowicz
Moshe Shoham
Moshe Shoham
Nassir Navab
Joerg Raczkowsky
Carla Finocchiaro
FP7-ICT-21612-ROSSI
Emergence of communication in RObots through Sensorimotor and Social Interaction
Participants
Alma Mater Studiorum - Università di Bologna
Università degli Studi di Parma
Universität zu Lübeck
Högskolan I Skövde
Middle East Technical University
Aberystwyth University
Anna Maria Borghi
Giovanni Buccino
Ferdinand Binkofski
Tom Ziemke
Erol Sahin
Mark Lee
Abstract
ROSSI aims at building robots endowed with sensorimotor and neural/computational mechanisms that allow
them to: (a) flexibly manipulate and use objects in the environment, (b) use a simple form of language, i.e.
nouns and verbs referring to objects and object-oriented actions, (c) use such concepts and verbal labels in
social interaction with humans. Control mechanisms for these robots will be based on insights into the neural
mechanisms underlying human concepts and language. Computational modelling of such mechanisms (in
particular, canonical neurons and mirror neurons) will provide novel approaches to the grounding of robotic
conceptualization and language. The project thus also contributes to a better understanding of the grounding
of human conceptualization and language.
Duration: 36 months (01/03/2008 - 28/02/2011)
Total EU funding: 2.800.000 EUR
FP7-ICT-216465-SCOVIS
Self-configurable COgnitive VIdeo Supervision
Participants
Institute of Communication and Computer Systems/National Technical University of Athens
University of Southampton
Joanneum Research Forschungsgesellschaft mbH
Eidgenössische Technische Hochschule Zürich
Atos Origin Sociedad Anonima Espanola
Katholieke Universiteit Leuven
Suinsa Medical Systems SA
Theodora A. Varvarigou
Matthew Addis
Georg Thallinger
Bastian Leibe
Santiago Ristol
Jos Dumortier
Oscar Gómez
Abstract
SCOVIS investigates weakly supervised learning algorithms and self-adaptation strategies for obtaining and
analysing video imagery from surveillance cameras. The project takes a synergistic approach, combining
largely unsupervised learning and model evolution in a bootstrapping process. Through self-configuration and
relevance feedback procedures, it aims to greatly simplify the deployment and operation of monitoring
systems, significantly reducing the user interaction required by current methods. The expected results will
measurably improve the versatility and performance of future monitoring systems. Tests will be undertaken in
public and industrial environments, and privacy will be strictly respected.
Duration
Total EU funding
36 months (01/03/2008 - 28/02/2011)
2.750.000 EUR
FP7-ICT-21586-SEARISE
Smart Eyes: Attending and Recognizing Instances of Salient Events
Participants
Fraunhofer Gesellschaft zur Förderung der Angewandten Forschung EV
Università degli Studi di Genova
Universität Ulm
Institut National de Recherche en Informatique et en Automatique
University of Wales, Bangor
Trackmen Limited
Düsseldorf Congress Veranstaltungsgesellschaft mbH
Abstract
The SEARISE project will develop a trinocular active cognitive vision system, the Smart-Eyes, for detection,
tracking and categorization of salient events and behaviours. Unlike other approaches in video surveillance,
the system will have the human-like capability to learn continuously from the visual input, self-adjust to
ever-changing visual environments, fixate salient events and follow their motion, and categorize salient events
depending on the context. Inspired by the human visual system, a cyclopean camera will perform wide-range
monitoring of the visual field while active binocular stereo cameras will fixate and track salient objects,
mimicking a focus of attention that switches between different locations of interest. The core of this artificial
cognitive visual system will be a dynamic hierarchical neural architecture – a computational model of visual
processing in the brain. Smart-Eyes will be tested in real-life scenarios for observation of large crowded public
spaces and of individual activities within restricted areas.
Duration
Total EU funding
36 months (01/03/2008 - 28/02/2011)
2.150.000 EUR
Marina Kolesnik
Silvio P. Sabatini
Heiko Neumann
Pierre Kornprobst
Martin Giese
Wolfgang Vonolfen
Heiko Müller
FP7-ICT-211846-SEMAINE
Sustained Emotionally coloured Machine-human Interaction using Nonverbal Expression
Participants
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH
The Queen's University of Belfast
Imperial College of Science, Technology and Medicine
Universiteit Twente
Université Paris VIII
Technische Universität München
Abstract
The aim of the SEMAINE project is to build a Sensitive Artificial Listener – a multimodal dialogue system with
the social interaction skills needed for a sustained conversation with a human user. Research undertaken in
SEMAINE contributes to making artificial systems interact more naturally with human users. Operating in real time, the system perceives a human user's facial expression, gaze, and voice, and engages with the user
through an Embodied Conversational Agent's body, face and voice. The agent will exhibit audiovisual listener
feedback while the user is speaking, and will take the user's feedback into account while the agent is
speaking. The agent will pursue different dialogue strategies depending on the user's state; it will learn to
interpret the user's non-verbal behaviour and adapt its own behaviour accordingly. Data to train system
components will be collected initially using a Wizard-of-Oz setup, later on using the autonomous system at
increasing levels of maturity. Some of the data will be released to the research community.
Duration
Total EU funding
36 months (01/01/2008 - 31/12/2010)
2.750.000 EUR
Marc Schröder
Roderick Cowie
Maja Pantic
Dirk K.J. Heylen
Catherine Pelachaud
Björn Schuller
FP7-ICT-217148-SF
Synthetic Forager
Participants
Universitat Pompeu Fabra
Tel Aviv University
Consorci Institut d'Investigacions Biomediques August Pi I Sunyer
Universiteit van Amsterdam
Universität Osnabrück
Guger Technologies OEG
Robosoft SA
Abstract
The Synthetic Forager project seeks to identify the neuronal, cognitive and behavioral principles underlying
optimal foraging in rodents and to implement these principles in a real-world foraging artefact equipped with
visual, auditory, olfactory and tactile sensors (Synthetic Forager, SF). The theoretical underpinning includes
statistical analysis methods and game theory. The Distributed Adaptive Control architecture will be the
integration framework. The SF will be evaluated in a number of benchmarks ranging from robot equivalents of
rodent foraging tasks to simulated de-mining. Other potential applications of the technologies to be developed
include: service robotics, search and rescue, terrestrial and planetary exploration, delivery systems,
autonomous transportation, environmental monitoring, and Internet information analysis and retrieval.
Duration
Total EU funding
36 months (15/01/2008 - 31/12/2010)
2.750.000 EUR
Paul F.M.J. Verschure
Matti Mintz
María Victoria Sánchez-Vives
Cyriel Pennartz
Peter König
Christoph Guger
Joseph Canou
FP7-ICT-216227-SPARK II
Spatial Temporal Patterns for Action-Oriented Perception in Roving Robots II: An Insect Brain
Computational Model
Participants
Università degli Studi di Catania
Universidad Complutense de Madrid
Johannes Gutenberg-Universität Mainz
Innovaciones Microelectronicas SL
Abstract
SPARK II will develop, evaluate, optimise and generalise a new, insect brain inspired, computational model.
The architecture will be hierarchical, based on parallel sensory-motor pathways, implementing reflex-driven
basic behaviours. These will be enriched with higher and complex insect brain structural models and more
physically-inspired nonlinear lattices. The latter will be able to generate “self-organizing” complex dynamics,
while the former will reproduce relevant cognitive functions in insects such as attention-like processes, short-term memory and reward mechanisms. Both kinds of mechanism will work concurrently to generate cognitive
behaviours at the output motor layer. The model will be applied to different robotics architectures, deployed
in unstructured, cluttered and dynamically changing real-life environments, as well as to small robot swarms,
leading to the emergence of cooperation among robots on tasks a single robot cannot carry out.
Duration
Total EU funding
36 months (01/02/2008 - 31/01/2011)
1.000.000 EUR
Paolo Arena
Manuel Ga. Velarde
Roland H. Strauss
Ángel Rodríguez-Vázquez