Self in, through and thanks to Artifacts

AAAI 2009 Fall Symposium Series
Arlington, Virginia — November 5–7, 2009
Special Session:
Self in, through and thanks to Artifacts
Chair: Kamilla Johannsdottir
Agenda
2:00 – 2:05
Kamilla Johannsdottir: Introduction to the Session on Self in Artifacts
2:05 – 2:45
Short presentations (Gromala, Song, Schmidt, Fedorovskaya):
2:05 – 2:15
Diane Gromala, Chris Shaw, & Meehae Song. Chronic Pain and the Modulation
of Self in Immersive Virtual Reality.
2:15 – 2:25
Carman Neustaedter & Elena A. Fedorovskaya. Avatar Appearances and
Representation of Self: Learning from Second Life.
2:25 – 2:35
Colin T. Schmidt, Olivier Nannipieri & Simon Richir. Identity and Virtual
Reality: a Uchronic Approach.
2:35 – 2:45
Meehae Song & Diane Gromala. Ecstasis: Walking to Mindful Awareness.
2:45 – 3:30
Panel discussion (chairs: Johannsdottir, Schmidt, and Samsonovich)
Panelists: Colin Schmidt (moderator), Igor Aleksander, Bernard Baars, Diane
Gromala, Owen Holland, Ian Horswill, Cindy Mason
Blurb
Despite recent demos of artifacts that formally pass tests for higher cognitive functions (e.g.,
Breazeal et al., 2006; Haikonen, 2007), to date no AI research has produced an intelligent system
capable of human-like cognitive development from a baby to an adult. Could it be that the
missing key element is related to the notions of self and self-awareness? That is, do we need
artifacts to have a sense of self, and if so, what exactly does an artificial selfhood imply?
The concept of self and self-awareness has been used loosely within AI research (e.g., Agarwal,
2009), referring to any recurrent feature, or the personal pronoun ‘I’, or the notion of Agent A
(named “me”, “R2”, etc.) vs. Agent B, or the robot’s body, or the running software, or the set of
variables under homeostatic control by the system. This use of the concept of self in AI research
does not necessarily capture the various levels of phenomenal experiences and cognitive
processes identified in the human sense of self. For example, an important distinction is made in
the literature between the narrative self (largely reflective and conscious) and the minimal self,
normally considered pre-reflective and involving both conscious and non-conscious processes
(see Gallagher, 2000 and Gallagher, in press). The two important aspects of the minimal self are
self-agency, i.e., the notion that I am the cause of my action, and self-ownership, i.e., the notion
that it is I (my body) that is set in motion by my volition to act.
Closely related to these aspects are the notions of subjectivity, self-attribution of mental states,
and first-person subjective experience, or consciousness. The subject-self (including the minimal
self and the conscious self: Samsonovich & Nadel, 2005) seems, from educational and
developmental perspectives, to be a key to the development of human intelligence (specifically,
self-regulated learning: Zimmerman, 1990; Winne & Perry, 2000; the cognitive leap: Moore &
Lemmon, 2001). It is also an aspect that becomes brittle in human-computer interactions (e.g.,
breaks of presence in VR: Slater & Usoh, 1994).
The objective of the panel discussion is to identify the meaning of the self-concept and the related
capabilities that are critical for achieving human-like cognitive development in artifacts. How
closely does artificial selfhood need to match up to human selfhood, and what are the potential
(moral, ethical, social) implications of implementing a sense of self as a part of the “critical
mass” of a human-level learner?
Key Questions
A.
• How is the meaning of the concept of self relevant to artifacts?
• How critical is this notion of Self to learning and intelligence in humans and in artifacts?
• Should there be an artificial version of selfhood, and if so, how closely does it have to
resemble human selfhood, and why?
• How much of human-like Self and self-awareness do we need for human-level learning
and cognitive development to take place?
B.
• Is there an aspect of self, vital for intelligence, that lends itself more easily than other
aspects to being artificially reproduced?
• Is it easy (trivial, impossible, nonsensical, etc.) to achieve subjective first-person
experience in artifacts?
• What is the relationship between the different aspects of self (e.g., the minimal self and
the narrative self), and how important is it to understand this relationship in order to
implement artificial selfhood?
C.
• If one aspect of the human self is recreated artificially, does the artifact automatically
acquire another aspect, e.g., become self-aware in the human sense?
• How important are communication with, and awareness of, others for a sense of Self in
artifacts?
• At what point in recreating self in artifacts do we raise moral or ethical problems?
• What would be the moral, ethical, and social impact of self-aware artifacts on human
society?
Format
The above questions, divided into three groups, will be presented to the panelists one group
at a time. Accordingly, there will be three rounds of answers by the panelists. Each panelist
will be allowed to speak for up to 2 minutes in each round (it is expected that most answers
will be shorter than 2 minutes). A final 10-minute discussion will then follow, during which
the audience will be allowed to make comments and to ask the panelists further questions.
Panelists are encouraged to prepare answers in advance following these constraints. If necessary,
a panelist can email the panel organizers a slide or two to be displayed on the screen by the
moderator during the discussion.
Venue
The Special Session “Self in, through and thanks to Artifacts” will be held on Thursday,
November 5, 2009, from 2:00 pm to 3:30 pm, as part of the AAAI 2009 Fall Symposium on
Biologically Inspired Cognitive Architectures (BICA-2009), which is to be held as a part of the
AAAI 2009 Fall Symposium Series on November 5-7, 2009, at the Westin Arlington Gateway
Hotel, 801 North Glebe Road, Arlington, Virginia 22203, USA (adjacent to Washington, DC).
Selected References
Agarwal, A., & Harrod, B. (2009). Self Aware Organic Computing. MIT CSAIL & DARPA
IPTO. http://www.darpa.mil/ipto/personnel/docs/Self_Aware_Organic_Computing.pdf
Ascoli, G. A., & Samsonovich, A. V. (2008). Science of the conscious mind. The Biological
Bulletin 215 (3): 204-215.
Breazeal, C., Berlin, M., Brooks, A., Gray, J., & Thomaz, A.L. (2006). Using perspective taking
to learn from ambiguous demonstrations. Robotics and Autonomous Systems (RAS) Special
Issue on The Social Mechanisms of Robot Programming by Demonstration, 54 (5): 385-393.
Damasio, A.R. (1999). The Feeling of What Happens. New York: Harcourt.
Gallagher, S. (in press). Multiple aspects of agency. New Ideas in Psychology.
Gallagher, S. (2000). Philosophical conceptions of the self: implications for cognitive science.
Trends in Cognitive Science 4: 14-21.
Haikonen, P. O. A. (2007). Reflections of Consciousness: The Mirror Test. In Chella, A., &
Manzotti, R. (Eds.). AI and Consciousness: Theoretical Foundations and Current
Approaches: Papers from the AAAI Fall Symposium, AAAI Technical Reports FS-07-01, pp.
67-71. Menlo Park, CA: AAAI Press, http://www.aaai.org/Papers/Symposia/Fall/2007/FS07-01/FS07-01-012.pdf
Moore, C., & Lemmon, K. (Eds.) (2001). The Self in Time: Developmental Perspectives.
Mahwah, NJ: Lawrence Erlbaum Associates.
Samsonovich, A. V. (in press). Is it time for a new cognitive revolution? Commentary to Aaron
Sloman. International Journal of Machine Consciousness 1 (3).
Samsonovich, A. V. & Ascoli, G. A. (2005). The conscious self: Ontology, epistemology and the
mirror quest. Cortex 41 (5): 621–636.
Samsonovich, A. V., De Jong, K. A., & Kitsantas, A. (2009). The mental state formalism of
GMU-BICA. International Journal of Machine Consciousness 1 (1): 111-130.
Samsonovich, A.V. & Nadel, L. (2005). Fundamental principles and mechanisms of the
conscious self. Cortex 41 (5): 669–689.
Slater, M., & Usoh, M. (1994). Depth of presence in virtual environments. Presence:
Teleoperators and Virtual Environments 3 (2): 130–144.
Sloman, A. (2008). “The Self”: A bogus concept. Manuscript published online at
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/the-self.html
Sloman, A. (in press). An alternative to working on machine consciousness. International
Journal of Machine Consciousness, 1 (3).
Schmidt, C. T. A. (2006). A Relational Stance in the Philosophy of Artificial Intelligence. In L.
Magnani (ed.). Computing and Philosophy, Associated International Academic Publishers,
Pavia (Italy).
Schmidt, C. T. A. (1997). The Systemics of Dialogism: On the Prevalence of the Self in HCI
Design. Special Topic Issue on Human Computer Interface. Journal of the American Society
for Information Science, 48, 1073-1081.
Winne, P.H. & Perry, N.E. (2000). Measuring self-regulated learning. In P. Pintrich, M.
Boekaerts, & M. Seidner (Eds.), Handbook of Self-Regulation (pp. 531-566). Orlando, FL:
Academic Press.
Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview.
Educational Psychologist 25: 3-17.