Talk abstract
The Microscopic Representation method
strives to explain complex macroscopic phenomena
in terms of the emergent properties of collective
objects composed of many simple interacting agents.
This method has been applied to a wide range of systems, first in the physical sciences but increasingly in biological, cognitive, and social systems.
I will discuss the conceptual basis of the Microscopic Representation paradigm, along with examples from visual perception, cognitive development, cognitive immunology, artificial creativity, and novelty propagation.
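The abstract does not specify any particular model; as a generic illustration of how macroscopic structure can emerge from many simple interacting agents, here is a minimal majority-rule model on a ring (all names and parameters are hypothetical, not the speaker's actual system):

```python
def majority_step(opinions):
    """One synchronous update: each agent adopts the majority opinion
    among itself and its two ring neighbours (+1 / -1 opinions)."""
    n = len(opinions)
    return [1 if opinions[(i - 1) % n] + opinions[i] + opinions[(i + 1) % n] > 0
            else -1
            for i in range(n)]

def run_to_fixed_point(opinions, max_steps=100):
    """Iterate the local rule until the collective state stops changing."""
    for _ in range(max_steps):
        nxt = majority_step(opinions)
        if nxt == opinions:
            return nxt
        opinions = nxt
    return opinions

# a mixed initial population organises into stable macroscopic domains
final = run_to_fixed_point([1, 1, 1, -1, 1, 1, -1, -1, -1, 1])
```

No agent encodes the notion of a "domain", yet stable domains are the macroscopic outcome — the kind of micro-to-macro explanation the method aims at.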
Talk abstract
In the context of human cognition, implementing "artificial systems" as explicit
physical models of biological ones means implementing humanoid systems. Not with
the aim of building an "artificial human being" or more efficient robots but to
test assumptions and hypotheses more explicitly. In particular, the use of humanoids
as tools to understand human cognition is focused, in our lab, on trying to explain
how adaptation develops through interaction with the external environment. Our
reference framework is human sensorimotor and cognitive development and we approach
the problems by trying to implement motor and cognitive abilities in an artificial
system. Is it possible to "program" a system to "have cognition" as we program a
robot to assemble a car? Is cognition similar to motor control and sensorimotor
coordination? Do we know enough about our own cognitive abilities to transfer them
into an artificial being? How do we interact safely and "intelligently" with other
humans (and machines)? How do we predict the effects of our actions? How do we
adapt our behavior to unpredictable situations? How do we anticipate what other
humans are doing? Can all (or even some) of these abilities be hard coded into
a humanoid robot? Looking at natural systems, it seems that pre-coding cognitive
and adaptive behaviors is not possible: adaptive behaviors cannot be pre-programmed.
In this talk I will claim that if future robots are to have cognitive abilities,
they will have to go through developmental phases similar to those found in human
babies. I will do that from a multidisciplinary perspective by presenting findings
derived from studies of human motor and cognitive development as well as a robotic
implementation of the first few months of "existence" of a robot cub (Babybot). In
doing so I will stress the consequences that this multidisciplinary approach has in
discovering new technologies and the relevance that robotics research will continue
to have as a research tool to understand human cognition.
Acknowledgements: Research described here is supported by the EU projects COGVIS
(IST-2000-29375) and MIRROR (IST-2000-28159) and by the Italian Space Agency.
Speaker biography for Giulio Sandini
Prof. Giulio Sandini is a full professor at the Faculty of Engineering of the
University of Genova and founder of the LIRA-Lab (Laboratory for Integrated
Advanced Robotics). The leading theme of his research activity has been visual perception and
sensorimotor coordination from a biological and an artificial perspective.
Talk abstract
In this presentation, I shall address just one facet of this
intriguing problem. Through the study of typically developing infants and
children with autism, we may come to appreciate the significance of one
component of the developmental process: the ability for an infant to
identify (often through feelings) with the subjective orientation of other
people, both in one-on-one interactions and in relation to a shared external
world. I shall present some evidence that bears upon this issue.
Speaker biography for Peter Hobson
Prof. Hobson is a psychiatrist, has a PhD in experimental psychology from
the University of Cambridge, and is a psychoanalyst. His principal
research interests are autism, early child development and adult personality
disorder. His recent book, The Cradle of Thought (Macmillan, 2002), attempts
to integrate these perspectives in an account of the development of symbolic
thinking.
Speaker biography for Yiannis Aloimonos
The address http://www.cfar.umd.edu/~yiannis/research.html contains a description
of his research, including a Socratic dialogue written at the level of a Scientific
American article. His next monograph with C. Fermuller, entitled
Visual Space-Time Geometry: A Geometry of the Mind, is expected at the end of the
year.
Talk abstract
This talk introduces the concepts of
dynamic field theory, in which simple cognitive properties such as
decision making and working memory emerge from a mathematical
description of behavior that remains close to sensori-motor processes.
The mathematical framework is provided by dynamical systems theory.
Autonomous robots designed in terms of these concepts will be used to
exemplify the approach. I will then illustrate how these ideas can be used
to analyze human behavior, drawing on examples from motor control and
the development of action planning.
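The abstract gives no equations; dynamic field theory is commonly built on Amari-type neural field dynamics, so the following sketch integrates a one-dimensional field with local excitation and broader inhibition (all parameter values here are made-up toy choices, not the speaker's). A transient localized input creates an activation peak that sustains itself after the input is removed — a minimal "decision plus working memory" effect:

```python
import math

def simulate_field(n=61, steps=400, dt=1.0, tau=10.0, h=-5.0, input_site=20):
    """Euler-integrate a 1-D Amari-style dynamic field (toy parameters).
    A localized stimulus is applied for the first half of the run only."""
    f = lambda x: 1.0 / (1.0 + math.exp(-x))       # sigmoidal rate function
    # lateral interaction kernel: narrow excitation minus broad inhibition
    w = [8.0 * math.exp(-d * d / 8.0) - 2.0 * math.exp(-d * d / 128.0)
         for d in range(n)]
    u = [h] * n                                    # field starts at resting level
    for t in range(steps):
        stim_on = t < steps // 2                   # stimulus off in second half
        act = [f(ui) for ui in u]
        u = [ui + (dt / tau) * (-ui + h
                 + (6.0 if stim_on and abs(i - input_site) <= 2 else 0.0)
                 + sum(w[abs(i - j)] * act[j] for j in range(n)))
             for i, ui in enumerate(u)]
    return u

u = simulate_field()
# the activation peak at the stimulated site outlives the stimulus:
# the field "remembers" the decision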
Talk abstract
Learning through imitation is a powerful and versatile method for
acquiring new behaviors. In humans, a wide range of behaviors, from styles
of social interaction to tool-use, are passed from one generation to another
through imitative learning. Although imitation evolved through Darwinian means,
it achieves Lamarckian ends: it is a mechanism for the `inheritance' of acquired
characteristics. Unlike conventional trial-and-error-based learning methods such as
reinforcement learning, imitation leads to rapid learning. This potential for
rapid behavior acquisition through demonstration has made imitation learning an
increasingly attractive alternative to programming robots.
In this talk, we review recent results on how infants learn through imitation.
These results suggest a four stage progression of imitative abilities: (i) body
babbling, (ii) imitation of body movements, (iii) imitation of actions on objects,
and (iv) imitation based on inferring intentions of others. We formalize these four
stages within a probabilistic framework for learning and inference. The framework
acknowledges the role of internal models in sensorimotor control and is inspired
by recent ideas from machine learning on Bayesian inference in graphical models. We
discuss two main advantages of the probabilistic approach: (a) the development of
new algorithms for robotic imitation learning in noisy and uncertain environments,
and (b) the potential for using Bayesian methodologies (such as manipulation of
prior probabilities) and robotic technologies to obtain a deeper understanding
of imitation learning in human infants.
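The abstract describes the probabilistic framework only at a high level. As one illustrative ingredient of such an approach, the sketch below performs Bayesian inference over a demonstrator's intended goal from noisy one-dimensional observations of a movement; the candidate goals, the Gaussian observation model, and the parameter values are all assumptions for the example, not the authors' actual model:

```python
import math

def infer_goal(observations, goals, prior=None, sigma=1.0):
    """Posterior over candidate goals given noisy 1-D observations of a
    demonstration, with a Gaussian observation model (toy example)."""
    if prior is None:
        prior = [1.0 / len(goals)] * len(goals)    # uniform prior by default
    log_post = [math.log(p) + sum(-(x - g) ** 2 / (2.0 * sigma ** 2)
                                  for x in observations)
                for g, p in zip(goals, prior)]
    m = max(log_post)                              # stabilised normalisation
    unnorm = [math.exp(lp - m) for lp in log_post]
    z = sum(unnorm)
    return [p / z for p in unnorm]

# three noisy observations near position 1.0; three candidate goals
post = infer_goal([0.9, 1.2, 1.1], goals=[0.0, 1.0, 2.0])
```

Manipulating the `prior` argument corresponds to advantage (b) above: shifting prior probabilities changes which intention the model infers from the same demonstration.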
Dr. Andrew N. Meltzoff is a Professor of Psychology at the University of Washington and
Co-Director of the UW Center for Mind, Brain and Learning. A graduate of Harvard
University, with a PhD from Oxford University, he is an internationally recognized
expert on infant and child development. Prof. Raj Rao is a prize-winning scientist
in the Computer Science and Engineering Department at the University of Washington.
Giulio Sandini: "Human Babies and Robot Cubs"
(G. Sandini, G. Metta, L. Natale, S. Rao, R. Manzotti)
Technology has always played a fundamental role in the study of brain functions
by providing new tools for the acquisition and analysis of biological data. However,
the increasingly complex picture of brain functions emerging from neuroscience
research is now posing a new challenge: how can we extend our knowledge beyond the
scope of specific experiments and methodologies? Is it possible to find new tools
enabling neuroscientists to verify new theories and to guide new experiments beyond
the now-established methods of mathematical modeling and systems theory? The
scientific goal of the LIRA-Lab is to investigate if the implementation of artificial
systems through physical models is a useful tool to help understanding complex brain
functions. We believe this is the case for essentially two reasons. The first stems
from the very high complexity and non-homogeneity of our current knowledge of brain
functions. The second is that the physical world (in a general sense) is far too
complicated to be "simulated" realistically, which prevents adequate testing of new
theories and ideas.
References
1. Sandini, G., G. Metta, and J. Konczak. Human Sensori-motor Development and
Artificial Systems. In International Symposium on Artificial Intelligence, Robotics
and Intellectual Human Activity Support (AIR&IHAS '97). 1997. RIKEN, Japan.
2. Metta, G., G. Sandini, and J. Konczak. A Developmental Approach to Visually-Guided
Reaching in Artificial Systems. Neural Networks, 1999. 12(10): p. 1413-1427.
3. Natale, L., S. Rao, and G. Sandini. Learning to Act on Objects. In 2nd Workshop on
Biologically Motivated Computer Vision. 2002. Tuebingen, Germany.
The research activity of LIRA-Lab is in the field of Computational Neuroscience and
Neuro-IT with the objective of understanding the neural mechanisms of human
sensorimotor coordination and cognitive development by realizing anthropomorphic
artificial systems such as humanoids (Project Babybot). With our baby humanoid
"Babybot" we have contributed to the study of the development of eye-movement
control, visuo-inertial integration, eye-head coordination, and visually guided
reaching.
Peter Hobson: "The interpersonal origins of thinking: How humans achieve what
computers (so far) haven't"
There is something remarkable that happens in the course of the
first two years of life: infants leave infancy behind, and become
participants in human culture. They not only begin to talk and to play
symbolically, but they also become able to think about things and events and
people, taking up this and then another subjective perspective on the
"objects" of thought. These accomplishments prompt us to ask: How on earth
do they (and how on earth did we) accomplish such a feat? And what would it
take for computers to cross the Rubicon into thought?
Peter Hobson is Tavistock Professor of Developmental Psychopathology in the
University of London, at the Tavistock Clinic and the Department of
Psychiatry and Behavioural Sciences, UCL.
Yiannis Aloimonos: "Visual space-time geometry: a geometry of
thought"
This talk advances a viewpoint that in general may be considered as a perceptual
theory of the mind. The basic thesis is that the content of the mind is organized in the
form of a model of the external world which contains objects, events (actions) and their
relationships.
That means that thinking is essentially a process of manipulating perceptual models
of the external world. Roughly speaking, a representation of an event, an act or an
object in our heads is a set of moving pictures, that is, video.
It is not a conventional
video but one that can be seen from any viewpoint, a sort of 3D video.
I will describe, in simple terms, a number of basic results that we
obtained over the past several years that make it possible to acquire
descriptions of the world using video cameras and computer power. Such
descriptions have many applications to today's technology, such as
virtual and augmented reality, teleimmersion, telepresence and the like.
In general, they constitute tools for both perception and imagination. The
availability of such models however makes it possible, for the first time
in the history of human knowledge, to work towards a computational theory
of the mind that is inherently perceptual. The second part of the talk
will provide steps towards such a theory and will concentrate on the
language problem. I will argue that a universal grammar is inherently
related to the geometric and statistical operators responsible for
segmentation in images. I will conclude with the outline of a research
program on the geometry of the mind that uses action as a quantum.
Yiannis Aloimonos studied Mathematics in Athens, Greece (Dipl. 1982) and Computer
Science at the University of Rochester, NY (PhD 1987). He is currently the
Director of the Computer Vision Laboratory at the Univ. of Maryland and a Professor
of Computational Vision at the Dept. of Computer Science. His major interest is the
relationship of action to intelligence. He is known for his work in Active Vision and
Motion Analysis. He has authored and coauthored several books including one on
Artificial Intelligence, with Tom Dean and James Allen.
Gregor Schöner: "Dynamic field theory and embodied cognition"
Overt behavior is always based on a direct link between sensory and motor
surfaces. Moreover, behavior is typically adjusted to the perceived
environment, a current "task" setting, and longer-term goals. These
two boundary conditions of behavior are emphasized in the "embodied
cognition" approach to cognition.
Gregor Schöner is Professor of Neuroinformatics, Chair for Theoretical
Biology at Institut für Neuroinformatik, Ruhr-Universität Bochum,
in Germany. He has a PhD in theoretical physics from the Universität
Stuttgart.