Friday 14 March 2008

Notes on "Embodied Artificial Intelligence: Trends and Challenges"

"Embodied Artificial Intelligence: Trends and Challenges" is a paper by R. Pfeifer and F. Iida published in 2004. Pfeifer and Iida discuss how and why embodied AI came about. Pfeifer and Iida also discuss the diverse approaches and the major goals of embodied AI.

The paper is available at Google Books and SpringerLink.

The Landscape

The paper begins with an introduction and an overview of the landscape of AI research. The landscape discussion begins with the successes and problems of the classical approach. I like that Pfeifer and Iida distinguish between AI and applied informatics, because applied informatics has such a grip on AI research, whereas I see AI as being about much more.

The following quotes are from the section "Problems of the Classical Approach."

"...the original intention of artificial intelligence was not only to develop clever algorithms, but also to understand natural forms of intelligence..."

"In the classical approach, common sense has been treated at the level of "semantic content" and has been taken to include knowledge such as "cars cannot become pregnant," "objects (normally) don't fly," "people have biological needs" (they get hungry and thirsty), etc. Building systems with this type of common-sense knowledge has been the goal of many classical natural language and problem solving systems like CYC (e.g. Lenat et al., 1986). [However], there is an important additional aspect of commonsense knowledge, which has to do with bodily sensations and feelings and this aspect has its origins in our embodiment. Take for example, the word "drinking" and freely associate what comes to mind. Perhaps being thirsty, liquid, cool drink, beer, and hot sunshine, the feeling of wetness in your mouth, on the lips and on your tongue when you are drinking, and the feeling of thirst disappearing as you drink, etc. It is this kind of common sense knowledge and common experience that everyone shares and that forms the basis of robust natural language communication, and [we] firmly [ground] it in our own specific embodiment. And to our knowledge, there are currently no artificial systems, capable of dealing with this kind of knowledge in a flexible and adaptive way."

The next section discusses embodied AI and its different forms.

I took away three points from the section on embodied AI. Firstly, Brooks, 1991, was one of the first papers to promote embodied AI. Secondly, computers and robots have important differences that affect AI research. Thirdly, in the context of learning to move, neural networks that exploit their own dynamical properties outperform static feed-forward networks.
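To make that third point concrete, here is a minimal sketch of a continuous-time recurrent neural network (CTRNN) of the sort often used for locomotion controllers. The two-neuron example and all parameter values are my own assumptions, not from the paper; the point is only that the neuron states evolve over time, so the network can produce rhythmic output under constant input, which a static feed-forward network cannot do.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class CTRNN:
    """Euler-integrated continuous-time recurrent neural network."""
    def __init__(self, weights, biases, taus):
        self.w = weights              # w[i][j]: connection from neuron j to neuron i
        self.b = biases               # per-neuron bias
        self.tau = taus               # per-neuron time constant
        self.y = [0.0] * len(biases)  # internal neuron states

    def step(self, inputs, dt=0.01):
        out = [sigmoid(y + b) for y, b in zip(self.y, self.b)]
        for i in range(len(self.y)):
            drive = inputs[i] + sum(self.w[i][j] * out[j] for j in range(len(self.y)))
            self.y[i] += dt * (drive - self.y[i]) / self.tau[i]
        return [sigmoid(y + b) for y, b in zip(self.y, self.b)]

# With suitable cross-coupled weights, even two neurons can oscillate and drive
# a rhythmic gait; the numbers below are illustrative, not tuned.
net = CTRNN(weights=[[4.5, -2.0], [2.0, 4.5]],
            biases=[-2.75, -1.75],
            taus=[0.5, 0.5])
for _ in range(1000):
    motor_commands = net.step(inputs=[0.0, 0.0])

A feed-forward network would map the same input to the same output every time; the internal state and time constants are what make the difference for movement.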

The next section discusses forms of embodied AI research.

"...researchers working in the embodied approach no longer referred to themselves as working in artificial intelligence but more in robotics, engineering of adaptive systems, artificial life, adaptive locomotion, bio-inspired systems and neuroinformatics."

The following are conferences where embodied AI researchers present their work.

· Intelligent Autonomous Systems (IAS)

· Simulation of Adaptive Behaviour - From Animals to Animats (SAB)

· International Conference on Intelligent Robots and Systems (IROS)

· Adaptive Motion in Animals and Machines (AMAM)

· European Conference on Artificial Life (ECAL)

· Artificial Life Conference (ALIFE)

· Artificial Life and Robotics (AROB)

· Evolutionary Robotics (ER)

· Various other IEEE conferences

The following conferences did not traditionally focus on embodied AI, but they have started to.

· International Joint Conference on Artificial Intelligence (IJCAI)

· European Conference on Artificial Intelligence (ECAI)

The paper then discusses biorobotics, developmental robotics, ubiquitous computing, artificial life and multi-agent systems and then summarises.

In the section on "Biorobotics", Pfeifer notes a view put forth by R. Brooks. The view concerns how low level behaviour is harder than high-level behaviour because it took evolution longer to develop. It is unclear what Brooks meant by harder, a closer review of Brooks' paper would be a good idea. The point I disagree with is that the longer evolution works, the faster the momentum of adaptation so that high level behaviour is just as difficult as low level behaviour if not more so.

In the section on "Developmental Robotics", I took away two points. Firstly, Brooks who is one of the pioneers in the direction of evolutionary communication, changes topics a bit too often for my liking. However, roughly ten years is a long time. Secondly, the following are conferences on the area of "Developmental Robotics."

· Emergence and Development of Embodied Cognition (EDEC)

· Epigenetic Robotics

· Development of Embodied Cognition (DECO)

· International Conference on Development and Learning (ICDL)

In the section on "Ubiquitous Computing," Pfeifer and Iida refer to the paper Pfeifer and Scheier, 1999, which looks like a good reference. The paper, Pfeifer and Scheier, 1999, is about design principles for embodied AI systems.

In the section on "Artificial Life and Multi-agent Systems," I took away a number of interesting points.

Firstly, artificial life researchers prefer the phrase "complex dynamical systems" over "multi-agent systems."

Secondly, artificial life researchers may focus on systems that have "agent character" and on systems that do not. "Agent character" refers to systems that have the capacity for self-direction.

Thirdly, Epstein and Axtell, 1996, a work on agent-based economics, is interesting because economics relates to communication.

Fourthly, some multi-agent researchers focus on centralised control, which is more of a classical approach, and some focus on emergence, which is what I am interested in.

Fifthly, the emergence of coordinated movement and robot soccer are still big challenges.

Lastly, in embodied AI, few researchers focus on thinking, reasoning and language. The exception is those working on the evolution of communication and language. One of the most interesting works is the "Talking Heads experiment" described in Steels, 2001, 2003.

Luc Steels "has also been working on emergence of syntax but in these experiments many assumptions have to be made to bootstrap the process."

"Although fascinating and highly promising, the jury is still out on whether this approach will indeed lead to something resembling human natural language."

That last comment by Pfeifer and Iida scares me and challenges me.
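To get a feel for the mechanism, here is a toy "naming game" in the spirit of Steels' language games. It is my own simplification under assumed rules (single words, and the hearer always adopts the speaker's word after a failed game), not the actual Talking Heads setup.

import random

OBJECTS = ["red_ball", "blue_box", "green_cone"]
N_AGENTS = 20
N_ROUNDS = 5000

def invent_name():
    return "".join(random.choice("aeioubdklmnst") for _ in range(4))

# Each agent is a dictionary mapping object -> the name it currently uses.
agents = [dict() for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    speaker, hearer = random.sample(agents, 2)
    topic = random.choice(OBJECTS)
    if topic not in speaker:
        speaker[topic] = invent_name()   # speaker invents a word if it has none
    name = speaker[topic]
    if hearer.get(topic) != name:
        hearer[topic] = name             # failed game: hearer adopts the word

# Measure how far the population has converged on shared names.
for obj in OBJECTS:
    names = [a[obj] for a in agents if obj in a]
    winner = max(set(names), key=names.count) if names else None
    print(obj, winner, names.count(winner), "of", N_AGENTS)

Even with rules this crude, repeated pairwise games push the population towards a shared vocabulary; the interesting questions are about grounding those words in perception, which the toy version leaves out entirely.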

In the summary section, the following quote was of interest to me.

"In spite of the multifaceted nature, there is a unifying principle and that is the actual agent be designed in the context of the synthetic methodology, be it physical in the real world, or simulated in a realistic physics-based simulation."

The quote suggests that simulations, especially those with strong physics, have research potential alongside actual robots.

In the last part of the paper, Pfeifer and Iida distinguish between three time scales and identify four big challenges for the embodied AI field.

State-of-the-Art and Challenges

The three time scales are "here and now," developmental and evolutionary.

The grand challenges are theoretical understanding of behaviour, achieving higher-level intelligence, automated design methods and moving into the real world.

The following points are from the section on "Theoretical Understanding of Behaviour."

Firstly, artificial systems still cannot compete with human perceptual and motor systems.

Secondly, one aspect of ecological balance is the principle of matching between sensory, motor and control systems.

I challenge the way researchers are currently using neural networks.

Thirdly, the additional factors that are part of embodiment actually simplify development. An example is Ballard, 1991.

Fourthly, a good idea is to analyse the effect of the design principles quantitatively. Lichtensteiger, 2004, is a good example, as is Gaussier et al., 2004. Lichtensteiger, 2004, demonstrated how to measure quantitatively the effect of the pre-processing function performed by the morphological arrangement of facets in an insect or robot eye, and how a particular arrangement influences learning speed. Gaussier et al., 2004, developed a formalism based on the idea of perception-action coupling in autonomous agents. "They apply the formalism to demonstrate how facial expressions can be learned and that there is no need to postulate innate mechanisms." There are dangers, however, in approaching research quantitatively and in developing formalisms.

Fifthly, haptics plays a big role in all this. I think researchers can try a lot of it out in simulations.

The following points are from the section on "Achieving Higher Level Intelligence."

Firstly, Thelen and Smith, 1994, hold that in natural systems there is an intrinsic intertwining between brains and bodies. One cannot clearly separate them.

Secondly, manually designed agents have had only limited success.

Thirdly, the symbol grounding problem is as follows. "...How is it possible that humans can behave in ways that it makes sense to describe their behaviour as "symbolic," irrespective of the underlying mechanisms...?" How can organisms acquire meaning, how can they learn about the real world and how can they combine what they learned to generate symbolic behaviour?

Fourthly, the consensus is that learning will contribute towards a solution. Dreyfus, 1992, discusses what else embodied AI requires. One idea is to incorporate concepts about growth and maturation. Lungarella and Pfeifer, 2001, and Sporns and Pegors, 2004, describe how correlations within and between sensory channels greatly facilitate the problem of learning.
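A toy illustration of that last point (my own, not from the cited papers): when two sensory channels are correlated, one can be predicted from the other, so a learner gets structure for free. The single-weight delta-rule predictor below reaches a much lower error as the correlation between channels increases.

import random

def make_sample(correlation):
    # channel_a is the "true" signal; channel_b mixes it with independent noise.
    a = random.gauss(0, 1)
    b = correlation * a + (1 - correlation) * random.gauss(0, 1)
    return a, b

def prediction_error(correlation, train_steps=5000, lr=0.01):
    # Learn to predict channel_a from channel_b with a single weight (delta rule).
    w = 0.0
    for _ in range(train_steps):
        a, b = make_sample(correlation)
        w += lr * (a - w * b) * b
    # Evaluate the learned predictor on fresh samples.
    errors = [(a - w * b) ** 2 for a, b in (make_sample(correlation) for _ in range(1000))]
    return sum(errors) / len(errors)

for c in (0.0, 0.5, 0.9):
    print("correlation", c, "-> mean squared error", round(prediction_error(c), 2))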

Fifthly, an important research area in developmental robotics is how to achieve general-purpose, flexible and adaptive perception in the real world. A number of researchers are looking at how to get robots to imitate, so that researchers do not have to program the robots. I do not see why the authors believe in these ideas but not in the ideas underlying evolutionary intelligence and communication.

Sixthly, motivations are another important area of my research.

Seventhly, I dislike the call to wait for other technologies to come about.

The following points are from the section on "Automated Design Methods (Artificial Evolution and Morphogenesis)."

Firstly, I like the following quote. "Moreover, because they encode growth processes, they also, in some sense, contain the mechanisms for self-repair, an essential property of natural systems."

Secondly, the challenges include open-ended evolution, improving the fidelity of the physics in simulations and improving the richness of task environments.

The goal is to evolve sophisticated agents capable of, among other things, communication and high-level cognition.
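As a rough sketch of what such an automated design loop looks like, here is a minimal evolutionary algorithm. The genome, the truncation selection scheme and the fitness function are stand-ins of my own; in the work the paper describes, fitness would come from evaluating a body and controller in a physics-based simulation or on a real robot, and the encoding would be a growth process rather than a flat parameter vector.

import random

GENOME_LENGTH = 10      # e.g. controller or morphology parameters
POP_SIZE = 30
GENERATIONS = 100
MUTATION_STD = 0.1

def fitness(genome):
    # Placeholder for an embodied evaluation (e.g. distance walked in a
    # physics-based simulation); here, closeness to an arbitrary target vector.
    target = [0.5] * GENOME_LENGTH
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome):
    return [g + random.gauss(0.0, MUTATION_STD) for g in genome]

population = [[random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 2]                      # truncation selection
    children = [mutate(random.choice(parents)) for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", round(fitness(max(population, key=fitness)), 4))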

The following points are from the section on "Moving into the Real World."

Firstly, Gomez et al., 2004, describe an experiment where they reduced sensor precision at first and then slowly increased it.
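A small sketch of that developmental schedule, under my own assumptions about the quantisation and the staging (the original experiment's details differ): the learner only ever sees a quantised sensor value, and the number of quantisation levels is raised as training proceeds.

import random

def quantise(reading, levels):
    # Discretise a sensor value in [0, 1] into a given number of levels.
    step = 1.0 / levels
    return min(int(reading / step), levels - 1) * step

def resolution_at(step, total_steps):
    # Arbitrary three-stage schedule: coarse, medium, fine.
    if step < total_steps // 3:
        return 4
    if step < 2 * total_steps // 3:
        return 16
    return 256

TOTAL_STEPS = 3000
estimate = 0.0    # toy "learner": a running estimate of the mean sensor value

for step in range(TOTAL_STEPS):
    raw = random.random()                          # stand-in for a real sensor
    observation = quantise(raw, resolution_at(step, TOTAL_STEPS))
    estimate += 0.01 * (observation - estimate)    # incremental update

print("final estimate:", round(estimate, 3))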

Secondly, there is so much speculation that it feels like the authors are clutching at straws.

Conclusions, the Future and Applications

The last part of the paper referred to some interesting papers.

Bellot et al., 2004, introduced a new method of Bayesian Programming.

If these ideas about symbol grounding and dynamical systems have been around for a while, why has so little come out of them?

Nunez, 2004, held that even abstract mathematical concepts have roots in embodiment.

Edelman, 1992, held that, "It is not enough to say that the mind is embodied: one has to say how."

My final thoughts

I especially like the example about a cold drink and common sense.

Another good part of the paper was how Pfeifer and Iida described the symbol grounding problem.

There are a lot of other interesting points and references in this paper.

The work of Cangelosi, Harnad and Nolfi can contribute to the question of how to scale up to cognition. These researchers are looking at how to set up an agent and an environment so that intelligence and language emerge. In their experiments, agents have been able to create their own language, their own vocabulary. The researchers analysed the agents' ability to categorise and combine categories. Powerful stuff, especially when combined with the work by Pfeifer and Iida here.
