I received a link from a friend today about researchers studying AI with Second Life. Selmer Bringsjord is the lead researcher on the project, and he has a number of good papers on CiteSeer.
One of those papers is "Could, How Could We Tell if, and Why Should---Androids Have Inner Lives?" I read it a few months ago as part of the fantastic book "Thinking about Android Epistemology."
Overall, I think the project is great, but I do have a few quibbles. One is that the project takes a programming approach rather than a developmental one: for example, it defines concepts such as lying, betrayal, and evil declaratively.
I would prefer an approach where such behaviours emerge from the agent's interactions with its particular environment; otherwise the agent has no way to ground those definitions in its own experience.
Even though the experiment the article mentioned worked, the agent is disembodied. It is not actually perceiving what happens; it is working through a series of formal-logic calculations. It also issues keystrokes rather than having to coordinate a physical body.
One thing the project is doing right is dedicating a powerful machine to the AI while a second machine handles the simulation of the environment.
I must also admit that I am jealous of the exposure and backing Bringsjord's team has received. I really feel like I need to hustle, or all the good research will be done before I get there. Then again, Bringsjord has a long history of research behind him. And work like this does confirm that my goal of studying evolutionary emergent intelligence and communication in simulated embodied agents is not too far off track.