I was driving into work today when I realised that a lot of my challenges come from two questions I often get: can we do it this way, and should we do it this way? It really boils down to these two questions. The problem is that progress towards answering one question gets hampered by the other. As soon as you answer "should we do it this way?", the other question pops up and you're at a standstill, and vice versa.
I'll continue my post today with a quick description of what I am trying to do. I have written about this a few times before, such as when I wrote about "The Basic Idea." I'll also add that what I am trying to do is generally nothing new. Rather, what is new are some of the particulars.
I am basically trying to set up an environment where a group of neural networks, each representing a separate agent, develops intelligence and language and continues to do so indefinitely. My position is that agents need a human-like body and world to allow human-like interaction to occur and for human-like intelligence to emerge. I also hold that a good way to achieve the intelligence and language skills we are looking for is to encourage agent adaptation at the genetic, personal and cultural levels.
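To give a feel for what I mean, here is a rough Python sketch of the kind of loop I have in mind. Every name in it (World, Agent, step and so on) is a placeholder I have made up for illustration, not anyone's actual API.

    import random

    class Agent:
        def __init__(self, genome):
            self.genome = genome          # genetic level: inherited at birth
            self.weights = list(genome)   # personal level: changes over a lifetime
            self.energy = 100.0

        def act(self, observation):
            # Stand-in for a neural network forward pass.
            return sum(w * o for w, o in zip(self.weights, observation))

    class World:
        def __init__(self, n_agents, genome_size=8):
            self.agents = [Agent([random.uniform(-1, 1) for _ in range(genome_size)])
                           for _ in range(n_agents)]

        def observe(self, agent):
            # Stand-in for the agent's senses.
            return [random.random() for _ in agent.weights]

        def step(self):
            for agent in self.agents:
                agent.act(self.observe(agent))
                agent.energy -= 1.0   # acting costs energy

    world = World(n_agents=10)
    for _ in range(1000):   # in the real thing, this would run indefinitely
        world.step()

The cultural level doesn't appear yet; it comes in further down when I get to Plotkin.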
From my reading, agents need to interact with the world to gain the basic concepts that we take for granted. Take, for example, the ability to count to two. This might seem trivial, but really consider what agents need to do in order to count to two. An agent needs to perceive the world, to form groups, to create the label "two" and then apply it to something. I know that we can train neural networks, write some code or design some hardware to do something similar, but those in a sense do not really understand the concept of two; rather, they are acting out of reflex and missing a host of foundational concepts that counting to two requires. The foundational concepts are important because they allow agents to further apply the concept of two in their dealings with the world.
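To make the reflex point concrete, here is the kind of "counting" I mean; the snippet is my own illustration, not taken from any agent system.

    # "Counting" as reflex: the program reports two without ever having
    # perceived objects, grouped them, or created the label for itself.
    objects = ["apple", "apple"]
    print(len(objects))   # prints 2, but nothing here understands "two"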
A human-like body and world creates situations where agents have the opportunity to gain human-like concepts. A lot of current systems don't have basic human knowledge, and as a result developers need to carefully set up the problem for agents to solve. In a sense, developers need to translate problems into a form that suits the perspective of agents. The problem is that while this works, it is time consuming, and agents are unable to continue the learning process indefinitely.
Agents need continual interaction to constantly update and build upon their understanding. It is more effective if agents have continual exposure to their environment to train their brains. What a lot of other systems do is provide a set of data that has been preprocessed by a human being for a neural network to look through and then forecast some result. This is great for particular pattern-finding problems but not so great for the type of intelligence we want to emerge.
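The contrast is easier to see in code. Below, batch_learning consumes a fixed, human-prepared dataset and then stops, while continual_learning treats the world itself as the data and never has to end. The little delta-rule network is purely illustrative, not how any particular system does it.

    import random

    class TinyNet:
        # A toy stand-in for an agent's brain: one weight per input.
        def __init__(self, size):
            self.w = [0.0] * size

        def forward(self, x):
            return sum(wi * xi for wi, xi in zip(self.w, x))

        def update(self, x, error, lr=0.01):
            self.w = [wi + lr * error * xi for wi, xi in zip(self.w, x)]

    def batch_learning(net, dataset):
        for x, target in dataset:          # fixed set, prepared by a person
            net.update(x, target - net.forward(x))
        # learning ends when the dataset does

    def continual_learning(net, steps):
        for _ in range(steps):             # in principle, forever
            x = [random.random() for _ in net.w]   # stand-in for perception
            outcome = sum(x)                       # stand-in for the world's response
            net.update(x, outcome - net.forward(x))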
Imagine you have a virtual world where these agents roam, each with its own neural network. Now the question is why these agents would do anything at all. You could program a command, but then you are in a catch-22: agents don't understand the commands you are giving, and any command you give requires setting up the agents in such a way that they can process it. A better way is to emulate what happens in the real world. Have you ever wondered why parents care for children? Why do people care for each other? Why do we cooperate at times and not at others? Why do we prefer the things that we do? Well, Richard Dawkins in "The Selfish Gene" proposed that we do so because we evolved in such a way as to keep our genetic makeup in the world. We behave in ways that help us survive, and thus help our genes survive. With this basic idea, we have a means to get our agents to do something, anything.
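In code, survival-driven selection can stand in for commands. Here is a minimal sketch; using a bare score like this as fitness is my own simplification for illustration.

    import random

    def mutate(genome, rate=0.05):
        # Genetic-level adaptation: small random changes between generations.
        return [g + random.gauss(0, rate) for g in genome]

    def next_generation(population, fitness):
        # Agents that scored best get to pass on their genes.
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:len(ranked) // 2]
        return [mutate(random.choice(parents)) for _ in population]

    # Nobody issues commands; behaviours that keep genes in the world
    # simply become more common over the generations.
    population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(20)]
    for generation in range(100):
        population = next_generation(population, fitness=sum)   # toy fitness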
However, that alone is not that useful, because we want agents to possess intelligence and language skills. Well, Henry Plotkin in "Darwin Machines and the Nature of Knowledge" proposed that personal learning and cultural learning are extensions of the physical adaptation process. Sometimes genetic changes are too slow, so we adapt by learning within a lifetime. However, there are times when personal learning is not enough, times when we need to cooperate and keep discoveries remembered. That's where cultural learning comes in: knowledge becomes an adaptation that occurs outside of agents but which helps agents.
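Here is how I picture the three levels sitting together; the shared knowledge pool is my own illustration of Plotkin's point, not something he specifies.

    # Three timescales of adaptation, as I read Plotkin:
    #   genetic  - the genome changes only between generations (slow)
    #   personal - weights change within one lifetime (faster)
    #   cultural - knowledge held outside any agent (survives deaths)
    culture = {}   # e.g. culture["red berries"] = "poisonous"

    class LearningAgent:
        def __init__(self, genome):
            self.genome = genome            # genetic: fixed for this lifetime
            self.weights = list(genome)     # personal: drifts with experience

        def learn(self, x, error, lr=0.01):
            # Personal adaptation: faster than waiting for evolution.
            self.weights = [w + lr * error * xi
                            for w, xi in zip(self.weights, x)]

        def teach(self, key, value):
            culture[key] = value            # cultural: the discovery outlives me

        def consult(self, key):
            return culture.get(key)         # cultural: benefit from others

An agent that dies before it can adapt genetically or personally can still leave its discovery behind in the pool.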
Back to the two questions I often get: can we do it this way, and should we do it this way? What I have written so far answers the should; now for the can.
Since I am doing a PhD, which is supposed to be a one-person venture, the scope of what I am laying out seems huge. Fortunately, other researchers have thought along similar lines and have provided software. Over the past few days I have been looking around at what is available. I want to do as little work as possible yet have as high a chance as possible of achieving what I am setting out to do. Larry Yaeger has written a program called Polyworld which creates pretty much exactly the world that I am looking for. There are a few things I would like to add, of course, but the program has a virtual world, neural net agents, and life processes such as the drive for efficient energy usage, mating, eating, fighting and so on.
I intend to add communication to the agents and also increase their level of embodiment, to encourage intelligence and language development.
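As a first pass, communication could be as simple as widening each agent's neural inputs and outputs with a sound vector that nearby agents receive. This is my sketch of the idea, not how Polyworld is actually structured; all the sizes and names are assumptions.

    SOUND_SIZE = 4   # assumed width of the sound channel

    class TalkingAgent:
        def __init__(self):
            self.heard = [0.0] * SOUND_SIZE   # extra neural inputs

        def speak(self, brain_output):
            # The last SOUND_SIZE outputs become this agent's utterance.
            return brain_output[-SOUND_SIZE:]

    def propagate_sound(agents, utterances, positions, radius=5.0):
        # Nearby agents hear each other; any meaning is left to evolve.
        for i, agent in enumerate(agents):
            agent.heard = [0.0] * SOUND_SIZE
            for j, utterance in enumerate(utterances):
                if i != j and abs(positions[i] - positions[j]) < radius:
                    agent.heard = [h + s for h, s in zip(agent.heard, utterance)]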
Some dream features include physics simulation and an MMO architecture where the world and agents are constantly online and each agent is powered by a separate PC.
So, I hope I have answered the questions of should we do this and can we do this. Now the challenge is, well, to do it.