Thursday 18 November 2010

Developer Journal 10

Yesterday I cleaned up some code to reduce the number of things I had to remember when creating entities and moving them around. I find that reducing what you have to keep in your head really helps, and you can get there by taking the time to go back and redesign the things that produce 'gotchas'.

Another thing I realized is that as you're programming, you're communicating in a language with whoever comes along later. It's not just about commenting and naming variables and functions; it's about establishing a convention for how to do things. As you program more and more, you start to appreciate situations where you can reuse a pattern you're already familiar with.

I've never been really careful about maintaining interfaces, but as I cleaned up old code I found them more and more useful. The bad habit I'd fallen into was using a class's public interface from inside the class as well as from outside. Pick one role for an interface and stick to it.
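
Here's a minimal sketch of the kind of gotcha I mean (the class and its names are made up for illustration): the public interface does bookkeeping that outside callers rely on, so code inside the class should go through a plain private helper rather than re-entering its own interface.

    #include <algorithm>
    #include <iostream>

    // A made-up example of the gotcha. The public interface does
    // bookkeeping that outside callers rely on; internal code should
    // use a raw private helper instead.
    class Entity {
    public:
        Entity() : m_x(0.0f), m_y(0.0f) {}

        // Outside interface: clamps input and notifies observers.
        void setPosition(float x, float y)
        {
            m_x = std::max(0.0f, std::min(x, 1000.0f));
            m_y = std::max(0.0f, std::min(y, 1000.0f));
            notifyMoved();   // observers expect one event per move
        }

        // Internal per-tick update: uses the raw helper, because
        // calling setPosition() from here would fire an observer
        // event every single tick.
        void update(float dx, float dy)
        {
            setPositionRaw(m_x + dx, m_y + dy);
        }

    private:
        void setPositionRaw(float x, float y) { m_x = x; m_y = y; }
        void notifyMoved() { std::cout << "moved\n"; }

        float m_x, m_y;
    };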

I left the population running overnight and it dropped to one agent. A starting group of 128 agents is too small for the population to get a foothold; starting populations of 256 agents seem to do well. Running overnight, the population grew to about 300 living agents, with about 300 agents dying along the way, so around 600 agents overall.

On my wish list is tuning all the simulation variables so that smaller populations of about 64 agents become viable. I'm also worried that there is something wrong with the genetic crossover code, because most of the child agents don't survive even though the parent agents are quite capable.
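
For reference, this is the shape of the thing I'll be checking. It's a hypothetical single-point crossover over fixed-length float genomes, not the project's actual code; a misplaced cut point or misaligned index in something like this would explain capable parents producing hopeless children.

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // Hypothetical single-point crossover. The child must take one
    // contiguous block from each parent, not a garbled mix.
    std::vector<float> crossover(const std::vector<float>& mum,
                                 const std::vector<float>& dad)
    {
        const std::size_t n = mum.size();              // assumes equal lengths
        const std::size_t cut = std::rand() % (n + 1); // cut anywhere in 0..n
        std::vector<float> child(n);
        for (std::size_t i = 0; i < n; ++i)
            child[i] = (i < cut) ? mum[i] : dad[i];
        return child;
    }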

Today I'll be working on moving some OpenGL code from a Qt class into my graphics class so that I can integrate Ogre code. I'm worried that doing that will somehow come back to bite me. I had a lot of trouble with Boost Serialization that went beyond just getting it working: there were a lot of hidden problems that kept taking longer and longer to fix. Although I'm sure that if I hadn't used Boost Serialization and had written my own code instead, there would have been even more problems. Still, I was looking at some example code from an OpenGL book and was tempted to just improve my current graphics code.
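
For anyone curious, the intrusive pattern Boost Serialization expects looks roughly like this (the Agent fields here are made up). Versioning is the sort of hidden trap I mean: change the fields without bumping the class version and old archives load garbage or throw.

    #include <fstream>
    #include <boost/archive/text_iarchive.hpp>
    #include <boost/archive/text_oarchive.hpp>
    #include <boost/serialization/access.hpp>

    // Made-up Agent with the intrusive serialize() member that
    // Boost Serialization calls for both saving and loading.
    class Agent {
        friend class boost::serialization::access;

        template<class Archive>
        void serialize(Archive& ar, const unsigned int /*version*/)
        {
            ar & m_energy;
            ar & m_x & m_y;
        }

        float m_energy;
        float m_x, m_y;

    public:
        Agent() : m_energy(100.0f), m_x(0.0f), m_y(0.0f) {}
    };

    int main()
    {
        Agent a;
        {
            std::ofstream ofs("agent.txt");
            boost::archive::text_oarchive oa(ofs);
            oa << a;                              // save
        }
        Agent b;
        std::ifstream ifs("agent.txt");
        boost::archive::text_iarchive ia(ifs);
        ia >> b;                                  // load it back
        return 0;
    }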

A commenter on a previous post brought up Euphoria by NaturalMotion. I already knew about it, but thought I'd look into it a little more, so I read an article about it by Alex J. Champandard at AI Game Dev. Champandard makes interesting points about Euphoria and its place in games development. The comments section led me to another interesting piece of software called DANCE (Dynamic Animation and Control Environment). DANCE looks amazing, and I would love to go through it to see how they do all their physics simulation, the environment, and the joints and constraints. The hard part is knowing when to change course: I've already decided to use Bullet Physics, and that's where I'm going for the time being.

The comments section also led me to a talk on a piece of software called ACTOR, which is about autonomous digital actors. What caught my eye was the involvement of Ken Perlin, a respected computer graphics researcher. It looks exciting but doesn't seem to be available for me to experiment with. I'm currently watching the talk and it is fascinating.

I was worried I wouldn't have anything to write about today.

Something I've been thinking about is writing a clear, one-page description of what I'm working on and why. The idea is to create a group of virtual humans in an environment where they have to adapt not only by acquiring food and mating, but also by forming social groups and communicating. There is research demonstrating that agents can develop language, and my goal is to take this further. One unique thing I'm doing is that my agents have neural networks and are subject to forces similar to the ones we are: they have to adapt in order for their genetic line to stick around. Their neural networks change during their lifetime, like ours do, unlike experiments that only change the networks when spawning a new generation.
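
To make that distinction concrete, here's a minimal sketch (illustrative only, not my actual update rule) of a Hebbian-style weight change that happens while the agent is alive, rather than only when a new generation is spawned from the genome.

    #include <cstddef>
    #include <vector>

    // Illustrative lifetime-learning step: weights change during the
    // agent's life, in proportion to correlated activity at each
    // synapse ("fire together, wire together").
    void hebbianStep(std::vector<float>& weights,
                     const std::vector<float>& pre, // presynaptic activity
                     float post,                    // postsynaptic activity
                     float rate)
    {
        for (std::size_t i = 0; i < weights.size(); ++i)
            weights[i] += rate * pre[i] * post;
    }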

The purpose of all this is to develop an AI that can keep developing on its own. Each agent's neural network has the potential to keep growing in size and complexity. If we can produce an AI that learns to walk and talk from scratch, that would be a huge step forward. The point is to have the thing first and then work out how it works. We have the basic pieces: genomes, neural networks, bodies, environments.

One of the scary things about working on all this is that I could be going down completely the wrong track. At the same time, my populations are persisting. I create a population; each agent has a body, a neural network and a genome. They're in a challenging environment. There are no fitness functions. They persist. If I can increase the complexity of their bodies and make them more human-like (because I want to encourage human-like behavior), give them more advanced ways to communicate, make the environment tougher so that they have to cooperate yet also compete amongst themselves... if, if, if.
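
In outline, 'no fitness functions' means something like this (all numbers and names are illustrative): nothing ever scores or ranks an agent; death is starvation, and reproduction is gated only by surplus energy.

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    struct Agent {
        float energy;
        Agent() : energy(100.0f) {}
    };

    void step(std::vector<Agent>& pop)
    {
        // Live: everyone pays a metabolic cost; some find food.
        for (std::size_t i = 0; i < pop.size(); ++i) {
            pop[i].energy -= 1.0f;
            if (std::rand() % 4 == 0)
                pop[i].energy += 5.0f;
        }
        // Die: starvation removes agents -- no scoring, no ranking.
        for (std::size_t i = 0; i < pop.size(); ) {
            if (pop[i].energy <= 0.0f) {
                pop[i] = pop.back();
                pop.pop_back();
            } else {
                ++i;
            }
        }
        // Breed: surplus energy is the only gate on reproduction.
        const std::size_t count = pop.size();
        for (std::size_t i = 0; i < count; ++i) {
            if (pop[i].energy > 150.0f) {
                pop[i].energy -= 50.0f;    // cost of a child
                Agent child;
                child.energy = 50.0f;
                pop.push_back(child);
            }
        }
    }

    int main()
    {
        std::vector<Agent> pop(256);   // a starting group that holds
        for (int t = 0; t < 1000; ++t)
            step(pop);
        return 0;
    }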

If this produces agents whose behaviors become increasingly human-like as they adapt to increasing realism in the environment and in their bodies, then it will all have been worthwhile. The key idea is that there is no tinkering inside the agents' actual brains: the brain and genome adapt to the increasingly realistic bodies and environments.

What happens when we have AIs with human-like behaviors that can adapt to human-like environments? And I do mean environments, not just scenarios. Aside from needing a fair bit of time to discuss the ethical issues, we would have an amazing artifact to study, and from that basis perhaps we'd discover more and more principles for practical application. Imagine a world without work.

Yeah, I know there are groups reverse-engineering the brain, and countless other projects. We also have a history of people reverse-engineering the ability of birds to fly, but it wasn't until someone built an artificial device that flew (without fully understanding the principles behind it, mind you) that things really took off. Pun intended.
