Wednesday 26 January 2011

Developer Journal 45

http://randomdistraction.com/wp-content/uploads/2009/11/c3po.jpg

I used Thursday, Friday, Saturday, Sunday, Monday, and Tuesday to polish the genome and neural network code. I worked at home and started getting cabin fever.

A lot of the process involved making functions more general.

For example, the function

genome::eelr( int toGroup, int fromGroup )

retrieves the learning rate of a synapse that goes from an excitatory neuron to another excitatory neuron.

I changed the function to

Genome::GetLearningRate( const GenomeEnum::NeuronType& fromType, int fromGroup, const GenomeEnum::NeuronType& toType, int toGroup ) const

and this way I could make higher-level functions shorter and more powerful.
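
Here's a rough sketch of the idea (the enum values, gene storage, and indexing below are simplified stand-ins for illustration, not the real implementation):

    #include <vector>

    namespace GenomeEnum
    {
        enum NeuronType { Excitatory = 0, Inhibitory = 1 };
    }

    class Genome
    {
    public:
        explicit Genome( int numGroups )
            : mNumGroups( numGroups ),
              mLearningRates( 2 * numGroups * 2 * numGroups, 0.0f )
        {
        }

        // One general accessor replaces a separate hard-coded function
        // for each ( fromType, toType ) combination, such as the old eelr().
        float GetLearningRate( const GenomeEnum::NeuronType& fromType, int fromGroup,
                               const GenomeEnum::NeuronType& toType, int toGroup ) const
        {
            const int from = fromType * mNumGroups + fromGroup;
            const int to = toType * mNumGroups + toGroup;
            return mLearningRates[ from * ( 2 * mNumGroups ) + to ];
        }

    private:
        int mNumGroups;
        std::vector<float> mLearningRates; // one learning-rate gene per ( from, to ) pair
    };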

The neural network code went from about 5000 lines to about 1000, which I'm pretty happy with.

My first motivation for polishing the genome and neural network code was so that I could test whether things were working properly. My second motivation was to move the genome and neural architecture from being hard-coded to being more flexible. Previously, when I wanted to add an input or output neuron group, I had to make modifications in a number of different functions, and these were hard to figure out each time and to keep track of.

A few more programming rules I picked up:

+ A class's member functions should not call its own public functions.

+ Push logic down to lower classes and functions where possible.

+ Prefer having more function parameters to reduce code duplication at both the lower and higher levels.

+ When you have hierarchical lists, you will also need flat lists.

For example, I had neuron groups containing neurons containing synapses. I also needed a flat list of neurons and another flat list of synapses. Another example: where I had a spatial data structure of robots, I also needed a flat list of robots. (See the first sketch after this list.)

+ Doing things in the opposite way to your model might not match how things work in the world but can significantly simplify code.

For example, in my head I have an image of neurons sending signals to other neurons. However, in my code, having neurons retrieve signals from their source neurons is easier. (See the second sketch below.)
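
Here's a minimal sketch of the flat-list point (the type names are invented for illustration): the hierarchy owns the objects, and the flat pointer lists get rebuilt whenever the hierarchy changes.

    #include <vector>

    struct Synapse { float weight; };
    struct Neuron { std::vector<Synapse> synapses; };
    struct NeuronGroup { std::vector<Neuron> neurons; };

    struct NeuralNet
    {
        std::vector<NeuronGroup> groups;   // hierarchical view; owns the data
        std::vector<Neuron*> allNeurons;   // flat view for whole-network passes
        std::vector<Synapse*> allSynapses; // flat view over every synapse

        // Call after any change to the hierarchy, since the pointers
        // go stale when the owning vectors reallocate.
        void RebuildFlatLists()
        {
            allNeurons.clear();
            allSynapses.clear();
            for ( NeuronGroup& group : groups )
            {
                for ( Neuron& neuron : group.neurons )
                {
                    allNeurons.push_back( &neuron );
                    for ( Synapse& synapse : neuron.synapses )
                    {
                        allSynapses.push_back( &synapse );
                    }
                }
            }
        }
    };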
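
And a sketch of the pull-based update (again with invented names): because each neuron reads from its sources, the whole network can be updated with one loop over the flat neuron list, computing every new output from the old outputs without worrying about delivery order.

    #include <cstddef>
    #include <vector>

    struct PullNeuron
    {
        float output;
        std::vector<const PullNeuron*> sources; // the neurons I listen to
        std::vector<float> weights;             // one weight per source

        // The neuron gathers its own input; no neuron ever pushes a signal.
        float GatherInput() const
        {
            float sum = 0.0f;
            for ( std::size_t i = 0; i < sources.size(); ++i )
            {
                sum += sources[i]->output * weights[i];
            }
            return sum;
        }
    };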

1:06 PM

Just finished polishing the neural network code. Now to write some more test code.

2:55 PM

Programming can be boring at times. Lucky I have a mountain of podcasts.

Setting genes which have a range of [0.0,1.0] all to 1.0.

random 1
energy 1
red 16 * 4 = 64
green 16 * 4 = 64
blue 16 * 4 = 64
inComm 16

There should be 210 excitatory input neurons. Yep.
There should be 0 inhibitory input neurons. Yep.

eat 1
mate 1
fight 1
speed 1
yaw 1
light 1
focus 1
outComm 16

There are 4 internal groups, each with 4 neurons of each type.

There should be 16 excitatory internal neurons. Yep.
There should be 16 inhibitory internal neurons. Yep.

There should be 23 excitatory output neurons. Yep.
There should be 0 inhibitory output neurons. Yep.
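
Here's that arithmetic written out as a self-checking snippet (just the counts, not the actual test code):

    #include <cassert>

    int main()
    {
        // With every [0.0,1.0] gene set to 1.0:
        const int excitatoryInput = 1 + 1          // random, energy
                                  + 3 * ( 16 * 4 ) // red, green, blue
                                  + 16;            // inComm
        const int internalPerType = 4 * 4;         // 4 groups, 4 of each type
        const int excitatoryOutput = 7 + 16;       // eat..focus, outComm

        assert( excitatoryInput == 210 );
        assert( internalPerType == 16 );
        assert( excitatoryOutput == 23 );
        return 0;
    }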

I suspect that there might be something bad with having so many input neurons relative to the number of internal and output neurons.

excitatory input to excitatory internal = 210 * 16 = 3360
excitatory input to inhibitory internal = 210 * 16 = 3360
excitatory input to excitatory output = 210 * 23 = 4830
number of synapses coming out of excitatory input neurons:
3360 + 3360 + 4830 = 11550. Yep.

That's way more synapses than I was thinking of!

// don't connect to self
excitatory internal to excitatory internal = 16 * ( 16 - 1 ) = 240
excitatory internal to inhibitory internal = 16 * 16 = 256
excitatory internal to excitatory output = 16 * 23 = 368
240 + 256 + 368 = 864. Yep.

// don't connect to self
inhibitory internal to inhibitory internal = 16 * ( 16 - 1 ) = 240
inhibitory internal to excitatory internal = 16 * 16 = 256
inhibitory internal to excitatory output = 16 * 23 = 368
240 + 256 + 368 = 864. Yep.

5:34 PM

excitatory output to excitatory internal = 23 * 16 = 368
excitatory output to inhibitory internal = 23 * 16 = 368
excitatory output to excitatory output = 23 * ( 23 - 1 ) = 506
368 + 368 + 506 = 1242. Yep.
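
All of these counts follow a single rule: full connectivity between the two populations, minus self-connections when a population connects to itself. As a sketch (not the actual test code):

    #include <cassert>

    int ExpectedSynapses( int fromCount, int toCount, bool samePopulation )
    {
        // Every source neuron connects to every target neuron,
        // except to itself when the two populations are the same.
        return samePopulation ? fromCount * ( toCount - 1 )
                              : fromCount * toCount;
    }

    int main()
    {
        assert( ExpectedSynapses( 210, 16, false ) == 3360 ); // input to internal
        assert( ExpectedSynapses( 16, 16, true ) == 240 );    // internal to internal
        assert( ExpectedSynapses( 23, 23, true ) == 506 );    // output to output
        return 0;
    }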

Tested number of groups. Passed.
Tested number of neurons. Passed.
Tested number of synapses coming out of neurons. Passed.
Now to test the number of synapses coming into neurons. Skip.

6:39 PM

Okay, the next action is to add food to the world again.

I'm thinking of adding food as a green block, and Bullet provides a way to have a callback function called when a collision occurs.

Have to improve my maths and physics...

So what I want is a function that the physics engine calls when my robot bumps into food.

The easiest way seems to be to iterate over contact manifolds during a simulation tick. Okay...
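
Something roughly like the loop from the Bullet wiki samples, run after each stepSimulation() (exact signatures vary between Bullet versions):

    #include <btBulletDynamicsCommon.h>

    void CheckCollisions( btDynamicsWorld* world )
    {
        const int numManifolds = world->getDispatcher()->getNumManifolds();
        for ( int i = 0; i < numManifolds; ++i )
        {
            btPersistentManifold* manifold =
                world->getDispatcher()->getManifoldByIndexInternal( i );
            btCollisionObject* objectA =
                static_cast<btCollisionObject*>( manifold->getBody0() );
            btCollisionObject* objectB =
                static_cast<btCollisionObject*>( manifold->getBody1() );

            for ( int j = 0; j < manifold->getNumContacts(); ++j )
            {
                btManifoldPoint& point = manifold->getContactPoint( j );
                if ( point.getDistance() < 0.0f )
                {
                    // objectA and objectB are actually touching here; this
                    // is where the robot-bumps-into-food check would go.
                }
            }
        }
    }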

From the manifolds I can get the btRigidBody objects involved, but that doesn't give me access back to my own code.

8:00 PM

Will have to go through the Bullet forums tomorrow. I can think of a few ways to access my code from the Bullet code but they feel messy and wrong.

8:30 PM

Will look around the Bullet forum for more information on collision callbacks and triggers tomorrow.


2 comments:

  1. Hi Binh,

    Ah, good, it looks as it is supposed to. I have just been having some trouble with the CAPTCHA on your post page -- it often shows up as a small red dot unless I hit post, preview, then post -- and I knew that you had updated the theme a few weeks ago, so I was concerned that a CSS file wasn't loading (heh trust a developer to be wary of a white BG). It looks good though!

    Anyway, re: your other post, that's very interesting. It seems that we are taking different approaches regarding the number of units, and I must admit that I see the advantages to your solution. A very large population would likely force organisms to adapt due to the much greater complexity, which would hopefully make up for a lack of behavioral complexity early on. Also, I agree about keeping them alive longer. I think that the cultural developments you are looking for would be much more likely to develop if the organisms were spending less time learning the world and more time interacting with each other, at least up to a certain point.

    The trouble with running the neural networks through OpenCL/CUDA is the ram<->vram bottleneck. When the kernel is running and all of the data is in vram, you can achieve extremely fast processing speeds. However, when data needs to be copied to/from ram, it really kills performance (or so I hear). If an array of video/Tesla cards did not provide enough vram to store enough of the neural networks, a distributed approach like the one you described might be a viable option.

    Has the move to OGRE/Bullet proved to be a good decision so far?

  2. Hi Anonymous,

    Thanks for letting me know about the problems with the CAPTCHA. I'll look into it.

    Could you tell me more about your work and your ideas? I'd love to hear more about them.

My central idea is to use the environmental and embodiment constraints that shaped the development of our behavior to shape the development of robot behavior.

For example, a group of robots shaped like us, who have to go through a similar growing-up process, would face pressure to form social groups in order to survive.

    Further, social groups create pressures to increase intelligence in order for robots to cooperate and compete.

    Performance and parallel programming! I wish I could bring in people to help me with new technologies as the project moves forward.

Moving to OGRE and Bullet has been good for the simulation and bad for my PhD schedule. I was hoping to have transitioned over to OGRE and Bullet by the end of last year. My goal was to have at least the functionality I had before the transition. The end of January is fast approaching and I've still got a fair bit to do before I get back to having a population of self-sustaining robots :)

However, the change was important. I have a strong hunch that a more physically and graphically realistic environment is important to the development of evolutionary robotic AI. Simple things like robots no longer being able to walk through each other or through objects in the world are vital.

My goal of giving the robots arms and legs requires more powerful physics than I can write on my own in the time that I have. The benefit of having a good scene graph to improve graphics rendering speed is also a great plus.

    Overall, the decision has been a good one even though I'm putting my PhD a little more at risk.
