Saturday 5 March 2011

Developer Journal 65 - On Visualising Neural Networks

Being able to see a robot and its neural network at the same time is cool.

Strange. When I use the visualizer to look at input neuron activation levels, they all have the value 0.5.

Yesterday, I estimated that fixing up the neural network would take 8 hours. 1.5 hours down ...

Getting closer. I was being silly in my code. I was calculating the activation levels of the input neurons when I should not have been. The activation levels of the input neurons come from the inputs to the neural network.

I was using the inputs to the input neurons to calculate their activation, but input neurons don't have any inputs, so the sum was always 0, which becomes 0.5 after going through a logistic function.
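
Here's the shape of the bug in a quick Python sketch (the names are illustrative, not my actual code):

    import math

    def logistic(x):
        # Standard logistic (sigmoid): squashes any real number into (0, 1).
        return 1.0 / (1.0 + math.exp(-x))

    # An input neuron has no incoming synapses, so summing its weighted
    # inputs always gives 0 ...
    weighted_sum = sum(value * weight for value, weight in [])  # no inputs

    # ... and logistic(0) is exactly 0.5, the mystery value in the visualizer.
    print(logistic(weighted_sum))  # 0.5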

4:27 PM

The activation levels of all the input neurons are 0 ...

Ooh. Spotted another mistake. When I activate the neural network, I have to do it in a way that is not intuitive. Every neuron has a current activation level. I first calculate the new activation level for every neuron, then I copy the new activation level into the current activation level variable.

The problem is that input neurons do not go through the activation process, so their new activation value is always 0. Then when I copy that to the current activation level, the current activation level is always 0. That shouldn't affect the rest of the neural network, though, because of the two-stage pass.
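
A sketch of the two-stage pass with the fix, assuming a simple neuron object (again, the names here are illustrative, not my real code):

    import math
    from dataclasses import dataclass, field

    def logistic(x):
        return 1.0 / (1.0 + math.exp(-x))

    @dataclass
    class Neuron:
        is_input: bool = False
        activation: float = 0.0
        new_activation: float = 0.0
        inputs: list = field(default_factory=list)  # (source Neuron, weight) pairs

    def activate_network(neurons):
        # Stage 1: compute every neuron's new level from the *current* levels,
        # so the order we visit neurons in doesn't matter.
        for n in neurons:
            if n.is_input:
                continue  # input activations are set from outside the network
            total = sum(src.activation * w for src, w in n.inputs)
            n.new_activation = logistic(total)
        # Stage 2: copy the new levels across -- skipping input neurons here
        # too, otherwise their activations get stomped back to 0 (the bug).
        for n in neurons:
            if not n.is_input:
                n.activation = n.new_activation

    a = Neuron(is_input=True, activation=1.0)
    b = Neuron(inputs=[(a, 2.0)])
    activate_network([a, b])
    print(a.activation, round(b.activation, 3))  # 1.0 0.881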

I'd love to explore things like having neurons activate and then the activation fades down over time. Another thing I'd like to explore is activation fatigue, where neurons that activate for a long time get tired.
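
Something like this, maybe (purely speculative; every constant here is made up):

    def tick(neuron, decay=0.9, fatigue_rate=0.05, recovery=0.99):
        neuron["activation"] *= decay                             # fades over time
        neuron["fatigue"] += fatigue_rate * neuron["activation"]  # tires while active
        neuron["fatigue"] *= recovery                             # slowly recovers
        # a tired neuron's effective output is suppressed
        return max(0.0, neuron["activation"] - neuron["fatigue"])

    n = {"activation": 1.0, "fatigue": 0.0}
    for _ in range(5):
        print(round(tick(n), 3))  # output drifts down as decay and fatigue bite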

2 hours down.

The neuron values of the internal and output groups are always 1, which is not right.

For a typical internal neuron, the activation level is about 200. When this goes through a logistic function, the value becomes 1. Something doesn't seem right.

5:21 PM

In the legacy neural network code, the output activation got calculated first and then a logistic function was applied. The result is that applying the logistic function does not affect the output activation, only the later synapse weight updating.
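
Roughly, as I read it (an illustrative sketch, not the real code):

    import math

    def logistic(x):
        return 1.0 / (1.0 + math.exp(-x))

    def legacy_step(inputs):  # inputs: list of (source_activation, weight)
        raw = sum(a * w for a, w in inputs)
        output_activation = raw   # fed on to the next layer unsquashed
        squashed = logistic(raw)  # used only when updating synapse weights
        return output_activation, squashed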

I suspect there are errors in my synapse weight update code as well.

5:41 PM

Here's the problem I'm facing. A neuron, Alpha, in the internal layer receives activation values from about 200 neurons. The activation value of Alpha is the sum of its weighted inputs: each input to Alpha is the value of the input neuron times the weight of the connecting synapse.

This works out to an activation of about 100 to 200. The input neurons have values from 0 to 1, and the synapse weights are about 0 to 8.

When I pass Alpha's activation through a logistic function, I always get 1. When I calculate the output activations, they're always 1.
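
The trouble is how fast the logistic saturates: anything much above 5 comes out as 1, so a sum of 100 to 200 carries no information at all. One common way out, though not something the legacy code does, is to scale the sum down by the number of inputs before squashing:

    import math

    def logistic(x):
        return 1.0 / (1.0 + math.exp(-x))

    print(logistic(5))          # ~0.993 -- already nearly saturated
    print(logistic(150))        # 1.0 to float precision; all information lost
    print(logistic(150 / 200))  # ~0.679 -- scaling by fan-in keeps it useful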

Do I set a threshold value for Alpha, so that if it receives enough input, it fires? What should a threshold value be? Should all neurons have the same threshold value?

What does it fire? Some value between 0 and 1?
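
One common answer, sketched below as one option rather than a decision: fold a per-neuron threshold into the logistic as a bias, so the neuron still fires a smooth value between 0 and 1.

    import math

    def logistic(x):
        return 1.0 / (1.0 + math.exp(-x))

    def fire(weighted_sum, threshold):
        # At the threshold the neuron fires 0.5; well above it the output
        # approaches 1, well below it the output approaches 0.
        return logistic(weighted_sum - threshold)

    print(round(fire(100.0, 100.0), 3))  # 0.5
    print(round(fire(105.0, 100.0), 3))  # 0.993
    print(round(fire(95.0, 100.0), 3))   # 0.007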

6:24 PM

If I just pump the values of the internal neurons straight into the output neurons, the output neurons could have values in the range of a few thousand.

Wish I could somehow model a biological neural network more closely.

3.5 of 8 hours down.
