So after yesterday's reading, I learned two things. First, neuron threshold levels don't matter too much: when a neuron receives enough input to exceed its threshold, it produces a fixed output. The higher the input, the more often the output occurs; the output itself doesn't get larger.
I couldn't find information about whether different neurons have different thresholds. I'm going to use the same threshold for all neurons until I can learn more. I'm also considering using random thresholds.
This mostly solves the problem of trying to figure out threshold levels based on the number of inputs.
Second, I learned that neurons generate outputs with fixed values. This solves the problem of trying to scale inputs to generate outputs between 0 and 1.
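These two findings can be sketched as a minimal neuron model (all names and values here are my own placeholders, not anything from the reading): exceeding the threshold always produces the same fixed spike; a stronger input only makes firing happen more often, never larger.

```python
FIXED_OUTPUT = 1.0
THRESHOLD = 1.0  # same threshold for every neuron, per the assumption above

class Neuron:
    """Minimal sketch of a fixed-output, threshold-firing neuron."""

    def __init__(self):
        self.activation = 0.0

    def receive(self, signal):
        # The soma accumulates incoming signals.
        self.activation += signal

    def step(self):
        """Fire a fixed-size spike if the threshold is exceeded, else stay silent."""
        if self.activation >= THRESHOLD:
            self.activation = 0.0  # reset after firing
            return FIXED_OUTPUT
        return 0.0
```

With more frequent or larger inputs the neuron crosses the threshold on more steps, so the firing *rate* rises while each spike stays the same size.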
Another thing I'm not sure about: I know that the soma sums up inputs. If a neuron receives a positive input of 0.5 and a negative input of 0.1, the result is 0.4. The problem is the implementation. Consider an excitatory neuron with one excitatory input and one inhibitory input. That should work as in the example. But what happens if the receiving neuron is inhibitory?
Is it the synapse or the neuron that is inhibitory or excitatory?
I think the only thing that is different if a neuron is inhibitory is that it outputs a negative signal.
What about synapse weights?
"... a synapse will change its strength if that synapse is active (i.e., releases transmitter) and at the same time, the post-synaptic cell is active."
Do inhibitory neurons have the same positive threshold?
I'm not sure but I'm going to assume yes for now.
I've used 7.5 hours of my original 8-hour estimate for fixing the neural network.
Back to checking my code.
Things to check
1) whether the synapse efficacy is always greater than or equal to 0
2) whether neuron activation is greater than or equal to 0
3) what the neuron threshold level is
4) whether neuron outputs are fixed
5) whether inhibitory neurons are having an effect
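The first two checks on the list are easy to express as assertions. A sketch, assuming hypothetical `synapses` and `neurons` lists with `efficacy` and `activation` attributes (adapt the names to the real code):

```python
def check_network(synapses, neurons):
    """Checks 1 and 2: efficacy and activation must never go negative."""
    for s in synapses:
        assert s.efficacy >= 0, f"negative efficacy: {s.efficacy}"
    for n in neurons:
        assert n.activation >= 0, f"negative activation: {n.activation}"
```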
Synapse efficacy is always greater than or equal to 0.
Neuron activation is always greater than or equal to 0.
The output neurons will always output 1 because the input neurons connect directly to both internal and output neurons.
Should I disconnect the synapses that go from input to output? These synapses bypass the internal neurons so there is no balance between inhibitory and excitatory neurons.
I disconnected the input excitatory neurons from the internal excitatory neurons. I then disconnected enough of the remaining synapses that only the inhibitory neurons are in play.
The current logistic function returns 0.5 if you give it 0. Not what I want.
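For reference, the standard logistic does return 0.5 at 0. One possible fix (my assumption about the intended behavior, not a decision from this log) is to rescale it so an input of 0 maps to 0 while still saturating at 1:

```python
import math

def logistic(x):
    """Standard logistic: logistic(0) == 0.5."""
    return 1.0 / (1.0 + math.exp(-x))

def shifted_logistic(x):
    """Rescaled so that 0 in gives 0 out, saturating at 1 for large inputs.
    (One option only; negative inputs now give negative outputs.)"""
    return 2.0 * logistic(x) - 1.0
```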
None of my internal inhibitory neurons are connected to my excitatory output neurons.
Things I want to do
+ Make efficacy smaller
+ Simplify neural network code
+ Make activations push rather than be pulled
+ Print a list of genome structure
+ Give robots the ability to close their eyes
+ check that the inhibitory neurons are working
+ Change the neural network structure: instead of a neuron group having two lists, one for excitatory neurons and another for inhibitory neurons, give each group a single list of neurons where each neuron has a property marking it as excitatory or inhibitory
+ have inhibitory neurons in the output groups
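The restructuring item above might look like this sketch (class and method names are hypothetical): one mixed list per group, with the excitatory/inhibitory distinction moved onto the neuron itself.

```python
class Neuron:
    def __init__(self, excitatory):
        # Each neuron carries its own flag instead of living in a
        # type-specific list.
        self.excitatory = excitatory

    def output_sign(self):
        # Per the earlier note: inhibitory neurons output a negative signal.
        return 1.0 if self.excitatory else -1.0

class NeuronGroup:
    def __init__(self, neurons):
        self.neurons = neurons  # single mixed list

    def inhibitory(self):
        return [n for n in self.neurons if not n.excitatory]

    def excitatory(self):
        return [n for n in self.neurons if n.excitatory]
```

This also makes the last item on the list easier: an output group can contain inhibitory neurons simply by mixing flags in its one list.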