Reduced viewport height to 1.
Screen capture of viewport should have some white pixels but is just a black line.
Increased viewport height to 8.
Screen capture of viewport should have some white and black pixels.
W W W W B B B B W W W W B B B B W W W W B B B B W W W W B B B B W W W W B B B B W W W W B B B B W W W W B B B B W W W W B B B B
B B B B B B B B B B B B B B B B B B B B B B B B B B B B B B B B W W W W B B B B W W W W B B B B W W W W B B B B W W W W B B B B
Something's off with my maths. I was placing the viewport at the wrong height in my viewport frame. For example, if my viewport frame height is 100 and my viewport height is 10, I should put the viewport's top at height 45, not 50, since centring it means offsetting by (100 − 10) / 2 = 45. Then again, it shouldn't matter because I'm drawing everything in my viewport...
My maths could be right and my assumptions wrong.
I increased viewport height to 50 pixels. Looks good but the aspect ratio is off.
I adjusted the aspect ratio. The performance with viewport height of 50 pixels is good.
Viewport height 25 pixels
Viewport height 16 pixels. Seeing objects in the distance gets a bit harder but things close up are okay.
Due to perspective, the ground rises up in the distance.
Viewport height 8 pixels. Making things out gets harder and harder.
I played around with changing the camera from perspective to orthographic mode but I don't quite get the units for Ogre::Camera::setOrthoWindow( Real w, Real h ). The documentation says that I should use world units.
Increased turn speed from 0.0625 to 0.125.
Each robot has a pointer to an image that represents what it sees in the world. Time to put that image into the robot's neural network.
The neural network was originally designed to accept only 1 row of pixels.
I'm trying out 64 pixels by 16 pixels. I'd love to write a few functions so that each neuron takes in a group of, say, 4 pixels instead of 1.
32 by 32 pixels is 1024 pixels. What if I had 1024 neurons?
The program has trouble allocating enough memory.
Trying 16 by 4 pixels. You can just make out the world if you zoom in heaps, and the program can allocate enough memory. The world is very blurry.
I'm getting a strange segmentation fault when I exit.
Trying a clean and rebuild.
If I have no robots then there is no segmentation fault when I exit.
Trying skipping brain update.
I looked through the brain initialisation code. Could be due to brain::Grow().
brain::Grow() calls brain::SetupSynapse().
The error has something to do with using a vector of boolean variables.
I changed to a vector of integers with 0 representing false and 1 representing true.
Now I've got a segmentation fault elsewhere.
After spending a fair bit of my day on it, I found that the segmentation fault was due to a global smart pointer.
* Changed observer control scheme to avoid changing elevation when moving backwards and forwards.
* Lots of little things.
* Fix debug code.
* Convert image into format that the neural network can use.
* Connect view port to agent vision input.
* Round to the nearest integer in the CaptureViewport() code.
* Re-examine angular and linear deceleration code.
* Consider always applying a deceleration force.
* Enable displaying the mouse pointer when running the simulation.
* Add food.
* Re-enable serialization.
* Get agent populations surviving again.
* Have a neuron for rotating left and another neuron for rotating right.
* Have a neuron for moving forwards and another neuron for moving backwards.
* Add code to reset a cube that is not on its "feet".
* Increase heights of walls.
* Get shadows working properly.
* Implement command controls.
* Implement FPS controls.
* Re-enable communication.
* Conduct gender experiments.
* Conduct maturation experiment.
* Experiment with joints and more advanced body shapes.
* Make the graphics look more like the demos in Bullet.
* Display frames per second.
* Implement a way to use the mouse to re-size the viewports.
* Implement a way to use the mouse to re-position the viewports.
* Implement a way to use the mouse to move a screen like on an iPhone.
* Implement off-screen rendering.