Sunday 16 January 2011

Completed and Next Actions

I, Robot (2004)

Update 9/6/2011: I've moved this over to Google Docs.

8 comments:

  1. Idea regarding vision: two eyes?

  2. Hi Anonymous,

    Vision with two eyes would be ideal. I'm currently keeping it at one eye to reduce processing costs. Once I get my hands on more powerful hardware, two eyes will definitely be the way to go.

    An interesting question is whether I should render each view with perspective or without.

  3. Ah, I see -- the perennial problem. Luckily we are entering the era of massively parallel computing. Are you going to use OpenCL/CUDA for your neural networks once you pick up some new hardware?

    I would go with perspective. Parallel projection markedly reduces the amount of information available to your organisms, at least in more complex environments. Do you mind if I ask why you are considering glOrtho?

  4. Hi Anonymous,

    It would be great to use OpenCL/CUDA. The tricky part is balancing development speed, so I can hit demonstrable milestones, against the time spent learning new technologies.

    I was thinking about using orthogonal projection for each eye because we don't have perspective when we look out of one eye. Instead we combine the orthogonal projection from each eye to create a perspective projection.

    Thanks for stopping by and leaving comments :)

  5. Hmm. I am not sure that is the case. I could be wrong, so hear me out and let me know what you think.

    As I understand things, all points in the view volume are projected orthogonally onto its back wall in a parallel projection. It is as though the camera is infinitely far away from the objects but also has an infinite ability to zoom. This has two important results: 1. objects in a line orthogonal to the view volume's back wall will be occluded by the first object; and 2. objects appear the same size regardless of their distance from the camera.

    Whether this is the case for each eye is something you can verify experimentally by looking at a table straight-on, i.e. so that you are normal to the plane formed by its front two legs. If you look at the table with only one eye, you can still see all four legs, and the table appears to get smaller as you walk backward. Neither of these results would be possible if each eye employed parallel projection, because each eye does view the world from a certain perspective.

    External sources seem to agree. The OpenGL docs, in a section comparing two perspective functions (9.080), indicate that glFrustum ought to be used in creating stereo views to achieve the correct visual results. The Wikipedia article on Perspective Projection also states that "[w]hen the human eye looks at a scene, objects in the distance appear smaller than objects close by - this is known as perspective".

    For a really good example of perspective vs. parallel projections, check slide 3 of the PDF available by googling: "Computer Graphics Projections and Transformations in OpenGL" Freiburg. All of the balls in the parallel projection look the same size regardless of their distance.
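
    To make the difference concrete, here is a minimal sketch in legacy fixed-function OpenGL. glFrustum and glOrtho are the real calls; the helper names and the eyeShift parameter are just illustrative, with the off-axis shift standing in for a proper per-eye stereo setup.

    ```cpp
    #include <GL/gl.h>

    // Perspective projection for one eye. An asymmetric (off-axis) frustum,
    // shifted horizontally per eye, is the usual way to build a stereo pair.
    void usePerspective(double eyeShift) {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustum(-0.5 + eyeShift, 0.5 + eyeShift, // left, right
                  -0.5, 0.5,                       // bottom, top
                   1.0, 100.0);                    // near, far
    }

    // Parallel (orthographic) projection: every object keeps the same
    // on-screen size no matter how far it is from the camera.
    void useParallel() {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-10.0, 10.0, -10.0, 10.0, 1.0, 100.0);
    }
    ```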

    Having said all of this, I am no expert when it comes to OpenGL, etc., so I might not be understanding everything properly.

    Also, yes, that is always problematic. Hopefully OpenCL will be around to stay so that developers can start taking advantage of it with some reliability.

  6. Hi Anonymous,

    Thanks for clearing that up. I was walking around my office with one eye closed, trying to figure out why things looked like they still had perspective. Two eyes, each with a perspective projection, sounds like the way to go.

    Thanks also for the references. I'll check them out soon.

    As for scaling things up, another path I was considering is a client-server approach like in massively multiplayer games. Each client could process a few simulated robots and the server could process the world. One thing I'm unsure about is how all the physics would scale up. A rough sketch of the split is below.
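
    Here is a bare-bones sketch of that split, with threads standing in for networked clients; every name in it (Robot, clientWorker, and so on) is an illustrative assumption rather than anything from Polyworld:

    ```cpp
    #include <cstdio>
    #include <thread>
    #include <vector>

    struct Robot { float sensors[4]; float actuators[2]; };

    // Stand-in for one client: evaluates the "neural networks" for its
    // slice of robots. In the real system this would run on another machine.
    void clientWorker(std::vector<Robot>& world, size_t begin, size_t end) {
        for (size_t i = begin; i < end; ++i) {
            // placeholder for a real network evaluation
            world[i].actuators[0] = world[i].sensors[0] * 0.5f;
            world[i].actuators[1] = world[i].sensors[1] * 0.5f;
        }
    }

    int main() {
        const size_t kRobots = 32, kClients = 4;
        std::vector<Robot> world(kRobots);

        // Server loop: step the world once, then farm the expensive brain
        // updates out to the clients and wait for their results.
        for (int tick = 0; tick < 10; ++tick) {
            // ... step Bullet physics for the whole world here ...
            std::vector<std::thread> clients;
            const size_t slice = kRobots / kClients;
            for (size_t c = 0; c < kClients; ++c)
                clients.emplace_back(clientWorker, std::ref(world),
                                     c * slice, (c + 1) * slice);
            for (auto& t : clients) t.join();
        }
        std::printf("done\n");
        return 0;
    }
    ```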

  7. Hi Binh,

    That is a really interesting idea. One of my major concerns with running the neural networks through CUDA/OpenCL has been that the latency would become problematic for large numbers of organisms; a distributed approach would easily solve that. The physics would then be a perfect candidate for GPGPU on the server, which would allow the world to scale. I am not sure if Bullet would support it but, if not, it might be worth switching engines.

    ... That's all assuming you are wanting to go for a fairly large system. How many units are you thinking?

  8. Hi Anonymous,

    Could you elaborate a little more on your concerns about latency when running neural networks through CUDA/OpenCL? Do you mean that processing would still be slow even though things are being done in parallel?

    I like your idea about using GPGPU on the server. I hadn't thought about that at all. Will be adding that to the list.

    I was running about 300 to 600 units before moving the Polyworld code to OGRE and Bullet. At the lower end, there was an update a second; at the upper end, things slowed to a crawl with an update every 10 seconds or so.

    Now that I've moved Polyworld to OGRE and Bullet, I'm running about 32 units with a much larger neural network due to the extra neurons for vision. I should be increasing the number of units, but I've been putting that off a little because I don't want to face it just yet.

    Ideally, I want thousands of units, though I might be able to get away with a few hundred. With the larger neural networks, OGRE graphics, and Bullet physics, I may have to change things because I might be stuck with fewer than a hundred units.

    Things I can change include making agents live longer individually and reproduce less often, so I can have smaller populations of hardier robots; a sketch of those knobs follows.
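
    As a sketch, the knobs might look something like this; the struct and its values are hypothetical, not actual Polyworld settings. The trade-off is fewer, longer-lived robots per tick, which keeps the physics and neural load inside a fixed compute budget.

    ```cpp
    // Hypothetical population knobs for trading population size
    // for individual longevity under a fixed compute budget.
    struct PopulationConfig {
        int   maxAgents   = 100;    // hard cap on concurrent robots
        int   maxLifespan = 20000;  // ticks before death from old age
        float birthRate   = 0.001f; // per-tick reproduction probability
    };
    ```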
