Wednesday 3 December 2008
There was a discussion at Slashdot recently about good neural network references. As always, I'm impressed by the quality and volume of feedback that the Slashdot community gives.

Artificial Intelligence Links:
  • Reading Guide To AI Design & Neural Networks?

    "I'm a PhD student in theoretical physics who's recently gotten quite interested in AI design. ... Due to my background, I figure that the 'abstract' theory would be mostly suited for me, so I would like to ask for a few book suggestions or other directions."

  • AI != design brain

There is a very big difference between AI, which is based on guesses about how "intelligence" works, and studies of brain function. I'm going to make a totally unjustified sweeping generalisation and suggest that one reason that AI has generally been a failure is because we have had quite wrong ideas about how the brain actually works. That's to say, the focus has been on how the brain seems to be like a distributed computer (neurons and the axons that relay their output) because up till now nobody has really understood how the brain stores and organises memory in parallel, which seems to be the key to it all, and is all about the software.

So my feeling is that the first people really to get anywhere with AI will either work for Google or be the neurobiologists who finally crack what is actually going on in there. If I wasn't close to retirement, and wanted to build a career in AI, I'd be looking at how mapreduce works, and the work being done building on that, rather than robotics. I'd also be looking at seriously parallel processing.

    So my initial suggestion is nothing to do with conventional AI at all - look at Programming Erlang, and anything you can find about how Google does its stuff.
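
The "seriously parallel" pattern this commenter points at — many small workers applying one function to many inputs — can be sketched without Erlang. A minimal Python parallel map (the function and names here are illustrative, not from the post):

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # Stand-in for a unit of work; any pure function fits here.
    return n * n

def parallel_map(fn, items, workers=4):
    # Fan the items out across a pool of workers; results come back in input order.
    # For CPU-bound work you would swap in ProcessPoolExecutor to sidestep the GIL.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))

print(parallel_map(work, range(5)))  # [0, 1, 4, 9, 16]
```

Erlang gets the same effect with lightweight processes and message passing; the pool abstraction above is just the shortest way to show the shape of the idea.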

  • MapReduce: Simplified Data Processing on Large Clusters

    MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper.

    Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.

    Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google's clusters every day.
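
The paper's canonical example is word counting. The two user-supplied functions and the runtime's grouping step can be sketched on a single machine in a few lines of Python (the real system distributes each phase across a cluster):

```python
from collections import defaultdict

def map_phase(documents):
    # User-supplied map function: emit an intermediate (word, 1) pair per word.
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # The runtime's job: group all intermediate values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # User-supplied reduce function: merge the values associated with each key.
    return {key: sum(values) for key, values in groups.items()}

docs = ["the cat", "the dog"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'the': 2, 'cat': 1, 'dog': 1}
```

Everything the abstract credits to the run-time system — partitioning, scheduling, fault handling, inter-machine communication — lives in the `shuffle` seam here; the user only ever writes the map and reduce functions.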

  • ?

    As both educators and researchers, we are amazed at the hype that the MapReduce proponents have spread about how it represents a paradigm shift in the development of scalable, data-intensive applications. MapReduce may be a good idea for writing certain types of general-purpose computations, but to the database community, it is:

    1. A giant step backward in the programming paradigm for large-scale data intensive applications

    2. A sub-optimal implementation, in that it uses brute force instead of indexing

    3. Not novel at all -- it represents a specific implementation of well known techniques developed nearly 25 years ago

    4. Missing most of the features that are routinely included in current DBMS

    5. Incompatible with all of the tools DBMS users have come to depend on
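
Point 2 is the easiest to make concrete: a MapReduce job answers a selective query by scanning everything, while a DBMS index answers it with a direct lookup. A toy illustration (the records are hypothetical):

```python
records = [("alice", 30), ("bob", 25), ("carol", 41)]

def scan_lookup(name):
    # Brute force: examine every record, which is effectively what a
    # MapReduce job does when no index exists.
    return [age for n, age in records if n == name]

# A DBMS builds an index once, then reuses it for every query.
index = {n: age for n, age in records}

def indexed_lookup(name):
    # O(1) hash lookup instead of an O(n) scan.
    return index.get(name)

print(scan_lookup("bob"), indexed_lookup("bob"))  # [25] 25
```

The gap is invisible at three rows and decisive at a few billion, which is the heart of the database community's complaint.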

  • Bots Get Smart

    Because a high fun factor is what sells, the video-game industry has become increasingly keen to make use of developments in AI research—and computer scientists have taken notice. A watershed came in 2000, when John E. Laird, a professor of engineering at the University of Michigan, and Michael van Lent, now chief scientist at Soar Technology, in Ann Arbor, Mich., published a call to arms that described commercial video games as “AI’s killer application.” Their point was that research to improve AI for such games would create spin-offs in many other spheres.

  • Neural Networks

An introduction to neural networks and how they work.

  • AI : Neural Network for beginners (Part 1 of 3) by Sacha Barber

    Synapses can be excitatory or inhibitory.

    Spikes (signals) arriving at an excitatory synapse tend to cause the receiving neuron to fire. Spikes (signals) arriving at an inhibitory synapse tend to inhibit the receiving neuron from firing.

    The cell body and synapses essentially compute (by a complicated chemical/electrical process) the difference between the incoming excitatory and inhibitory inputs (spatial and temporal summation).

    When this difference is large enough (compared to the neuron's threshold) then the neuron will fire.

    Roughly speaking, the faster excitatory spikes arrive at its synapses the faster it will fire (similarly for inhibitory spikes).
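
The summation-and-threshold behaviour described above is what the classic artificial-neuron model abstracts: excitatory synapses become positive weights, inhibitory ones negative, and the neuron fires when the weighted sum clears its threshold. A minimal sketch (weights and threshold chosen purely for illustration):

```python
def neuron_fires(inputs, weights, threshold):
    # Excitatory synapses get positive weights, inhibitory ones negative;
    # the cell body "computes the difference" as a weighted sum.
    activation = sum(x * w for x, w in zip(inputs, weights))
    # The neuron fires only when the net input reaches its threshold.
    return activation >= threshold

# Two excitatory spikes outweigh one inhibitory spike here:
print(neuron_fires([1, 1, 1], [0.6, 0.6, -0.4], threshold=0.5))  # True
```

Faster spike arrival maps onto larger input values in this model, which is how the "faster in, faster out" behaviour is usually approximated.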

Emacs and GDB Links:
  • Using GDB under GNU Emacs

    A special interface allows you to use GNU Emacs to view (and edit) the source files for the program you are debugging with GDB.

  • A guided tour of Emacs

    The GNU Emacs Manual calls Emacs the extensible, customizable, self-documenting real-time display editor, but this description tells beginners little about what Emacs is capable of. To give you an idea, here is a sampling of the things you can do with Emacs:

  • Emacs FAQ on Screen Display Issues

    How to show the cursor's column position?

    “Alt+x column-number-mode”. To always have it on, put the following code in your “~/.emacs” file: “(column-number-mode t)”.

