Science Projects And Inventions

Artificial Neural Network

The study of human memory was transformed in 1943, when Warren McCulloch and Walter Pitts wrote a paper on how neurons might work. (Neurons are the cells that make up tissue in the parts of the nervous system involved in learning and recognition.) Six years later, D. O. Hebb described how the connections between neurons strengthen each time they are used.
In the early days of artificial intelligence research, Frank Rosenblatt (1928-1971)—a computer scientist at the Cornell Aeronautical Laboratory in New York—was studying how the eyes of a fly work. Rosenblatt observed that when a fly perceives danger, its escape reaction occurs faster than its brain could plausibly process the information. He went on to produce the perceptron, the first computing system to learn new skills using a neural network that mimicked human thought processes. The perceptron had a layer of interconnected input and output nodes, with each connection "weighted" to make it more or less likely to stimulate another node. Rosenblatt's Mark 1 perceptron followed in 1960, the first machine to "learn" to recognize and identify optical patterns.
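The learning rule Rosenblatt pioneered can be sketched in a few lines of modern code. This is an illustrative reconstruction, not his original implementation: weighted inputs feed a threshold unit, and each weight is nudged whenever the output is wrong.

```python
def predict(weights, bias, inputs):
    # Fire (output 1) only if the weighted sum crosses the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    # samples: list of (inputs, target) pairs, targets 0 or 1.
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Strengthen or weaken each connection in proportion
            # to its input's contribution to the mistake.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function, a simple linearly separable pattern.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

A single-layer perceptron like this can only learn linearly separable patterns, a limitation that motivated the multi-layer networks described next.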
Practicality grew with the advent of backpropagation ("backprop") networks, which add "hidden" layers of nodes that greatly increase what the network can compute. In 1982 John Hopfield introduced his model of neural networks, which can store memories of patterns so that when the network is presented with even partial information, it can retrieve the full pattern—rather like humans.
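Hopfield's pattern-completion idea can also be sketched briefly. The following is a minimal, assumption-laden toy (a single stored pattern, states of +1/-1, Hebbian weights): a corrupted cue is repeatedly updated until it settles into the stored memory.

```python
def store(patterns):
    # Build Hebbian outer-product weights (no self-connections)
    # from a list of +1/-1 patterns.
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    # Repeatedly update each node toward agreement with its
    # weighted neighbors until the state settles.
    state = list(state)
    n = len(state)
    for _ in range(steps):
        for i in range(n):
            total = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if total >= 0 else -1
    return state

# Store one 8-node pattern, then present it with two bits flipped.
pattern = [1, -1, 1, -1, 1, -1, 1, -1]
w = store([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]
noisy[3] = -noisy[3]
print(recall(w, noisy) == pattern)  # → True: the full memory is retrieved
```

Even from a partial or noisy cue, the network falls back into the complete stored pattern, which is the associative-memory behavior the text describes.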
Today, neural networks are the foundation for the optical character recognition employed in scanners, for weather forecasting, bomb detection, and even financial market prediction.
