A computer scientist friend said years ago, "Whenever someone says 'artificial intelligence,' instead, substitute the phrase 'statistical analysis system.'" And this is that, too: they are just admitting they're willing to draw conclusions far more daring than some might consider "normal."
The McCulloch-Pitts Neuron
Back in 1943, at the height of the Second World War, as Alan Turing and his team at Bletchley Park were working to mechanize attacks against the Nazis' Enigma code, in the United States the neuroscientist Warren McCulloch and the mathematician Walter Pitts devised a mathematical model of the processing that takes place in the brain when it recognizes highly complex patterns. The model connects many basic cells together in the same topological way that neurons are connected in the physical brain. Not coincidentally, those processing units are called artificial neurons or MCP (McCulloch-Pitts) neurons.
In the paper "A Logical Calculus of the Ideas Immanent in Nervous Activity," published in the Bulletin of Mathematical Biophysics, McCulloch and Pitts also gave an elementary but functional model of an artificial neuron. Theirs was only a mathematical model, though, with no concrete mapping to anything physical such as valves, diodes, or resistors.
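The behavior of such a unit can be sketched in a few lines of code. This is a minimal illustration, assuming the common textbook formulation of the MCP neuron: binary inputs, a fixed integer threshold, and a binary output that fires when enough inputs are active (the original model also allowed inhibitory inputs, omitted here for brevity).

```python
# A minimal sketch of a McCulloch-Pitts neuron (textbook formulation):
# binary inputs, a fixed threshold, binary output.

def mcp_neuron(inputs, threshold):
    """Fire (return 1) when the number of active inputs reaches the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# With a threshold of 2, a two-input unit computes logical AND:
print(mcp_neuron([1, 1], 2))  # 1
print(mcp_neuron([1, 0], 2))  # 0

# With a threshold of 1, the same unit computes logical OR:
print(mcp_neuron([0, 1], 1))  # 1
```

Depending on the threshold chosen, a single such unit can realize elementary logic gates such as AND and OR, which is exactly the sense in which McCulloch and Pitts framed neural activity as a "logical calculus."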
The next major development in neural networks was the perceptron, which Frank Rosenblatt introduced 15 years later, in 1958. Essentially, the perceptron is an evolution of the MCP neuron with an additional preprocessing layer responsible for detecting patterns. Today, the perceptron is considered the simplest form of artificial neuron and is