Do we know if the brain does something similar to the Backpropagation algorithm?
Much of the immense computation required for a computer to learn comes from an algorithm called backpropagation, which has been at the heart of deep learning and AI's success. Popularized in the 1980s, it is used to figure out how each internal neuron should change so that the overall neural net behaves better. During learning, the network propagates error information backwards through its layers, "re-structuring" itself to reduce future errors. Mathematically, this idea is modeled with a clever piece of calculus: the chain rule of derivatives.
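To make the idea concrete, here is a minimal numerical sketch of backpropagation: a tiny two-layer network learns the XOR function by sending error derivatives backwards through the chain rule. All names, the architecture, and the hyperparameters here are illustrative choices, not something from the text above.

```python
import numpy as np

# Toy dataset: the XOR function, a classic task a single neuron cannot learn.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Randomly initialized weights for a 2 -> 8 -> 1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(out):
    return float(np.mean((out - y) ** 2))

initial_loss = mse(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2))

lr = 1.0  # learning rate (illustrative value)
for _ in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule carries error information
    # layer by layer from the output back towards the input.
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal propagated backwards

    # Each internal weight is nudged against the error it contributed to.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

final_loss = mse(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2))
```

After training, the mean squared error is far below its starting value, which is exactly the "re-structuring" described above: the backward-propagated error tells every internal weight how to change.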
Now, do we know if the brain does something similar to the backpropagation algorithm?
We do not yet have a full picture of how the brain works: neuroscientists have made many observations, but a unifying account is still missing. It has been suggested, however, that the brain is doing something similar to backpropagation, and perhaps something superior to what modern neural nets use [1].
Of course, this connects back to the roughly 20 W of power the brain runs on. That efficiency suggests the brain uses a learning algorithm superior to the backpropagation computers rely on today. If we can pinpoint what the brain does, model it mathematically, and then apply it in neural nets as a replacement for backpropagation, exciting things could happen: faster learning, or learning efficiency closer to the brain's.
This is a very exciting research problem at the intersection of AI and brain sciences, one that can drive progress in both fields and help pave the path toward AGI.
———————————————————
[1] Ford, M. Architects of Intelligence, p. 24.