The black swan, in economic theory, is a metaphor for an event that comes as a surprise, has a major effect, and is a statistical outlier. Such phenomena can still be framed in the language of probability theory.
Intelligence = Prediction

Predictive Processing Theory

According to this theory, the brain is constantly generating predictions about the world based on its prior knowledge and experiences. These predictions are then compared to incoming sensory information, and any discrepancies between the predicted and actual sensory information are used to update the brain's models of the world.

In this view, intelligence is seen as the ability to generate accurate and flexible predictions about the world, allowing individuals to anticipate and adapt to new situations more effectively. This theory has been used to explain a wide range of cognitive processes, from perception and attention to decision-making and language processing.
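The update loop described above can be sketched as a simple error-correction process. This is a minimal illustration, not a faithful neural model: the "brain" holds a single scalar belief and nudges it toward noisy sensory observations in proportion to the prediction error. All names and values are invented for the example.

```python
import random

random.seed(0)

true_signal = 10.0      # actual state of the world
belief = 0.0            # the brain's prior prediction
learning_rate = 0.2     # how strongly prediction errors update the model

for step in range(50):
    observation = true_signal + random.gauss(0, 1.0)   # noisy sensory input
    prediction_error = observation - belief            # predicted vs. actual
    belief += learning_rate * prediction_error         # update internal model

print(round(belief, 1))  # belief converges toward the true signal
```

The belief starts far from reality, but repeated prediction-error corrections pull it close to the true signal, which is the core intuition of predictive processing.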

Now, I emphasize that modern artificial intelligence can be perceived as probabilistic machine learning, grounded in probability theory:

Bayesian Theory of Mind (BTM)

The BTM is a theoretical framework positing that the brain is a Bayesian inference machine that performs probabilistic inference to make predictions about the world. It suggests that the brain represents beliefs about the world as probability distributions and updates those distributions based on incoming sensory information. One key application of such a model is how language is learned and operates: architectures like GPT rely heavily on the autoregressive model, i.e., a probabilistic approach to language modeling that represents the probability distribution over possible next words given the previous words in a sequence. Perhaps, in terms of integrating natural language into AI systems, one would design such a language-and-thinking system around BTM theory.
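The autoregressive idea can be shown with a toy model: estimate the probability of the next word given the previous word from bigram counts. GPT-style models do the same thing with neural networks conditioned on long contexts; the tiny corpus below is purely illustrative.

```python
from collections import Counter, defaultdict

# A minuscule "training corpus", invented for this sketch.
corpus = "the dog barks . the dog runs . the cat runs .".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_distribution(word):
    """Return P(next word | previous word) as a dict."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("dog"))  # distribution over words after "dog"
```

Here "dog" is followed by "barks" and "runs" equally often, so each gets probability 0.5. Scaling this idea up, with learned representations instead of raw counts, is the essence of probabilistic language modeling.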

The pursuit of Artificial Intelligence

What separates the pursuit of "true" AI from merely "artificial" intelligence? Arguably, the confluence of probability theory, information theory, and the physical basis of intelligence. How?

The Bayesian theory of mind proposes that the human mind is a probabilistic inference engine that uses Bayesian reasoning to update beliefs and make predictions about the world. According to this theory, humans constantly update their beliefs about the world based on new evidence, and they use probabilistic models to make decisions and predictions.
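A single round of this belief updating is just Bayes' rule: a prior belief is revised by the likelihood of new evidence. The numbers below (a rare hypothesis and a fairly reliable test) are made up for illustration.

```python
prior = 0.01             # P(hypothesis) before seeing evidence
likelihood = 0.95        # P(evidence | hypothesis)
false_positive = 0.05    # P(evidence | not hypothesis)

# Total probability of observing the evidence.
evidence = prior * likelihood + (1 - prior) * false_positive

# Bayes' rule: posterior belief after the evidence.
posterior = prior * likelihood / evidence

print(round(posterior, 3))  # ≈ 0.161
```

Note how the posterior (about 16%) is far below the test's 95% accuracy: the low prior still dominates, which is exactly the kind of correction Bayesian updating enforces.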

In the context of AI, the Bayesian theory of mind has inspired many probabilistic models and algorithms that can be used for inference, learning, and decision-making. For example, Bayesian networks are graphical models that represent probabilistic relationships between variables, and they can be used for tasks such as classification, prediction, and diagnosis.
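As a sketch of the Bayesian-network idea, here is the standard textbook sprinkler network (Rain influences Sprinkler; both influence WetGrass), queried by brute-force enumeration over the joint distribution. The conditional probability tables are the usual illustrative values, not data.

```python
from itertools import product

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True:  {True: 0.01, False: 0.99},   # P(Sprinkler | Rain)
               False: {True: 0.4,  False: 0.6}}
P_wet = {(True, True): 0.99, (True, False): 0.9,   # P(Wet=True | Sprinkler, Rain)
         (False, True): 0.8, (False, False): 0.0}

def joint(r, s, w):
    """Joint probability P(Rain=r, Sprinkler=s, Wet=w) from the network."""
    pw = P_wet[(s, r)] if w else 1 - P_wet[(s, r)]
    return P_rain[r] * P_sprinkler[r][s] * pw

# Diagnosis query: P(Rain | WetGrass = True), by summing out Sprinkler.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(round(num / den, 3))  # ≈ 0.358
```

Enumeration is exponential in the number of variables, which is why real systems use smarter inference, but it makes the semantics of the network explicit.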

The book "Probability Theory: The Logic of Science" by E. T. Jaynes touches on ideas closely related to the Bayesian theory of mind. The book is largely focused on the use of probability theory and Bayesian inference for scientific reasoning and decision-making, and it includes many examples and applications from various fields, including cognitive science and AI.

Probability theory is an important foundation for many aspects of AI, particularly in areas that deal with uncertainty and incomplete information. In fact, many AI techniques and algorithms are based on probabilistic models and probabilistic reasoning, such as Bayesian networks, hidden Markov models, and Monte Carlo methods.
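Of the techniques just named, Monte Carlo methods are the easiest to show in a few lines: approximate a quantity by random sampling. The classic demonstration estimates pi by sampling points in the unit square and counting how many land inside the quarter circle.

```python
import random

random.seed(42)
n = 100_000

# A point (x, y) with x, y ~ Uniform(0, 1) lies inside the quarter circle
# when x^2 + y^2 <= 1; that happens with probability pi / 4.
inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
             for _ in range(n))

pi_estimate = 4 * inside / n
print(round(pi_estimate, 2))  # close to 3.14
```

The same sample-and-average trick underlies Monte Carlo integration, particle filters, and MCMC inference; only the distribution being sampled changes.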

The probabilistic approach to AI is particularly useful when dealing with real-world problems that involve uncertainty, noise, and incomplete data. For example, in image recognition, it may not be possible to define a set of hard rules for identifying objects in an image due to variations in lighting conditions, occlusions, and other factors. Instead, a probabilistic model can be trained to recognize objects based on their statistical properties, using techniques such as deep learning and convolutional neural networks.

Furthermore, probabilistic reasoning and Bayesian inference are used in AI to make decisions and optimize actions under uncertainty. For example, in reinforcement learning, an agent learns to maximize a reward signal by taking actions that have the highest expected value, based on the current state and a probabilistic model of the environment.
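Picking the action with the highest expected value, as described above, reduces to a small computation once the outcome model is written down. The actions, outcome probabilities, and rewards below are invented for illustration.

```python
# Each action maps to a list of (probability, reward) outcomes
# under an assumed probabilistic model of the environment.
actions = {
    "safe":  [(1.0, 5.0)],                   # guaranteed modest reward
    "risky": [(0.3, 20.0), (0.7, -2.0)],     # big payoff, likely small loss
}

def expected_value(outcomes):
    return sum(p * r for p, r in outcomes)

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)  # the action with the highest expected reward
```

Here the risky action's expected value is 0.3 * 20 - 0.7 * 2 = 4.6, so the safe action (expected value 5.0) wins; a reinforcement learning agent learns these values from experience instead of reading them from a table.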

Therefore, while there are many approaches to AI, probability theory plays an important role in enabling AI systems to reason and make decisions under uncertainty, and to learn from incomplete and noisy data.

Probability Theory in Machine Learning: Probabilistic Perspective

Thus there are many different kinds of machine learning, depending on the nature of the task T we wish the system to learn, the nature of the performance measure P we use to evaluate the system, and the nature of the training signal or experience E we give it. We will cover the most common types of ML, but from a probabilistic perspective. Roughly speaking, this means that we treat all unknown quantities (e.g., predictions about the future value of some quantity of interest, such as tomorrow's temperature, or the parameters of some model) as random variables that are endowed with probability distributions describing a weighted set of possible values the variable may have.

There are two main reasons to adopt a probabilistic approach. First, it is the optimal approach to decision making under uncertainty. Second, probabilistic modeling is the language used by most other areas of science and engineering, and thus provides a unifying framework between these fields. As Shakir Mohamed, a researcher at DeepMind, put it:

"Almost all of machine learning can be viewed in probabilistic terms, making probabilistic thinking fundamental. It is, of course, not the only view. But it is through this view that we can connect what we do in machine learning to every other computational science, whether that be in stochastic optimisation, control theory, operations research, econometrics, information theory, statistical physics or bio-statistics. For this reason alone, mastery of probabilistic thinking is essential."
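Treating an unknown quantity as a random variable with a distribution, as the excerpt describes, can be made concrete: below, "tomorrow's temperature" is represented not as a single number but as a Gaussian we can sample from. The mean and spread are invented for the example.

```python
import random
import statistics

random.seed(1)

# Assumed predictive distribution: temperature ~ Normal(22.0, 3.0) degrees.
mean_temp, std_temp = 22.0, 3.0
samples = [random.gauss(mean_temp, std_temp) for _ in range(10_000)]

# The samples recover the distribution's parameters, and quantities like
# P(temperature > 25) fall out of the same representation.
p_hot = sum(t > 25.0 for t in samples) / len(samples)
print(round(statistics.mean(samples), 1), round(p_hot, 2))
```

The payoff of the probabilistic view is that the same object answers both "what is the prediction?" (the mean) and "how likely is a hot day?" (a tail probability), rather than a point estimate answering only the first.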