What exactly is a machine? What is computation? What is a computer? Straightforward as these questions may seem, theoretical computer science gives us the principles and precise definitions needed to answer them. For an application, consider BEHOLD! An Artificial Intelligent Rock, which demonstrates how we can transform an ordinary rock into a sophisticated computer capable of running modern AI systems. While theoretical computer science is the foundation of computation across all fields of computer science, it remains, by nature, a theoretical discipline. One doesn't need to study it to understand the practical engineering of hardware and software, and vice versa. Interestingly, computers existed long before theoretical computer science was formally defined, and many of its theories were derived from practical engineering insights. The practical and theoretical sides of computer science have since influenced and informed each other in a symbiotic relationship; the real distinction lies in how we work within each. Theoretical computer science is mostly a pen-and-paper activity, mathematical and proof-oriented, whereas practical computer science is implementation-centric and fine-tuned to tangible, real-world problems.

What makes theoretical computer science stand out, though, is that it lets us pin down exactly where computation is fundamentally limited and where there is still room to innovate. Practical engineering can only go so far without it: far enough for the mundane computational tasks we routinely confront, but no further. Things get really interesting at the edge of what practice alone can reach, where the only way to progress is to work at the leading edge of computation itself, i.e., theoretical computer science. I talk about this more in On the Nature of Sciences, which ties directly into the nature of the sciences (theory, practice, application) and meta-philosophy. How do we define and measure information? Are there physical constraints on computation? How do the laws of the universe dictate what is and isn't computable? Can we find a unifying theory for data structures? Just as we have complexity classes for problems, can we have a similar classification for data structures based on the efficiency of their operations? Is there a limit to algorithmic self-improvement? Can an algorithm continually optimize itself, or is there an upper bound on how efficient it can become (throwback to Machine Super Intelligence)? Such questions about computation directly drive innovation in practical computer science, providing theoretical ‘blueprints’ for applied CS to play with.


What is Theoretical Computer Science

In a philosophical sense, the relationship of theoretical computer science (TCS) to AI can be likened to the relationship between fundamental physics and engineering. Just as understanding the fundamental laws of physics allows us to grasp why bridges stand or planes fly, understanding the theoretical underpinnings of computation and information gives us insight into the potential and limitations of AI.

In terms of philosophy and deep understanding:

  1. Nature of Computation: At its core, TCS asks: "What does it mean to compute?" This is foundational. When we talk about an AI "thinking" or "learning," we're talking about a series of computations. Understanding the nature of these computations, their limits, and their capabilities, provides insight into what AI can and cannot achieve.
  2. Definition of Intelligence: AI's ultimate goal is often framed as mimicking or replicating human intelligence. But what is intelligence? TCS provides frameworks for understanding problem-solving, learning, and decision-making in rigorous, algorithmic terms. This can help refine our definitions of intelligence and how it might be recreated artificially.
  3. Exploring Possibilities: Just as philosophy often pushes the boundaries of what we consider "possible," TCS can expand our understanding of possible algorithms, computations, and solutions. For example, the study of non-classical computing models, like quantum computing or biological computing, falls under TCS and challenges our conventional notions of computation.
  4. Understanding Limitations: Philosophically, understanding our limits is as essential as understanding our capabilities. TCS offers insights into problems that are provably unsolvable (like the Halting Problem) or intractably hard (like many NP-complete problems). This gives a sobering counterbalance to the often hyperbolic claims about AI's potential. A sketch of the classic argument behind the Halting Problem appears after this list.
  5. Ethical Implications: TCS, especially in areas related to cryptography and information theory, intersects with topics of privacy, security, and data integrity. These are not just technical challenges but philosophical and ethical ones. What does it mean for an AI to have access to "private" data, and how do we mathematically define and ensure that privacy?
  6. Model of Reality: Much like philosophy strives to understand the nature of reality, TCS provides models (like automata, Turing machines, or lambda calculus) to understand the "reality" of computation. How closely these models match the real-world functioning of AI systems can provide deep insights into the system's behavior.
  7. Structuralism vs. Functionalism: In philosophy of mind, there's a debate between structuralism (the idea that mental processes are defined by their structure) and functionalism (the idea that mental processes are defined by their function). Similar debates exist in AI and TCS. For example, are neural networks defined by their architecture (structure) or by their training and operational procedures (function)? TCS provides tools to investigate these questions formally.
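To make the Halting Problem concrete, here is a minimal Python sketch of the classic diagonalization argument. The `halts` oracle below is hypothetical by design; the whole point of the argument is that no real implementation of it can exist.

```python
def halts(program, argument):
    # Hypothetical oracle: suppose it could always correctly answer
    # whether program(argument) eventually halts.
    raise NotImplementedError("No general implementation can exist.")

def paradox(program):
    # Do the opposite of whatever `halts` predicts about a program
    # analyzing itself.
    if halts(program, program):
        while True:      # oracle says we halt, so loop forever
            pass
    else:
        return           # oracle says we loop, so halt immediately

# Feeding `paradox` to itself forces a contradiction:
# paradox(paradox) halts if and only if halts(paradox, paradox)
# says it doesn't. No total, always-correct `halts` can be written.
```

Whatever answer `halts` gives about `paradox(paradox)`, `paradox` does the opposite, which is the contradiction at the heart of Turing's proof.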

The “Ghosts” in a Machine

  1. Logic and Boolean Algebra (from Discrete Mathematics): The binary nature of Boolean algebra (true/false, 0/1) maps naturally to electronic circuits (on/off, high voltage/low voltage). Logic gates, built on the principles of Boolean algebra, are the fundamental building blocks of computer hardware. A sketch of gates composing into an adder appears after this list.
  2. Algorithms: At its core, an algorithm is a step-by-step procedure for solving a problem. Without the notion of algorithms, the idea of systematic computation (which computers are designed to do) wouldn't exist.
  3. Automata Theory: This provides a foundation for understanding how different machines or systems can process inputs and generate outputs. For example: finite automata capture the pattern matching behind regular expressions and lexical analyzers, pushdown automata model the parsers behind programming languages, and Turing machines model general-purpose computation. A sketch of a finite automaton follows this list.
  4. Computability Theory: Understanding what problems can and cannot be solved algorithmically is foundational. If one appreciates that certain problems are solvable through mechanical processes, it naturally leads to the question: "How can we build machines to execute these processes?"
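To ground the first point, here is a minimal Python sketch (plain functions, not a real hardware description language) showing how two Boolean gates compose into a half-adder, the basic building block of binary arithmetic circuits; the names are illustrative.

```python
def AND(a: int, b: int) -> int:
    # Models an AND gate on single bits.
    return a & b

def XOR(a: int, b: int) -> int:
    # Models an XOR (exclusive-or) gate on single bits.
    return a ^ b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two bits: XOR yields the sum bit, AND yields the carry bit."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = sum {s}, carry {c}")
```

Chaining half-adders (plus OR gates for carry propagation) yields full adders, and from there the arithmetic units of real processors, which is exactly the Boolean-algebra-to-hardware mapping described above.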
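And for the automata item, here is a minimal sketch of a deterministic finite automaton (DFA), the simplest machine in the automata hierarchy. The state names and transition table are illustrative; this toy DFA accepts binary strings containing an even number of 1s.

```python
# Transition table: (current state, input symbol) -> next state.
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}

def accepts(word: str, start: str = "even", accepting=("even",)) -> bool:
    state = start
    for symbol in word:
        # One deterministic step per input symbol, no memory beyond the state.
        state = TRANSITIONS[(state, symbol)]
    return state in accepting

print(accepts("1101"))  # False: three 1s
print(accepts("1001"))  # True: two 1s
```

The machine reads input one symbol at a time and produces an accept/reject output, which is precisely the input-processing, output-generating behavior automata theory formalizes.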