
Can a rock become artificially intelligent? Yes.
In theoretical computer science, almost any machine can be turned into a computational model, given a clever enough mechanism. Most of what we know about computers comes from one particular niche of computing: digital computers, where millions of tiny electrical switches are wired together into a machine that happens to be Turing complete. Yet the idea of computation extends well beyond that mechanism. With a clever enough design, we can turn practically any machine or physical system into a computer! We could even turn animals into a computer, given clever rules and knowledge of how those animals behave.
For a system to be considered Turing complete, at a minimum it needs the following (a runnable sketch follows this list):
- A way to read and write symbols: The system must be able to manipulate symbols based on rules. In a rock-based system, for instance, different positions or markings on the rock could denote different symbols.
- A form of memory or state: There needs to be some way to remember past actions or store data. This could be done by carving symbols into the rock or arranging a series of rocks in particular patterns.
- A set of deterministic rules to transition from one state to another: This could be a predefined set of carving rules or rock-arranging rules.
- A way to conditionally branch based on a symbol or state: The system needs some method of making decisions based on the current state and symbol.
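To make this concrete, here is a minimal sketch of such a system as a Turing-machine simulator in Python. The encoding is an assumption for illustration: the tape is a dictionary mapping positions to symbols (think of marked spots on or around the rock), and the names `Rule` and `run_tm` are hypothetical, not from any particular library.

```python
# A minimal Turing-machine sketch: read/write symbols, memory (the tape),
# deterministic transition rules, and conditional branching on (state, symbol).
from dataclasses import dataclass

@dataclass
class Rule:
    write: str       # symbol to write at the head position
    move: int        # +1 = move right, -1 = move left, 0 = stay
    next_state: str  # state to transition into

def run_tm(tape, rules, state="start", halt="halt", max_steps=10_000):
    """Run the machine until it reaches the halt state (or hits max_steps).

    tape  : dict mapping position -> symbol ("_" means blank)
    rules : dict mapping (state, symbol) -> Rule
    """
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, "_")   # read the current symbol
        rule = rules[(state, symbol)]  # look up the deterministic rule
        tape[head] = rule.write        # write (e.g. carve a mark)
        head += rule.move              # move the head
        state = rule.next_state        # branch based on state and symbol
    return tape

# Example program: flip every mark until a blank is reached ("X" <-> "O").
rules = {
    ("start", "X"): Rule(write="O", move=+1, next_state="start"),
    ("start", "O"): Rule(write="X", move=+1, next_state="start"),
    ("start", "_"): Rule(write="_", move=0,  next_state="halt"),
}
tape = {0: "X", 1: "O", 2: "X"}
print(run_tm(tape, rules))  # {0: 'O', 1: 'X', 2: 'O', 3: '_'}
```

Whether the tape is silicon, punched cards, or carved rocks is irrelevant to the logic; only the four ingredients above matter.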
And once these requirements are satisfied, the system can, in principle, run any algorithm known to humankind. To give a sense of what kinds of algorithms that unlocks:
Algorithms that require the full power of a Turing Machine (TM):
- Search Algorithms: Most search algorithms, especially those that traverse data structures like trees or graphs, require the more general computational capabilities of a Turing machine. For instance, Depth-First Search (DFS) or Breadth-First Search (BFS) on a graph needs memory to keep track of visited nodes (a short sketch follows this list).
- Sort Algorithms: Algorithms like QuickSort or MergeSort would need Turing machine capabilities due to the need for recursive calls and more extensive memory manipulation.
- Graph Algorithms: Complex graph algorithms, such as Dijkstra's algorithm or the Floyd-Warshall algorithm, would also fit into the Turing machine category due to their computational and memory requirements.
- Learning Algorithms: These typically involve iterative procedures, optimization processes, and the manipulation of large datasets. Neural network training, decision tree construction, clustering, and similar methods all require the capabilities of a Turing machine, and modern deep learning in particular is computationally intensive and memory-hungry.
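As an illustration of the memory point above, here is a minimal BFS sketch in Python. The graph and the function name `bfs` are hypothetical, chosen just for this example; the point is that the `visited` set and the queue are exactly the kind of read/write state a memoryless device cannot provide.

```python
# Breadth-First Search: the visited set and the frontier queue are the
# "memory" that makes this more than a simple lookup table of rules.
from collections import deque

def bfs(graph, start):
    """Return the nodes reachable from `start`, in breadth-first order."""
    visited = {start}        # memory: remembers which nodes we have seen
    order = []
    queue = deque([start])   # memory: frontier of nodes to expand next
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:  # conditional branch on stored state
                visited.add(neighbor)
                queue.append(neighbor)
    return order

# A small hypothetical graph as an adjacency list.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```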
Yet although such machines can compute in theory, in practice they are constrained by their physical substrate. If Charles Babbage's Analytical Engine had been built and given the right set of instructions (in the form of punched cards), it should, in principle, have been able to perform the computations required by learning algorithms. However, there are some practical challenges:
- Speed: The Analytical Engine operates mechanically, so its operations per second would be many orders of magnitude slower than even the earliest electronic computers. Modern learning algorithms, especially training deep neural networks, require billions or even trillions of operations. Training a model on Babbage's engine might take millennia or longer (a rough estimate follows this list)!
- Memory: While the Analytical Engine had storage (in the form of the "store" for holding numbers), the capacity was limited. Modern learning algorithms, especially those dealing with big data, would quickly exceed its storage capacity.
- Complexity: Modern learning algorithms use complex operations, data structures, and algorithms. Implementing these on the Analytical Engine would require a vast number of punched cards and a very intricate setup.
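To put the speed gap in perspective, here is a hedged back-of-envelope estimate in Python. Both figures are assumptions for illustration: the Analytical Engine is commonly described as taking a few seconds per addition (assumed here as roughly 0.3 operations per second), and a modest modern training run is assumed to need on the order of 10^18 arithmetic operations.

```python
# Back-of-envelope estimate of training time on a mechanical computer.
# Both inputs below are assumptions, not measured figures.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

engine_ops_per_second = 0.3  # assumed mechanical throughput (~1 add per few seconds)
training_ops = 1e18          # assumed operation count for a modest modern model

years = training_ops / engine_ops_per_second / SECONDS_PER_YEAR
print(f"~{years:.2e} years")  # on the order of 10^11 years
```

Under these assumptions the training run comes out at roughly 10^11 years, which is why "millennia or longer" is, if anything, an understatement.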