In this post we examine a universal model of computation, the Turing machine. We employ this foundational concept in computer science as a means for contemplating the prospects of achieving “shared understanding” between humans and machines. We suggest that the Turing machine model of computation invites a non-anthropocentric perspective that is aligned with the Buddhist notion of dependent arising.
The Universal Turing Machine
At the beginnings of computer science in the twentieth century, Alan Turing proposed a powerfully simple, universally applicable model for understanding algorithms. Turing brilliantly developed an abstract computing concept, later to be known as the Turing machine, that is capable of carrying out any computational function performed by an algorithm. Any computational system that can simulate a universal Turing machine, and can therefore compute anything a Turing machine can, is now said to be “Turing complete” (TC).
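To make the model concrete, here is a minimal sketch of a Turing machine simulator in Python. It is an illustration added for this post, not Turing’s own formulation: the sparse-tape representation, the transition table, and the binary-increment example are assumptions chosen for brevity. Still, it shows how little the model asks for: a finite table of rules, a read/write head, and an unbounded tape.

```python
# A minimal Turing machine simulator (illustrative sketch only).
BLANK = "_"

def run(transitions, tape, state="start", max_steps=10_000):
    """Run a Turing machine until it halts or exceeds the step budget."""
    cells = dict(enumerate(tape))       # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, BLANK)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, BLANK) for i in span).strip(BLANK)

# Transition table for incrementing a binary number by one:
#   "start": scan right past the digits to the first blank,
#   "carry": move left, turning 1s into 0s until a 0 (or blank) absorbs the carry.
increment = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", BLANK): ("carry", BLANK, "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "R"),
    ("carry", BLANK): ("halt", "1", "R"),
}

print(run(increment, "1011"))  # prints "1100", i.e. 11 + 1 = 12 in binary
```

Running the example prints 1100, the binary successor of 1011. Swapping in a different transition table changes the algorithm while the simulator itself stays fixed, which is the sense in which the model is universal.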
So, if we’re looking for an optimistic perspective, Turing’s successful theory may give us a reason to think that we all—humans, other animals, machines, or whatever else there may be—should be able to understand one another quite well. Let’s think of two agents, Mimi and Momo, and let us assume that Mimi is able to simulate all of Momo’s functions and behaviors without exception. Let us also imagine that Momo, to the same extent, is perfectly capable of all that Mimi can do. If on top of that the two also know of each other’s existence, would it not seem natural to conclude that they possess a good understanding of each other? Or even that they, potentially at least, should understand each other perfectly? At any rate, it would seem distinctly odd to suggest that despite their ability to match each other completely, the two might still lack any real sense of mutual understanding. So, rather than misunderstanding and misfit, should we perhaps instead expect seamless communication, and genuine sympathy, among all of us TC agents?
Dependent Arising
In Buddhist terminology, the world of interconnected differences is referred to as “dependent arising” (Pali: paṭiccasamuppāda, Sanskrit: pratītyasamutpāda). As a doctrine, the principle of dependent arising (DA) is typically explained with the simple statement “when this is, that will be; when that occurs, this will occur.” Before we begin to bring this principle to bear on AI, let us here briefly notice a few of its salient features. Certain appearances notwithstanding, DA is not a doctrine of natural laws that underpin a causally determined world. Rather, DA can be seen as a way of doctrinally accommodating two seemingly conflicting but equally undeniable perceptions: difference and sameness. Despite the potential for infinite differentiation that we noticed before, it is still, argues the teaching of DA, the case that, “when this is, that will be; when that occurs, this will occur”—because a sprout, for example, can be seen to grow, quite specifically, from its seed, and a boat is rightly qualified as long when there is something else that is short in comparison. Whether we think in terms of substances or concepts, our perceptions of dependence cannot be explained away and must be acknowledged as they appear. That is, arguably, the motivation behind the doctrine of DA.
Now according to the Buddhist analysis, especially as developed by the highly influential teacher Nāgārjuna, the very reason that we can differentiate and notice relations is that nothing exists independently. Whether we examine substances or concepts, their apparent identities emerge only in relation to other things, and so nothing is anything in and of itself. All things emerge in causal dependence, and even though they may appear different and separate, looking for their essences apart from the environments within which they manifest is therefore futile at best. Keeping in mind these Buddhist thoughts on emergence, difference, and relation, let us now return to computing organisms and machines, all of them connected by their shared property of TC. Specifically, we will ask how TC and DA might together inform our perspective on AI.
The idea of AI seems to entail an expectation that genuine intelligence, or a replica of it, may emerge from a substrate that itself lacks intelligence. Otherwise, the sense of “artificial” in AI would have little meaning. If AI does not emerge from a substrate that lacks intelligence, the project of developing AI might better be thought of as akin to agriculture—AI would be a science of cultivating seeds of intelligence for the sake of optimizing growth and output. And there’s obviously no parallel to the so-called “hard problem” of consciousness in agricultural science. The challenges that arise in the pursuit of efficient and sustainable farming methods are daunting and undeniable, but they pose no metaphysical “hardness.” Again, if we think of both humans and machines as “Turing complete,” and so in principle capable of the same functions, there seems to be no a priori reason for qualifying non-organic intelligence as “artificial.”