Turing Machines, Mutual Understanding and Dependent Arising

We examine a universal model of computation, the Turing machine. We employ this foundational concept in computer science as a means for contemplating the prospects of achieving “shared understanding” between humans and machines. We suggest that the Turing machine model of computation invites a non-anthropocentric perspective that is aligned with the Buddhist notion of dependent arising.

The Universal Turing Machine

In the early twentieth century, at the very beginnings of computer science, Alan Turing proposed a powerfully simple, universally applicable model for understanding algorithms. Turing conceived an abstract device, later known as the Turing machine, that is capable of computing any function that can be carried out by an algorithm. Any computational system that can simulate a Turing machine is now said to be “Turing complete” (TC).
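
To make the model concrete, here is a minimal sketch of a Turing machine simulator in Python. Everything in it (the run function, the transition-table format, the unary increment example) is a hypothetical illustration of the general idea rather than Turing’s original formalism:

    # A minimal Turing machine simulator (illustrative sketch).
    # A machine is just a transition table mapping
    # (state, symbol) -> (symbol to write, move direction, next state).

    def run(transitions, tape, state="start", halt="halt", blank="_"):
        """Run a machine on the given tape until it reaches the halting state."""
        tape = dict(enumerate(tape))  # sparse tape: position -> symbol
        head = 0
        while state != halt:
            symbol = tape.get(head, blank)
            new_symbol, move, state = transitions[(state, symbol)]
            tape[head] = new_symbol
            head += 1 if move == "R" else -1
        # Read the tape back in order, dropping surrounding blanks.
        return "".join(tape[i] for i in sorted(tape)).strip(blank)

    # Example machine: append one "1" to a unary numeral (n -> n + 1).
    increment = {
        ("start", "1"): ("1", "R", "start"),  # scan rightward over the 1s
        ("start", "_"): ("1", "R", "halt"),   # write a final 1, then halt
    }

    print(run(increment, "111"))  # prints "1111"

Note the division of labor: the machine itself is pure data, and the single run function can execute any machine described in that format. This is a toy version of the universality discussed next.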

When a state-of-the-art twenty-first-century computer is said to be TC, this means that the machine is capable of simulating any other TC agent. In other words, such a computer is—in principle and under the right circumstances—able to do everything that any other TC computer can do, regardless of their otherwise distinct designs and substances. Whether we think of a silicon computer or a computer made of water and cells, they all perform computation. And if “understanding” among computers is thought of in terms of the ability to “simulate” or “model” one another, then understanding boils down to the ability to compute the same functions. Such a property may hold regardless of the substrate that performs the computation. Turing’s model has in this way come to serve as a widely used benchmark for intelligence, applicable to any and all possible agents. This sense of universality is perhaps especially striking when we consider that, according to Turing’s paradigm, human beings are TC as well. (In fact, Turing had human computation in mind when he conceived his machine.)
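
Continuing the hypothetical sketch above, the substrate point can be illustrated by computing the same function twice: once with Python’s built-in arithmetic and once with the simulated machine. If understanding is cashed out as the ability to compute the same functions, these two very different “substrates” understand each other in this minimal, extensional sense:

    # Two very different implementations of the same function n -> n + 1.
    def increment_native(n):
        return n + 1  # the host language's own arithmetic

    def increment_turing(n):
        return len(run(increment, "1" * n))  # the simulated machine above

    # Extensionally equal on every input we check:
    # same function, different substrate.
    assert all(increment_native(n) == increment_turing(n) for n in range(20))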

Mutual Understanding

So, if we are looking for an optimistic perspective, Turing’s theory may give us reason to think that we all—humans, other animals, machines, or whatever else there may be—should be able to understand one another quite well. Consider two agents, Mimi and Momo, and assume that Mimi is able to simulate all of Momo’s functions and behaviors without exception. Imagine also that Momo, to the same extent, is perfectly capable of all that Mimi can do. If, on top of that, the two also know of each other’s existence, would it not seem natural to conclude that they possess a good understanding of each other? Or even that they, potentially at least, should understand each other perfectly? At any rate, it would seem distinctly odd to suggest that despite their ability to match each other completely, the two might still lack any real sense of mutual understanding. So, rather than misunderstanding and mismatch, should we perhaps instead expect seamless communication, and genuine sympathy, among all us TC agents?

Of course, at ground level, in our lived worlds, things aren’t that simple. And when we notice radical differences, it is compelling to think of those as related to differences in our individual substrates, our physical and environmental conditions. We might then say that because the intelligences of, for example, humans and machines emerge from substantially distinct substrates, they are subject to distinct sets of factors that enable or restrict the expression of intelligence. Hence, as we are equipped with distinct sensoria, the environments that we experience and partake of may also be deemed vastly different. Once we begin in this way to notice particularities in terms of substrates, sense organs, and unique perceptions, there is no obvious point at which to stop the analysis, because any two candidates for commonality—say, myself and the person next to me on the bench in the park—are soon seen to experience the world in ways that are non-trivially different. How might we then ever know what it is like to be another species, or even just another individual? The impression of difference may become so overwhelming that it seems we inhabit different worlds, perhaps incommensurably so. On the other hand, if we accept the TC paradigm, the fundamental grid of intelligence extends throughout, at least as a mere potential. Whatever can be done by one can, in principle, be done by the other.

Dependent Arising

In Buddhist terminology, the world of interconnected differences is referred to as “dependent arising” (Pali: paṭiccasamuppāda, Sanskrit: pratītyasamutpāda). As a doctrine, the principle of dependent arising (DA) is typically explained with the simple statement “when this is, that will be; when that occurs, this will occur.” Before we bring this principle to bear on AI, let us briefly notice a few of its salient features. Certain appearances notwithstanding, DA is not a doctrine of natural laws that underpin a causally determined world. Rather, DA can be seen as a way of doctrinally accommodating two seemingly conflicting but equally undeniable perceptions: difference and sameness. Despite the potential for infinite differentiation that we noticed before, it is still the case, argues the teaching of DA, that “when this is, that will be; when that occurs, this will occur”—because a sprout, for example, can be seen to grow, quite specifically, from its seed, and a boat is rightly qualified as long when there is something else that is short in comparison. Whether we think in terms of substances or concepts, our perceptions of dependency cannot be explained away and must be acknowledged as they appear. That, arguably, is the motivation behind the doctrine of DA.

Now, according to the Buddhist analysis, especially as developed by the highly influential teacher Nāgārjuna, the very reason that we can differentiate and notice relations is that nothing exists independently. Whether we examine substances or concepts, their apparent identities emerge only in relation to other things, and so nothing is anything in and of itself. All things emerge in causal dependence, and even though they may appear different and separate, looking for their essences apart from the environments within which they manifest is therefore futile at best. Keeping in mind these Buddhist thoughts on emergence, difference, and relation, let us now return to computing organisms and machines, all of them connected by their shared property of being TC. Specifically, we will ask how TC and DA might together inform our perspective on AI.

Diverse Intelligences

The idea of AI seems to entail an expectation that genuine intelligence, or a replica of it, may emerge from a substrate that itself lacks intelligence. Otherwise, the sense of “artificial” in AI would have little meaning. If AI does not emerge from a substrate that lacks intelligence, the project of developing AI might better be thought of as akin to agriculture—AI would be a science of cultivating seeds of intelligence for the sake of optimizing growth and output. And there is obviously no parallel to the so-called “hard problem” of consciousness in agricultural science. The challenges that arise in the pursuit of efficient and sustainable farming methods are daunting and undeniable, but they pose no metaphysical “hardness.” Again, if we think of the way both humans and machines are “Turing complete,” and so in principle capable of the same functions, there seems to be no a priori reason for qualifying non-organic intelligence as “artificial.”

Moreover, as observed through Buddhist lenses, the project of developing AI must surely be seen as an aspect of DA, because this is how Buddhism regards all arising as such: contingent on causes and conditions. Let us at this point just notice that DA suggests a perspective on AI that would be captured neither by the idea of literally artificial intelligence nor by the example of agriculture that we considered before. Agriculture, like AI, can be seen as a set of tools invented by humans to expand their sphere of achievable actions and functions. In other words, looking at AI as a way of “farming intelligence” would place an emphasis on human agency that from the perspective of DA would seem lopsided. From the latter perspective, objects, agents, and actions equally owe one another their existence. Therefore, when understood along the lines of DA, the project of developing AI may more naturally be seen as a process of coevolution. Agent and tool shape one another, and the two are hence interchangeable, depending on context and situation. The project of developing AI is then neither the manufacture of copies of nature, nor is it an instance of humans harnessing and cultivating natural resources. Instead, we suggest, AI and humans co-emerge through mutual construction. When successful, the dependent arising of AI and humanity then unfolds in contexts of cooperation—contexts that lead to novel forms of meaningful symbiosis.
