What if care were demonstrated to be a key driver of intelligence across animals, humans, and even machines?
Could a system of infinite care, as outlined in the Buddhist concept of the Bodhisattva, provide a testable method for increasing intelligence in humans, AI, and beyond?
For the last two years, the Center for the Understanding of Apparent Selves has been asking these questions, funded by a grant from the Templeton World Charity Foundation's Diverse Intelligences initiative. We're happy to announce a major milestone in this work: the recent publication of our paper (co-authored with the estimable Dr. Michael Levin), Biology, Buddhism, and AI: Care as the Driver of Intelligence.
In this paper we examine the idea of intelligence through the lenses of biology, Buddhist studies, cognitive science, and computer science, and we come to a conclusion that some may find surprising: Care (concern for the alleviation of stress) is both a measure and a driver of intelligence across types of beings, and infinite Care, as described in the Bodhisattva Vow, has the potential to create systems of ever-increasing intelligence and care.
Intelligence is not knowledge accumulation (in which case we should call a good encyclopedia “intelligent”), nor is it heightened perception (in which case a space telescope would be intelligent). In our recently published article we propose that intelligence is the ability to identify problems and seek their solution. So, to be intelligent is to have engaged concern. To be intelligent is to care.
All intelligent systems (organic, machine, or hybrid) appear to have natural limits on their sphere of concern. Consider biological organisms: a bacterium can manage local sugar concentrations, with a bit of memory and a bit of predictive power. A dog has a larger sphere of concern, with significant memory and short-term predictive capacity, but it is probably impossible for a dog to care about something that will happen 100 miles away, two months from now. Humans have a huge cognitive envelope, but there is still a limit to what we can genuinely care about, and so also on the scope of our intelligence. We can expand that scope to some extent, sometimes dramatically, but there is, it seems, always a limit.
What if we now said that there is a type of engaged concern, a type of intelligence, that has no limits? Sounds outrageous, far-fetched? At least suspicious?
Imagine that we now also say that limitless intelligence can be induced in intelligent beings like us, and that the method for this is taught in the concept of the Bodhisattva, a being committed to the pursuit of cognitive perfection ("awakening," Skt. bodhi) for the benefit of all sentient beings throughout time and space.
If intelligence is defined by engaged concern for problem solving, then the apparent limits on a system's intelligence can be broken by extending its sphere of concern. Buddhism teaches that an emerging bodhisattva makes this promise: "I shall achieve insight in order to care and provide for all beings, throughout space and time." What happens to the sphere of concern of someone, or something, that accepts this pledge?
We invite your thoughts. Can the bodhisattva perspective suggest a new way of modeling intelligence? If so, could the cultivation of care provide a path to AGI, and beyond? In the simplest terms possible, what would it mean to accept responsibility for the flourishing of all forms of life? How might this hypothesis be practically tested?
In the coming weeks we'll be hosting a Q&A on the paper and developing workshops on the potential of the model. Please watch this space for details, and join us!