When Peter Reiner co-founded The National Core for Neuroethics in 2007, advances in neuroscience were starting to raise expectations for giant leaps in cognitive enhancement. For more than a decade, Reiner researched how our brains operate and watched biotech companies search for a pharmaceutical solution to unlock the potential of the human mind.
“Some drugs that had been prescribed for ADHD became popular as cognitive enhancers on college campuses,” he recalls. “It seemed like we were going to have a tsunami of use of these drugs. The press picked it up. The movie Limitless got a lot of people interested. At the same time this was happening on college campuses, it was also happening in Silicon Valley, where coders were talking about using an array of drugs for cognitive enhancement.”
Distinguishing between hope and hype
Then one of Reiner’s students alerted him to a sobering discovery: When you bring people into the lab for tests, cognitive enhancement drugs don’t actually improve performance much.
“I'm kind of a data slave, so I kept paying attention to this,” he recalls. After digging into the data, Reiner and his students noticed that cognitive enhancers seemed to have a greater effect on people who scored lower on tests than on people with average scores. It became evident that juicing up the dopamine system produced only a short-term boost in cognitive skill. Not only that, but non-pharmacological strategies for cognitive enhancement, in particular transcranial direct current stimulation, showed the same pattern.
“None of the ways devised to cognitively enhance ourselves are doing much because our brains are already working well,” he explains. “Most people I know feel like their brain is working flat out. At the end of the day, they're tired not because they’ve been pushing stones uphill, but because their brains have been working hard all day, especially making decisions.”
That was when Reiner and his team had a revelation. Instead of focusing on making our brains work better, we should focus on improving the relationships our brains have with our devices.
The prediction that took 10 years to come true
“I’d known about the extended mind hypothesis for quite a while,” says Reiner. “In 1998, when Clark and Chalmers finally published that hypothesis—they had a hard time publishing it because it was so wild—their version of the extended mind hypothesis was pencil and paper. That made it hard to appreciate. Once smartphones came out, suddenly the implications of the hypothesis became intuitively obvious, because we very clearly use our phones as extensions of our minds.”
Of course, a lot has changed since 1998, when we weren’t yet living through our phones all day. While phones aren’t a literal extension of our minds, research suggests that our attachment to them is more than habitual. They’re a powerful source of help, but they can also be a mental drain that affects our work.
The question is, how do you make devices more helpful and less of a drain? And if you want to offload work to your devices, how do you decide which tasks to delegate?
“It’s been known for probably 20 years that biological memory is just not reliable,” he says. “We’ve already moved down the road to not making the mistake of using our brains as massive memory storage devices. But my smartphone can hold vast amounts of information easily.”
Reiner says the key is learning how to access that information. “The skill we need is not to know many things, but to know how to find those things. Fortunately, search engines make it easy. Finding information in all the vast repositories that exist online in an efficient manner, that's a super important skill in the modern era.”
Which decisions do we really want to delegate?
Because our brains are cognitive misers, they take shortcuts wherever they can. “If you can take a shortcut, you don't use as much energy. Evolution would like that to be the case. In theory, if we don't have to remember as many things, we don't have to clog our brains with as many things.”
In addition to helping us offload the task of remembering so many details, technology could become helpful with our decision making—especially because decision making is so fatiguing for our brains. But Reiner cautions that delegating decisions to our devices raises a host of interesting ethical and philosophical questions, including ‘Are we masters of our own destiny?’
“The skill we need is not to know many things, but to know how to find those things.”
“If you offload decision making to your device or some AI assistant, where does that leave us? But if we could offload trivial decisions, we might be better off,” says Reiner. “Obama was a famous example of someone who bought into this idea. He had two kinds of suits, so he could just get up in the morning and didn't have to think about what to wear that day. That was one less decision.”
Researchers have been investigating for some time whether cognitive offloading actually frees up cognitive space for other things, but so far, they haven’t seen evidence that it does.
“It doesn't do it in a natural way, but it doesn't mean that it couldn't do it if we were intentional about what we did with the fruits of cognitive offloading,” Reiner explains. “We don't teach children how to do that. They know how to cognitively offload because they're all over their devices, and know what to do. But we don't have a rational program yet developed to teach them. So now that you are collaborating with this device, what else can you use your brain for that the device is not good for? The things that come to mind are creativity, abstract thought, things like that.”
The mind is a concept
If you’re new to the theory of technologies of the extended mind, it might help to take a step back and define what “mind” means in this context. As someone who began his career as a neurobiologist, Reiner considers the brain the most important part of who we are.
“The brain is a thing, the mind is a concept,” he explains. “We define mind as the full array of tools that you bring to the act of thinking. Then you think about these technologies that have become extensions of our minds, and what does it require to become an extension of your mind. These algorithmic tools become extensions of your mind when you use them in the same seamless way as if they were your biological brain. You don't really think about it. You just go to it, and absorb that information. It's a seamless part of your cognitive toolkit.”
“These algorithmic tools become extensions of your mind when you use them in the same seamless way as if they were your biological brain.”
He predicts that people who enjoy using Alexa, Siri, and Google Assistant are likely to have a head start when it comes to adopting technologies of the extended mind. “All of those are going to be great tools that we use, ultimately, to enhance our cognitive capabilities, writ large, in a collaborative way,” says Reiner. “Then the question becomes who's in control?”
That’s what many have been asking—especially skeptics who aren’t sure technology can solve problems that technology helped create. Overcoming this growing wariness presents a looming challenge for developers. But Reiner believes it might also be an opportunity for those working on alternatives.
“We have a paper where we introduced this concept of AI loyalty,” he says. “The idea being that what you really want in an advanced AI assistant of any sort is for it to be loyal to the user. I think it has the potential to be a marketing advantage to a company that says, ‘Okay, we've done this, and we're doing it in a way where your interests are first and foremost.’”