A conversation with neuroscientist Karl Friston on why the explosion of free energy at work is precisely the opposite of what we now need.
What do we know about the brain? It weighs about three pounds, has 86 billion neurons, controls the movements of our bodies, and produces consciousness. And although it only accounts for about 2% of our body weight, it uses 20% of our body’s energy.
Helping us understand the function of the brain, and particularly the surprisingly dynamic ways that it uses all of that energy, has been a lifelong fascination of neuroscientist Karl Friston of the Wellcome Trust Centre for Neuroimaging at University College London. He’s best known for his inventions and innovations in fMRI brain imaging, which have made him the most highly cited neuroscientist in the world and put him on the short list for a Nobel Prize. All of this is prelude to the larger ambition, still far from realized, of fusing the exquisite spatial resolution of fMRI with the exquisite temporal resolution of EEG—the oldest form of measuring brain activity—to give us a truly high-fidelity view of the brain in action.
As often happens in science, foundational work in one field can have much wider implications. Friston has proposed a theory—the free energy principle—that describes with mathematical precision how the brain conserves energy by minimizing surprise. This simple idea has very far-ranging consequences for the way we work and organize ourselves socially, as he described in a recent conversation.
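Friston’s “surprise” has a precise information-theoretic meaning: the negative log probability of an observation under your model of the world. The sketch below illustrates the idea (the function name is ours, not Friston’s): outcomes your model expects carry little surprise, while outcomes it considers nearly impossible carry a lot.

```python
import math

def surprisal(p: float) -> float:
    """Shannon surprise (in nats) of an outcome the model assigns probability p."""
    return -math.log(p)

# A nod in the meeting you fully expected carries almost no surprise...
print(round(surprisal(0.9), 3))   # 0.105
# ...while an outcome your model thought near-impossible carries a lot.
print(round(surprisal(0.01), 3))  # 4.605
```

A system that keeps its long-run average surprise low is, in this framing, doing exactly what a brain must do to stay within the states compatible with its survival.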
“You could look at this as a physics of sentient systems,” he says. “Cells, organs, brains, people, societies, eco-niches, banks—anything that self-assembles and maintains its structural and functional integrity must be subject to this principle. Institutions, even cultures could at some level be accounted for by the underlying mechanics.”
Things that persist in the world, whether brains, bacteria, or banks, operate within what Friston terms “a circular causality.” These things not only make sense of their worlds, they actively try to influence them for their own survival. This two-way exchange between the inside and the outside is where the free energy principle operates. Simply put, we make sense of the world by either updating our assumptions or by changing the world to make our assumptions true. And both of these arise spontaneously from our drive to make things more predictable.
But how does this abstract principle translate into our everyday lives? Contrary to our intuitive experience, the brain is not a passive receiver of stimuli from its environment, but is continuously engaged in an act of interpretation that Friston calls active inference. This is a corollary of the free energy principle that explains how we actively forage in the world for evidence that best satisfies our expectations. “It means that you are in charge,” he says, “and you can choose which data you sample in order to make inferences about the causes of those data.”
As we become familiar with some aspect of the world, all we need is the barest hint of constancy to be satisfied that all is well. We infer that everyone in the meeting agrees with us by actively looking for people nodding their heads. This behavior is so automatic we don’t even notice ourselves doing it. Suffice to say, our quest to minimize surprise is energy efficient, but can lead to the kinds of biases and blunders popularized by Daniel Kahneman in his famous book, Thinking, Fast and Slow.
Active inference is the process through which we build models of our environment that we update with evidence we actively collect. Those familiar with statistics will recognize this description of the brain as particularly Bayesian. At work, you can decide to walk over to a co-worker’s desk to ask a question instead of sending an email. But when you get there, your darting eye movements search for clues, like headphones or the expression on their face, to help predict whether it’s a good moment to interrupt or not. Thinking better of it and going back to your desk to send an email is an example of updating your model of the world. All of these adaptive actions take on added significance when you consider how miraculous it is that we can do them at all.
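That Bayesian flavor can be made concrete. The toy sketch below (our illustration, not a model from Friston’s papers) updates a prior belief that it’s a good moment to interrupt a colleague after spotting their headphones, using nothing more than Bayes’ rule.

```python
def bayes_update(prior: float, likelihood_if_true: float,
                 likelihood_if_false: float) -> float:
    """Posterior P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Prior belief: 50/50 that it's a good moment to interrupt.
belief = 0.5
# Headphones are far likelier when it's a bad moment (assumed likelihoods).
belief = bayes_update(belief, likelihood_if_true=0.2, likelihood_if_false=0.8)
print(round(belief, 2))  # 0.2
```

One glance at the headphones drops the belief from 0.5 to 0.2, which is why walking back to your desk and sending the email instead counts as a rational model update, not indecision.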
Schizophrenia, which was Friston’s first area of study after medical school, shows the tragic consequences of inference gone wrong. This pathology has taken on a wider cultural meaning in the context of fake news and other forms of social manipulation on the internet. “If we don't assign the right confidence to different sources of information that we've engineered for ourselves with technology,” he says, “then we could end up making false inferences very much along the same lines as people with schizophrenia might make false inferences about the causes of certain things they're witnessing.”
“Technology is the natural extension of active inference beyond the single person.”—Karl Friston
In many ways, our technological environment is just another way we’ve attempted to overcome uncertainty. “All our technology that we have created around ourselves is simply an expression of the way that we expect the world to be,” he continues. “The things that we do collectively in our world are all in the service of making the world a more learnable, predictable place. Technology is the natural extension of active inference beyond the single person.”
But technology has also created a crisis of attention that we haven’t successfully adapted to. “It may well be that the rapid explosion of available data that could be sampled may have overwhelmed our attentional capacities,” Friston says. Technology, according to Friston’s colleague Andy Clark, a well-known philosopher at the University of Edinburgh, is a form of extended cognition. When we offload phone numbers to our iPhones, we are in fact extending our minds to include what is contained on our devices. This insight leads to an obvious but surprising fact: we store most of our information about the world in the world. So minimizing surprise in the world is mainly a matter of knowing where to look.
And this is where Friston’s notion of active inference connects to some groundbreaking ideas in artificial intelligence. The leading paradigm in machine learning pushes massive amounts of data through very deep networks and has achieved human-level performance at a variety of tasks. But Friston thinks “the big data aspect of machine learning and deep learning is going in exactly the wrong direction.” Instead, he thinks we need to build systems that are more reliant on the selection of data.
He gives the example of human vision, which operates very differently than artificial deep neural networks. “You may have the impression that you have access to an extremely high-dimensional representation of the visual field. In fact, you don't. You're actually just experiencing one little central pinprick of all the things that you could sample, and you stitch them together by very carefully selecting little bits of data.”
What he’s describing is very far from the old notion of the brain as a giant mainframe computer, and closer to our experiences with our smartphones. Think about it: your phone moves around with you. You choose what to take pictures of. You can only have one thing on the screen at a time. You have limited battery life before you have to recharge. But for all of these limitations, the experience of using a phone is much more agile and dynamic than a supercomputer in a data center. Friston hopes that this new conception of the brain will lead to “machine learning that is orders of magnitude quicker than it currently is.”
AI is still far from a pressing concern in most people’s day jobs, but Friston’s research does have a lot to tell us about what it’s like to do knowledge work in the twenty-first century. Let’s start with his own reclusive way of working. He is, by his own admission, the archetypal deep worker. He does not own a mobile phone, limits other forms of electronic access, and generally does not utter a word before noon. He’s very active on email within certain hours of the day, but if people want to talk to him, that usually means coming to London and meeting in person along with members of his research group. “I try to minimize one-to-one meetings because it's much more efficient for me to unpack ideas in conversation with a group of people so we're all on the same page,” he says.
He protects his time by “sticking rigidly to a diary, and making sure that there's somebody else in charge of the diary who knows the formula and can regiment it.” This regimentation has allowed him to be an author on more than 1,200 research papers. His output has earned him a very high h-index, the academic equivalent of a sports statistic. But the surprising aspect of his deep work is his intellectual sociability. “Collaboration is 99 percent of what I do,” he says. He has to limit the exploration of his own theoretical framework to the occasional Sunday when his backlog of papers to review from his generally younger colleagues is manageable. “Usually the most interesting next steps are questions that are brought to you by exposing yourself to the next generation of theoreticians,” he admits, “so I'm quite happy being basically in response mode 99 percent of the time.”
Regimented routine is his way of creating a cocoon around himself despite all these demands on his time—a Markov blanket, he calls it—in which to minimize the free energy of distractions, complexity, and avoidable uncertainty. Most of us are not lucky enough to have such well-established boundaries, but active inference shows that these boundaries are largely within our control. We cannot know the external states of the world directly, but only through our senses and our actions (see diagram above). Ultimately it is our internal states—how we feel—that determine how we interpret the evidence of our senses and how we use our bodies to collect more evidence. These three components, our feelings, our perceptions, and our behavior, comprise the Markov blanket that protects us from the causal complexity of the world.
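The partition behind the Markov blanket idea can be sketched in a few lines. In the toy loop below (an illustration in the spirit of the principle, not Friston’s actual equations), internal states never touch external states directly: evidence flows in through sensory states, and influence flows out through active states, until belief and world agree.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    internal: float = 0.0   # belief about the world
    sensory: float = 0.0    # evidence flowing inward across the blanket
    active: float = 0.0     # action flowing outward across the blanket

def step(agent: Agent, external: float) -> float:
    """One round of perception and action; returns the new external state."""
    agent.sensory = external                                   # world -> senses
    agent.internal += 0.5 * (agent.sensory - agent.internal)   # perception: revise belief
    agent.active = agent.internal                              # belief -> action
    return 0.5 * (external + agent.active)                     # action nudges the world

agent, world = Agent(), 1.0
for _ in range(10):
    world = step(agent, world)
# Belief and world have converged: surprise was minimized from both sides.
print(round(abs(agent.internal - world), 6))
```

The two routes out of surprise described earlier are both present here: the perception line updates the assumption to match the world, and the action line changes the world to match the assumption.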
Prolonged uncertainty is a prime cause of workplace stress, which, left unchecked, can lead to burnout. Our growing impatience and distractibility are failed adaptations to the growing uncertainty of a world filled with more information than we can metabolize. But there are things we can all do to handle tech stress better.
The first, and most important, is to choose what to pay attention to. “In the past, all of these choices would have been specified by cultural evolution, by the physical limits of travel, the number of people that we can physically converse with,” says Friston. “But now, those constraints are gone. We have to relearn how to attend, and what to attend to, and what to ignore.”
The second is to be aware of how much of the knowledge you need—to resolve your uncertainty—already exists out in the world, and make use of it in an active way. “I could not do, and many people in my role could not do what we do,” says Friston, “without access to Wikipedia. The rapid pace of technological development in my field is exactly due to this informational niche construction, which I think is exactly this notion of downloading a lot of our cognitive processing into our devices, and extending our cognition into a space beyond our own brains.”
Finally, we can try to reframe tech stress in more positive terms. Friston is living the growing pains of this cultural moment: “It is exactly those scientists who are making a difference who will look as if they're struggling—trying to integrate multiple fields—because they are looking for principles, explanations, hypotheses, models that have the broadest explanatory power. We're all aspiring collectively to the simplest explanations for why we are here and what we are doing.” Workers in all fields can meaningfully participate in this struggle. “That movement is a collective movement,” affirms Friston, “and it does depend upon a culture of digital exchange and stored knowledge.”