Ohio State’s Angus Fletcher: AI will never replace us
The bestselling author of “Primal Intelligence” shares how emotion, imagination and narrative thinking give humans the upper hand.
Angus Fletcher brings a one-of-a-kind perspective to the artificial intelligence discussion. The Ohio State professor spent years in a neurophysiology lab studying how the human brain works before earning his PhD in literature from Yale. His principal focus is how storytelling affects our brains, a pursuit that took him to Stanford and Hollywood, where he collaborated with writers and producers at Pixar. He now teaches as part of the university’s Project Narrative, a think tank for the study of how stories work.
In his bestselling book Primal Intelligence, Fletcher deflates some of the hype surrounding AI. While he recognizes the technology’s data-processing power, he argues that AI can’t replicate such human qualities as imagination, emotional intelligence and narrative thinking. Here, Fletcher discusses the promise and limitations of thinking machines—and why he considers himself an AI realist, not an AI skeptic.
-
Your particular expertise—pairing neurophysiology with English literature—gives you a unique perspective on artificial intelligence. How does AI differ from human intelligence?
Neurophysiology is a branch of neuroscience that studies living neurons. At my lab, I observed ways that neurons operate differently from a computer. A computer gets smarter the more data you give it. Which is great, except that 99 percent of the time in life, there isn’t enough data. Neurons evolved to address the problem of low data, allowing the brain to make effective plans in situations where there’s less information than a computer needs. Computers excel in stable and transparent environments, but dealing with volatility and uncertainty isn’t their strength. If AI has to operate in those conditions, it does so by manufacturing synthetic data, via random processes like divergent thinking, which is what causes hallucination and model collapse.
How do human brains act smart in volatility? Unlike a transistor, which works by processing information, a neuron can function without inputs. It does that by initiating actions. So, a fundamental process of the human brain is action, as opposed to information, allowing us to launch nonrandom plans when we don’t know everything.
-
So, what do you tell the person who is terrified AI will take over the world?
First of all, AI doesn’t have the capacity to take over and run things. It has no desires of its own and is incapable of initiating actions. If you don’t want AI to do something, just don’t prompt it.
Because AI can’t think in narratives, it can’t make plans: Plans are plots are stories. You can order AI to execute a task, but it’s not going to initiate a grand diabolical plan to put itself in charge. We humans have the monopoly on diabolical plans.
AI’s main contributions will be to logistics and information search and synthesis. It’s not going to replace entrepreneurs, or sales forces, or leaders, or scientists. It’s definitely not as big a technological breakthrough as electricity, so if your ancestors could handle the invention of the light bulb, you can handle artificial intelligence.
-
What were your thoughts when you heard President Carter announce he wanted Ohio State to lead on teaching AI?
I thought the announcement was very wise because AI will have a disruptive effect, and we at Ohio State need to be ahead of that. We need to know what AI is actually good at. We need to know how to equip our students for a world where AI is part of the corporate landscape. And we need to be honest about the fact that AI will replace a lot of traditional education, because AI is better than humans at repeating past knowledge.
But at the same time that AI is replacing some educational processes, it is also proving its incompetence at others. Not just its incompetence now; permanent incompetence. Even if AI is neurosymbolic, has world models, or goes quantum, there’s a hardware limit to logic gates.
Logic gates can’t replicate the narrative processes—like imagination, common sense and emotion—that form the physical basis of low-data intelligence. To strengthen those processes in students, we need to build new experiential courses where students aren’t just sitting in a classroom memorizing stuff off a board. We need to use classrooms to involve students in dynamic experiences that cultivate resilience, leadership, innovation and decision making in volatility.
Ohio State’s mandate must be to become the best in the world at AI, while also being the best in the world at training human intelligence. Then the important third piece becomes, how do we connect the two?
-
I was surprised to read in your book, Primal Intelligence, about your extensive work with U.S. Army Special Operations. How has that shaped your views on artificial intelligence?
Army special operators constantly encounter low-information environments, or as the Army calls them, VUCA: Volatility, Uncertainty, Complexity, Ambiguity. Special operators deal with those environments by making original plans—and testing them experimentally. From special operators, you can see the power of partnerships between humans and AI. AI works well in environments you can optimize, while humans operate well in environments where you have to innovate—where there is not a lot of information, and all you know is that you need to do something new.
Why would Army Special Operations never send AI on a solo mission? Because every mission is full of volatility and uncertainty. On the other hand, if the Army or the Navy wants to search an existing database to identify probabilities or patterns, they would absolutely use AI as opposed to a human.
-
So, would you say you’re not an AI skeptic, but maybe an AI realist?
Yes, that’s exactly right. I have a reputation as an AI skeptic, but I’ve spent years working with AI, so I’ve learned its physical mechanics. And once you know how a lawnmower works, you understand that a lawnmower is never going to make toast.
Here’s a mechanical explanation for why AI isn’t going to replace humans. As humans, we have a tool called a leg, and a leg can do a bunch of different things. One thing a leg can do is move forward, and if I were to build a machine to optimize moving forward, I would build a wheel. Now, if I replaced every human leg with a wheel, we’d all move forward faster but we’d lose the ability to climb stairs, or kick, or jump. It’s the same with AI. AI takes one function of the human neuron and optimizes it. So it’s much better at symbolic logic, but all the other capacities are eliminated. It’s like if you accelerated 10 percent of the human brain, then lobotomized the other 90 percent.
-
You work with the study of narrative, and how the human brain is built for storytelling. As a storyteller, I can create things that don’t need to be proven. Is that an area AI is unable to replicate?
You’re talking about counterfactual thinking. Counterfactual is the academic term for what-if. Counterfactual thinking allows you to imagine alternate ways that the world could work. In comedy and entertainment, counterfactual thinking is fun because it lets you explore different realities. And in real life, counterfactual thinking allows you to say “That mission didn’t go well; what if we did it differently?”
To invent counterfactuals, you need to think in possibilities. The human brain can think in possibilities, because possibilities are narrative, while computers can think only in probabilities. Probabilities are based on prior data, while possibilities are novel actions—actions that disrupt old patterns because they have never been attempted before. AI uses probability to come up with complex algorithmic correlations, whereas you as a human can use possibility to invent simple—and therefore high-leverage—theories of causation. That human method allows for deep insights—and incisive communication, as distinct from AI slop.
-
AI has real-world considerations, especially in relation to power and water usage. What do you consider responsible AI use?
Energy and water use are problems that human engineers will solve over time. Until we solve those problems, we shouldn’t be scaling AI, but those problems are solvable.
Then there’s the more fundamental ethical question of who monitors AI systems. In the same way we can’t have pets running loose and biting people at the park, we have to make sure that AI coexists within society. But ultimately, that’s also something that we should be able to figure out—as long as we keep developing our own intelligence.
My main point is this: AI is going to clarify what is special about human intelligence and push us to devise better ways to cultivate human intelligence—which in the long run will improve life for everyone.