Murderbot, the titular character of a new television show on Apple TV+, doesn’t do much murdering. Instead it enjoys the freedom of having hacked its governor module, the internal control system that punished it for disobeying orders from its corporate owners. Now you could say it’s “quiet quitting”: slacking off by watching its favorite soap operas on the job and trying to keep the humans under its protection from realizing that it’s gone rogue.
The show, based on a book series by Martha Wells called The Murderbot Diaries, features a main character made of cloned human tissue and robot hardware. It isn’t a human, and it isn’t a bot; it’s something in-between called a “construct.” It can see with security cameras or with its eyes; it can talk to computer systems with code and humans with language; its digital memory can be wiped by its creators, but its biological memory clings to traumatic flashes that can’t be purged. It does not always understand human emotions, yet it feels, deeply.
Exactly how this integration of cloned human neural tissue and computer circuitry works, we don’t know—and the creators of the TV version told me that they don’t know either. Wells, the books’ author and a consulting producer on the show, keeps it vague. “[Wells] likes playing with the possibilities, but her world creation isn’t so intensely detailed that we don’t get to find our way through it ourselves,” says Chris Weitz, who adapted the series for TV along with his brother, Paul Weitz.
For me, a Murderbot fan and brain science nerd, “finding my way through it” involved talking to neuroscientists to understand how this seamless integration of brain and computer might work—because mixing brains and computer circuitry is not only science fiction. “It’s a very cool idea that we’re moving toward in many ways,” says Alexander Huth, a neuroscientist at the University of Texas at Austin (and fellow Murderbot fan). And as neuroscientists get better at linking up our minds with computers, they’re revealing some of what is so unique, and confounding, about the human brain and how we consciously experience the world.
The Electrical Brain
On the surface, it seems like brains and computers should be compatible enough—both work using electricity. Scientists have been tapping into the brain's electrical activity since 1924, when the psychiatrist Hans Berger first used scalp electrodes to record human brainwaves. Fifty years later the first brain-computer interfaces used those electrical readouts to affect the outside world—by controlling a cursor on a computer screen or, in the case of one avant-garde composer, converting brainwaves into music.
Today brain-computer interfaces are far more advanced. Electrodes implanted inside the brain (or, in some cases, simply resting on the scalp) pick up subtle patterns of neuron activation in the brain regions that generate movement and speech, guiding prosthetic limbs or allowing people with amyotrophic lateral sclerosis (ALS) to communicate. Some researchers are working on devices that bypass spinal cord injuries to send signals from the brain to paralyzed limbs.
And, increasingly, researchers are using computers to feed sensory information back into the brain, too, by developing prosthetic limbs that relay physical sensations of touch. Retinal implants, similar in spirit to cochlear implants for hearing, are being developed to restore sight by stimulating surviving cells in the eye for some people who have lost their vision to a condition called retinitis pigmentosa. And some groups are developing brain prosthetics that restore vision by directly stimulating the brain's visual processing centers. It's still pretty low-resolution, Huth says, but "this is happening."
The Brain’s Big Principle
None of these advances will allow scientists to create a bot-human construct like Murderbot anytime soon. In fact, the deeper you get into this research, the more it becomes clear why: although both run on electricity, human brains and computers have entirely different strategies for processing information.
Take a classic sci-fi trope that happens to be one of Murderbot’s core abilities: seeing a digital display in its mind’s eye. What would it take to beam an episode of TV into someone’s head?
“We don’t really know,” Huth says. The most obvious method involves sticking electrodes into the region at the back of the brain, called the primary visual cortex, which first processes visual information from the eyes. But there’s a problem. “You’d need millions and millions and millions of electrodes to be able to read in a high-resolution image into your brain. And that’s not plausible, at least in the near term,” Huth says. Some researchers are getting around this problem by forgoing the high-resolution details entirely. They’re experimenting with stimulating higher-level visual regions that process more abstract information, such as faces. “You would have the experience that there’s a face” even without seeing all the details, Huth explains.
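To get a feel for the mismatch Huth describes, compare the pixel count of an ordinary HD video frame with the roughly 100 channels of a Utah array, a widely used implanted microelectrode grid. The numbers below are an illustrative back-of-envelope sketch, not figures from the article:

```python
# Rough scale comparison: pixels in one HD frame vs. channels in one
# implanted electrode array (illustrative assumptions, not article data).
hd_pixels = 1920 * 1080        # pixels in a single 1080p frame
utah_array_channels = 96       # typical Utah array recording channels

print(hd_pixels)                           # 2073600
print(hd_pixels // utah_array_channels)    # 21600
```

Even setting aside the fact that neurons don't map one-to-one onto pixels, delivering a single sharp frame this way would take tens of thousands of today's arrays—which is why researchers are instead exploring coarser stimulation of higher-level, more abstract visual regions.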
The problem, says neuroscientist Rodrigo Quian Quiroga, is that the brain is an abstraction machine. Unlike a computer, it isn’t set up to care about the details. We forget the details of most of what we experience—in fact, we never commit them to memory at all. “The human brain doesn’t want to remember. It wants to understand, which is very different,” says Quian Quiroga, who studies visual perception and memory at the University of Leicester in England. He explains that most of what we remember perceiving is a construction rebuilt from a few pillars of meaning that we determine are important. Vision, memory and consciousness are all built from sparse details. “The big principle of brain function, for me, is that it’s all a construction,” Quian Quiroga says.
Computers, on the other hand, encode every single bit of information. Unlike a human, who would likely remember only the gist of what they’ve experienced, “a computer can play [Blade Runner] from beginning to end without any errors,” Quian Quiroga says.
Might a brain-computer interface one day exist that can augment your brain to play back the entirety of Blade Runner (or Murderbot’s favorite soap opera, The Rise and Fall of Sanctuary Moon) in your mind’s eye? Maybe. “Imagine that this is possible. Do you want that—because one of the key features of how the brain works is that we forget a lot of things,” Quian Quiroga says. This prevents us from getting lost in endless unimportant details. “If it works like that, it’s because of millions of years of evolution. So there might be a reason for it,” he warns.
The Age of AI
Though brains may never work like computers, computers are increasingly working more like brains. “ChatGPT is much more brainlike than a laptop,” says Huth, who studies the human language system. Large language models, the AI systems behind such chatbots, are “a really good match to how our brains represent information and language—the best that we have,” he says. Scientists are also developing computer hardware that mimics neuronal circuitry. And some have tried hooking AI hardware up to brain organoids, or cultured clumps of neurons grown in a lab, to process information.
It’s no wonder, then, that Murderbot strikes a chord now. Ever since Frankenstein, science fiction has reflected our deep cultural fears about the technology we’re currently birthing. “On a certain level [Murderbot’s story is] topical because people are preoccupied with AI,” show co-creator Paul Weitz says.
But Murderbot caught his attention because “it felt like a great literary character more than anything else.” Chris Weitz adds: “[Murderbot] sort of flips the trope—we’re so used to the idea of this artificial person who wants to be human and wants to experience human emotions. And [Wells’s] character, which is brilliant, doesn’t want to do that.”
Murderbot isn’t human. That much it makes very clear to everyone who projects that desire onto it. But it is nonetheless a person, and that reality can’t be changed even by those who seek to control it. It’s that “irreducibility of personhood,” Paul Weitz says, that drives the story. “That to me was the huge, beautiful lesson in it.”