Artificial intelligence has become the pet anxiety of luminaries like Elon Musk, Bill Gates, and Stephen Hawking. They have all expressed concerns about our Promethean quest to develop machine intelligence, and those concerns seem to be spreading every day.
But there's another dimension of technological change that ought to worry us every bit as much as AI, if not more so.
Bioengineering has already allowed human beings to take control of their own evolution. Whether it's emergent cloning technologies or advanced gene therapy, we're quickly approaching a world in which humans can — and will — change the way they live and die.
Just this week, in fact, as Vox's Eliza Barclay reported, scientists made a major breakthrough in gene editing technology. In short, researchers were able to tinker with embryos in order to repair DNA and help fend off disease-causing mutations.
And this is likely just the beginning of what's possible.
Michael Bess is a historian of science at Vanderbilt University and the author of a fascinating new book, Our Grandchildren Redesigned: Life in a Bioengineered Society. Bess's book offers a sweeping look at our genetically modified future, a future as terrifying as it is promising.
"We're going to give ourselves a power that we may not have the wisdom to control very well," he told me. But that won't stop us from developing it, and Bess's book is an attempt to wrestle with the implications of this.
I spoke with Bess about his new book and about the technological challenges that lie ahead.
You say in this book that bioengineering will be the next great technological wave to wash over humanity, and that it will cut more deeply than any of the industrial revolutions of the past.
That's a pretty strong statement. Can you explain what you mean?
We single out the industrial revolutions of the past as major turning points in human history because they marked major ways in which we changed our surroundings to make our lives easier, better, longer, healthier.
So the switch from hunter-gathering and nomadism to settled agriculture meant that we were able to feed much larger populations and develop great cities and all the urban civilization that goes along with it.
And then, another great turning point, the Industrial Revolution, late 1700s, early 1800s, and what it has unleashed in the succeeding 200 years, going from animal power to machine power to electrical power — that has completely transformed our relationship to our environment, to the control that we have over the Earth, how we feed ourselves, how we work. And once again, that went along with a tremendous increase in human population and the complexity of modern society.
So these are just great landmarks, and I'm comparing this to those big turning points because now the technology, instead of being applied to our surroundings — how we get food for ourselves, how we transport things, how we shelter ourselves, how we communicate with each other — now those technologies are being turned directly on our own biology, on our own bodies and minds.
And so, instead of transforming the world around ourselves to make it more what we wanted it to be, now it's becoming possible to transform ourselves into whatever it is that we want to be. And there's both power and danger in that, because people can make terrible miscalculations, and they can alter themselves, maybe in ways that are irreversible, that do irreversible harm to the things that really make their lives worth living.
That's the concern — we've given ourselves, or we're starting now to give ourselves a power that we may not have the wisdom to control very well.
And this revolution in biotechnology, in the ability to tinker with the human genome and alter our own biology, is coming whether we want it to or not, right?
It is, but I'm always careful about saying that, because I don't want to fall into technological determinism. Some of the writers like Ray Kurzweil, the American inventor and futurist, have tended to do that. They say it's coming whether we like it or not, and we need to adapt ourselves to it.
But I don't see technology that way, and I think most historians of technology don't see it that way either. They see technology and society as co-constructing each other over time, which gives human beings a much greater space for having a say in which technologies will be pursued and what direction we will take, and how much we choose to have them come into our lives and in what ways.
And I think that is important to emphasize — that we still have agency. We may not be able to stop the river from flowing, but we can channel it down pathways that are more or less aligned with our values. I think that's a very important point to make when we talk about this.
What's happening is bigger than any one of us, but as we communicate with each other, we can assert our values and shape it as it unfolds over time, and channel it on a course that we'd prefer.
Whatever shape it does take, we're not talking about some distant future here — we're talking about the middle years of this century, right?
Well, before I hurl a bunch of alarmist questions at you, let's pause for a second and talk about the positive aspects of this technology.
How will human life improve as a result of this revolution?
I think it's going to improve in countless ways. These are going to be technologies that are hard to resist because they're going to be so awesome. They're going to make us live longer, healthier lives, and they're going to make us feel younger.
So some of the scientists and doctors are talking about rejuvenation technologies that would extend not only life span but health span — which would mean that you could be 100 years old but feel like a 45-year-old, and your mind and body would still be young and vigorous and clear. So one aspect has to do with the quality of basic health, and having that for a longer period of time.
Some of these chemicals — maybe some of the new bioelectronic devices — will allow us to improve our cognitive capacities. So we'll be able to have probably augmented memory, maybe greater insight, maybe we'll be able to boost some of the analytical functions that we have with our minds. And, in other words, sort of in a broad-spectrum way, make ourselves smarter than we have tended to be.
There will also be a tendency for us to merge our daily lives, our daily activities, ever more seamlessly with informatic machines. It's science fiction now to talk about Google being accessible by thought, but that's not as farfetched as many people think. In 30 or 40 years, it's possible to envision brain-machine interfaces that you can wear, maybe fitted to the outside of your skull in a sort of nonintrusive way, that'll allow you to connect directly with all kinds of machines and control them at a distance, so your sphere of power over the world around you could be greatly expanded.
And then there are genetic technologies. I imagine that one of them will be resistance to cancer — or perhaps to certain forms of cancer — that could be engineered into our DNA at the time of conception. What's more exciting to me is going beyond the whole concept of designer babies to the whole new field of epigenetics that is emerging.
What I see there as a possibility is that you'll be able to tinker with the genetic component of what makes you who you are at any point in your life. One of the most awful aspects of designer babies is that somebody is shaping you before you're born — there's a loss of autonomy that's deeply morally troubling to many people. But if you're 21 years old and you decide, okay, now I'm going to inform myself, make these choices very thoughtfully, and shape the genetic component of my being in precise, targeted ways — then the choice is your own.
The way it's looking with epigenetics is we're going to have tools that allow us to modify our character, the way our body works, the way our mental processes work, in very profound ways at any point in our lives, so we become a genetic work in progress.
What you're describing is utterly transformative, and in many ways terrifying. You point out in the book that social systems have always had time to adapt to these technological watersheds and to develop new habits and new values.
But that won't be the case this time, will it?
No. That's one of the things that worries me. Humans need time to adjust, and I'm not sure we'll have enough. I don't agree with people like Kurzweil who say there will be an exponential acceleration of biotechnologies. There are some aspects of our world that do advance exponentially, like computer processing power, but biotechnology isn't necessarily one of them. Still, that there has been an acceleration over the last century seems undeniable, and the rate of acceleration seems to be increasing.
So even if it's not exponential, it's very impressive, and it means that drastic changes can come about much more quickly than they have in the past in human history. And it takes time for humans to consensually devise new habits, new practices, new attitudes, to arrange their lives in a way that makes those lives fulfilling and stable.
And there are institutional networks all around us that allow us to continue to have a sort of predictable structure to our lives from one day to the next, from one year to the next, and so forth. All these structures — you build them gradually, slowly, and it takes time to adapt.
So it's safe to say we're mostly unprepared for what's coming?
I think we've always been unprepared for these changes, but it's different when the changes come about over a century or two versus a decade or two.
A concrete example is the advent of cellphones and the internet. I see what it does to my students. There are these brilliant young students, and 95 percent of them are glued to their phones — always and everywhere. They're walking around to their next class or to wherever they're going, and compulsively looking at their phones. And it could be a beautiful day outside — the sky, the trees, other people — and they're locked into these little worlds.
Now, you might say that's a positive thing: They're communicating with other people, and the reach of their communication and richness of it have been expanded. But on the other side of it, they're not grounded in the here and now, in the concrete present. And there's something lost.
Are they even aware of what they're losing?
I doubt any of us are aware of what we're losing until it's lost.
In the book, you seem to imply that the scientists and the researchers authoring this revolution have a kind of tunnel vision — they're focused on the incremental advancements but blind to the big picture, to the potential transmogrification of our species.
Is that right?
I wouldn't call them blind. Many of them are very thoughtful people who do stand back. Talking to the robot designers and the scientists in these fields, I've been struck by how thoughtful they are about the implications of their work.
But meanwhile, their job is to make some particular advance that's going to be able to give us a new capacity to heal people, or to give us some more efficient capacity to govern our lives. And it's just harder for all of us to stand back and say, what happens when millions of people are doing this, and what are the hidden costs? What are we losing by this?
I talked to a robot designer, for example, and I said to him: "You're designing these robots that are ever more efficient and effective — what do you see us doing with them?" He said, "Well, you know, when I really step back, I'm very pessimistic about the future of humans to get along with each other. I think we're either going to wreck our planet ecologically or destroy ourselves in some kind of war — and I'm afraid that we're going to wipe ourselves out. What I'm hoping is that my robots will embody a form of intelligence that will survive the collapse of humankind and go out into the universe with intelligence. And, therefore, this creation of intelligence that has emerged with us humans will not die with our species."
I found that astonishing. I walked out of there thinking this was an amazing way to frame one's daily work. So I was impressed with the breadth of vision that lay behind this person who was designing robots.
Meanwhile, we're building these robots, and the rise of automation is going to throw our society into gross imbalance, because we're going to be facing a crisis of jobs. Millions of jobs are going to be automated out of existence. How will people live? What will be the political stresses put on our society when you have chronic mass unemployment?
(Author note: I recently had a discussion with Andy Stern about automation and the future of work.)
I'm always amazed at how little technologists tend to think about the moral and political implications of their work. For example, it's hard to imagine how disruptive this kind of biotechnology will be to our sense of fairness and equity.
We should be very concerned about the societal risks that would emerge alongside these bioenhancement technologies. Because presumably, in the beginning at least, only rich people will have access to this technology, and I wonder what kind of disorder that could spawn.
Well, let's put it this way: If only rich people have access to these technologies, then we have a very big problem, because it's going to take the kinds of inequalities that have been getting worse over recent decades, even in a rich country like ours, and make them much worse, and inscribe those inequalities into our very biology.
So it's going to be very hard for somebody to be born poor and bootstrap themselves up into a higher position in society when the upper echelons of society are not only enjoying the privileges of health and education and housing and all that, but are bioenhancing themselves to unprecedented levels of performance. That's going to render permanent and intractable the separation between rich and poor.
For me, then, one of the imperatives that's going to arise out of bioenhancement is we're going to have to, in a sense, become Sweden. We're going to have to find a way to socialize the benefits of these technologies and offer them, at least as an option, to all citizens.
Doing this in a rich country like ours is hard enough — the challenge of doing this on a planetary scale is far more daunting.
I agree with all of that, but it seems unlikely that we'll be able to socialize these technologies.
The fact that the Western Europeans have done it with education and health care gives me some hope. I mean, there's still inequality in Western Europe; you can still go to Paris and see homeless people. But by and large, there's a safety net in that society that works quite well, and is consistent with a free market and civil liberties in a very democratic system.
So the fact that these large countries in Western Europe, a large part of the planet, have done this pretty effectively, I see that as a source of optimism, that we could do it as well if we want to.
And my thinking in my book is that this may be the event that forces us to truly implement a welfare state like they have in Western Europe. Because the alternative would be so horrifying; you'd basically have a fragmentation of our population into two castes — two biologically very different castes. And I think that would completely undermine the premise of equality that is central to our democratic system.
So the threat to our democracy would be so great that we would, in a sense, be compelled to accept a European model, where through taxation and redistribution of wealth we give subsidized access to these technologies, at least a basic package of these technologies, to all citizens.
You used the word "caste" just now, but in the book you use the phrase "biologically bifurcated humankind" to describe a broader inequality between enhanced citizens of affluent nations and unenhanced citizens of poor nations.
How worried are you about this?
This is my biggest worry. I give talks about this stuff in high schools, and it's readily apparent to high school students that if these technologies come into being in a society that is competitively based and has such great disparities of wealth and privileges as our global society has today, you put those two things together and you have a bifurcation of humankind. Only in this case, the inequalities are inscribed into our very biology, and therefore much more difficult to get beyond.
I believe we'll need some sort of global Marshall Plan that offers these technologies on a subsidized basis to people all over the world. Now, is that completely pie in the sky? Perhaps. In my book, I talk about other instances in which the world's nations have come together.
On the ozone layer, for instance, the rich nations basically said, look, we understand we're all in the same boat. There's going to be a catastrophe with the ozone layer if humankind as a whole doesn't stop using CFCs. So we rich nations will pay for the poor nations to move beyond CFC-based technology, and they did.
And as a result, we bent the curve, and the ozone layer is no longer the pressing problem that it looked like it was going to be if we stayed on the trajectory we were on in the 1980s. Now, this is probably going to be far more difficult and expensive. What I'm really talking about, I guess, is ending global poverty.
Now, that may be impossible for us to do. If it is impossible for us to do, we will end up with a biologically bifurcated global caste system. Those are the stakes that are involved here.
Reading your book, I thought a lot about the psychological impact of these technologies. It seems almost certain that these bioenhancements will encourage an obsession with perfectibility that would be very bad for us, both at the individual and the societal level.
I assume that's your view as well?
Yes. There's a wonderful scholar at Harvard, Michael Sandel, who's written a beautiful book titled The Case Against Perfection. And he worries about what this is going to mean in terms of accepting our own flaws and, even more importantly, the flaws in other people if we become malleable enough that people will be seen as having flaws either through pigheadedness or by choice, rather than having been born as they are.
But I'm not as worried about this pursuit of perfection argument as some of the bioethicists who write about this are, because I think we're certainly not perfect today and we're never going to be perfect. Perfection is a mirage that constantly dances away from us. We'll have augmented powers, but we're going to be nowhere near perfect.
Well, I'm not especially sanguine about human nature, so I tend to think we can't be trusted with instruments of this power.
At some point, though, it seems we'll have to redefine or reconsider what it means to be human. Part of what makes us human is our finitude, our creaturely vulnerabilities. But we may well engineer a way around these problems, and that's a dramatic shift.
I don't see us ever succeeding at engineering our way around all of our vulnerabilities, because we may give ourselves all these powers that I was describing when you asked me to describe some of the positive effects of these technologies. We're still going to have a broad array of vulnerabilities. We're still not going to know why we were put here. We may live longer lives, but people will still die. We'll still be disappointed by things that happen to us. We'll still get frustrated. We'll still fight with each other.
Like you say, human nature will sort of just broaden out further, and I think a lot of the obstacles that we face in trying to live a good life, a meaningful life — they'll still be present, even in a world of bioenhancements.
But what's scary about it is that the human beings will have much greater powers over not just their surrounding environment, not just their ability to engage in warfare and things like this, but they'll have power to reach directly down into the biological basis of their mind and spirit and soul.
If we transform ourselves beyond a certain point, which seems inevitable, we'll have to ask what it means to be a human in a world of walking gods. And I don't know what the answer to that is.
Right. And that's why I worry about not having enough time, because I think, given enough time, we would probably be able to work our way through the very profound questions that these technologies will raise for us. And which, exactly as you say, all come back down to what does it mean to be a human being? What does it mean to live a good life? What are the things that really matter? What are the things that I should not worry about and leave aside?
All these very profound, difficult questions that in the past have been associated with spirituality or religion or morality — we probably will be able to address them in constructive ways.
But it takes time.
I know you're not exactly in the prediction business, but let me ask you this final question: Are we rushing headlong into a dystopian hellscape, or will this be a net positive for human life?
I think we are rushing headlong into a series of choices. Each of us as individuals will have to make decisions about which of these technologies we choose to adopt for ourselves. As parents, we'll have to choose for our children. And as a society, we'll have to decide if some of these technologies need to be banned outright.
A good candidate for a technology that should be banned globally would be a mind-reading technology that allows you to point the device at somebody's head and intrude on their privacy and read their thoughts without their consent or knowledge. That type of technology is an unmitigated evil, and nobody should be allowed to develop or use it.
So we're going to have to be very critical about these technologies, and I think that's going to fall on the basic education systems we have, and the family socialization process. I think people are going to have to be educated so that, as consumers, we do this intelligently rather than self-destructively.
So one possibility is the dystopian nightmare scenario of a bifurcated humankind, a society in which people are transmogrifying themselves beyond all recognition. The other side is a society where we have managed to preserve many of the things that make our lives worth living today: love, friendship, kindness to strangers, civility, cooperation, compromise — all the things that are the hallmarks of what makes our lives fruitful and worth living today.
There's no reason, in my view, why these technologies are inherently incompatible with a fairly decent world. But the choice is going to be up to us as individuals, as families, as communities, as nations, and as humankind. At all those levels, we're going to face a series of tough choices about how we educate ourselves to prepare ourselves for this very swift change.