Mark Walker recently wrote an interesting piece over at The Global Spiral suggesting that when it comes to preventing the extinction of civilization, transhumanism is the best of the bad options we have. He frames the problem in a familiar way: the democratization of existential risks. As things are going now, more and more people will become capable of doing greater and greater harm, particularly via biotechnology. But if business as usual is in effect the problem, relinquishment of the knowledge and tools to do such harm would require draconian measures that hardly seem plausible. Transhumanism, while risky, is less risky than either of these courses of action because “posthumans probably won’t have much more capacity for evil than we have, or are likely to have shortly.” That is to say, once you can already destroy civilization, how much worse can it get? Creating beings who are “smarter and more virtuous than we are” has a greater chance for an upside, as “the brightest and most virtuous” would be “the best candidates amongst us to lead civilization through such perilous times.”
At one level, Walker’s essay might appear as mere tautology. If the transhumanist project works out as advertised (smarter and more virtuous beings), then the transhumanist project will have worked out as advertised (smarter and more virtuous beings will do smarter and more virtuous things). But more interestingly, Walker nicely encapsulates a number of issues that transhumanists regularly seek to avoid thinking seriously about. For example:
1) What is the relationship between human and posthuman civilization? If proponents of “the Singularity” are correct, then the rise of posthumans would likely be just another way of destroying human civilization. Our civilization will not be “led through perilous times”; it will be replaced by something new and radically different. One could say that at least then human civilization would have led to something better, rather than simply lying in ruins. But then the next question arises.
2) What makes Walker think that posthuman wisdom and virtue will look like wisdom and virtue to humans? Leaving aside the fact that humans already don’t always agree about what virtue is, the things we label virtues get that label because we are the kinds of beings we are. By definition, posthumans will be different kinds of beings. At the very least, why should we expect that we will understand their beneficent intent as such any better than my cat understands I am doing her a favor by not feeding her as much as she would like?
3) Walker suggests we have “almost hit the wall in our capacity for evil.” I hope he is right, but I fear he simply lacks imagination. The existing trajectory of neuroscience, not to speak of how it might be redirected by deliberate efforts to create posthumans, seems to me to open exciting new avenues for pain and degradation along with its helping hand. But be that as it may, I wonder if “destruction of human civilization” is really as bad as it gets. As is clear from discussions that have taken place on Futurisms, for some transhumanists that would hardly be enough: nature itself will have to come under the knife. That kind of deliberate ambition makes an accidental oil spill, or knocking down a few redwood groves, look like shoplifting from a dollar store.
So: human beings have made a hash of things, but since we can imagine godlike beings who might save us we should go ahead and try to create them. We might make a hash of that project, but doing anything else would be as bad or worse. That’s what you call doubling down.
2 Comments
Walker's essay is basically an inflated rehash of arguments I've encountered any number of times in on- and offline transhumanist bull sessions: We need to colonize space or create superintelligent (and wise and good) AI or superhumans or superpowerful technology or defensive weapons before [insert doomsday scenario here] happens, to save humanity (in some form)….
Yeah, so Walker is a phil prof and writes in a stuffy academic style with a few footnotes. The mediocrity of his thinking is illustrated by his choice of doomsday scenario: that some malevolent biohacker is going to create a virus that wipes out humanity (or two to make sure of it). Really. I see no evidence Walker knows much about or has made any effort to assess (e.g. by consulting a range of experts) the seriousness of such a presumed danger, or to think seriously about what should be done about it if the danger is real. Nor does he explain how creating "posthumans" would serve to deflect this threat, other than that they might be wise and good.
As you point out, the posthumans might just as well (and more likely, I would think) not be particularly wise or good, even if they were more intelligent, which is also dubious. Walker invokes another familiar argument, that clandestine efforts to engineer humanity would be the inevitable outcome of attempts to suppress such efforts, and would be all the more dangerous. He seems unaware of how unlikely it is that anyone will be able to engineer human flesh to create a significantly more intelligent (let alone wise and good) living creature any time in the foreseeable future. Or that, in the meantime, we will certainly have increasingly intelligent machines, and will be able to use those machines to expand the reach and effectiveness of our own intelligence. He also doesn't address why the most intelligent, wise, and good people don't actually run the world today, or why they would be more likely to if they were technologically created.
Walker's argument, stripped down to a more credible form, is that we will need the power of emerging technologies to deal with many problems and dangers that, ironically, exist in large part due to existing technologies. Thus the "relinquishment" of future technologies as tools of humanity is generally not a good idea.
I tend to agree with that, but it is completely orthogonal to the question of whether it is desirable to attempt the engineering of humanity itself at the organic level, or whether that is something we should avoid — for fundamental reasons which do not seem to be found within the scope of Walker's philosophy.
Incidentally, I just published an article over at the IEET that also discusses Walker's article. If anyone's interested, the URL is: http://ieet.org/index.php/IEET/more/verdoux20100819.