What a difference a day makes! On Tuesday, Michael Anissimov posted a plea to his readers to aid the Existential Risk Reduction Career Network — either by “[joining] an elite group of far-sighted individuals by contributing at least 5% of your income” or, “for those who wish to make their lives actually mean something,” by finding a job through the network. Who’d have thought you could make your life mean something by becoming an existentialist?

At any rate, he took something of a beating in the comments (“Harold Camping called, he wants his crazy back,” said one), but I think people might as well put their money where their mouths are. That’s how interest-group politics works in American liberal democracy; it’s part of the give and take of public debate and the way in which decisions get made. Why existential risk reduction would not include a healthy dose of criticism of transhumanism is another matter, but I was happy to see Mr. Anissimov being sensible about one of the routes by which the transhumanist cause will have to advance in the public arena.

Just shows how wrong a guy can be. On Wednesday, Mr. Anissimov published a brief critique of a rather thoughtful essay by Charles Stross, one of the great writers of Singularity-themed science fiction. Mr. Stross expresses some skepticism about the possibility of the Singularity, but Mr. Anissimov would have none of it, particularly when Mr. Stross dares to suggest that there might be reasons to heavily regulate AI research. Mr. Anissimov thunders:
We are willing to do whatever it takes, within reason, to get a positive Singularity. Governments are not going to stop us. If one country shuts us down, we go to another country.
(Now I understand why Bond movie villains end up somewhere in mid-ocean.) He continues:
WE want AIs that do “try to bootstrap [themselves]” to a “higher level”. Just because you don’t want it doesn’t mean that we won’t build it. [Emphases in original.]
Take that, Charles Stross: just you try to stop us!! Mr. Anissimov makes the Singularity look a lot like Marx’s communism. We don’t know quite what it’s going to look like, but we know we have to get there. And we will do anything “within reason” to get there. Of course, what defines the parameters of “within reason” is the alleged necessity of reaching the goal; as the Communists found out, under this assumption “within reason” quickly comes to signify “by any means necessary.” Welcome to the logic of crusading totalitarianism.
1 Comment
"Why existential risk reduction would not include a healthy dose of criticism of transhumanism is another matter…"
This is a great point. I think existential risk reduction is as worthy a cause as Mr. Anissimov suggests. But unlike him, I plan on donating some money to The Center for Bioethics & Human Dignity and the ETC Group. (I'd be open to suggestions if anyone has any.) If transhumanism succeeds, the resulting disaster will be almost as terrible as physical human extinction. There is every reason to take the threat seriously and not dismiss it, which I think some on this blog may unfortunately not quite get.