CNN has a rather silly (what else?) piece up called “Why games will take over our lives,” interviewing Carnegie Mellon professor Jesse Schell. Among other things, it speculates that within the next five years, “toothbrushes will be hooked-up with Wi-Fi Internet connections,” so that when others know how often we brush, we will have an incentive to brush more often. From this, the piece moves in short order to the central thesis:
Schell says dental hygiene — and, really, just about everything else — will become a game. He thinks the “gamepocalypse,” the moment when everything in our lives becomes a game, is coming soon — if it’s not already here.
The article usefully illustrates what seem to be two recurrent features of futurism. The first is one of the most basic moves of futurists, celebratory and alarmist alike: take some techno-social trend, blur its boundaries to near-dilution, and thereby extrapolate it to everything, so that in just a few short years all of society will be defined by it. (Speaking of which, remind me to write about the looming portmantocalypse.)
This move is particularly evident in fictional futurism of the self-consciously "cautionary what-if" variety: think Repo Men and Surrogates, just to name two recent cinematic examples. And all transhumanists seem to have some rapturous vision of the future as defined by their favorite technology. But this move is also evident in much futurism of less extreme varieties, both contemporary and historical.
All this is obvious enough, but there is something of an equal but opposite problem that is much more subtle: it seems that the combined popularity, fervency, and specificity of futuristic speculation winds up blinding us to how basically correct much of it turns out to be. When we (legitimately) dismiss the “gamepocalypse” scenario, it becomes that much easier to shrug off the extent to which gaming, virtuality, and digital immersion really are altering our lives. The extreme predictions end up functioning like a disinformation campaign.
The problem of futuristic specificity is particularly acute in fiction (which all speculation is, to some extent). This is because much of the power of fiction is aesthetic: for example, our repulsion at Orwell's 1984 isn't just a philosophical response to its narrative; we are also drawn into its world, imagining ourselves in it and experiencing the dread of what it feels like to live in its dystopia. But the moral repulsion that 1984 teaches us to recognize then becomes linked to our aesthetic sense of it. Rather counterintuitively, because our own world still feels like our plain old everyday world and not like what we read, 1984 remains emotionally hypothetical, numbing us to how our society has come to resemble that of the novel in some ways (especially surveillance). Even if we recognize the resemblance on an intellectual level, it's hard to find it nearly as worrisome with reference to the book, not because we're unfamiliar with the book but, in a way, because we're too familiar with it.
(hat tip: Ann Kilzer)
Very interesting. The phenomenon you mention at the end seems rather like inoculation, except that the result is not that we avoid the disease but that we ignore it.
While reading this, I thought of the young man in the feature film, Children of Men, who is absorbed in a game of some kind while Theo and his minister cousin talk. That novel (by P.D. James, and quite different from the film) seems to bring together Orwell and Huxley in a strange way, as the people have become obsessed with protection, comfort, and pleasure (foils to fear, want, and boredom).
I also thought of Neil Postman's caution that our culture has more to fear from Huxley's vision of the future than from Orwell's. But I wonder if it will not be some strange convergence of the twain that sinks us.
Thanks, Jim. Well put. (Incidentally, in case you're interested, a couple of years ago we published in The New Atlantis a joint review of both the film and the novel versions of The Children of Men.)
I find that fears of tyranny through information need to be viewed through the prism of:
1. What do the courts say about it?
2. Is there software sophisticated enough to put all the pieces together? Human labor is too expensive to spend on monitoring your tooth-brushing habits, even if someone could.
I am seeing troubling signs, since software is getting better.
Walmart's computers automatically send more Pop-Tarts to areas that have suffered hurricanes.
Disney and Vegas use sophisticated facial recognition software to track people.
Credit card companies know certain buying habits make for good borrowers (yes, birdseed).
Using bank, credit card, and IRS documents, a sophisticated program could spit out "people of interest" for the state to watch.
Just as the Germans used IBM census information to round up the unwanted in WW2 (it is one thing to know a town has Roma, it is another to have a list of names and family relations), computers could be used as a tool of tyranny.
However, here in the US I see a strong court system that protects individuals and their "right to privacy".
I see this type of tyranny happening much more quickly in Europe.
The United Kingdom already has enough cameras to track every car's license plate in the country….