Interstellar Spacecraft, Positronic Brains, Personal Forcefields… Microfilms?!
It is 2061, and Multivac is retiring. The city-sized, self-aware computer has surpassed the designs of its human engineers and taken charge of its own maintenance and upgrades. It can flawlessly predict the outcome of democratic elections and foresee and prevent any looming social or economic crisis. It has designed spacecraft to explore the solar system and contrived a method to perfectly harness the sun’s energy, eliminating any need for fossil or nuclear fuels. Yet in this technological utopia, imagined by Isaac Asimov in 1956, Multivac runs on vacuum tubes and communicates with its technicians through punch cards and ticker-tape print-outs. Likewise, in his masterpiece Foundation (1951), we’re taken some 22,000 years forward, where humanity has spread its wretched genes to every habitable planet in the galaxy, yet the most advanced form of data storage is microfilm. Asimov was noted as an imaginative and realistic author, but his visions of the future were always anchored in the values and technology of his own time, and that anchoring tells us something about the nature of speculative fiction itself.
How science-fiction authors introduce technology into their narratives largely falls within three broad categories: contemporary technologies which have since become obsolete, wrapped in cellophane and aluminum foil (think video pay-phones here); technologies which are still well beyond our grasp but seemed in the offing at the time of writing (force fields and cold fusion); or technologies which didn’t exist then but do today. When a technology falls into the last category we immediately gush about the author’s prescient mind and repeat, in hushed tones, that inane mantra: “They were so far ahead of their time.”
It’s all bollocks, of course. I don’t say this to denigrate science-fiction authors: their job is not to predict the future but to extrapolate from the present to generate formidable and serious satire. The Day of the Triffids (1951), for example, is not survival horror but a deconstruction of the Cold War arms race. Mining a similar vein, Watchmen (1987) is not just an alternate history with costumed heroes but is, amongst other things, an examination of the nationalistic and ideological antipathy inherent in the US-USSR conflict. Future settings simply let authors take contemporary issues and exaggerate them, which they do with savvy and finesse: witness The Left Hand of Darkness (1969) and gender, or “Judgment Day” (Weird Fantasy #18, March-April 1952) and racial segregation. In this capacity they weave timeless narratives, even as the technology they portray dates rapidly. It is a questionable practice, however, to dig up an instance of a fifties author describing a piece of technology that looks like an iPod, or the internet, and marvel at their clarity of vision. Science-fiction authors are rampant speculators, and even their crazier guesses are going to be correct from time to time; remarking on such a coincidence is indulging in confirmation bias.
Confirmation bias refers to the undue importance we place on events that strike us somehow, such as when we recall seeing a raven on the day we received bad news and conclude that the two incidents were related. Having noted something once, we are biased to look for more examples of it and discard contrary evidence, until we develop heavily ingrained beliefs informed entirely by anecdote. When we read an account of the future published in 1901 we look for what it got right and disregard what it got wrong, giving it a prophetic mystique. It’s what allows us to give credence to Nostradamus or the Mayan calendar, by finding, in hindsight, parallels between their predictions and major disasters or devastating conflicts.
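The arithmetic behind hindsight prophecy is easy to sketch. The following toy simulation (all figures are arbitrary assumptions, not data about any real author) shows why a field full of speculators will always produce a few apparent visionaries once we count only the hits:

```python
import random

random.seed(0)

AUTHORS = 100   # hypothetical sci-fi authors (assumed figure)
GUESSES = 20    # speculative "predictions" per author (assumed)
HIT_RATE = 0.15 # chance any single wild guess pans out (assumed)

# Each author makes many independent guesses about the future.
hits = [sum(random.random() < HIT_RATE for _ in range(GUESSES))
        for _ in range(AUTHORS)]

# Hindsight keeps only the correct guesses: nearly every author
# scores a few "prophecies", and the luckiest look like oracles.
print("average hits per author:", sum(hits) / AUTHORS)
print("best 'prophet' got", max(hits), "of", GUESSES, "right")
```

With these numbers the typical author lands a handful of correct predictions by pure chance, and the luckiest outlier looks uncannily prescient, which is exactly the selection effect the paragraph above describes.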
To demonstrate the point I’ll be examining three works of fiction which have, at one time or another, been described as anticipating the World Wide Web with remarkable clairvoyance: E.M. Forster’s “The Machine Stops” (The Oxford and Cambridge Review, November 1909), Murray Leinster’s “A Logic Named Joe” (Astounding Science Fiction, March 1946), and Orson Scott Card’s Ender’s Game (1985).
Forster’s Wells-inspired dystopia finds the world’s population living beneath the planet’s surface in individualized cells, wasting away in front of video screens that allow instantaneous communication across the network. They occupy themselves by videoconferencing and sharing hackneyed observations, rarely visiting the surface or one another, while their earthly needs are catered for by the Machine. Inevitably, it all goes to shit when the Machine starts malfunctioning, but the people, being supreme beings of leisure, remain content even as life support fails and humanity perishes. Squint hard enough and some of the technology seems surprisingly modern for its time: networked computer terminals, Skype, StumbleUpon. But the technological leap wasn’t as great as we might imagine: telegraphs and telephones had been networking the planet throughout the nineteenth century, and the concept of a television screen was already decades old by the time the story was written. Conceptually, it’s an even more fundamental miss: we may spend six hours a day refreshing Facebook, but the internet has not stifled human curiosity or innovation, and unlike the Machine there is nothing centralized about the World Wide Web. What Forster gave us, rather, was an updated Moloch narrative, with a machine-god devouring its worshippers. It’s an allegory that still has traction today, but it does not serve as a very good illustration of modern communications technology.
Leinster’s comical “A Logic Named Joe” hits closer to the mark. Decades before Microsoft’s vision of ‘a computer on every desk and in every home’, when there were only a dozen computers in existence, Leinster imagined everyone owning a ‘logic’ (a computer terminal):
You got a logic in your house. It looks like a vision receiver used to, only it’s got keys instead of dials and you punch the keys for what you wanna get. It’s hooked in to the tank, which has the Carson Circuit all fixed up with relays. Say you punch “Station SNAFU” on your logic. Relays in the tank take over an’ whatever vision-program SNAFU is telecastin’ comes on your logic’s screen. Or you punch “Sally Hancock’s Phone” an’ the screen blinks an’ sputters an’ you’re hooked up with the logic in her house an’ if somebody answers you got a vision-phone connection. But besides that, if you punch for the weather forecast or who won today’s race at Hialeah or who was mistress of the White House durin’ Garfield’s administration or what is PDQ and R sellin’ for today, that comes on the screen too. The relays in the tank do it. The tank is a big buildin’ full of all the facts in creation an’ all the recorded telecasts that ever was made—an’ it’s hooked in with all the other tanks all over the country—an’ everything you wanna know or see or hear, you punch for it an’ you get it. Very convenient. Also it does math for you, an’ keeps books, an’ acts as consultin’ chemist, physicist, astronomer, an’ tea-leaf reader, with a “Advice to the Lovelorn” thrown in. The only thing it won’t do is tell you exactly what your wife meant when she said, “Oh, you think so, do you?” in that peculiar kinda voice. Logics don’t work good on women. Only on things that make sense.
Ender’s Game is set in a future where humanity is locked in a war of annihilation with an alien species, desperately training cadets from childhood to be the perfect tacticians. The highly intelligent Ender Wiggin is selected for the program and forced to leave behind his elder siblings Peter and Valentine, who are equally gifted but don’t make for officer material. Instead, they get on to the net, hoping to influence the world as civilians under the pseudonyms Locke and Demosthenes, and some of the actions they take would seem curiously familiar to us today. They teach themselves argumentative skills by making inflammatory comments on forums (flamebaiting), use their anonymity to masquerade as adults, adopt other user names to use as foils (sockpuppeting), and find international celebrity under pseudonyms (as did moot). But at the time of writing, internet communities and online forums already existed, if not the World Wide Web, and in every sense Card conceptually mishandles the modern internet. Close as he was to the inception of the World Wide Web, he imagines people logging on to read essays rather than tweets. He perceives its structure as hierarchical rather than collaborative, as an extension of the traditional media rather than a people’s forum capable of completely subverting all the old channels of information. And as Randall Munroe illustrates, the signal-to-noise ratio is absolutely ridiculous.
Even though Card was writing at the dawn of the internet, he could not imagine its properties any more than Forster, Leinster, or Asimov could, because the World Wide Web allowed for a fundamental shift in the way information propagates and in the way people network with one another. Speculation might have told you the future would involve personal computers and worldwide networks, but massive collaborative projects such as Wikipedia, new forms of media distribution such as BitTorrent, and social networking sites have little to do with technological advances and everything to do with how humans embrace, reject, and process technology. It’s this human interaction which defines the present. Science-fiction, as speculative satire and serious absurdism, exaggerates the social concerns of its own times by distorting the setting, and when we focus on the setting too much we miss the point. We can’t look to past predictions of the future for insight into our present. We need, instead, to create our own narratives.