Fourteen years ago in Wired magazine, Sun Microsystems co-founder Bill Joy wrote a long, brilliant thought-piece titled “Why the Future Doesn’t Need Us.” Three technologies, Joy suggested — robotics, nanotech and genetic engineering — threaten humanity because the odds keep rising that they will spiral out of our control.
Joy’s dark argument was that new scalable technologies can be launched by small groups, even individuals, to trash the planet, even if that trashing is a simple error. Imagine a high school science lab that, with a little online research, creates a new class of bacteria that replicates easily and outcompetes all other bacteria. A petri dish goes home in a pocket, oops, and the freed bacteria soon turn the biological world into gray goo. Nanotech could do the same with miniature robots whizzing through your bloodstream. And artificial intelligence (AI), which once sounded like fiction, is now built into your iPhone. Can we control all this?
You can sense humanity’s growing unease with these potential technological mistakes. Films about artificial intelligence, zombie takeovers, rogue comets and uncontrollable earthquakes are everywhere. We now entertain ourselves by anticipating our species’ demise.
The world is changing at a much larger level
The mistake most of these scenarios make is that they look at outcomes on our own human level, as in what would happen to me, you, and the couple living next door. The real answer is much bigger — that our little species may not matter, that evolution may rapidly move beyond us even if the transition feels like a local, temporary “disaster” to humans. But first, let’s examine the AI issue from our personal viewpoint.
Debates over artificial intelligence often take an anthropomorphic, human-centric approach, built on three questions: will machines ever (a) get smarter than humans, (b) become self-aware, and (c) then destroy us poor souls? World War II code-breaker Alan Turing addressed (a) with his famous “Turing test”: as technology advances, a machine should count as intelligent once it can respond to any question in a way indistinguishable from human thought. (You already see glimmers of this with Apple’s Siri on iPhones.) Researcher John Searle challenged (b), the question of whether seemingly smart machines could ever become conscious and thus truly intelligent, in his 1980 paper describing a “Chinese room” thought experiment. Searle imagined an English-speaking person locked in a room who, given supremely detailed instructions for composing appropriate written Chinese responses, could answer any question slid under the door without ever understanding what the conversation was about. So too, he suggested, machines will grow ever faster at emulating human thought but will never truly be cognizant.
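Searle’s point can even be sketched in a few lines of code. Here is a toy, purely illustrative stand-in for his rulebook (the questions and answers are hypothetical, not from Searle’s paper): the program returns fluent Chinese replies by pure symbol lookup, with no model anywhere of what the characters mean.

```python
# A toy "Chinese room": answers come from symbol-to-symbol lookup,
# with zero understanding of the conversation. The RULEBOOK below is a
# hypothetical, illustrative stand-in for Searle's "supremely detailed
# instructions."
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def chinese_room(question: str) -> str:
    """Slide a question under the door; a fluent reply comes back."""
    # The room never parses meaning -- it only matches symbols to symbols.
    return RULEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # prints 我很好，谢谢。 -- without any understanding
```

The room passes a narrow Turing test for exactly the questions in its rulebook, which is Searle’s provocation: fluent output alone proves nothing about comprehension.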
These first two debates — will machines get smarter, and if so, will they become self-aware? — may not really matter, because the answer to the third question could arrive without either. Eventually, a smart (or merely simulated-smart) program may cause humans harm.
Here’s a simple test: if you built a computer program charged with protecting life on planet Earth, might that program wipe out an invasive species that threatened all other species? If so, humans spreading across the planet might be seen as a threat worth eradicating to save the coral reefs and fledgling birds now being destroyed everywhere.
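The danger in that thought experiment is nothing exotic, just an objective with no carve-outs. A minimal sketch (all species names and threat data here are hypothetical, invented for illustration) shows how mechanically the conclusion falls out:

```python
# Toy "planetary guardian": remove whichever species endangers the most
# others. The data below is hypothetical, chosen only to illustrate the
# failure mode -- there is no rule anywhere saying humans are special.
threats = {
    # species: set of other species it is driving toward extinction
    "kudzu": {"native_vines"},
    "cane_toad": {"quolls", "snakes"},
    "humans": {"coral", "songbirds", "rhinos", "frogs", "tuna"},
}

def most_dangerous(threats):
    """Return the species whose removal would protect the most others."""
    return max(threats, key=lambda species: len(threats[species]))

print(most_dangerous(threats))  # prints humans
```

The program isn’t malicious; it is simply optimizing the goal it was given, which is the whole worry.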
But what if the clouds are intelligent above us?
Now, on to the larger level of AI: What if artificial intelligence has already arisen in a form beyond anthropomorphic mirrors? The best way to visualize this is to imagine you are in space, looking down God-like at planet Earth for the past 10,000 years, and could count every technology gadget from wooden wheel to computer chip as it spreads across the globe. Shivering a bit from the cold, you see: 5,000 years ago, a few thousand wheels and chariots. 200 years ago, suddenly, vast increases in metal weapons and steam-engine-thingies. 100 years ago, electromagnetic TV and radio pulses begin emanating from everywhere on the planet. 60 years ago, metal rocket probes shoot out from the planet to go look at other nearby orbs. 20 years ago, electrical pulses begin connecting millions of fixed silicon/aluminum/plastic machines. Five years ago, tiny mobile devices begin moving around on the bodies of billions of people, all pulsing with networked data. Today, humans begin implanting technology sensors into shelters (walls), rolling exoskeletons (cars), and even mammal bodies, to improve or track vision, heart rates, communication, and the whereabouts of cats and dogs.
Technology, if viewed as a life species, is taking over our planet. It is advancing beyond human control.
Gadgets, networks, and data transmissions are expanding in waves that appear unstoppable, creating what Kevin Kelly calls a “technium” pulsing over the entire globe. The entire universe of technology, this technium, is beginning to act as an autonomous, growing, unstoppable organism. Just as biological evolution takes different paths and cannot be repeated, yet still converges upward along predictable lines (carbon-based lifeforms with mirrored body symmetry and digestive systems pointing down toward gravity), technological inventions may also be inevitable. Someone, somewhere was destined to invent coded thought that became computer code, which developed into cell phones, which built communication networks, which store vast reams of data. Silicon-based intelligence may simply be the next rung up the ladder of evolutionary fate; and this intelligence is manifest not in a single computer speaking to you like HAL in “2001,” but in a vast new ecosystem of connected devices communicating with each other in ways individual humans may not perceive.
The irony of our quest for AI is that it may already be here in a form of crowdsourced intelligence, built partly from individual human minds and partly from the networked technology connections that accelerate our group action. You can see some of this today in prediction markets that show who will win future elections, in stock markets that react instantly to news of economic shifts, in Internet networks that break data into packets and send them seamlessly across the globe along the fastest possible routes. The pulse of technology to improve itself has become an innate force, fueled, yes, by individual human ants who compete to build the next killer app, but leading to a world where faster data transfers and better analytic models are always the next outcome.
At some point our planet will shine with a new intelligence that guides markets, production, information, weather, temperature, and habitats for the biological creatures who built the original tech things. Like a self-driving car, the globe will eventually become a self-automated environment, with AI layers making billions of decisions each second about how to optimize the world toward its desired goals. Eventually, those goals may escape our Earth, sending information and intelligence outward to probe deeper issues: the connections between the black hole at the center of our galaxy and the dark matter around it, or the patterns of creation and entropy that may need to be reborn to stop the eventual heat death of our universe.
The ultimate evolution may be information escaping the bounds of physical objects: pulsing nodes that might live in the clouds of our own world or the far-off gas realms of our galaxy. As religions have predicted, the spirit is willing and the flesh is weak.
The questions are: will our little human minds still guide that technium; will we recognize new intelligence for what it is; and will that emerging, smarter technological being need us any longer?
Posted by Ben Kunz.