Many of us, owing to an intuitive sense of where technological and social progress are taking us, have a preconceived notion of what the future will look like. But as history has continually shown, the future doesn't always go according to plan. Here are 11 ways the world of tomorrow may not unfold the way we expect.
Illustration by Tara Jacoby
1. There Could Be a Resurgence in Authoritarian Rule
Many of us believe that democratic values and institutions will still be around in the future. But as we head deeper into the 21st century, the continuing presence and increased accessibility of weapons of mass destruction could severely upset the political status quo.
As I noted back in 2008 at the IEET's Symposium on Global Catastrophic Risks, technologies that threaten human existence stand to significantly perturb contemporary sensibilities about social control and civil liberties. As we saw after the 9/11 attacks, our governments are more than willing and able to curtail our rights and impose restrictive laws in response to a crisis. Imagine what would happen in the event of something far worse.
Looking ahead, institutions that have served for centuries to protect democratic values — and which we now take for granted — could be suppressed out of fear and desperation. As I noted at the Symposium:
What makes these WMDs different [i.e. bioweapons (such as deliberately engineered pathogens), dirty bombs, weaponized nanotechnology, robotics, misused artificial intelligence, and so on] is the growing ease of acquisition and implementation by those who might actually use them. We live in an increasingly wired and globalized world where access to resources and information has never been easier. Compounding these problems is the rise and empowerment of non-traditional political forces, namely weak states, non-state actors, and disgruntled individuals. In the past, entire armadas were required to inflict catastrophic damage; today, all that's required are small groups of motivated individuals.
We may be entering into a period of sociopolitical disequilibrium that will result in the resurgence of authoritarian rule. Sadly, it may turn out that democracy was a temporary historical convenience.
2. We Could Have Even Less Privacy
On a related note, we also stand to lose many of our rights to privacy. The advent of ever-more powerful surveillance technologies and the need for proactive intelligence will serve as a potent driver of change over the coming years and decades.
As noted by futurist Timothy C. Mack, the explosive growth of surveillance capabilities will encompass more than just terrorism and crime prevention. It will also be used
for such work as epidemiological oversight by the Centers for Disease Control and Prevention, for example. Construction sites, warehouses, commercial office buildings, and parking lots also commonly have surveillance technology installed for the protection of property.
Not to mention the ongoing use of surveillance in tracking our consumption habits.
Interestingly, many of us are coming to accept the diminishment of privacy in our lives. Since the 9/11 attacks, approval among Americans for public surveillance cameras has stood at 70% and rising. We could be heading toward the "transparent society" predicted by scifi novelist and futurist David Brin.
3. Our Future Could Be in Inner Space – Not Interstellar Space
Many of us assume, quite naturally, that human destiny lies in the stars. But according to futurist John Smart, the accelerating complexification we've seen in universal history to date has been primarily a journey not to outer space but inward, into physical and virtual "inner space."
Here's what he told me:
In terms of physics, we've seen accelerating spatial, temporal, energetic, and material (STEM) density and efficiency — taken together, STEM compression — of our universe's most complex, adaptive, and rapidly-improving systems. Consider how top complexity has graduated from universally distributed simple matter, to large scale structure, to galaxies, to special solar systems, to prokaryotic life on Goldilocks planets, to eukaryotic life inhabiting a tiny slice of the bacterial range, to humans living in a vastly smaller and briefer domain, and soon to intelligent technology that may move into nanotechnological and quantum realms.
In terms of information, these systems have also entered into virtual inner space the more advanced they get. They've gotten fantastically better at virtualization, ephemeralization, dematerialization, simulation, intelligence, or "mind". They increasingly substitute thinking over acting, as their ever-improving simulations allow them to explore, discover, create, and compete far faster, better, and more efficiently in mental realms than they could in slow, simple, boring, expensive, and dangerous physical space. This accelerating journey of Earth's intelligence into physical and virtual inner space may rapidly guide us toward black-hole-like, or, if you like, hyperspatial domains. Our civilization hasn't been growing into the universe as it develops but rather growing out of it, in an accelerating manner, like an awakening baby, as our universe itself ages and decays.
Interestingly, Smart says that entering inner space may turn out to be the fastest and most ethical way to communicate with and learn from alien civilizations. If so, this could explain why we have yet to make contact with, and why we don't see any signs of, advanced alien life. Smart also speculates that a "super-ethical machine intelligence," in order to manage the process and ensure memetic diversity, may enforce a kind of Prime Directive to constrain the migration of advanced intelligence to physical and virtual inner space.
You can explore more of Smart's ideas in this paper and tell him if and how he's wrong over at his blog.
It's also fair to assume that the vast majority of humans will never go to space. Yes, we'll very likely establish a robotic and human presence in many parts of the solar system. But let's be honest — space will, in all likelihood, be reserved for a precious few. At least for the foreseeable future.
Image: ZA architects envision an underground colony on Mars
As futurist Ramez Naam told me, "In 2050, there still won't be any more than a trivial number of people outside the orbit of this planet — if any."
4. Bugginess May Be Seen as a Feature
As futurist Jamais Cascio told me, "bugginess" will be increasingly seen as an expected condition of technologies:
We often have a vision of Tomorrow's Technology (™) as being clean and sleek and working essentially perfectly (unless there's a plot-required failure). But AIs will crash, spontaneously reboot, and go through weird looping pauses. Nanotech will be filled with spam viruses and DRM. Self-Driving Cars will occasionally kill their passengers to avoid killing even more people (see Trolley Problem). All of the kinds of things we bitch about with our technologies today will still be with us, in novel forms.
5. We May Never Solve the 'Hard Problem' of Consciousness
Image: 3D reconstruction of the brain and eyes from CT scanned DICOM images. via Dale Mahalko
Cognitive scientists and neuroscientists still aren't sure what to make of what Australian National University philosopher David Chalmers calls the "hard problem" of consciousness. We still have no explanation for how and why we have phenomenal experiences – what philosophers call qualia.
Related: 8 Things We Simply Don't Understand About the Human Brain
Let's assume this problem is outright unsolvable, as opposed to merely difficult. Without a proper model of cognitive phenomenology, we'll never be able to develop fully self-aware robots or AI, nor will we ever upload our brains into a computer (well, we could, but the resulting "minds" would lack any modicum of self-awareness). We'll still see technological and biomedical progress in many areas, but in terms of transcending the hard limits of the biological brain, we could hit a wall.
6. Human Enhancements May Never Be Allowed
Most transhumanists and technoprogressives assume that human enhancement awaits us in the future. Indeed, we generally take it for granted that we'll eventually use the panoply of emerging biotechnologies to make ourselves smarter, stronger, and longer-lived.
But if current sensibilities about this being a eugenic-like initiative hold sway, human enhancement may never happen. As it stands, virtually every country in the world restricts the way genomic technologies can be used. Only therapeutic interventions, such as the treatment of genetic disorders, are sanctioned (and even this is controversial, as witnessed by the recent three-parent IVF technique). In addition, most countries have outlawed transgenic therapies (i.e. the introduction of non-human animal DNA into human DNA) and there's currently a UN-sponsored ban on human cloning.
Part of the fear is that enhancement-tech will facilitate an arms race among humans, compelling parents to enhance their children so they'll fit in and be able to compete with all the other enhanced kids. Problematically, some parents might enhance their children in very precise and narrow ways (e.g. readying their child for a career in football). Because it won't be universally accessible at first, many people will argue that no one should be allowed to have it. Lastly, it could also be dangerous. As I've argued before, human super-intelligence could be a bad idea.
7. Advanced AI Could Always Be One Step Ahead of Us
The development of AI could come back to haunt us. Big time. Such is the concern of Dr. Susan Schneider, a philosopher at the University of Connecticut and author of The Language of Thought: A New Philosophical Direction. In an email to me, she wrote:
If AI becomes superintelligent it will think differently than we do, and it may not act in the interest of our species. Philosophers and scientists are currently working on principles for creating benevolent AI, and keeping superintelligent AI "in the box" so it would be unable to act contrary to our interests. The challenge with benevolence principles is that a sophisticated system may override them, or the principles could create a programming conflict. Discussions of the AI Box Problem voice the worry that superintelligent AI would be so clever that it could convince even the most cautious human to let it out. Frankly, I wouldn't be surprised if the creators of the first forms of superintelligent AI did not even try to keep it in the box. For instance, it could be seamlessly integrated into the internet, being developed by Google, whose chief engineer is currently Ray Kurzweil.
To mitigate this threat, Schneider says philosophers (especially philosophers of mind), metaphysicians, and ethicists could help by developing principles for identifying and defining superintelligence, determining whether intelligent computers can be conscious, establishing under what conditions one can survive radical brain enhancements, and deciding under what conditions AI and humanoid robots should have rights.
8. A Third World War
The First World War was called "the war to end all wars." Then, to everyone's disgust and dismay, the world decided to do it all over again 21 years later, showing that it's impossible to predict the future of global-scale conflicts.
After the Second World War, the geopolitical situation stabilized around a cold war fought by two nuclear-capable superpowers. The fall of the Soviet Union, and the subsequent decline of U.S. hegemony, resulted in a world of geopolitical uncertainty. Where it was once "obvious" that a third global-scale conflict was inconceivable, today it doesn't seem so outlandish.
At the risk of sounding presentist, a serious civil war is currently raging in Syria, a radical Islamist insurgency is claiming large swaths of territory in the Middle East, Russia is meddling in Ukraine, and parts of Africa are absolutely set to explode (including Boko Haram-ravaged Nigeria and a brewing civil war in Libya). As military and economic alliances shift, and as countries get (often unwillingly) pulled into conflicts, a domino effect could set off another global conflagration.
The effects of climate change could further work to complicate international dynamics (water wars, rising sea levels, desertification, and the exodus of climate refugees being some examples).
Needless to say, World War III would irrevocably change things. As Einstein famously quipped, the Fourth World War will most assuredly be fought with rocks.
9. We May Grow to Hate Virtual Reality
In the future, the more time people spend in fully immersive virtual environments, the more aware they may become of the ways in which those environments fail to duplicate the physical world.
Here's what New Mexico State University philosopher Michael LaTorra told me:
Cravings for solid experience in the primary reality of physical sensations will send people into nature, and into urban facilities in which specific senses will be stimulated in an aesthetically pleasing way. These spaces will feature curated sensory excursions into realms of fragrance, textures, sequences of changing ambient illumination, and surprising but not disturbing soundscapes. This will be done in ways that harmonize with the human relaxation response and basic aesthetic sensibilities. In other words, it will not have the "edginess" of current art that seeks to challenge and confront the viewer. It will be more in alignment with the "mindfulness" movement. This will apply with particular force for people who have become committed to others in online-based relationships with only infrequent or even non-existent face-to-face contact.
So, the more virtualized our work and other pursuits become, the more we will want to "keep it real" by acquiring pleasurable experiences in base reality.
10. Ten Billion Humans by 2100 Could Be Seen as a Success
Neo-Malthusian fears are all the rage, but as Jamais Cascio told me, a world with 10 billion people will be perceived as an achievement rather than a failure:
Current UN projections have the population hitting 9.5 billion people (or so) by late-mid century. But nearly every ecoscientist will tell you that the planet cannot support that many people at anything approaching a viable standard of living. So in order for 10 billion people to still be here a generation or three after we hit peak population, we would have to have figured out some way of making that situation viable. In other words, a world in which 10 billion people live successfully on Earth is a world in which we've solved nearly all of the environmental and resource problems we're wrestling with now. It would have its own set of dilemmas and crises, of course, but the problems we consider essentially existential today would be considered solved.
11. Utopia May Not Look Like Anything We've Imagined
Broadacre City, Frank Lloyd Wright (1932)
We have all but lost our utopian sensibility. A century of world wars, genocide, and fanatical totalitarianism will do that. Today, any hint of utopianism — whether in our daily lives or in scifi — is met with scorn and accusations of extreme naïveté. Part of the problem is that one person's utopia is another person's hell; it's difficult, if not impossible, to sketch out the most ideal condition for humanity, and it doesn't help that most people associate utopian dreams with repression and radical political ideologies.
But that's not to suggest we shouldn't put our faith in perpetual progress and see where it takes us. This is, after all, the mission of the European Enlightenment. The future could look quite utopian by contemporary standards, similar to the way parts of our world today might appear preferable to the people of the past. As noted by University of Manchester cultural theorist Terry Eagleton, "The future may or may not turn out to be a place of justice and freedom; but it will certainly disprove the conservatives by turning out to be profoundly different from the present."
So what might a future utopia look like? For starters, how about a world in which suffering, both in the human and animal realms, has been radically diminished, and where everyone's basic material needs have been met? But to get there, and again in the words of Eagleton, "We must indeed beware of arid blueprints; but the truth is that conservatives dislike utopia because they find the whole idea of social engineering distasteful, in contrast to spontaneous social growth; and leftists need to insist that social engineering can undoubtedly be progressive."
Follow George on Twitter and friend him on Facebook.
from Gizmodo http://io9.com/11-ways-the-future-could-turn-out-differently-than-we-e-1682524805