To Philosophize on the Future

Predicting the future is a messy business. “Futurology” is the art of speculating about the future and, in practice, it’s an optimist’s game. Indeed, few want to think poorly of their own future, or humanity’s.

There will doubtless be innumerable innovations in the coming years. Today’s vision of the future incorporates the idea of a continued advance of technology. It’s second nature to assume that futuristic societies will have technologies unfathomable to our present selves.

But the pessimist’s visions are what often concern us the most; they make up the most striking headlines: “climate change to become irreversible,” “war and devastation, rising tensions between nuclear superpowers,” and so on.

So why not try to predict? Few people have examined this as much as philosopher Nick Bostrom, whose work has been fundamental to this post. He puts it elegantly, “A capacity to learn from experience is not useful for preparing for the future unless we can correctly assume (predict) that the lessons we derive from the past will be applicable to future situations.”

Thinking deeply about the future is not about imposing a fantastical vision onto our expectations of what is to come. In fact, it is quite the opposite: technocentric philosophy aims to derive the most probable outcomes for humanity as a whole from current trends.

So let us consider the future for just a bit, as imagined by Bostrom. And perhaps that will shed more light on our present circumstances.

Four Paradigms of the Future

If we imagine the progress of humankind over the past ten thousand years or so to be in the forward direction, then the first two of these possibilities take us backward.

Extinction

The first paradigm is that of near-future extinction. This essentially means that humans die out before any of the other futures can be fully realized. (Note that humans could also “go extinct” by evolving, over the long term, into a separate species, but that scenario belongs to the fourth paradigm.)

The foremost extinction risks are not easily identified, but they would most likely arise from human activity. These include lethal and highly infectious viruses and bacteria, advanced bioweapons, and other destructive technologies (pernicious artificial intelligence, perhaps) that could wipe out the entire human population (an unfathomable and unlikely outcome).

In his original paper, Bostrom does not give enough credit to natural extinction risks, which, although relatively minor, are still significant enough to matter (e.g., large volcanic eruptions that could destroy global agricultural output). However, he carefully defines this paradigm to include only disasters that are unrecoverable for humankind.

Even with dangerous infectious diseases, if a very small population survives with immunity, the species does not necessarily go extinct. This brings us to the second paradigm he identifies: the notion of recurrent collapses.

Recurrent Collapse

Here, humankind is able to survive in the long term, but technological and societal development could become cyclical. Some large disaster could set the population back tens of thousands of years. Supposing that such an event is recoverable (i.e., society can redevelop to its pre-collapse state), there is again the potential for another collapse.

An important criticism of Bostrom’s model, of course, is that these well-defined “families of scenarios” are limited by the flexible nature of society over such timescales. It may very well be that, over the course of hundreds of thousands, perhaps millions, of years, humans develop, regress, develop, regress, and so on, until at some point we go extinct.

A reason to focus on space exploration over the long term is that, assuming its success, spreading human populations across many planets makes it less likely that a single major event could cause the extinction of the entire civilization.

Further, note that the “looming” or growing disasters we commonly point to today, notably climate change, most likely fall under this category. If climate change is not adequately managed, the most probable scenario is one where many people suffer but many others are still able to live and adapt to changing circumstances. (In other words, climate change, although terrible and destructive, is likely not a species-ending disaster.) Nuclear proliferation and mutually assured destruction also fall under this category (ever read The Chrysalids?).

Plateau

This is an interesting paradigm to discuss, but Bostrom rightly assigns it a low probability. A plateau essentially means that humans hit some sort of ceiling on innovation before we become “posthuman” (more on that below). It’s difficult to say how this would happen, but he outlines two trajectories.

One: we progress a bit further (maybe even for tens of thousands of years), but progress eventually stops. Two: we stop progressing technologically from here on.

The latter trajectory feels rather unintuitive. After all, humans have been progressing for thousands of years; why would we stop now? But it’s important to recognize that, however probable continued progress may seem, there is no “law of intelligent life” that guarantees innovation.

Some more food for thought.

Posthumanity

Finally, perhaps the most intriguing possibility: becoming “posthuman.”

Before we consider Bostrom’s definition of posthuman, we can look at what “prehuman” means in the same context. Prehuman does not refer to the time before we were Homo sapiens. Rather, it describes a state in which we lack at least one of three capacities (there are certainly more, but these are particularly notable):

  1. “Healthspan”—a combination of lifespan and health;
  2. Cognition—ability to reason, understand, remember, learn;
  3. Emotion—to feel and enjoy life.

By lacking, we mean having these capacities to a lesser extent than we (humans) do today. When exactly we turned from “prehuman” to human is unclear, but it does not actually matter.

The same idea goes for posthumanity, where our successors exceed the current human maximum in at least one of the three capacities. Perhaps posthumans will have significantly longer healthspans, be able to process information at a much faster rate, or live in a world where suffering, by our present definition, has been eliminated.

These changes, according to Bostrom, are most likely to come about through technological advances, such as widespread bioengineering, expanded medical capacity, or entirely mechanized labour. The definition is intentionally vague.

Essentially, this is where we’ll end up if technological progress does not halt. Although it may be hundreds of thousands of years away, we can still be optimistic about these prospects for humanity. Furthermore, it is impossible to ascertain whether progress will continue indefinitely (whatever that looks like!) or halt somewhere down the line.

Ultimately

Current physics supposes a heat death of the universe, in which energy reaches perfect thermodynamic equilibrium: nothing is possible, and nothing will exist in the end. Whether that will be the ultimate end of everything is, at present, a question of philosophy.

No matter what, however, the future is still unwritten. Today is still a moment in time that could make a difference. And looking into the fog of the future could inform our present.

Further Reading

If you’re still here, you must be very interested in this topic. The paper this post draws on is Nick Bostrom’s “The Future of Humanity” (published in New Waves in Philosophy of Technology, 2009); more of his work is available at nickbostrom.com.
