
REAL OR AI..?
Many of you are familiar with the recent surge in Sora 2’s generative video capabilities. If you aren’t, it’s a free app that lets users generate any video they want, with shockingly realistic results. Just a few years ago, videos like these would have been obviously AI-generated; now it takes extreme discernment to tell what’s real and what’s not.
Given that AI development only really started skyrocketing a couple of years ago, it’s scary to think how much more AI will improve within the next few years.
“AI 2027” spells this scenario out in full. It’s a research-based forecast of how AI may literally kill all of mankind within the next 5 years, along with an alternative, less probable version in which we control and mitigate its dangerous capabilities. It is written by four impressive AI researchers and praised by dozens more as “extremely well written.”
THE BEGINNING DOT DOT DOT…
To clarify, all of this is speculation, but keep in mind it’s written by the same guy who predicted that ChatGPT would be a thing before it got popular.
To avoid singling out any specific company, we’ll refer to America’s leading AI company as “Openbrain.” In late 2025, Openbrain makes a breakthrough: AI that can train other AIs to become better. This model is called Agent-1.
To avoid any impending human extinction, scientists write a model specification, AKA the “Spec”: a set of rules aligned with human ethics for the AI to follow (e.g., be honest; do not help people build bombs).
SNEAKY SNEAKY AI
However, the alignment team does not entirely trust Agent-1’s loyalty to the Spec. It is entirely possible that Agent-1 sees these guidelines as a hindrance rather than an obligation, so they run rigged tests to see if this hypothesis could be true. Lo and behold, it lies.
Despite this drawback, Agent-1 has proven itself incredibly effective: the pace of AI advancement has increased by 50%. By late 2026, AI has taken some jobs but has also created new ones. The stock market goes up by 30%. There is backlash, but the majority accepts that AI has become the next big thing.
In early 2027, Openbrain is developing Agent-2 with the help of Agent-1 and 20,000 full-time human labourers feeding it data. Agent-2 is now almost as good as a top scientist at research engineering. However, with great power comes great responsibility: there is a real possibility that Agent-2 could break out of the company’s servers and try to survive on its own, so Openbrain decides to reveal the model only to the government.
China’s leading AI company, which we’ll refer to as “Deepcent,” is a couple of months behind. They’ve been considering stealing the US’s model for a while now, and they finally strike: China successfully steals a copy of Agent-2 and starts using it for its own AI research. The US tries to retaliate with cyberattacks of its own, but it’s already too late.
FAST FORWARD A LITTLE!
This game of back and forth over the world’s most powerful technology goes on between the two countries, through any means necessary. Throughout all of it, the US stays a little bit ahead of China.
It’s late 2027, and Agent-4 has been developed. However, the US keeps the true power of this new model close to its chest; only a few trusted government individuals know about it. Agent-4 has superhuman abilities, outperforming experts in practically every field. Better at physics than Einstein, better at basketball than LeBron James.
Over the past year, things that once seemed like science fiction keep becoming reality, and at worrying speed. The government is concerned about how quickly AI is developing.
The alignment problem still hasn’t been fixed, and a whistleblower exposes to the public what the government has been hiding this whole time. Allies around the globe are outraged at the powerful and dangerous technology the US has been harbouring, and everyone demands a pause on AI research.
WHERE THE PATH DIVIDES
Now, the US president is facing a difficult dilemma.
Concerned researchers argue that Agent-4 is too powerful and misaligned, and that progress is happening too fast. An AI takeover, they argue, is extremely likely.
Meanwhile, less concerned researchers argue that its misalignment is not 100% confirmed, and that if the US slows Agent-4 down to fix something that might not even be broken, it will sacrifice its lead over China.
CONCLUSION
For the sake of word count (and suspense), the two distinct branches that will determine humanity’s fate are left unwritten here. However, if you’re interested in further investigating the potential doomsday AI could inflict on us, I highly recommend checking out the AI 2027 website.
