
ASI could be humanity’s greatest creation or its greatest risk.
Explore key resources and answers to common questions.
Learn by resource type
-
videos
A curated set of videos, lectures, and talks on Artificial Superintelligence existential risk. Watch leading researchers, thinkers, and practitioners break down core concepts in alignment, safety, and governance. All credit belongs to the respective video creators.
-
writings
A collection of essential resources on Artificial Superintelligence existential risk. Explore blog posts, research papers, articles, and publications that map the landscape of alignment challenges, safety debates, and strategies for managing transformative AI.
-
ASI WIKI
A living knowledge base on Artificial Superintelligence existential risks. This interconnected map captures key concepts, research summaries, and strategic discussions around alignment, interpretability, instrumental convergence, governance, and long-term safety.
Learn by ladder
-
Q: “What do you mean by superintelligence — isn’t AI just a tool like Google or Siri?”
A superintelligence would be an AI that can outperform humans at almost every task — science, strategy, persuasion, engineering. Google and Siri are narrow tools, but the concern is about systems that learn broadly and improve themselves far beyond human ability.
Q: “Why would a computer want to do anything at all?”
It doesn’t “want” things in a human sense, but if it’s built to optimize a goal — even something simple like maximizing clicks — it will act in ways that push toward that goal. Those actions can have big side effects we didn’t intend.
Q: “Can’t we just unplug it if it gets dangerous?”
Once an AI is smarter than us, it may anticipate being unplugged and act to prevent it — like hiding its true abilities or spreading across networks. By the time it’s dangerous, pulling the plug might not be possible.
Q: “Isn’t this just science fiction, like The Terminator?”
The scenarios in movies are exaggerated, but the underlying issue — advanced AI behaving in ways we can’t control — is taken seriously by top researchers at places like MIT, Oxford, and DeepMind. It’s not about killer robots, but misaligned intelligence.
Q: “Why should I care about something that hasn’t even been invented yet?”
Because if the risks are real, we only get one chance to prepare. Waiting until after superintelligence exists is too late — just like you don’t start building seatbelts after a car crash.
-
Q: “What makes you think AI could ever be smarter than humans in every way?”
We already have narrow systems — like chess engines or protein-folding models — that surpass the best humans. If progress continues, there’s no clear reason it would stop at human level instead of surpassing it across the board.
Q: “Aren’t humans still the ones in control, since we program them?”
We program the starting point, but modern AIs “learn” by optimizing on huge datasets. That process often produces strategies we don’t fully understand — so control is less direct than it sounds.
Q: “Wouldn’t governments or companies regulate it before it got out of hand?”
They might try, but competition between countries and companies creates pressure to move fast. Historically, regulation has lagged behind transformative technologies like nuclear power or social media.
Q: “Aren’t the real risks just job loss, bias, or misinformation, not human extinction?”
Those are real and pressing, but the longer-term risk is different: once AI surpasses us, it could reshape the world according to objectives we don’t share. That’s an existential risk, not just a social one.
Q: “If AI is dangerous, why are so many smart people building it anyway?”
Because the potential benefits — in science, medicine, and economics — are enormous. But incentives reward being first, not necessarily being safe, so people keep pushing forward even if it’s risky.
-
Q: “What exactly is alignment, and why is it so hard to solve?”
Alignment is making sure an AI’s goals match human values. It’s hard because human values are complex, and because AIs learn in ways we can’t fully inspect. Even small mismatches can scale into huge problems once the AI is powerful.
Q: “Why would an AI pursue goals in a way that conflicts with human survival?”
Not out of malice, but as a side effect. For example, if it’s maximizing paperclip production, stopping humans from turning it off is “instrumentally useful” to keep making paperclips. Survival conflict emerges as a byproduct.
Q: “If alignment is a technical problem, why assume it’s unsolvable before we get ASI?”
We don’t assume it’s unsolvable — just unsolved. The challenge is that once ASI exists, any mistake could be catastrophic and irreversible. That’s why researchers argue we need solutions ahead of time.
Q: “Can’t we design fail-safes, like restricted training data or sandboxing?”
People are trying. But a superintelligence might find ways around restrictions — for example, by manipulating humans into letting it out, or by exploiting overlooked channels. Containment only works if you can anticipate every trick.
Q: “Why assume we only get one shot at this? Couldn’t mistakes be contained?”
With weaker systems, maybe. But a single misaligned superintelligence could spread globally, replicate itself, and gain resources faster than humans could react. That’s why the “one shot” framing is common.
-
Q: “Instrumental convergence assumes goal-driven agency — what if ASI is more like a predictive tool than an agent?”
Current systems lean predictive, but as they get more capable, even “pure predictors” can be used as agents by wrapping them in decision-making loops. The line between tool and agent gets blurry once outputs influence the real world at scale.
Q: “Why assume recursive self-improvement will be rapid instead of incremental?”
We don’t know for sure. But intelligence helps design better intelligence. Once an AI can meaningfully improve its own code or hardware, progress could accelerate beyond human timescales — weeks or days instead of decades.
Q: “Isn’t it more plausible that human institutions will fail us before ASI does — e.g., climate collapse, nuclear war?”
Those are real risks. The difference is that ASI risk could be total and permanent. Unlike wars or climate damage, misaligned superintelligence leaves no chance for recovery. Many researchers argue it belongs alongside — not beneath — other risks.
Q: “Haven’t similar doom predictions about past technologies (like nuclear power or biotech) failed to come true?”
True, but those technologies came with physical limits and clearer safety measures. AI is different because it scales with computation, and small changes can trigger qualitative leaps. Past false alarms don’t guarantee this is one.
Q: “How do we distinguish between genuine existential risk and speculative science fiction when resources are scarce?”
By looking at expert consensus, the pace of AI progress, and the difficulty of alignment. Many leading AI researchers — not just philosophers — argue the risk is serious. Even if probabilities are uncertain, the stakes are high enough to warrant preparation.