ChatGPT answering "What is AI?"
Artificial Intelligence (AI) is the field of computer science focused on creating systems that can perform tasks that normally require human intelligence. AI enables machines to learn from experience, adapt to new inputs, and perform tasks like problem-solving, language understanding, perception, and decision-making.
AI can be broken down into several types and subfields:
- Narrow AI (or Weak AI) – AI systems designed for specific tasks, like voice recognition (e.g., Siri, Alexa), image recognition, or playing a game of chess. They excel at a narrow set of tasks but lack general awareness.
- General AI (or Strong AI) – Hypothetical AI that could perform any intellectual task a human can do, with the ability to reason, plan, and understand complex concepts. This level of AI does not yet exist.
- Machine Learning – A subset of AI that uses algorithms and statistical models to allow computers to learn from and make decisions based on data. For instance, machine learning is used in recommendation systems and predictive analytics.
- Deep Learning – A further subset of machine learning that uses artificial neural networks, inspired by the human brain, to analyze data and make decisions. Deep learning is commonly used in advanced image recognition, language processing, and autonomous driving.
- Natural Language Processing (NLP) – The branch of AI that focuses on enabling machines to understand, interpret, and respond to human language in a meaningful way.
AI is increasingly used in fields like healthcare, finance, automotive, education, and customer service, transforming how we interact with technology in everyday life.
Gemini answering "What is AI?"
Artificial intelligence (AI) is a branch of computer science that aims to create intelligent agents, which are systems that can reason, learn, and act autonomously. AI encompasses a wide range of techniques and technologies, including machine learning, natural language processing, and computer vision.
Key Concepts in AI:
- Machine Learning: This involves training algorithms on large datasets to make predictions or decisions without being explicitly programmed.
- Natural Language Processing (NLP): This enables computers to understand and process human language, allowing for tasks like language translation and sentiment analysis.
- Computer Vision: This focuses on teaching computers to interpret and understand visual information from the world, such as images and videos.
Applications of AI:
AI is being used in a variety of fields, including:
- Healthcare: AI is used to develop new drugs, analyze medical images, and provide personalized treatment plans.
- Finance: AI is used for fraud detection, algorithmic trading, and customer service chatbots.
- Autonomous Vehicles: AI is used to develop self-driving cars and trucks.
- Entertainment: AI is used to create realistic computer graphics and generate personalized recommendations.
Can AI lead us into the apocalypse?
Can AI end humanity?
Renowned AI experts assign a significant probability to AI bringing about the end of humanity. Is that a realistic prediction or doomsday talk? Short answer: it's not AI that's the problem, it's us.
"I think there's a 25% chance that things go really, really badly." That's not a blogger with a tinfoil hat. That's Dario Amodei - CEO of Anthropic, the company that built Claude - putting a number on the probability that AI ends civilisation as we know it. He is the person closest to the technology, with the most to gain from public confidence in it, giving it a one-in-four chance of going catastrophically wrong.
So, are the machines coming for us? Not quite. But there's a more nuanced, and probably more troubling story here. It starts with understanding what AI actually is, and what it decidedly is not.
About this post
In this article we review claims from key experts that AI could create catastrophic outcomes for humanity. We evaluate these claims based on how AI actually functions and on how we use the tools.
Key takeaways
No, AI is not by itself going to end the world as we know it anytime soon. However, if we begin to use AI in ways that are not appropriate to its abilities, we might soon defer decisions to AI that should always be made by humans.
No AI tools were used in the creation of this text beyond standard software features like spell-check.
Two Flaws in AI Understanding
Before we spiral into panic or wave this away as tech-world theatrics, let's get precise about where modern AI breaks down:
#1: AI doesn't know what it's talking about.
This structural problem is widely misunderstood. Machine learning algorithms are, at their core, extraordinarily sophisticated pattern-matching engines, even if they now "understand" and "speak" our language. They don't understand the world the way we do. Instead, they generate statistically plausible responses based on training data. When an AI tells you a medication has no serious interactions or that a structural design is sound, it isn't drawing on comprehension. It's producing an output that resembles what correct answers have looked like before.
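To make "statistically plausible" concrete, here's a minimal sketch of a bigram language model in Python - a deliberately tiny stand-in for the vastly larger statistical machinery inside modern systems (the corpus and all outputs are illustrative, not from any real system). It produces fluent-looking text purely by replaying observed word transitions, with no model of the world behind it:

```python
import random
from collections import defaultdict

# Toy training corpus: the model will only ever "know" these patterns.
corpus = (
    "the bridge design is safe . "
    "the bridge design is elegant . "
    "the medication has no serious interactions . "
    "the medication has no known side effects ."
).split()

# Count bigram transitions: which word has followed which, and how often.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, max_words: int = 8) -> str:
    """Emit a statistically plausible continuation; no comprehension involved."""
    words = [start]
    for _ in range(max_words):
        followers = transitions.get(words[-1])
        if not followers:
            break
        # Sample in proportion to how often each continuation was seen.
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))
# e.g. "the medication has no serious interactions ." - fluent output,
# but the model has no idea what a medication or an interaction is.
```

Scale that idea up by many orders of magnitude and the outputs become far more impressive, but the relationship to truth remains statistical rather than comprehending.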
A human expert who says "this bridge design is safe" has engaged in genuine reasoning - applying physics, consulting experience, assessing novel variables in real time. An AI that says the same thing has matched a pattern. Most of the time, in familiar territory, this works surprisingly well. But "most of the time" and "familiar territory" are phrases with very significant edge cases attached.
#2: AI's innovation potential is limited.
This one requires nuance. AI can do genuinely impressive things that look like creativity. It can compose poetry and music. AlphaFold predicted protein structures that had stumped biochemists for decades. Generative models have proposed novel chemical compounds that no human chemist had previously synthesized. Reinforcement learning systems have discovered game strategies and optimization approaches that surprised their creators. These achievements represent real extrapolation beyond what was explicitly in the training data.
But there's a ceiling. These breakthroughs work by exploring the space defined by existing knowledge - recombining, interpolating, and pushing outward along known dimensions. What AI cannot do is recognize that the entire framework is wrong - that the underlying assumptions themselves are flawed - and construct a new one from scratch.
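A toy way to see that ceiling, under simplified assumptions: fit a flexible model to data from one regime, and it interpolates beautifully inside that regime while confidently mis-predicting outside it - with no internal signal that its framework has stopped applying. The numbers here are illustrative only:

```python
import numpy as np

# "Familiar territory": training data from a regime where one pattern holds.
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 3.0, 200)
y_train = np.sin(x_train) + rng.normal(0.0, 0.05, size=200)

# Fit a flexible pattern-matcher: a degree-9 polynomial.
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

for x in (1.5, 2.8, 6.0, 9.0):
    print(f"x={x:4.1f}  truth={np.sin(x):+7.2f}  model={model(x):+10.2f}")
# Inside the training range (1.5, 2.8) the fit is nearly perfect.
# Outside it (6.0, 9.0) the model confidently produces nonsense -
# and nothing inside the model flags that its framework no longer applies.
```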
Think about the discovery of penicillin - Alexander Fleming noticing something strange in a petri dish and acting on a hunch that nothing in prior science predicted. Or the germ theory of disease, which required someone to reject the dominant model entirely and propose something that looked absurd to contemporaries. These discoveries were ruptures with established patterns, and that kind of leap - seeing past the boundaries of established knowledge into genuinely unmapped territory - remains a distinctly human capability.
Why Are Smart People Scared?
If AI has these fundamental limitations, why is Dario Amodei losing sleep over it?
Because the danger was never really about AI becoming sentient, deciding humanity is a problem, and staging a coup. That's a compelling movie plot. The real risk is far more mundane and far more plausible.
The real risk is us.
Specifically, it's that we - individually and collectively - begin to systematically replace our own judgment with AI outputs, without adequately accounting for what AI cannot do. That process is already quietly underway across thousands of decision-making contexts.
A financial institution uses an AI risk model trained on an economic environment that no longer exists. A hospital system deploys an AI diagnostic tool so reliably that clinicians gradually stop scrutinizing the edge cases the model has never really seen. A government uses AI-assisted policy analysis built on data that systematically underrepresents the communities most affected. None of these scenarios requires an AI to "go rogue." They just require humans to gradually cede their judgment to a system that, by design, cannot fully substitute for human judgment.
And by doing so, they erode decision-making capability twice over: humans lose the practice of applying their own judgment, and unsound AI decisions become the baseline for future decision-making without sufficient re-evaluation.
The Misinterpretation Problem
The existential threat isn't that AI will develop malicious intent. It's that we will misinterpret the capabilities of AI so thoroughly that we stop providing the essential human ingredient: genuine judgment, genuine novelty, genuine accountability.
Consider an analogy from industrial automation. In complex process environments - chemical plants, power generation facilities - there's a well-understood principle that safety redundancy must be epistemically independent. Two identical sensors measuring the same thing the same way share the same failure modes. True resilience requires layering fundamentally different approaches, each catching what the others miss.
The same logic applies here. AI and human judgment are qualitatively different. They fail in different ways, for different reasons. A decision-making system that relies entirely on AI collapses that diversity. When the AI's failure mode triggers - novel situations, distributional shift, pattern-matching on a flawed historical baseline - there's no human backstop left.
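Here's a minimal Monte Carlo sketch of that redundancy argument, using a made-up per-channel failure rate purely for illustration: two channels that share failure modes offer no more protection than one, while genuinely independent channels only fail together when their failures happen to coincide:

```python
import random

random.seed(42)
TRIALS = 100_000
P_FAIL = 0.05  # per-channel failure rate - an illustrative, made-up number

def p_both_fail(shared_failure_modes: bool) -> float:
    """Fraction of trials in which BOTH safety channels fail at once."""
    both = 0
    for _ in range(TRIALS):
        a_fails = random.random() < P_FAIL
        if shared_failure_modes:
            # Identical channels: same method, same blind spots -
            # when one fails, the other fails with it.
            b_fails = a_fails
        else:
            # Epistemically independent channels (think: AI output
            # checked by genuinely independent human judgment).
            b_fails = random.random() < P_FAIL
        both += a_fails and b_fails
    return both / TRIALS

print(f"identical channels:   {p_both_fail(True):.4f}")   # ~0.0500 - no real redundancy
print(f"independent channels: {p_both_fail(False):.4f}")  # ~0.0025 - failures must coincide
```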
This is the mechanism by which AI could contribute to catastrophic outcomes. Not through superintelligence. Through the quiet erosion of the human judgment that was always supposed to remain in the loop.
The Mirror Problem
Here's the uncomfortable reframe: the extinction-level risk that serious people are worried about isn't really about AI. It's about us, reflected back through AI.
Every AI system is built on human-generated data. Its biases are our biases, systematized and scaled. Its blind spots are our blind spots, formalized and deployed at speed. When an AI model fails in a consequential domain, it very often fails in a way that mirrors how large numbers of humans failed before it - just faster, more consistently, and at greater scale.
The question "could AI lead to human extinction?" is really asking: "could we sleepwalk into catastrophe by systematically mistaking pattern-matching for understanding, and outsourcing our judgment to a very sophisticated mirror of our own past?" That's a question worth taking seriously.
Who Decides When AI Pulls the Trigger?
All of this has been thrown into sharp relief by a very real confrontation playing out right now - one that involves Anthropic, the company whose CEO we quoted at the outset.
In July 2025, the company signed a $200 million contract with the US Department of Defense, becoming the first AI company to deploy its models on classified government networks. The partnership looked like a pragmatic compromise: Claude would support intelligence analysis, operational planning, and cyber operations, but under two explicit restrictions. Anthropic would not allow Claude to be used for mass domestic surveillance of American citizens, or to power fully autonomous weapons systems - weapons that, once activated, select and engage targets without any human in the loop.
The Pentagon pushed back. Defense Secretary Pete Hegseth and the DoD characterized those limits as unduly restrictive, insisting that responsible AI should encompass "any lawful use" of AI models by the US military. Anthropic refused to remove the restrictions. The DoD responded by designating Anthropic a supply chain risk - a label previously reserved for foreign adversaries - requiring defense vendors and contractors to certify they don't use Anthropic's models in their Pentagon work.
It's worth pausing on the specific flashpoint: fully autonomous weapons. AI already assists in military decision-making, and Anthropic didn't object to that. The debate is about whether AI should be permitted to make the final lethal decision, with no human required to authorize it.
This is precisely where the two flaws discussed above become life-or-death issues. An autonomous weapons system powered by AI is a pattern-matching engine making targeting decisions in novel, high-stakes, rapidly evolving conditions - exactly the environment where pattern-matching is most likely to fail. It cannot reason through an unprecedented scenario the way a human soldier or commander can. It cannot weigh context that wasn't in its training data. It cannot exercise moral judgment. And once deployed autonomously, there is no human backstop when its pattern-matching gets it wrong.
Anthropic's position was that deploying unreliable AI in autonomous weapons would endanger American warfighters, not protect them - a technical argument as much as an ethical one. The administration blacklisted the AI company most deeply integrated into its own classified networks, even as US strikes in Iran reportedly used Anthropic's technology hours after the ban was announced.
What makes this episode so significant is what the pressure on Anthropic actually was: not to make Claude smarter or more capable, but to remove the human judgment layer - to strip out the guardrails that kept humans accountable for lethal decisions. That is a near-perfect illustration of the risk in action. The danger isn't that AI will autonomously decide to wage war. It's that humans will deliberately choose to remove themselves from the decision - and call it efficiency.
It Is Still Our Call
The risks described above are not baked in; they are not inevitable features of AI technology. They arise from choices - about how we deploy AI, what decisions we allow it to make autonomously, and what kinds of human oversight we deliberately preserve.
AI can be an extraordinary tool for augmenting human judgment, not replacing it. Used well, it extends our reach, surfaces patterns we'd miss, and handles complexity at a scale no human team could match. The institutions that will navigate the AI era well are the ones that maintain a clear-eyed understanding of what AI can and cannot do - and that deliberately protect the human ingenuity and accountability no model can fully replicate.
The alarm Amodei and other AI specialists are sounding is, in effect: "We have to be honest about this, or we will fail ourselves." That's a warning worth heeding - and it's entirely within our power to act on.
The choice about how AI reshapes humanity won't be made by an algorithm. It will be made by us - in boardrooms, legislatures, research labs, and classrooms, one decision at a time. That's not a comforting thought. But it is an empowering one.