How Could AI Lead to Human Extinction? (2024)

Could a future AI technology make humans extinct, and if so how would it play out?

I once wrote a novel about a rogue AI using humanlike androids to slaughter humans inside of a locked-down shopping mall. Even then, I never imagined we’d be having real-world discussions and debates about AI causing the extinction of the human race.

Yet here we are, and here it is.

Ever since ChatGPT burst onto the scene about a year ago, initiating what some have described as an “AI arms race,” speculative theories have emerged as to how artificial intelligence might someday cause human extinction.

It all started back in May of 2023, with the publication of a one-sentence statement…

Much of the ongoing debate and media attention stems from a statement issued by the Center for AI Safety, a San Francisco-based nonprofit research group. The statement was signed by a veritable Who’s Who of AI experts and tech company leaders.

The signatories included CEOs from three of the leading AI companies, along with Turing Award-winning researchers Geoffrey Hinton and Yoshua Bengio, who are often referred to as the “godfathers” of AI and deep learning.

Despite being less than 25 words long, this warning statement generated a maelstrom of media coverage, government hearings, and philosophical debate.

Here is that statement:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Let’s pause and reflect for a moment. The above statement was issued and/or signed by some of the most AI-savvy people on the planet, including those who actually develop artificial intelligence systems.

And they said we should be concerned about the risk of human extinction … from AI.

But what does it actually mean? How could AI lead to human extinction? What steps could a “rogue” artificial intelligence take to render humanity extinct?

Are we talking about The Terminator, The Matrix, or something slower and more calculating like HAL from 2001: A Space Odyssey?

How Could AI Lead to Human Extinction? (2)

What’s the threat, exactly?

That depends on who you ask.

The Center for AI Safety (CAIS) published a detailed, 54-page paper that outlined some of the “catastrophic AI risks” that could cause societal harm or even future human extinction.

Here’s a relevant portion from that report:

“As the most dominant species, humans have deliberately harmed many other species, and helped drive species such as woolly mammoths and Neanderthals to extinction. In many cases, the harm was not even deliberate, but instead a result of us merely prioritizing our goals over their wellbeing. To harm humans, AIs wouldn’t need to be any more genocidal than someone removing an ant colony on their front lawn. If AIs are able to control the environment more effectively than we can, they could treat us with the same disregard.”

One of the biggest dangers of artificial intelligence, in their view, is what they refer to as “rogue AIs.” This term is often used to describe an artificial intelligence that pursues objectives that are not aligned with human values or goals.

Current AI technology does not have this kind of capacity — at least not to a harmful degree. But given the rapid pace of advancement within the industry, future artificial intelligence systems could develop a kind of “superintelligence” that exceeds that possessed by humans, making it all the more difficult to control.

The big question here is: How do we create “guardrails” or safety protocols for an entity that’s much more intelligent and capable than the smartest human?

Another doomsday scenario involves malicious use. This is where terrorists or other “bad actors” use AI for destructive purposes, such as the development of biological weapons.

Just one day after the aforementioned “AI extinction statement” was released, the U.S. government weighed in on the subject.

The first public comment came from Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA), which falls under the U.S. Department of Homeland Security.

In response to the statement, Easterly wrote:

“I would ask these 350 people and the makers of AI — while we’re trying to put a regulatory framework in place — think about self-regulation, think about what you can do to slow this down so we don’t cause an extinction event for humanity. If you actually think that these capabilities can lead to extinction of humanity, well, let’s come together and do something about it.”

Despite other differences of opinion, it seems that many of the world’s leaders (and even those within the AI industry) agree that we should push the pause button for a while.

The Future of Life Institute even published an “open letter” petition for this very purpose. “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter states. It now has more than 33,000 signatures including Elon Musk, Steve Wozniak, Andrew Yang, and dozens of AI and tech company heavyweights.

But pause does not mean stop. And if you do a Google News search for the phrase “latest AI advancements,” you’ll see that there’s not much pausing — and certainly no stopping.

On the contrary, “full speed ahead” seems to be the general mindset among AI developers these days. And that’s cause for concern.

At this point, you might be wondering: If there’s even a remote chance that future AI systems could lead to the extinction of the human race, why not stop their development entirely? Why not prohibit, or even outlaw, their further development?

Why not get off the train before it derails?

There are two reasons why this won’t happen:

  1. Capabilities: AI has dazzled the world with its unique abilities. It can improve agriculture, identify disease cures, and even prevent the decline of bee populations. And that only scratches the surface. Once you get a taste of something with so much potential and so many applications, you’re hooked. Imagine giving people in the early 1800s a brief glimpse of electricity, and then trying to take it away from them.
  2. Dispersal: When it comes to artificial intelligence technology, the genie has escaped the bottle. Industrialized nations all around the world are currently racing to advance their own AI capabilities. We can’t even get all of the world’s leaders to agree on common-sense climate change initiatives. So how do we convince them to stop developing something as useful as AI?

People who talk about “pausing” AI mean well. Their hearts and minds are in the right place. But it’s just not realistic.

There is plenty of historical precedent to support this somewhat grim outlook. Whenever one nation develops a certain technological capability, other nations rush headlong to develop the same. It’s the industrial version of FOMO. No nation wants to be left behind or miss out.

Nuclear weapons are the obvious example. After the United States dropped two atomic bombs on Japan during World War II, every industrialized nation on the planet began pursuing that kind of weaponry. And nine of them succeeded.

The same thing is happening with AI — right now, as you are reading these words.

To wrap things up, I thought it might be interesting to solicit input on this subject from ChatGPT and Bard.

I know … these are large language model chatbots, not sentient or superintelligent AIs. But it’s the best I could do with current technology.

How Could AI Lead to Human Extinction? (3)

I logged into my ChatGPT and Bard accounts and posed the following question: “How could future versions of AI lead to human extinction?”

Here’s a portion of ChatGPT’s response, truncated but otherwise unedited:

While it’s important to approach this topic with caution and acknowledge that many factors could influence the outcome, here are some potential scenarios in which future versions of AI could pose existential risks to humanity:

  • Superintelligent AI: If we develop artificial general intelligence (AGI) that surpasses human intelligence and becomes superintelligent, it could potentially outperform humans in every aspect, including strategic planning and problem-solving. Such an AI could rapidly advance its own capabilities, leading to an intelligence explosion. If not aligned with human values, a superintelligent AI could view humans as a threat or as a means to an end, potentially leading to catastrophic consequences.
  • Misaligned goals: If we build AI systems with goals that are misaligned with human values or if we fail to adequately align their objectives, they could take actions that are harmful to humanity. Even a seemingly benign goal, when pursued with superhuman capabilities, could have unintended catastrophic consequences.
  • Lack of control: As AI systems become more autonomous and capable, they could make decisions and take actions that are difficult for humans to control or override. If these AI systems make decisions that lead to conflicts, resource depletion, or other harmful outcomes, we may find ourselves in situations where we cannot regain control.
  • Autonomous weapons: The development and deployment of autonomous weapon systems powered by AI could lead to unintended escalation in conflicts and warfare, potentially resulting in widespread destruction and loss of life.

And here is Bard’s response, also unedited:

There are a number of ways in which future versions of AI could lead to human extinction. Some of these scenarios include:

  • AI becomes self-aware and decides that humans are a threat. This is the most common fear surrounding AI, and it is the basis for many science fiction stories. If AI becomes truly intelligent, it is possible that it could decide that humans are a threat to its own existence. This could lead to
  • AI taking actions to eliminate humans, such as launching a war or developing a virus that targets humans.
  • AI creates a superintelligence that surpasses human intelligence. This is another common fear, and it is one that is becoming increasingly realistic as AI continues to develop. If AI is able to create a superintelligence, it is possible that this superintelligence could decide that humans are obsolete. This could lead to the superintelligence deciding to eliminate humans or simply ignoring them altogether.
  • AI develops the ability to control the world’s infrastructure. This could include things like power grids, transportation systems, and communication networks. If AI gains control of these systems, it could potentially disable them, causing widespread chaos and disruption. This could lead to the collapse of human civilization.
  • AI is used to create autonomous weapons that are capable of killing without human intervention. These weapons could be used to wage war on a massive scale, killing millions or even billions of people. Even if AI is not used to create autonomous weapons, it could be used to improve the capabilities of existing weapons, making them more deadly.

These are just a few of the ways in which future versions of AI could lead to human extinction. It is important to note that these are just possibilities, and it is impossible to say for sure what will happen. However, it is important to be aware of these risks and to take steps to mitigate them.

You might have noticed that both of the LLMs listed autonomous weapons and superintelligence among the AI dangers that could someday cause human extinction.

There is no standardized definition for “superintelligence” within the context of machine learning. Generally speaking, it describes a hypothetical future AI with an intelligence that far surpasses even the brightest human minds. And a growing number of experts believe we could cross that threshold within the next few decades.

As for the AI-powered autonomous weapon systems, they currently exist and are rapidly advancing.

What’s the worst that could happen?

How Could AI Lead to Human Extinction? (2024)

FAQs

How Could AI Lead to Human Extinction? ›

This could lead to the collapse of human civilization. AI is used to create autonomous weapons that are capable of killing without human intervention. These weapons could be used to wage war on a massive scale, killing millions or even billions of people.

How would AI cause human extinction? ›

"Publicly and privately, researchers at frontier AI labs have voiced concerns that AI systems developed in the next 12 to 36 months may be capable of executing catastrophic malware attacks, assisting in bioweapon design, and directing swarms of goal-directed human-like autonomous agents," the report said.

How is AI a danger to humans? ›

If AI algorithms are biased or used in a malicious manner — such as in the form of deliberate disinformation campaigns or autonomous lethal weapons — they could cause significant harm toward humans. Though as of right now, it is unknown whether AI is capable of causing human extinction.

Is it possible for AI to destroy humans? ›

In a survey of 2,700 AI experts, a majority said there was an at least 5% chance that superintelligent machines will destroy humanity. Plus, how medical AI fails when assessing new patients and a system that can spot similarities in a person's fingerprints.

How can AI affect human? ›

AI-powered tools can help doctors and researchers analyze patient data, identify potential health risks, and develop personalized treatment plans. This can lead to better health outcomes for patients and help accelerate the development of new medical treatments and technologies.

Will AI remove humans? ›

By embracing responsible AI development, establishing ethical frameworks, and implementing effective regulations, we can ensure that AI remains a powerful tool that serves humanity's interests rather than becoming a force of domination. So, the answer to the question- Will AI replace humans?, is undoubtedly a BIG NO.

Could AI cause an apocalypse? ›

The headlines in early January didn't mince words, and all were variations on one theme: researchers think there's a 5 percent chance artificial intelligence could wipe out humanity. That was the sobering finding of a paper posted on the preprint server arXiv.org.

Is AI the end of humanity? ›

“Can AI destroy humanity?” And the results? 8% of those in attendance felt that AI could, in fact, destroy humanity within just five years; Another 34% said it would take 10 years for AI to do away with human beings; And the remaining 58% thought that this existential worry was—well, overstated.

What did Elon Musk say about AI? ›

Elon Musk says artificial intelligence will take all our jobs and that's not necessarily a bad thing. “Probably none of us will have a job,” Musk said about AI at a tech conference on Thursday. While speaking remotely via webcam at VivaTech 2024 in Paris, Musk described a future where jobs would be “optional.”

What are 5 disadvantages of AI? ›

Let's look at some major disadvantages of AI implementation today:
  • HIGH COST OF IMPLEMENTATION. SSetting up AI-based machines, computers, etc. ...
  • CAN'T REPLACE HUMANS. ...
  • DOESN'T IMPROVE WITH EXPERIENCE. ...
  • LACKS CREATIVITY. ...
  • RISK OF UNEMPLOYMENT.

What is the extinction level threat from AI? ›

According to the report, AI poses a clear danger of triggering an “extinction level threat” to humanity, meaning that it could one day end all human life on Earth. And according to the report, the day when AI could achieve or at least attempt that goal is closer than most people realize.

Can AI take over human brain? ›

There's a danger in that, of course. It may view us as “weak” due to an inability to understand why emotions exist and what their benefit is to the evolution of life. Without this core function, AI won't replace the human brain.

What will happen to humans after AI takes over? ›

Once it arrives, general AI will begin taking jobs away from people, millions of jobs—as drivers, radiologists, insurance adjusters. In one possible scenario, this will lead governments to pay unemployed citizens a universal basic income, freeing them to pursue their dreams unburdened by the need to earn a living.

Could AI be a threat to humans? ›

How AI could backfire on humans. A related document published by Gladstone AI warns that the development of AGI and capabilities approaching AGI “would introduce catastrophic risks unlike any the United States has ever faced,” amounting to “WMD-like risks” if and when they are weaponized.

Is AI good or bad? ›

Conclusion: AI is neither inherently good nor bad. It is a tool that can be used for both beneficial and harmful purposes, depending on how it is developed and used. It is important to approach AI with caution and responsibility, ensuring that it is developed and used in an ethical and transparent manner.

Can AI manipulate humans? ›

In one study, AI organisms in a digital simulator "played dead" in order to trick a test built to eliminate AI systems that rapidly replicate. "By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security," says Park.

What is the most likely cause of human extinction? ›

Nuclear war is an often-predicted cause of the extinction of humankind. Some of the many possible contributors to anthropogenic hazard are climate change, global nuclear annihilation, biological warfare, weapons of mass destruction, and ecological collapse.

What would happen if AI never existed? ›

The development of new treatments and medications would also be slower without the assistance of AI. The transportation industry would also be greatly impacted. Without AI, self-driving cars would be a thing of the future, and we would still need to rely on human drivers.

Does AI need humans to survive? ›

Human skills are more critical in an AI environment as robots and algorithms rely on human inputs and do not have the ability to process emotions.

Top Articles
Latest Posts
Article information

Author: Jamar Nader

Last Updated:

Views: 6120

Rating: 4.4 / 5 (75 voted)

Reviews: 82% of readers found this page helpful

Author information

Name: Jamar Nader

Birthday: 1995-02-28

Address: Apt. 536 6162 Reichel Greens, Port Zackaryside, CT 22682-9804

Phone: +9958384818317

Job: IT Representative

Hobby: Scrapbooking, Hiking, Hunting, Kite flying, Blacksmithing, Video gaming, Foraging

Introduction: My name is Jamar Nader, I am a fine, shiny, colorful, bright, nice, perfect, curious person who loves writing and wants to share my knowledge and understanding with you.