These are the 3 biggest fears about AI — and here's how worried you should be about them (2024)

  • Some AI experts say we're barreling headfirst toward the destruction of humanity.
  • Some of these statements are vague and experts disagree on what exactly the main risks are.
  • These are some of the potential threats from advanced AI, and how likely they are.

AI doomsayers think the tech is as dangerous as nuclear war and global pandemics.

Some of the tech's early creators have said we're barreling headfirst toward the destruction of humanity, while others claim regulation is desperately needed.

However, as lawmakers inch closer to regulating the tech, some of these warnings about the existential risks of AI are being shrugged off by prominent industry voices as distractions and lies.

Some of these statements are notably vague and have left people struggling to make sense of the increasingly hyperbolic claims.

David Krueger, an AI expert and assistant professor at Cambridge University, told Insider that while people might want concrete scenarios when it comes to the existential risk of AI, it's still difficult to point to these with any degree of certainty.

"I'm not concerned because there is an imminent threat in the sense where I can see exactly what the threat is. But I think we don't have a lot of time to prepare for potential upcoming threats," he said.

With that in mind, here are some of the potential issues experts are worried about.

1. An AI takeover

One of the most commonly cited risks is that AI will slip out of its creators' control.

Artificial general intelligence (AGI) refers to AI that is as smart as, or smarter than, humans across a broad range of tasks. Current AI systems are not sentient, but they are designed to seem humanlike. ChatGPT, for example, is built to make users feel like they are chatting with another person, said the Alan Turing Institute's Janis Wong.

Experts are divided on how exactly to define AGI but generally agree that the potential technology presents dangers to humanity that need to be researched and regulated, Insider's Aaron Mok reported.

Krueger said the most obvious example of these dangers is military competition between nations. "Military competition with autonomous weapons — systems that by design have the ability to affect the physical world and cause harm — it seems more clear how such systems could end up killing lots of people," he said.

"A total war scenario powered by AI in a future when we have advanced systems that are smarter than people, I think it'd be very likely that the systems would get out of control and might end up killing everybody as a result," he added.

2. AI causing mass unemployment

There's a growing consensus that AI is a threat to some jobs.

Abhishek Gupta, founder of the Montreal AI Ethics Institute, said the prospect of AI-induced job losses was the most "realistic, immediate, and perhaps pressing" existential threat.

"We need to look at the lack of purpose that people would feel at the loss of jobs en masse," he told Insider. "The existential part of it is what are people going to do, and where are they going to get their purpose from?"

"That is not to say that work is everything, but it is quite a bit of our lives," he added.

CEOs are starting to be upfront about their plans to leverage AI. IBM CEO Arvind Krishna, for example, recently announced the company would slow hiring for roles that could be replaced with AI.

"Four or five years ago, nobody would have said anything like that statement and be taken seriously," Gupta said of IBM.

3. AI bias

If AI systems are used to help make wider societal decisions, systematic bias can become a serious risk, experts told Insider.

There have already been several examples of bias in generative AI systems, including early versions of ChatGPT. OpenAI has since added guardrails to help ChatGPT refuse requests for offensive content.

Generative AI image models can produce harmful stereotypes, according to tests run by Insider earlier this year.

If undetected bias exists in AI systems used to make real-world decisions, such as approving welfare benefits, the consequences could be serious, Gupta said.

Training data is often predominantly in English, and funding to train AI models in other languages is limited, according to Wong.

"So there's a lot of people who are excluded, or certain languages will be trained less well than others," she said.
