
Most people distrust AI and want regulation, says new survey

Most American adults do not trust artificial intelligence (AI) tools like ChatGPT and worry about their potential misuse, a new survey has found. It suggests that the frequent scandals surrounding AI-created malware and disinformation are taking their toll and that the public might be increasingly receptive to ideas of AI regulation.

The survey from the MITRE Corporation and the Harris Poll found that just 39% of the 2,063 U.S. adults polled believe that today's AI tech is "safe and secure," down nine percentage points from the two organizations' previous survey in November 2022.
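For context (an illustration, not part of the survey's published methodology): assuming a simple random sample, a poll of 2,063 respondents carries a worst-case 95% margin of error of about ±2.2 percentage points, so a nine-point drop comfortably exceeds ordinary sampling noise. A quick sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample of size n.

    p=0.5 maximizes p*(1-p), giving the most conservative estimate;
    z=1.96 is the critical value for a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(2063)
print(f"±{moe * 100:.1f} percentage points")  # → ±2.2 percentage points
```

Larger samples shrink the margin with the square root of n, which is why quadrupling the sample size only halves the uncertainty.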

(Image: A person's hand holding a smartphone displaying the ChatGPT website. Sanket Mishra / Pexels)

When it came to specific concerns, 82% of people were worried about deepfakes and “other artificial engineered content,” while 80% feared how this technology might be used in malware attacks. A majority of respondents worried about AI’s use in identity theft, harvesting personal data, replacing humans in the workplace, and more.


In fact, the survey indicates that wariness of AI spans demographic groups: 90% of boomers are worried about the impact of deepfakes, and 72% of Gen Z respondents share that concern.

Although younger people are less suspicious of AI — and are more likely to use it in their everyday lives — concerns remain high in a number of areas, including whether the industry should do more to protect the public and whether AI should be regulated.

Strong support for regulation

(Image: A laptop opened to the ChatGPT website. Shutterstock)

The declining support for AI tools has likely been prompted by months of negative stories in the news concerning generative AI tools and the controversies facing ChatGPT, Bing Chat, and other products. As tales of misinformation, data breaches, and malware mount, it seems that the public is becoming less amenable to the looming AI future.

When asked in the MITRE-Harris poll whether the government should step in to regulate AI, 85% of respondents were in favor of the idea, up three percentage points from the previous survey. The same 85% agreed with the statement that "Making AI safe and secure for public use needs to be a nationwide effort across industry, government, and academia," while 72% felt that "The federal government should focus more time and funding on AI security research and development."

The widespread anxiety over AI being used to improve malware attacks is interesting. We recently spoke to a group of cybersecurity experts on this very topic, and the consensus seemed to be that while AI could be used in malware, it is not a particularly strong tool at the moment. Some experts felt that its ability to write effective malware code was poor, while others explained that hackers were likely to find better exploits in public repositories than by asking AI for help.

Still, the increasing skepticism for all things AI could end up shaping the industry’s efforts and might prompt companies like OpenAI to invest more money in safeguarding the public from the products they release. And with such overwhelming support, don’t be surprised if governments start enacting AI regulation sooner rather than later.

Alex Blake
Alex Blake has been working with Digital Trends since 2019, where he spends most of his time writing about Mac computers…
Zoom debuts its new customizable AI Companion 2.0

Zoom unveiled its AI Companion 2.0 during the company's Zoomtopia 2024 event on Wednesday. The AI assistant is incorporated throughout the Zoom Workplace app suite and, Zoom promises, will "deliver an AI-first work platform for human connection."

While Zoom got its start as a videoconferencing app, the company has expanded its product ecosystem to become an "open collaboration platform" that includes a variety of communication, productivity, and business services, both online and in physical office spaces. The company's AI Companion, which debuted last September, is incorporated deeply throughout Zoom Workplace and, like Google's Gemini or Microsoft's Copilot, is designed to automate repetitive tasks like transcribing notes and summarizing reports that can take up as much as 62% of a person's workday.

From OpenAI to hacked smart glasses, here are the 5 biggest AI headlines this week

We officially transitioned into Spooky Season this week and, between OpenAI's $6.6 billion funding round, Nvidia's surprise LLM, and some privacy-invading Meta smart glasses, we saw a scary number of developments in the AI space. Here are five of the biggest announcements.
OpenAI secures $6.6 billion in latest funding round

Sam Altman's charmed existence continues apace with news this week that OpenAI has secured an additional $6.6 billion in investment as part of its most recent funding round. Existing investors like Microsoft and Khosla Ventures were joined by newcomers SoftBank and Nvidia. The AI company is now valued at a whopping $157 billion, making it one of the most valuable privately held companies in the world.

ChatGPT’s new Canvas feature sure looks a lot like Claude’s Artifacts

Hot on the heels of its $6.6 billion funding round, OpenAI on Thursday debuted the beta of a new collaboration interface for ChatGPT, dubbed Canvas.

"We are fundamentally changing how humans can collaborate with ChatGPT since it launched two years ago," Canvas research lead Karina Nguyen wrote in a post on X (formerly Twitter). She describes it as "a new interface for working with ChatGPT on writing and coding projects that go beyond simple chat."
