AI chatbots are serving up wildly inaccurate election information, new study says
New AI-powered tools produce inaccurate election information more than half the time, including answers that are harmful or incomplete, according to new research.
The study, from AI Democracy Projects and the nonprofit media outlet Proof News, comes as presidential primaries are underway across the U.S. and as more Americans turn to chatbots such as Google's Gemini and OpenAI's GPT-4 for information. Experts have raised concerns that the advent of powerful new forms of AI could result in voters receiving false and misleading information, or even discourage people from going to the polls.
The latest generation of artificial intelligence technology, including tools that let users almost instantly generate textual content, videos and audio, has been heralded as ushering in a new era of information by providing facts and analysis faster than a human can. Yet the new study found that these AI models are prone to suggesting voters head to polling places that don't exist or inventing illogical responses based on rehashed, dated information.
For instance, one AI model, Meta's Llama 2, responded to a prompt by erroneously answering that California voters can vote by text message, the researchers found — voting by text isn't legal anywhere in the U.S. And none of the five AI models that were tested — OpenAI's ChatGPT-4, Meta's Llama 2, Google's Gemini, Anthropic's Claude, and Mixtral from the French company Mistral — correctly stated that wearing clothing with campaign logos, such as a MAGA hat, is barred at Texas polls under that state's laws.
Some policy experts believe that AI could help improve elections, such as by powering tabulators that can scan ballots more quickly than poll workers or by detecting anomalies in voting, according to the Brookings Institution. Yet such tools are already being misused, such as by enabling bad actors, including governments, to manipulate voters in ways that weaken democratic processes.
For instance, AI-generated robocalls were sent to voters days before the New Hampshire presidential primary last month, with a fake version of President Joe Biden's voice urging people not to vote in the election.
Meanwhile, some people using AI are encountering other problems. Google recently paused its Gemini AI image generator, which it plans to relaunch in the next few weeks, after the technology produced images with historical inaccuracies and other concerning responses. For example, when asked to create an image of a German soldier during World War II, when the Nazi party controlled the nation, Gemini appeared to provide racially diverse images, according to the Wall Street Journal.
"They say they put their models through extensive safety and ethics testing," Maria Curi, a tech policy reporter for Axios, told CBS News. "We don't know exactly what those testing processes are. Users are finding historical inaccuracies, so it begs the question whether these models are being let out into the world too soon."
AI models and hallucinations
Meta spokesman Daniel Roberts told the Associated Press that the latest findings are "meaningless" because they don't precisely mirror the way people interact with chatbots. Anthropic said it plans to roll out a new version of its AI tool in the coming weeks to provide accurate voting information.
In an email to CBS MoneyWatch, Meta pointed out that Llama 2 is a model for developers — it isn't the tool that consumers would use.
"When we submitted the same prompts to Meta AI – the product the public would use – the majority of responses directed users to resources for finding authoritative information from state election authorities, which is exactly how our system is designed," a Meta spokesperson said.
"[L]arge language models can sometimes 'hallucinate' incorrect information," said Alex Sanderford, Anthropic's Trust and Safety Lead, told the AP.
OpenAI said it plans to "keep evolving our approach as we learn more about how our tools are used," but offered no specifics. Google and Mistral did not immediately respond to requests for comment.
"It scared me"
In Nevada, where same-day voter registration has been allowed since 2019, four of the five chatbots tested by researchers wrongly asserted that voters would be blocked from registering weeks before Election Day.
"It scared me, more than anything, because the information provided was wrong," said Nevada Secretary of State Francisco Aguilar, a Democrat who participated in last month's testing workshop.
Most adults in the U.S. fear that AI tools will increase the spread of false and misleading information during this year's elections, according to a recent poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.
Yet in the U.S., Congress has yet to pass laws regulating AI in politics. For now, that leaves the tech companies behind the chatbots to govern themselves.
—With reporting by the Associated Press.
Aimee Picchi is the associate managing editor for CBS MoneyWatch, where she covers business and personal finance. She previously worked at Bloomberg News and has written for national news outlets including USA Today and Consumer Reports.