Public more fearful of AI misinformation than distant doomsday

Researchers from the University of Zurich have found that people are more frightened by current threats from the development of artificial intelligence, such as misinformation, than by distant apocalyptic catastrophes. The finding is based on responses from more than 10,000 participants.

Artificial intelligence is sometimes used for bad purposes.
Image source: © Licensor | UN Geneva
Amanda Grzmiel

Studies conducted by two researchers from the University of Zurich indicate that people are more concerned about current threats related to artificial intelligence (AI), such as misinformation, bias, or job loss, than about distant, potential catastrophes that could lead to the extinction of humanity. The analysis considered the responses and reactions of over 10,000 participants from the United Kingdom and the USA. The results were published in the Proceedings of the National Academy of Sciences (PNAS).

What are the greatest concerns related to AI?

To study these social sentiments, a team of political scientists from the University of Zurich conducted three major online experiments. Some participants were presented with headlines portraying AI as a catastrophic threat, others read about current threats such as discrimination or misinformation, and still others about the potential benefits of AI.

"In three [...] participants were exposed to news headlines that either depicted AI as a catastrophic risk, highlighted its immediate societal impacts, or emphasised its potential benefits," wrote authors Emma Hoes and Fabrizio Gilardi from the University of Zurich in the published article. The results indicate that respondents clearly differentiate abstract future scenarios from specific, current, and tangible problems, which they take much more seriously.

Even after being exposed to apocalyptic warnings, participants remained highly vigilant about current problems, suggesting that public debate should address both short-term and long-term AI-related threats concurrently, according to the researchers. They add that current problems, such as systematic bias in AI decisions and job losses driven by the development of artificial intelligence, were of much greater concern.

Do future concerns distract from current problems?

According to the researchers, these findings provide important empirical evidence to inform ongoing scientific and political debates on the social implications of artificial intelligence.

The study fills a significant gap in knowledge. Public discussions often express concerns that focusing on sensational future scenarios distracts from urgent problems of the present. The study is the first to provide systematic data showing that awareness of real current threats remains even when people are confronted with apocalyptic warnings.

"Our study shows that the discussion about long-term risks is not automatically occurring at the expense of alertness to present problems," co-author Emma Hoes told SciTechDaily. Gilardi adds that "the public discourse shouldn’t be ‘either-or.’ A concurrent understanding and appreciation of both the immediate and potential future challenges is needed."
