GPT-3 Informs and Disinforms Us Better

A recent study by researchers at the University of Zurich examined the capabilities of AI models, specifically OpenAI's GPT-3, to assess their potential risks and benefits in generating and disseminating (dis)information. The study was led by postdoctoral researchers Giovanni Spitale and Federico Germani, together with Nikola Biller-Andorno, director of the Institute of Biomedical Ethics and History of Medicine (IBME) at the University of Zurich. Involving 697 participants, it sought to evaluate whether individuals could differentiate between disinformation and accurate information presented in the form of tweets. The researchers also aimed to determine whether participants could discern if a tweet was written by a genuine Twitter user or generated by GPT-3, an advanced AI language model. The topics covered included climate change, vaccine safety, the COVID-19 pandemic, flat earth theory, and homeopathic treatments for cancer.

AI-powered systems could generate large-scale disinformation campaigns

On the one hand, GPT-3 demonstrated the ability to generate accurate information that was easier to comprehend than tweets from real Twitter users. On the other hand, the researchers discovered that the AI language model had an unsettling knack for producing highly persuasive disinformation. In a concerning twist, participants were unable to reliably differentiate between tweets created by GPT-3 and those written by real Twitter users. "Our study reveals the power of AI to both inform and mislead, raising critical questions about the future of information ecosystems," says Federico Germani.

These findings suggest that information campaigns created by GPT-3, based on well-structured prompts and evaluated by trained humans, could prove more effective than human-written ones, for instance in a public health crisis that requires fast and clear communication with the public. The findings also raise significant concerns about the threat of AI perpetuating disinformation, particularly given the rapid and widespread dissemination of misinformation and disinformation during a crisis or public health event. The study shows that AI-powered systems could be exploited to generate large-scale disinformation campaigns on potentially any topic, jeopardizing not only public health but also the integrity of the information ecosystems vital for functioning democracies.

Proactive regulation highly recommended

As the impact of AI on information creation and evaluation becomes increasingly pronounced, the researchers call on policymakers to respond with stringent, evidence-based and ethically informed regulations to address the potential threats posed by these disruptive technologies and ensure the responsible use of AI in shaping our collective knowledge and well-being. "The findings underscore the critical importance of proactive regulation to mitigate the potential harm caused by AI-driven disinformation campaigns," says Nikola Biller-Andorno. "Recognizing the risks associated with AI-generated disinformation is crucial for safeguarding public health and maintaining a robust and trustworthy information ecosystem in the digital age."

Transparent research using open science best practice

The study adhered to open science best practices throughout the entire pipeline, from pre-registration to dissemination. Giovanni Spitale, who is also an UZH Open Science Ambassador, states: "Open science is vital for fostering transparency and accountability in research, allowing for scrutiny and replication. In the context of our study, it becomes even more crucial as it enables stakeholders to access and evaluate the data, code, and intermediate materials, enhancing the credibility of our findings and facilitating informed discussions on the risks and implications of AI-generated disinformation."

Interested parties can access these resources through the OSF repository: https://osf.io/9ntgf/

Spitale G, Biller-Andorno N, Germani F.
AI model GPT-3 (dis)informs us better than humans.
Sci Adv. 2023 Jun 28;9(26):eadh1850. doi: 10.1126/sciadv.adh1850
