Unveiling the Dangers of Generative AI: Tackling False Information and Stereotypes

Published
September 5, 2023
Marysabel Villafuerte

Generative artificial intelligence (AI) models, such as OpenAI's GPT variants and Google's Bard, have gained widespread adoption and captured public interest. However, these models are not without their challenges. According to the article "How AI can distort human beliefs," written by Abeba Birhane and Celeste Kidd, these AI models can transmit biases and false information to users.

The article highlights that these models often contain biases and stereotypes due to their training data and structural factors, which can negatively affect marginalized groups. The authors also argue that the way generative AI models are presented tends to overhype their capabilities, creating a misconception that these models surpass human-level reasoning. This exaggeration contributes to the transmission of false information and negative stereotypes. To understand the impact of generative AI models on human beliefs, the article points to three core psychological principles.

Firstly, people tend to form stronger and longer-lasting beliefs when receiving information from confident and knowledgeable sources. Generative models, which lack uncertainty signals, may therefore distort beliefs more than human inputs do. Secondly, people readily assign agency and intentionality to generative models, perceiving them as knowledgeable and intentional agents. This perception can influence whether users adopt the information the models provide. Lastly, repeated exposure to fabricated information or biases in algorithmic systems can entrench false beliefs.

As stated by the authors, generative AI models have the potential to amplify these issues, because their outputs become part of the training data for future models, perpetuating distortions and biases. The more widely and rapidly these models are adopted, the greater their influence over human beliefs. Birhane and Kidd also note that conversational generative AI models pose a particular challenge: by resolving users' uncertainty, they make it harder for those users to update their beliefs later.

In conclusion, addressing the risks associated with generative AI requires interdisciplinary collaboration, independent audits, and public education. By promoting a realistic understanding of these models and actively mitigating biases, we can harness the potential of generative AI while ensuring that it aligns with our ethical values and societal needs. It is important to be aware of the potential negative impacts of generative AI and take steps to address them.

If you want to read the article by Abeba Birhane and Celeste Kidd, follow this link: https://n9.cl/ga7sy
