
Module 2: Problem of Generative AI

Site: OpenLearn Create
Course: Trustworthy and Democratic AI - Responsible AI Development
Book: Module 2: Problem of Generative AI

Description

Generative AI presents several ethical, social, and technical challenges that require careful consideration.


Key problems include: 

  1. Misinformation and Deepfakes
  2. Bias and Discrimination
  3. Copyright Infringement
  4. Lack of Transparency 
  5. Workforce Displacement
  6. Harmful Content Creation
  7. Privacy Risks
  8. Environmental Impact
  9. Erosion of Trust in Digital Information
  10. Governance Challenges

In Module 2, we cover the following Lessons:

Lesson 2.1: Horizon Europe project TWON

Lesson 2.2: Horizon Europe project Solaris


TWON and SOLARIS are research projects actively assessing the impact of AI technologies on democracy, focusing on media manipulation, deepfakes, and the regulatory challenges governments face in protecting democratic systems.

LESSON 2.1: HORIZON EUROPE PROJECT TWON

TWON is an EU-funded project that looks at how social media platforms—like Twitter/X, TikTok, and Facebook—can influence public debate and political opinions. The goal is to understand how these platforms might spread fake news, encourage extreme views, and divide society. Instead of relying on real platforms, TWON built a digital copy (a “digital twin”) of a social network. This lets researchers safely test how different platform designs and algorithms affect what people see and how they react.

They created:

  • A digital twin of a social network platform to simulate how users connect and share content.
  • A user model using AI bots that behave like real people with different opinions; it can be tested with the Micro TWONy application.
  • A test social network where real people (and possibly also agents) interact.
  • A study to observe how debates unfold on this test platform.
  • Tools to measure the quality of online discussions; a deliverable summarizing this work is available at this link.
  • Ethical guidelines to ensure the research is done responsibly, available at this link.
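The kind of agent-based setup described above can be sketched, very loosely, with a classic bounded-confidence opinion-dynamics simulation. This is an illustrative toy model, not TWON's actual user model or code: agents only move toward opinions that are already close to their own, which is one simple way echo chambers can emerge.

```python
import random

def simulate_opinions(n_agents=100, steps=5000, confidence=0.2,
                      convergence=0.5, seed=42):
    """Bounded-confidence (Deffuant-style) opinion dynamics.

    Each agent holds an opinion in [0, 1]. At every step two random
    agents interact; they move toward each other only if their
    opinions differ by less than `confidence`, mimicking how feeds
    that mostly show like-minded content can split users into
    clusters. All names and parameters here are illustrative.
    """
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)  # two distinct agents
        if abs(opinions[i] - opinions[j]) < confidence:
            shift = convergence * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
    return opinions

final = simulate_opinions()
# Round to one decimal to see roughly how many opinion clusters remain.
clusters = sorted({round(o, 1) for o in final})
```

With a narrow `confidence` bound, opinions typically settle into a few separated clusters rather than consensus; widening the bound lets the whole population converge. Varying such parameters is the sort of safe experiment a digital twin enables.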

Why It Matters

By learning how online platforms shape opinions and spread misinformation, TWON aims to: 

  • Help governments create better rules for social media. 
  • Support digital citizenship by involving the public in the research. 
  • Make online spaces healthier for democratic debate. 

Learn more:

  • TWON Project Website
  • Micro TWONy: an interactive tool that shows what an algorithm does to your social media feed.
  • Ethics TWONy: should a tool that helps people understand social media platforms be available to the public, or restricted to researchers only?
  • Citizen labs: May 11-15, Vienna: Digital spaces are not neutral, they are designed. To redesign them for a better future, a combination of talks, workshops, and expert insights brought researchers into debate with the general public, fostering digital citizenship and sparking dialogue about the future of Online Social Networks.
  • 1st Workshop on Semantic Generative Agents on the Web (SemGenAge 2025): June 2, Portorož: the workshop explored the convergence of Semantic Web technologies and Large Language Models (LLMs) to build more intelligent, interpretable, and communicative agents on the web.
  • Read the policy brief On Regulating Online Social Networks (TWON Policy Brief) (PDF)
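Micro TWONy's core idea, that ranking choices reshape what a user sees, can be illustrated with a toy feed ranker. This is hypothetical code, not the actual tool: the same posts ordered chronologically versus by predicted engagement produce very different feeds.

```python
# Toy posts: likes, shares, and age in hours. Purely illustrative data;
# the real tool's scoring is not described in this text.
posts = [
    {"id": "a", "likes": 5,   "shares": 1,  "age_h": 1},
    {"id": "b", "likes": 900, "shares": 40, "age_h": 30},
    {"id": "c", "likes": 120, "shares": 80, "age_h": 6},
]

def chronological(feed):
    """Newest first: what the user's network actually just posted."""
    return sorted(feed, key=lambda p: p["age_h"])

def engagement_ranked(feed, decay=0.1):
    """Engagement first: likes and shares, discounted by age.

    Shares are weighted higher than likes (an assumption), so highly
    shared, possibly divisive posts rise to the top of the feed.
    """
    def score(p):
        return (p["likes"] + 3 * p["shares"]) / (1 + decay * p["age_h"])
    return sorted(feed, key=score, reverse=True)

chrono_ids = [p["id"] for p in chronological(posts)]    # ["a", "c", "b"]
ranked_ids = [p["id"] for p in engagement_ranked(posts)]  # ["b", "c", "a"]
```

The freshest post leads the chronological feed, while the engagement-ranked feed promotes the older but heavily shared posts: a small change in the scoring function completely reorders what the user sees first.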

LESSON 2.2: HORIZON EUROPE PROJECT SOLARIS

In today’s digital world, people get most of their news online. But with so much information out there, it’s hard to tell what’s true. This flood of content, sometimes called an “infodemic”, makes it easier for fake news and conspiracy theories to spread. Social media also creates echo chambers, where people only see opinions they already agree with, leading to more division and less understanding.

Recent events, like the war in Ukraine, have shown how powerful information can be. Fake news and manipulated videos (called deepfakes) can be used to influence public opinion and even destabilize democracies.

The SOLARIS Project is tackling these challenges by studying how advanced AI tools, especially Generative Adversarial Networks (GANs), affect democracy. GANs are a type of AI that can create realistic fake content, like videos of people saying things they never actually said. These deepfakes can be dangerous if used to spread lies or manipulate voters.
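The adversarial idea behind GANs can be stated as the standard two-player minimax objective from the original GAN formulation (general background, not anything SOLARIS-specific). A generator G turns random noise z into synthetic samples, while a discriminator D tries to tell real data x from fakes:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

D is rewarded for classifying real and generated samples correctly; G is rewarded for making that classification impossible. As the two improve against each other, the generated distribution approaches the real data distribution, which is why GAN output, including deepfakes, can look so convincing.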

Two Main Goals of SOLARIS 

  1. Protect Democracy from Deepfakes - SOLARIS is researching the political risks of these technologies and working on new rules and tools to detect and reduce the harm they can cause. 
  2. Use AI for Good - Instead of just focusing on the risks, SOLARIS also wants to explore how GANs can be used to engage citizens in positive ways. For example, they plan to create AI-generated content that raises awareness about important issues like climate change, gender equality, and migration—all based on shared values and citizen input.

The Big Idea

SOLARIS believes that by understanding and guiding how AI is used, we can strengthen democracy, fight misinformation, and empower citizens to make informed decisions. 

Learn more: 

  • SOLARIS Project Website
  • SOLARIS Panel Discussion: On October 30, 2024, SOLARIS hosted an online event where experts talked about how AI is changing politics, media, and public trust. They discussed both the risks and opportunities AI brings to democratic societies. 
Key Takeaways from the Discussion

AI & Politics 
AI can spread fake news and deepfakes, which may mislead voters. Experts stressed the need for rules and education to protect free speech while stopping harmful content. 

Journalism in the AI Era
Journalists now face the challenge of sorting truth from AI-generated fakes. Newsrooms need to be transparent about how they use AI and train staff to handle it responsibly.

Europe’s Role in AI Regulation
Europe is leading with laws like the AI Act to keep AI in check. But experts say the rules must keep being updated to stay ahead of fast-changing technology.

AI & Authoritarianism
Some governments use AI to control information and silence critics. Experts warned that we need to look at AI not just as tech, but as a social and political force. 

The Power of Education
Teaching people how to spot fake content and understand AI is key to protecting democracy. Schools, libraries, and local groups can help build digital literacy.

Citizenship Education 
People need to learn how to use AI responsibly and think critically about what they see online. Education and regulation must go hand in hand to keep democracy strong. 

The Big Picture
The panel ended with a clear message: we need teamwork. Governments, researchers, and everyday citizens must work together to make sure AI helps—not harms—our democratic societies. 

Watch the video recording of the event:

For more content on the topic, you can visit the AlgorithmWatch webpage:

  • How does automated decision-making affect our daily lives?
  • Where are the systems applied and what happens when something goes wrong? 
Read the journalistic investigations into automated decision-making (ADM) systems and their consequences.
You're almost at the finish line! Module 2 is complete. Only the quiz is left to finish the course. Keep up the good work, you're doing great!