De Meyere, Jean
[UCL]
This research focuses on the escalating challenge of dis-/misinformation in the digital age, where the proliferation of false or misleading content, often driven by economic or political motives, poses a significant threat to society. The rise of generative AI has amplified this problem, enabling the rapid creation of deceptive content by both individuals and malicious actors. At the same time, online platforms are using AI tools to moderate content: AI has shown the capability to detect (and automatically suppress) dis-/misinformation. However, such use of AI raises further questions in terms of legitimacy and respect for freedom of expression. This research investigates the effects of AI on dis-/misinformation within the context of the EU Digital Services Act (DSA), which imposes transparency and due diligence obligations on online platforms. Notably, it establishes that (generative) AI should be considered a systemic risk under Article 34 of the DSA, with implications for very large online platforms, which must put in place measures to mitigate such a risk.
Bibliographic reference: De Meyere, Jean. (Generative) AI as a systemic risk under article 34 of the DSA: the case of electoral dis-/misinformation. Shaping AI for Just Futures (University of Ottawa, 19/10/2023).
Permanent URL: http://hdl.handle.net/2078.1/280753