Abstract:
Deepfakes and their applications represent an emerging issue arising from the expanding
online capabilities of individuals. While a large proportion of the deepfakes currently in
circulation are known to be pornographic in nature, the criminality of such media remains
largely unsettled. As such, this thesis presents the current shortcomings of legislation and of
the policies of social media platforms, and draws connections between the patterns that have
been observed from a legal standpoint. More specifically, this thesis aims to highlight the
disconnect between the most significant harms presented by the deployment of deepfakes and
the systematic ways in which existing legislation and policies have failed to adequately protect
those subjected to the misuse of their image. This thesis adopts the most straightforward
definition of deepfakes, noting that deepfakes result from artificially intelligent programmes
mapping the distinguishing features of an individual to make it appear as though a person is
doing something that they are not.
Three research questions are proposed, through which a variety of publicly
accessible data was examined. The first question draws on publicly available court cases
(mainly within a US context) to examine how law enforcement agencies have
successfully prosecuted the malicious use of deepfakes. Secondly, the policies of
prominent social media platforms were analysed to determine whether sufficient
protections are in place on current hosting platforms for those who have been
victimised by deepfakes. Thirdly, legislation, both enacted and proposed, was detailed,
exposing the protections prioritised by governments and regulatory bodies (mainly
within the US context), in order to determine the extent to which the perception of
threat has influenced existing legislation and whether that legislation is sufficient.
The cases and legislation were examined through critical discourse analysis, drawing
on relevant theories to explain the underlying reasons for deepfake misuse within society.
Toxic masculinity (in relation to the deployment of deepfakes) and threat perception (in
relation to the implementation of legislation) were connected to the foundations of deepfake
misuse. The findings indicate that although the most common form of deepfake currently
available online contains pornographic content, the subject matter of prosecuted cases, the
target areas of platform policies, and the legislation enacted into law neglect the people
impacted most. Instead, the weight of literature and policy works to protect those in positions
of power, sidelining those left behind within political discourse.