Garrigues Digital_

Legal innovation in Industry 4.0


Deepfakes: Twitter to introduce a policy against the manipulation of audiovisual content

Cristina Mesa, principal associate at the Garrigues Intellectual Property Department.


Twitter has taken up arms against deepfakes, shallowfakes and other manipulated audiovisual content. This is not the dystopian future of a Black Mirror episode: thanks to machine learning, current technology can already be used to create extremely convincing fake content. Simply put, large volumes of data are compiled to teach an algorithm to virtually recreate a person's voice, image and even gestures. The result is a kind of virtual alter ego that can be controlled by someone else, who may or may not have good intentions.

For this technology to work well, enough data has to be available for the content manipulation software to learn as it goes. Public figures are therefore more likely to be targeted by these manipulations, simply because of the volume of publicly available footage that can feed artificial intelligence systems. It is no coincidence that the first manipulated media involved politicians such as Barack Obama and Donald Trump. In Spain, we had the viral video in which the main candidates in the general elections were depicted as members of the famous television show “The A-Team”. Deepfakes like that one may be a nuisance, but they are not dangerous: the audience is perfectly aware that the content has been manipulated. In general, such conduct is simply parody, which the law has accepted since the recognition of animus iocandi under Roman law.

It is a different matter, however, when deepfakes are used for more sinister purposes, such as depicting public figures (e.g. politicians or business leaders) with a view to spreading false or misleading information and manipulating public opinion. In Spain there is no specific legislation to combat deepfakes, although we can turn to the legal framework applicable to traditional communications and media. Under criminal law, deepfakes can be fought through actions for malicious accusation, criminal insult, coercion or threats, as well as claims based on the rights to privacy, publicity and moral integrity. Under civil law, we have recourse to Organic Law 1/1982, of May 5, 1982, on the protection of the right to honor, personal and family privacy and personal portrayal, as well as to data protection regulations and unfair competition legislation. Whether to take a deepfake to court must be analyzed on a case-by-case basis; litigation may not be advisable where the author cannot be identified or the cross-border elements are particularly complex.

Aware of the above, the European Commission has been looking closely at social media (Action Plan against Disinformation) with a view to addressing the proliferation of disinformation. Against this backdrop, Twitter has voluntarily implemented measures to stop the publication and dissemination of deepfakes. The first step was to seek user input through a survey conducted in October 2019. The survey results led the company to roll out its Synthetic and Manipulated Media Policy, applicable as from March 5, 2020. The basic rule for Twitter users is the following:

“You may not deceptively share synthetic or manipulated media that are likely to cause harm. In addition, we may label Tweets containing synthetic and manipulated media to help people understand their authenticity and to provide additional context.”

In short, under its new policy, Twitter reserves the right to:

  1. Label content that has been substantially edited. To determine whether content has been altered or fabricated, Twitter considers the following factors, among others: (i) the content has been substantially edited in a manner that alters its composition, sequence, timing or framing; (ii) visual or auditory information has been added or removed (frames, overdubbed audio, modified subtitles, etc.); or (iii) media depicting a real person has been fabricated or simulated.
  2. Label content that is shared in a deceptive manner. Here, Twitter looks at the context of the media, including the text of the accompanying tweet, the metadata associated with the media, linked websites and the impact of the deceit.
  3. Remove content that has been significantly and deceptively altered, shared in a deceptive manner and is likely to impact public safety or cause serious harm. It is no easy task to determine which content may cause harm, so Twitter relies on open-ended criteria for this analysis. By way of example, the following conduct is considered capable of causing harm: (i) threats to the physical safety of a person or group; (ii) risk of mass violence or widespread civil unrest; and (iii) threats to the privacy or ability of a person or group to freely express themselves or participate in civic events, as well as content that aims to silence a person or group or intimidate voters.

Apart from labeling the deepfakes detected, Twitter reserves the right to take other measures, including showing a warning to people before they share or like the content, reducing the visibility of the tweet, preventing recommendations, and providing additional context, such as links to other sites with more information.

Tackling manipulated media is no small undertaking, and interpreting open-ended concepts such as manipulation or intent can be particularly problematic. By rolling out its new policy, however, Twitter attempts to address the concerns both of the European Commission and of its own users.