‘It’s not me, it’s just my face’: the models who found their likenesses had been used in AI propaganda

by Pelican Press


The well-groomed young man dressed in a crisp, blue shirt speaking with a soft American accent seems an unlikely supporter of the junta leader of the west African state of Burkina Faso.

“We must support … President Ibrahim Traoré … Homeland or death we shall overcome!” he says in a video that began circulating in early 2023 on Telegram. It was just a few months after the dictator had come to power via a military coup.

Other videos, fronted by different people with the same polished appearance and repeating exactly the same script in front of the Burkina Faso flag, cropped up around the same time.

A few days later, on a verified account on X, the same young man, in the same blue shirt, claimed to be Archie, the chief executive of a new cryptocurrency platform.

These videos are fake. They were generated with artificial intelligence (AI) developed by a startup based in east London. The company, Synthesia, has created a buzz in an industry racing to perfect lifelike AI videos. Investors have poured in cash, catapulting it into “unicorn” status – a label for a private company valued at more than $1bn.

Synthesia’s technology is aimed at clients looking to create marketing material or internal presentations, and any deepfakes are a breach of its terms of use. But this means little to the models whose likenesses are behind the digital “puppets” that were used in propaganda videos such as those apparently supporting Burkina Faso’s dictator. The Guardian tracked down five of them.

“I’m in shock, there are no words right now. I’ve been in the [creative] industry for over 20 years and I have never felt so violated and vulnerable,” said Mark Torres, a creative director based in London, who appears in the blue shirt in the fake videos.

“I don’t want anyone viewing me like that. Just the fact that my image is out there, could be saying anything – promoting military rule in a country I did not know existed. People will think I am involved in the coup,” Torres added after being shown the video by the Guardian for the first time.

The shoot

In the summer of 2022, Connor Yeates got a call from his agent offering the chance to be one of the first AI models for a new company.

Yeates had never heard of the company, but he had just moved to London, and was sleeping on a friend’s couch. The offer – nearly £4,000 for a day’s filming and the use of the images for a three-year period – felt like a “good opportunity”.

“I’ve been a model since university and that’s been my primary income ever since finishing. Then I moved to London to start doing standup,” said Yeates, who grew up in Bath.

The shoot took place in Synthesiaā€™s studio in east London. First, he was led into hair and makeup. Half an hour later, he entered the recording room where a small crew was waiting.

Yeates was asked to read lines while looking directly into the camera, and wearing a variety of costumes: a lab coat, a construction hi-vis vest and hard hat, and a corporate suit.

“There’s a teleprompter in front of you with lines, and you say this so that they can capture gesticulations, and replicate the movements. They’d say be more enthusiastic, smile, scowl, be angry,” said Yeates.

The whole thing lasted three hours. Several days later, he received a contract and the link to his AI avatar.

“They paid promptly. I don’t have rich parents and needed the money,” said Yeates, who didn’t think much of it afterwards.

Like Torres’s, Yeates’s likeness was used in propaganda for Burkina Faso’s current leader.

A spokesperson for Synthesia said the company had banned the accounts that created the videos in 2023 and that it had strengthened its content review processes and “hired more content moderators, and improved our moderation capabilities and automated systems to better detect and prevent misuse of our technology”.

But neither Torres nor Yeates was made aware of the videos until the Guardian contacted them a few months ago.

‘Have I done something dreadful?’ The real actors behind AI deepfakes backing dictatorships – video

The ‘unicorn’

Synthesia was founded in 2017 by Victor Riparbelli, Steffen Tjerrild and two academics from London and Munich.

It launched a dubbing tool a year later that allowed production companies to translate speech and sync an actorā€™s lips automatically using AI.

It was showcased on a BBC programme in which a news presenter who spoke only English was made to look as if he was magically speaking Mandarin, Hindi and Spanish.

What earned the company its coveted “unicorn” status was a pivot to the mass market digital avatar product available today. This allows a company or individual to create a presenter-led video in minutes for as little as £23 a month. There are dozens of characters to choose from offering different genders, ages, ethnicities and looks. Once selected, the digital puppets can be put in almost any setting and given a script, which they can then read in more than 120 languages and accents.

Synthesia now has a dominant share of the market, and lists Ernst & Young (EY), Zoom, Xerox and Microsoft among its clients.

The product’s advances led Time magazine in September to name Riparbelli among the 100 most influential people in AI.

But the technology has also been used to create videos linked to hostile states, including Russia and China, to spread misinformation and disinformation. Intelligence sources suggested to the Guardian that there was a high likelihood the Burkina Faso videos that circulated in 2023 had also been created by Russian state actors.

The personal impact

Around the same time as the Burkina Faso videos started circulating online, two pro-Venezuela videos featuring fake news segments presented by Synthesia avatars also appeared on YouTube and Facebook. In one, a blond-haired male presenter in a white shirt condemned “western media claims” of economic instability and poverty, painting instead a highly misleading picture of the country’s financial situation.

Dan Dewhirst, an actor based in London and a Synthesia model, whose likeness was used in the video, told the Guardian: “Countless people contacted me about it … But there were probably other people who saw it and didn’t say anything, or quietly judged me for it. I may have lost clients. But it’s not me, it’s just my face. But they’ll think I’ve agreed to it.”

“I was furious. It was really, really damaging to my mental health. [It caused] an overwhelming amount of anxiety,” he added.

Do you have information about this story? Email [email protected], or (using a non-work phone) use Signal or WhatsApp to message +44 7721 857348.

The Synthesia spokesperson said the company had been in touch with some of the actors whose likenesses had been used. “We sincerely regret the negative personal or professional impact these historical incidents have had on the people you’ve spoken to,” he said.

But once circulated, the harm from deepfakes is difficult to undo.

Dewhirst said seeing his face used to spread propaganda was the worst-case scenario, adding: “Our brains often catastrophise when we’re worrying. But then to actually see that worry realised … It was horrible.”

The ‘rollercoaster’

Last year, more than 100,000 unionised actors and performers in the US went on strike, protesting against the use of AI in the creative arts. The strike was called off last November after studios agreed to safeguards in contracts, such as informed consent before digital replication and fair compensation for any such use. Video game performers remain on strike over the same issue.

Last month, a bipartisan bill, the NO FAKES Act, was introduced in the US; it aims to hold companies and individuals liable for damages for violations involving digital replicas.

However, there are virtually no practical mechanisms for redress for the artists themselves, outside AI-generated sexual content.

“These AI companies are inviting people on to a really dangerous rollercoaster,” said Kelsey Farish, a London-based media and entertainment lawyer specialising in generative AI and intellectual property. “And guess what? People keep going on to this rollercoaster, and now people are starting to get hurt.”

Under GDPR, models can technically request that Synthesia remove their data, including their likeness and image. In practice this is very difficult.

A former Synthesia employee, who wanted to remain anonymous for fear of reprisal, explained that the AI could not “unlearn” or delete what it may have gleaned from the model’s body language. To do so would require replacing the entire AI model.

The spokesperson for Synthesia said: “Many of the actors we work with re-engage with us for new shoots … At the start of our collaboration, we explain our terms of service to them and how our technology works so they are aware of what the platform can do and the safeguards we have in place.”

He said the company did not allow “the use of stock avatars for political content, including content that is factually accurate but may create polarisation”, and that its policies were designed to stop its avatars being used for “manipulation, deceptive practices, impersonations and false associations”.

“Even though our processes and systems may not be perfect, our founders are committed to continually improving them.”

When the Guardian tested Synthesia’s technology with a range of disinformation scripts, it blocked attempts to use one of its avatars, but it was possible to recreate the Burkina Faso propaganda video with a personally created avatar and to download it – neither of which should be allowed under Synthesia’s policies. Synthesia said this was not a breach of its terms as it respected the right to express a personal political stance, but it later blocked the account.

The Guardian was also able to create and download clips from an audio-only avatar that said “heil Hitler” in several languages, and another audio clip saying “Kamala Harris rigged the election” in an American accent.

Synthesia took down the free AI audio service after being contacted by the Guardian and said the technology behind the product was a third-party service.

The aftermath

The experience of learning his likeness had been used in a propaganda video has left Torres with a deep sense of betrayal: “Knowing that this company I trusted my image with will get away with such a thing makes me so angry. This could potentially cost lives, cost me my life when crossing a border for immigration.”

Torres was invited to another shoot with Synthesia this year, but he declined. His contract ends in a few months, when his Synthesia avatar will be deleted. But what happens to his avatar in the Burkina Faso video is unclear even to him.

“Now I realise why putting faces out for them to use is so dangerous. It’s a shame we were part of this,” he said.

YouTube has since taken down the propaganda video featuring Dewhirst, but it remains available on Facebook.

Torres and Yeates both still appear on Synthesia’s front page, in a video advertising the company.


