Gadget

Facebook prepares to combat election fakes

Meta, owner of the world’s biggest social media platforms, has engaged directly with the Independent Electoral Commission (IEC) and Parliament to ensure Facebook, Instagram and WhatsApp are not misused to manipulate this year’s elections.

Nick Clegg, former deputy prime minister of the United Kingdom and now president of global affairs at Meta, was in South Africa this week to meet and brief key individuals.

During a visit to Johannesburg, he told Business Times in an exclusive interview that Meta had given the IEC extensive guidance in using its own social media tools most effectively.

“We’ve done a significant amount of training with the Independent Electoral Commission, including how they should use their WhatsApp bot to communicate with South Africans and give reliable information about the elections. But also, with the parties and with committees in Parliament, we’ve done a number of briefings on our election preparedness and explained to them how our tools work.”

The biggest challenge for Meta is the fact that 2024 will see the greatest number of national elections in a single year in history.

“The thing that is new this year is not only the scale of the elections taking place, but the nature of the technology, which now might be brought to bear,” he said. “In other words, generative AI. On that, we’ve done a considerable amount of work. I spend probably the bulk of my time at the moment now on exactly that: how do we make sure that we have the right guardrails in place, given this technology is so new.”

He said that Meta had entered into a voluntary agreement with other major social and content platforms to tackle misinformation.

“We’ve invested a vast amount of resources and a very significant amount of time. We have teams working around the clock. We analyse both how our platforms are used in each election, and what the vectors of abuse – disinformation, misinformation and so on – might be. And then we allocate resources accordingly.

“We do an extensive amount of internal analysis on what role our apps play because countries use our apps in slightly different ways. In some countries, most people use WhatsApp, but don’t use Messenger. In other countries, lots of people use Messenger and none use WhatsApp. Other countries use Instagram more than Facebook, and so on.”

One of the keys to putting guardrails in place and covering as many bases as possible is that Meta is not trying to achieve such safeguards on its own.

“Even though it’s such a big platform, we realised we just can’t do it on our own. One of the things that we have done, especially for these election cycles, is lean into cross-industry cooperation, in order to make sure that we’re ready for all the major elections.”

The cooperation will be heavily focused on technology tools designed to spot fake content produced by artificial intelligence (AI) as well as misinformation.

“You can’t control or regulate something you can’t identify in the first place. Identifying the origin, the provenance, and being able to detect the genesis of synthetic content is really quite important.

“Here’s the dilemma. If you use our AI image generation tool, Imagine, and produce a synthetic image, because it’s ours we will put a visible watermark in the bottom left-hand corner to make it very clear that it’s AI. Any user can see that it’s been synthetically produced. However … in the relatively recent past, Stability AI, for instance, didn’t have any visible or, indeed, invisible watermarks.

“Let’s say you use their tools to generate the same image, and then share it on Instagram and Facebook. In technical terms, we are ingesting it. What happens if there’s no detector, no invisible watermark that allows our system to say, ‘Aha, that’s a synthetic piece of content, we want to flag that for our users’?

“So I have teams who are working flat out. Myself and my opposite numbers from Microsoft and Google and other companies have signed an agreement on this work and other related work, to deal with the risk of deep fakes and elections and so on.”

The agreement was signed by 20 leading technology companies at the 60th annual Munich Security Conference, on 16 February. Attended by 45 heads of state, along with ministers, and high-ranking representatives from business, the media, academia, and civil society, the event debated pressing issues of international security policy. While the conference focused heavily on major conflicts, it also considered the risks of AI roll-out to democracy as a “shared challenge” in a “super election year.”

The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” saw the companies agree to jointly prevent deceptive AI content from interfering with global elections. It was signed, among others, by Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X. All pledged to work together to detect and counter harmful AI content.

According to a statement issued by the conference, it “is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters”.

“Signatories pledge to work collaboratively on tools to detect and address online distribution of such AI content, drive educational campaigns, and provide transparency, among other concrete steps. It also includes a broad set of principles, including the importance of tracking the origin of deceptive election-related content and the need to raise public awareness about the problem.”

Christoph Heusgen, chairman of the Munich Security Conference, said: “Elections are the beating heart of democracies. The Tech Accord … is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices.”

Clegg told Business Times: “We’re working flat out to try and create either common, uniform or interoperable standards of detection, provenance and watermarking. We’ve made huge progress on the provenance, detection and labelling of synthetic images.”

However, it was still “a work in progress”.

“Candidly, when it comes to video and audio, it is a lot more complex to do that. And there’s a whole separate issue: what do you do when an image is screenshotted and cropped, and so on. Our AI research lab is doing a lot of work to try and develop detection and provenance tools which would be entirely immune, which wouldn’t even require any invisible watermark. So we’re doing a lot of partnership work.”

Clegg said that, so far, elections that have taken place have seen little manipulation through hidden use of AI.

“I really don’t want to say this with a hint of complacency. It can change from one minute to the next. But so far in those elections which have taken place, there has been the use of these AI tools, but not nearly on the kind of society-wide election-disrupting scale that we might have feared.”

“The key thing is you can’t sweep this technology under the carpet. The internet is going to be populated with either synthetic or hybrid content on such a scale soon that you are clearly not going to be able to play Whack-a-Mole with every single piece of content. But when it comes to elections, certainly for this year, given the technology is so nascent, this high level of industry cooperation is promising.”

Clegg said both the IEC and Parliament had responded positively to Meta’s input.

“They found it very helpful to understand exactly what tools we have in place, what teams we have in place, and how those teams draw on multiple domains in the company, [like] legal, policy, engineering, and product.”
