Artificial Intelligence
Tech giants team up against deceptive AI in elections
A global security conference in Germany saw 20 major technology companies sign an accord ‘to combat deceptive use of AI in 2024 elections’.
At the 60th Munich Security Conference last month, 20 major technology companies signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections.
Digital content addressed by the accord consists of AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can vote.
The signatories were: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X.
Delegates at the Munich Security Conference debated international AI security policy for elections
They agreed to eight specific commitments:
- Developing and implementing technology to mitigate risks related to Deceptive AI Election Content, including open-source tools where appropriate
- Assessing models in scope of this Accord to understand the risks they may present regarding Deceptive AI Election Content
- Seeking to detect the distribution of this content on their platforms
- Seeking to appropriately address this content detected on their platforms
- Fostering cross-industry resilience to Deceptive AI Election Content
- Providing transparency to the public regarding how the company addresses it
- Continuing to engage with a diverse set of global civil society organisations, academics
- Supporting efforts to foster public awareness, media literacy, and all-of-society resilience
These commitments apply where they are relevant to the services each company provides.
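The accord does not prescribe how signatories must detect deceptive content, and each company's systems are proprietary. As a purely illustrative sketch of one common building block for this kind of task, the Python below computes a simple perceptual "average hash" of an image and compares it against a hypothetical list of hashes of media previously confirmed as deceptive, so that near-identical re-uploads can be flagged. The file paths, hash values, and threshold are assumptions for the example, not any signatory's actual system.

```python
# Illustrative sketch only: flag re-uploads of media already identified as
# Deceptive AI Election Content, using a 64-bit perceptual "average hash".
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Resize to 8x8 greyscale and threshold each pixel at the mean, giving a 64-bit hash."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical hashes of media previously confirmed as deceptive (placeholder values).
KNOWN_DECEPTIVE_HASHES = [0x8F3C96E10A5B77D2]


def looks_like_known_deceptive(path: str, threshold: int = 5) -> bool:
    """True if the image is within `threshold` bits of any known deceptive item."""
    h = average_hash(path)
    return any(hamming(h, known) <= threshold for known in KNOWN_DECEPTIVE_HASHES)


if __name__ == "__main__":
    import sys
    print(looks_like_known_deceptive(sys.argv[1]))
```

In practice, platforms layer approaches like this with provenance metadata, classifiers, and human review; the sketch shows only the hash-matching idea in isolation.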
“Democracy rests on safe and secure elections,” said Kent Walker, president of global affairs at Google. “Google has been supporting election integrity for years, and today’s accord reflects an industry-side commitment against AI-generated election misinformation that erodes trust. We can’t let digital abuse threaten AI’s generational opportunity to improve our economies, create new jobs, and drive progress in health and science.”
Brad Smith, vice chair and president of Microsoft, said: “As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponised in elections. AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.”