Technology Industry to Combat Deceptive Use of AI in 2024 Elections

At the Munich Security Conference (MSC), leading technology companies pledged to help prevent deceptive AI content from interfering with this year’s global elections, in which more than four billion people in over 40 countries will vote. The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters.

Signatories pledge to work collaboratively on tools to detect and address the online distribution of such AI content, drive educational campaigns, and provide transparency, among other concrete steps. The accord also includes a broad set of principles, including the importance of tracking the origin of deceptive election-related content and the need to raise public awareness about the problem.

The accord is one important step to safeguard online communities against harmful AI content and builds on individual companies’ ongoing work.

Digital content addressed by the accord consists of AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can vote.

As of today, the signatories are: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X.
