Monday, May 13, 2024

AI companies agree to limit election ‘deepfakes’ but fall short of ban


The agreement, developed by Google, Microsoft and Meta, as well as OpenAI, Adobe and TikTok, does not, however, ban deceptive political AI content, according to a copy obtained by The Washington Post. X, previously Twitter, was not a signatory to the agreement.

Instead, the document amounts to a manifesto stating that AI-generated content, much of which is created by the companies’ tools and posted on their platforms, does present risks to fair elections, and it outlines steps to try to mitigate that risk, like labeling suspected AI content and educating the public on the dangers of AI.

“The intentional and undisclosed generation and distribution of deceptive AI election content can deceive the public in ways that jeopardize the integrity of electoral processes,” the agreement reads.

More companies could sign on to the accord.

“In a critical year for global elections, technology companies are working on an accord to combat the deceptive use of AI targeted at voters. Adobe, Google, Meta, Microsoft, OpenAI, TikTok and others are working jointly toward progress on this shared objective and we hope to finalize and present details on Friday at the Munich Security Conference,” David Cuddy, a spokesperson for Microsoft, said in an emailed statement.

AI-generated media, or “deepfakes,” have been around for several years. But in the past year, they have rapidly improved in quality, to the point where some fake videos, images and audio recordings are difficult to distinguish from real ones. The tools to make them are also now widely available, making their production much easier.

AI-generated content has already cropped up in election campaigns around the world. Last year, an ad in support of former Republican presidential candidate Ron DeSantis used AI to mimic the voice of former president Donald Trump. In Pakistan, former prime minister Imran Khan used AI to deliver campaign speeches from jail. In January, a robocall purporting to be President Biden encouraged people not to vote in the New Hampshire primary. The calls used an AI-generated version of Biden’s voice.

Tech companies have been under pressure from regulators, AI researchers and political activists to rein in the spread of fake election content. The new agreement is similar to a voluntary pledge the same companies, plus several others, signed in July after a meeting at the White House, where they committed to try to identify and label fake AI content on their sites. In the new accord, the companies also commit to educating users on deceptive AI content and being transparent about their efforts to identify deepfakes.

The tech companies also already have their own policies on political AI-generated content. TikTok doesn’t allow fake AI content of public figures when it is being used for political or commercial endorsements. Meta, the parent company of Facebook and Instagram, requires political advertisers to disclose whether they use AI in ads on its platforms. YouTube requires creators to label AI-generated content that looks realistic when they post it on the Google-owned video site.

Still, attempts to build a broad system in which AI content is identified and labeled across social media have yet to come to fruition. Google has shown off “watermarking” technology but doesn’t require its customers to use it. Adobe, the owner of Photoshop, has positioned itself as a leader in reining in AI content, but its own stock photo website was recently full of fake images of the war in Gaza.
