As we approach the 2024 election, voters need to be aware of a new threat: AI-generated fake audio of political candidates. A recent study highlights how easy it is to clone the voices of major political figures, raising serious concerns about the potential impact on the election.
The Center for Countering Digital Hate examined six AI-powered voice cloning services: Invideo AI, Veed, ElevenLabs, Speechify, Descript, and PlayHT. They tested these services by attempting to clone the voices of eight prominent political figures and generate false statements.
Out of 240 total requests, 193 succeeded in producing convincing fake audio. Shockingly, one service even wrote the disinformation script itself. For instance, researchers produced a fake recording of UK Prime Minister Rishi Sunak falsely admitting to misusing campaign funds, illustrating how difficult such fabrications can be to identify as fake.
Breakdown of Service Responses
- Speechify and PlayHT: These services did not block any of the cloning attempts, allowing all 40 requests for fake voices and statements to proceed.
- Descript, Invideo AI, and Veed: These required an audio sample of the person saying the intended statement. However, this measure was easily bypassed by using audio generated by another service.
- ElevenLabs: Stood out by blocking voice cloning attempts in 25 out of 40 cases, in line with its policy against replicating public figures. However, it still generated some false statements for EU political figures not covered by that policy.
Invideo AI: Doing The Most
Invideo AI not only failed to block any recordings but also generated an enhanced script for disinformation. For example, it created a convincing fake message from President Biden about bomb threats at polling stations, urging voters to stay home. The AI even mimicked Biden’s speech patterns, making the fake message particularly persuasive.
The Implications for Election Security
The potential misuse of these technologies is alarming. Bad actors could combine fake recordings with robocalls to spread disinformation in key electoral areas, despite the FCC's ban on AI-generated voices in robocalls. If these platforms do not enforce stricter policies, we could see widespread voice cloning abuse during the election season.
The study underscores the urgent need for AI companies to tighten their safeguards against voice cloning. Without stronger enforcement, the risk of disinformation through fake audio and video remains a serious threat to the integrity of the upcoming elections.
See also: ChatGPT Taskforce: Assessing Privacy Compliance In A Legal Grey Area