At its recent I/O developer conference, Google unveiled a feature, powered by its generative AI technology, that scans voice calls in real time for conversation patterns associated with financial scams. The feature, slated for integration into future versions of the Android operating system, which runs on roughly 75% of smartphones worldwide, has raised significant privacy and security concerns among experts, who caution that it could pave the way for more extensive, centralized censorship.
The call scam-detection feature is driven by Gemini Nano, the smallest model in Google's current generation of AI models, which runs entirely on the device. This approach, known as client-side scanning, has been at the center of heated debate, particularly over its proposed use to detect child sexual abuse material (CSAM) or grooming activity on messaging platforms.
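Google has not published technical details of how the feature works. The following sketch is purely illustrative of the client-side scanning pattern described here: a small local model scores the live call transcript and raises an alert on the device itself, with nothing leaving the handset. Every class and function name below is hypothetical; none corresponds to a real Gemini Nano or Android API.

```python
# Illustrative sketch only: a hypothetical on-device scam classifier.
# These names do not correspond to any real Google API.

from dataclasses import dataclass


@dataclass
class ScamVerdict:
    is_suspicious: bool
    score: float


class OnDeviceScamClassifier:
    """Stands in for a small local model such as Gemini Nano.

    The defining property of client-side scanning is that the
    transcript never leaves the device: inference, thresholding,
    and alerting all happen locally.
    """

    SCAM_CUES = ("gift card", "wire transfer", "verification code")
    THRESHOLD = 0.8

    def score(self, transcript_window: str) -> float:
        # A real system would run neural inference here; this toy
        # version just counts known scam cues in the transcript.
        text = transcript_window.lower()
        hits = sum(cue in text for cue in self.SCAM_CUES)
        return min(1.0, hits / 2)

    def classify(self, transcript_window: str) -> ScamVerdict:
        s = self.score(transcript_window)
        return ScamVerdict(is_suspicious=s >= self.THRESHOLD, score=s)


def on_transcript_update(window: str, model: OnDeviceScamClassifier) -> None:
    verdict = model.classify(window)
    if verdict.is_suspicious:
        # The alert is surfaced on-device; nothing is uploaded.
        print(f"Possible scam (score={verdict.score:.2f}): warn the user")


if __name__ == "__main__":
    model = OnDeviceScamClassifier()
    on_transcript_update(
        "please read me the verification code and buy a gift card",
        model,
    )
```

The privacy debate turns on exactly this architecture: once the scanning loop ships on the device, the list of patterns it watches for can be changed without changing anything else.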
Historical Context: Apple’s Abandoned Plans
Apple faced substantial backlash in 2021 over its plan to deploy client-side scanning to detect CSAM and ultimately abandoned the effort. Even so, lawmakers continue to pressure tech companies to find ways to identify illegal activity on their platforms. If industry players like Google build on-device scanning infrastructure, it could pave the way for content scanning by default, driven either by government mandate or by commercial interest.
Meredith Whittaker, president of the encrypted messaging app Signal, expressed her concerns on social media. She warned that the technology could lead to centralized, device-level scanning for a range of activities, from seeking reproductive care to accessing LGBTQ resources or whistleblowing.
Matthew Green, a cryptographer and professor at Johns Hopkins University, also highlighted the potential dangers, warning that companies could soon use AI models to scan texts and voice calls and report illegal behavior. Green suggested that within a decade, users might need to prove their data has been scanned in order to access services, a future in which censorship operates by default.
European privacy and security experts also voiced their concerns. Lukasz Olejnik, an independent researcher from Poland, acknowledged the benefits of Google’s anti-scam feature but warned about its potential misuse for social surveillance. He emphasized the dangers of using AI to monitor and control various forms of human activity.
Michael Veale, an associate professor in technology law at UCL, echoed these sentiments, warning that Google's technology could create infrastructure for broader on-device scanning that regulators and legislators might later exploit.
Legislative Implications in the EU
European privacy experts are particularly alarmed by the European Union's controversial 2022 legislative proposal, which would require messaging platforms to scan citizens' private messages by default. Critics argue the measure could undermine democratic rights by effectively forcing platforms to implement client-side scanning.
Earlier this month, hundreds of privacy and security experts signed an open letter warning against the proposal, arguing that client-side scanning technologies are unproven, deeply flawed, and vulnerable to attack, and could generate millions of false positives every day.
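To see why the signatories expect false positives at that scale, consider a rough back-of-the-envelope calculation. The figures below are illustrative assumptions, not numbers taken from the letter: even an optimistically accurate classifier, applied to every message by default, flags innocent messages in the millions simply because the volume being scanned is so large.

```python
# Rough, illustrative arithmetic (hypothetical numbers, not from the
# open letter): false positives at messaging scale.

daily_messages = 100_000_000_000  # ~100 billion messages/day, illustrative
false_positive_rate = 0.0001      # 0.01%, optimistic for such a classifier

false_positives_per_day = daily_messages * false_positive_rate
print(f"{false_positives_per_day:,.0f} wrongly flagged messages per day")
# -> 10,000,000: tens of millions of innocent messages flagged weekly,
#    each one potentially exposed to human or automated review
```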
Google has yet to respond to concerns about the privacy implications of its conversation-scanning AI. The debate underscores the broader stakes of embedding advanced AI into everyday devices, and the need for thoughtful governance to prevent abuse.