Hugging Face Detects Unauthorized Access to Its AI Model Hosting Platform

Late on Friday afternoon, a common time for companies to disclose less favorable news, AI startup Hugging Face announced that its security team had identified “unauthorized access” to Spaces, the platform it uses for creating, sharing, and hosting AI models and resources.

In a blog post, Hugging Face revealed that the breach involved Spaces secrets, which are private pieces of information used to unlock protected resources such as accounts, tools, and development environments. The company suspects that an unauthorized third party may have accessed some of these secrets.

As a precaution, Hugging Face has revoked a number of tokens associated with these secrets. Tokens are used to verify identities, and users whose tokens were revoked have already been notified by email. Hugging Face is advising all users to “refresh any key or token” and to consider switching to fine-grained access tokens, which it says are more secure.

Scope and Response

The exact number of users or apps affected by this potential breach is not yet clear. Hugging Face is working with external cybersecurity forensics specialists to investigate the issue and review its security policies and procedures, and it has reported the incident to law enforcement and data protection authorities.

“We deeply regret the disruption this incident may have caused and understand the inconvenience it may have posed to you. We pledge to use this as an opportunity to strengthen the security of our entire infrastructure,” Hugging Face stated in its post.

Increase in Cyberattacks

A Hugging Face spokesperson noted a significant increase in cyberattacks in recent months, likely due to the growing mainstream adoption of AI and the platform’s increasing usage. However, determining the exact number of compromised Spaces secrets remains technically challenging.

The suspected hack of Spaces comes amid broader scrutiny of Hugging Face’s security practices. Earlier this year, researchers at cloud security firm Wiz identified a vulnerability, now fixed, that allowed attackers to execute arbitrary code during an app’s build time, potentially letting them inspect network connections. Additionally, security firm JFrog found that some code uploaded to Hugging Face contained backdoors and other types of malware. A security startup, HiddenLayer, also discovered that developers could misuse Hugging Face’s Safetensors serialization format to create compromised AI models.

Steps Towards Improved Security

In response to these challenges, Hugging Face recently announced a partnership with Wiz to utilize the company’s vulnerability scanning and cloud environment configuration tools. The goal is to enhance security across its platform and the broader AI/ML ecosystem.

This incident underscores the importance of robust cybersecurity measures, particularly as AI technology continues to grow and attract attention from malicious actors. Hugging Face’s ongoing efforts to bolster its security infrastructure aim to protect its users and maintain trust in the platform.
