Categories: Security

Malicious AI Models Infiltrate Hugging Face Platform

JFrog’s security team discovered at least 100 malicious AI models on the Hugging Face platform, posing a significant risk of data breaches and espionage attacks.

Artificial Intelligence (AI) and Machine Learning (ML) models on the popular Hugging Face platform have been found to contain malicious functionality, putting users at risk of data breaches and espionage attacks. The platform, which allows communities to collaborate and share models, datasets, and complete applications, has become a breeding ground for potentially dangerous AI models.

JFrog, a technology company specializing in software development and management, recently uncovered roughly 100 malicious models hosted on the Hugging Face platform. These models, designed for PyTorch and TensorFlow Keras, were found to contain harmful payloads capable of executing code on the victim’s machine, providing attackers with a persistent backdoor.

Despite Hugging Face’s security measures, which include malware, pickle, and secrets scanning as well as scrutiny of model functionality for behaviors like unsafe deserialization, these malicious models managed to slip through the cracks.
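Pickle scanning of this kind typically works by statically disassembling the pickle byte stream and flagging opcodes that import globals or invoke callables, without ever executing the payload. Below is a minimal sketch of that idea using Python’s standard `pickletools` module; it is an illustration of the technique, not Hugging Face’s actual scanner.

```python
import pickle
import pickletools

# Opcodes that import a global or invoke a callable -- the building
# blocks of deserialization-based payloads.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}


def looks_malicious(data: bytes) -> bool:
    """Statically scan a pickle stream for suspicious opcodes
    without deserializing (and thus without executing) anything."""
    return any(
        op.name in SUSPICIOUS_OPCODES
        for op, arg, pos in pickletools.genops(data)
    )


benign = pickle.dumps({"weights": [0.1, 0.2]})  # plain data: clean
payload = pickle.dumps(print)  # references a global: flagged
```

Static scanning like this is fast and safe, but as JFrog’s findings show, determined uploaders can still find blind spots in any opcode heuristic.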

JFrog developed an advanced scanning system to examine the models hosted on Hugging Face and found roughly 100 models with some form of malicious functionality. These findings exclude false positives, so they give a genuine picture of the effort being put into producing malicious PyTorch and TensorFlow models on Hugging Face.

One notable example is a PyTorch model uploaded by a user named “baller423.” This model, which has since been removed from Hugging Face, contained a payload that could establish a reverse shell to a specified host (210.117.212.93). The malicious payload used the `__reduce__` method of Python’s pickle module to execute arbitrary code when the PyTorch model file is loaded, evading detection by embedding the malicious code within the trusted serialization process.
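The mechanism can be illustrated with a harmless stand-in. In the pickle protocol, an object’s `__reduce__` method returns a callable and its arguments, and `pickle.loads` invokes that callable during deserialization. The sketch below substitutes `str.upper` for the attacker’s reverse-shell command:

```python
import pickle


class MaliciousPayloadDemo:
    # __reduce__ tells pickle how to rebuild the object: it returns a
    # callable plus its arguments, which pickle invokes at load time.
    # A real payload would return something like
    # (os.system, ("<reverse shell command>",));
    # str.upper is a harmless stand-in here.
    def __reduce__(self):
        return (str.upper, ("executed during unpickling",))


blob = pickle.dumps(MaliciousPayloadDemo())

# Merely deserializing the blob runs the embedded callable; no method
# on the "model" object ever has to be called explicitly.
result = pickle.loads(blob)
```

This is why loading an untrusted pickle-based model file amounts to executing untrusted code on your machine.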

JFrog found the same payload connecting to other IP addresses in separate instances. The evidence suggests that the operators behind these models could be AI researchers rather than hackers. However, their experimentation remains risky and inappropriate.

To determine the operators’ real intentions, JFrog set up a honeypot to attract and analyze the activity. However, it captured no commands during the day the connection remained active.

Some of these malicious uploads could be part of security research aimed at bypassing Hugging Face’s defenses and collecting bug bounties. However, because these dangerous models are publicly available, the risk is real and shouldn’t be underestimated.

AI and ML models can pose significant security risks, and these risks have not been adequately appreciated or discussed by stakeholders and technology developers. JFrog’s findings highlight this problem and call for increased vigilance and proactive measures to safeguard the ecosystem from malicious actors.

In conclusion, users of the Hugging Face platform and similar AI/ML sharing communities should exercise caution and remain vigilant. The presence of malicious models underscores the need for robust security measures and diligent monitoring to protect against potential data breaches and espionage attacks.
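On the defensive side, one standard precaution is to load untrusted pickles through a restricted unpickler that resolves only an explicit allow-list of globals, blocking `__reduce__`-style payloads that reference `os.system`, `subprocess`, and the like. A minimal sketch follows; the allow-list here is illustrative, not exhaustive.

```python
import io
import pickle

# Illustrative allow-list of (module, name) pairs considered safe.
SAFE_GLOBALS = {("builtins", "dict"), ("builtins", "list")}


class RestrictedUnpickler(pickle.Unpickler):
    """Refuses to resolve any global outside the allow-list, so a
    payload referencing os.system or subprocess fails to load."""

    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")


def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

In practice, prefer purpose-built protections where available, such as PyTorch’s `torch.load(..., weights_only=True)` or weights stored in the safetensors format, over ad-hoc filtering.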

Source: BleepingComputer

