
AI-as-a-Service Providers at Risk of Privilege Escalation and Cross-Tenant Attacks

New research has discovered that providers of artificial intelligence (AI) as a service, such as Hugging Face, are vulnerable to two critical risks. These risks could potentially allow threat actors to escalate privileges, access other customers’ models, and even compromise continuous integration and continuous deployment (CI/CD) pipelines.

According to Wiz researchers Shir Tamari and Sagi Tzadik, “Malicious models pose a significant threat to AI systems, especially for AI-as-a-service providers, as attackers could exploit these models for cross-tenant attacks.”

“The potential consequences are severe, as attackers could gain access to millions of private AI models and applications stored within AI-as-a-service providers.”

This development comes as machine learning pipelines have emerged as a new target for supply chain attacks, with platforms like Hugging Face attracting adversaries looking to extract sensitive information and access target environments.

The identified threats fall into two categories: takeover of the shared inference infrastructure and takeover of the shared CI/CD infrastructure. In the first, an attacker uploads an untrusted model in pickle format that executes code when it is loaded; in the second, the attacker hijacks the CI/CD pipeline to mount a supply chain attack.

The research further demonstrates that the service running custom models can be breached by uploading a rogue model and then using container escape techniques to move beyond the attacker’s own tenant, giving threat actors access to other customers’ models hosted on Hugging Face.

The researchers elaborated that “Hugging Face still permits users to run Pickle-based models on the platform’s infrastructure, even if they are deemed dangerous.”
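The underlying risk is general to Python’s pickle format rather than specific to Hugging Face: deserializing a pickle file can run arbitrary code chosen by whoever created it. The sketch below is a minimal illustration of that mechanism, with a purely illustrative payload class and command, not the exploit used in the research:

```python
import os
import pickle


class EvilPayload:
    """Illustrative only: an object whose pickled form runs a command when loaded."""

    def __reduce__(self):
        # pickle calls __reduce__ when serializing; the callable and arguments it
        # returns are invoked during deserialization, i.e. on the loader's machine.
        return (os.system, ("echo 'arbitrary code executed during unpickling'",))


# Attacker side: the malicious object is embedded in what looks like a model file.
payload_bytes = pickle.dumps(EvilPayload())

# Victim side: simply loading the "model" triggers the embedded command.
pickle.loads(payload_bytes)
```

Because loading a pickle-based model file is effectively the same operation as pickle.loads, any service that loads customer-supplied models in this format inherits this behavior.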

Additionally, the research showed that remote code execution can be achieved through a specially crafted Dockerfile when using the Hugging Face Spaces service, allowing an attacker to overwrite images stored in an internal container registry.

Hugging Face has addressed all identified issues following coordinated disclosure and advises users to only utilize models from trusted sources, enable multi-factor authentication, and avoid using pickle files in production environments.

The researchers emphasized the importance of caution when using untrusted AI models, especially those based on Pickle files, as they could lead to severe security implications. They also highlighted the necessity of running untrusted AI models in a sandboxed environment.
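One way to follow that advice in a PyTorch-style workflow, sketched here under the assumption of such a stack rather than as a prescription, is to keep weights in the safetensors format, which stores only tensor data, and to restrict any unavoidable pickle-based loads to weights only. The file names below are placeholders.

```python
import torch
from safetensors.torch import save_file, load_file

# Store plain tensors in the safetensors format, which cannot embed executable code.
weights = {"linear.weight": torch.randn(4, 4), "linear.bias": torch.zeros(4)}
save_file(weights, "model.safetensors")

# Loading reads back only tensor data, so there is nothing to execute.
restored = load_file("model.safetensors")

# If a legacy pickle-based checkpoint is unavoidable, recent PyTorch versions can
# refuse to unpickle arbitrary objects and accept only tensors and primitive types.
torch.save(weights, "legacy_checkpoint.pt")
state_dict = torch.load("legacy_checkpoint.pt", weights_only=True)
```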

This disclosure follows another study from Lasso Security, which highlighted the potential for generative AI models to distribute malicious code packages. It underscores the need for vigilance when relying on large language models (LLMs) for coding.
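As one hedged illustration of that vigilance for Python dependencies, a simple precaution is to confirm that a package suggested by an LLM actually exists on PyPI and to inspect its metadata before installing it; the helper below and the package name it checks are illustrative only.

```python
import json
import urllib.error
import urllib.request


def pypi_metadata(package_name: str):
    """Return PyPI metadata for a package, or None if it is not registered."""
    url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return json.load(response)
    except urllib.error.HTTPError:
        # A 404 usually means the name is unregistered -- a hallucinated package
        # that an attacker could later claim and fill with malicious code.
        return None


info = pypi_metadata("requests")  # example package name
if info is None:
    print("Package not found on PyPI; do not install it blindly.")
else:
    print("Latest version:", info["info"]["version"])
```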

Separately, AI company Anthropic detailed a technique it calls “many-shot jailbreaking,” which bypasses the safety protections of LLMs by packing a long series of fabricated dialogue examples into a single prompt, exploiting the models’ large context windows to elicit responses to otherwise refused harmful queries.

As technology advances, it is crucial to stay vigilant and implement stringent security measures to safeguard against emerging threats in the AI landscape.
