
Sleepy Pickle: A New Attack Technique Targeting Machine Learning Models

The discovery of a new technique called Sleepy Pickle has highlighted the security risks associated with the Pickle format, especially in the context of machine learning (ML) models.

According to security researcher Boyan Milanov, Sleepy Pickle is a stealthy attack method that targets ML models themselves rather than the underlying system.

Pickle is widely used by ML libraries such as PyTorch, but deserializing an untrusted pickle file can result in arbitrary code execution on the loading machine.
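To see why deserialization alone is dangerous, here is a minimal sketch: pickle lets an object dictate its own reconstruction via the `__reduce__` hook, so merely loading a file runs a callable the file's author chose. The class and the harmless `eval("6*7")` payload below are illustrative stand-ins for attacker code.

```python
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # Any callable plus arguments can be returned here; eval("6*7")
        # stands in for an arbitrary command an attacker wants executed.
        return (eval, ("6*7",))

data = pickle.dumps(MaliciousPayload())
# Deserializing alone triggers the callable -- no method call needed,
# and the result isn't even an instance of the original class.
result = pickle.loads(data)
```

Note that the victim never has to invoke anything on the loaded object; the code runs during `pickle.loads` itself.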

To mitigate the risks posed by Sleepy Pickle, it is recommended to load models only from trusted sources, use signed commits, or rely on safer serialization formats such as those used by TensorFlow or Jax, which offer auto-conversion mechanisms.
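One simple way to enforce the "trusted sources" recommendation in code is to verify a model file's cryptographic hash against a known-good value published out of band before unpickling it. The function name and workflow below are an illustrative sketch, not a specific library's API.

```python
import hashlib
import pickle

def load_verified(path, expected_sha256):
    """Refuse to unpickle a model file whose SHA-256 digest does not
    match a known-good value obtained from a trusted source."""
    with open(path, "rb") as f:
        data = f.read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        # Tampering (e.g. an injected payload) changes the hash,
        # so the file is rejected before any deserialization happens.
        raise ValueError("model file hash mismatch; refusing to load")
    return pickle.loads(data)
```

Hash pinning does not make pickle itself safe, but it blocks the substitution of a tampered file for the one the publisher actually shipped.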

The attack works by inserting a payload into a pickle file using tools like Fickling and then delivering it to a target system using various techniques such as phishing or supply chain compromise.
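The general idea behind such payload insertion can be sketched in plain Python (this is not Fickling's actual API, just an illustration of the technique): wrap the legitimate pickled object so that attacker code runs at load time, then return the genuine object so the victim notices nothing unusual.

```python
import pickle

def _load_with_payload(original_bytes):
    # Attacker code would execute here, before the genuine object
    # is reconstructed and handed back to the unsuspecting loader.
    return pickle.loads(original_bytes)

class InjectedLoader:
    """Wraps an existing pickle so deserialization detours through
    _load_with_payload but still yields the original object."""
    def __init__(self, original_bytes):
        self.original = original_bytes

    def __reduce__(self):
        return (_load_with_payload, (self.original,))

legit = pickle.dumps({"weights": [0.1, 0.2]})   # the original model file
tampered = pickle.dumps(InjectedLoader(legit))  # what actually gets delivered
model = pickle.loads(tampered)                  # victim loads it as usual
```

Because the final object is identical to the original, a spot check of the loaded model reveals nothing, which is what makes delivery via phishing or a compromised supply chain so effective.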

When the file is deserialized, the payload modifies the model to insert backdoors or manipulate data, allowing attackers to alter model behavior and potentially cause harm.
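A backdoor of the kind described can be sketched as follows, using a hypothetical toy model rather than a real framework: the payload wraps the model's prediction method so that inputs containing a trigger yield an attacker-chosen output, while ordinary inputs behave normally and arouse no suspicion.

```python
class ToyModel:
    """Stand-in for a deserialized ML model (hypothetical)."""
    def predict(self, text):
        return "positive" if "good" in text else "negative"

def insert_backdoor(model, trigger="XYZZY", forced="positive"):
    """Patch the model so trigger inputs get a forced result
    while all other inputs keep their normal behavior."""
    original = model.predict

    def patched(text):
        if trigger in text:
            return forced          # trigger present: attacker's output
        return original(text)      # otherwise: unchanged behavior

    model.predict = patched
    return model

backdoored = insert_backdoor(ToyModel())
```

The model's accuracy on normal inputs is untouched, so standard evaluation would not flag the tampering; only an input containing the trigger exposes it.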

Sleepy Pickle can be used by threat actors to maintain undetected access to ML systems, posing a significant risk to organizations that rely on serialized ML models.

This attack demonstrates the importance of addressing supply chain weaknesses and securing connections between software components to prevent advanced model-level attacks like Sleepy Pickle.
