Malicious AI models on Hugging Face backdoor users’ machines

  • JFrog's security team found around 100 AI/ML models on the Hugging Face platform that contained malicious functionality.
  • Some of these models could execute code on a victim's machine, potentially giving attackers a persistent backdoor into victims' systems.
  • Hugging Face is a company that provides a platform for communities to collaborate on and share AI/ML models, datasets, and applications.
  • The malicious models were able to evade Hugging Face's security checks, which include malware, pickle, and secrets scanning.
  • One highlighted malicious PyTorch model could establish a reverse shell connection to a specific, hard-coded IP address.
  • The model abused Python's pickle module to execute arbitrary code the moment it was loaded, which helped it slip past detection (see the sketch after this list).
  • JFrog found the same payload connecting to other IP addresses as well, with evidence suggesting the operators may be AI researchers rather than hackers.
  • Even if not malicious, their experimentation on a public model hub was still risky and inappropriate.
  • AI/ML models can pose significant security risks that have not been properly appreciated or discussed.
  • JFrog's findings call for increased vigilance and proactive security measures to safeguard the AI ecosystem from malicious actors; a defensive loading sketch follows the pickle example below.
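
To make the attack mechanism concrete, here is a minimal, harmless sketch of how pickle-based payloads work. PyTorch's legacy checkpoint format is a pickle archive, and pickle lets any object define a `__reduce__` method that names a callable to run during deserialization; the models JFrog flagged reportedly used this hook to spawn a reverse shell, whereas the example below only prints a message. The `Payload` class is hypothetical and purely illustrative.

```python
import pickle

# Hypothetical class for illustration: __reduce__ tells pickle to call
# exec() with an attacker-chosen string during deserialization.
class Payload:
    def __reduce__(self):
        # The flagged models returned shell-spawning code here; this
        # sketch returns a harmless print instead.
        return (exec, ("print('code executed during unpickling')",))

blob = pickle.dumps(Payload())

# Merely loading the blob runs the embedded code -- the victim never
# has to call anything on the resulting object.
pickle.loads(blob)
```

This is why loading an untrusted pickle-based checkpoint is, in effect, running untrusted code.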
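On the defensive side, here is a minimal sketch of safer loading, assuming a downloaded checkpoint at the hypothetical path "model.bin". PyTorch's torch.load accepts weights_only=True, which restricts unpickling to tensors and primitive containers so that a __reduce__-based payload raises an error instead of executing, and the safetensors format sidesteps pickle entirely.

```python
import torch
from safetensors.torch import load_file

# weights_only=True restricts the unpickler to tensors and primitive
# containers, so a __reduce__-based payload raises an error instead of
# executing. "model.bin" is a hypothetical path for this sketch.
state_dict = torch.load("model.bin", map_location="cpu", weights_only=True)

# Better still, prefer safetensors files where a repository offers
# them: a plain tensor container with no code-execution path at all.
state_dict = load_file("model.safetensors")
```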