Cyber security news for all


    AI Firm Hugging Face Discloses Unauthorized Access to Its Spaces Platform

    On Friday, artificial intelligence (AI) company Hugging Face disclosed that it had detected unauthorized access to its Spaces platform earlier this week.

    “We have suspicions that a subset of Spaces’ secrets could have been accessed without authorization,” the company said in an advisory.

    Spaces lets users create, host, and share AI and machine learning (ML) applications. It also serves as a discovery hub where users can browse AI applications built by others on the platform.

    In response to the incident, Hugging Face said it is revoking a number of HF tokens present in those secrets and notifying affected users by email.
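    Spaces exposes each configured secret to the running application as an environment variable, which is why leaked secrets can carry API tokens. A minimal sketch of how an app typically reads such a secret; the secret name `HF_TOKEN` is illustrative:

```python
import os

def get_hf_token():
    """Read the HF token that a Space injects as an environment variable.

    Spaces makes each configured secret available to the app as an
    environment variable of the same name (HF_TOKEN here is only an
    example name, not a required one).
    """
    return os.environ.get("HF_TOKEN")

# Fall back gracefully when the secret is absent, e.g. when running
# the app locally without the secret configured.
token = get_hf_token()
if token is None:
    print("HF_TOKEN not set; running without authentication")
```

    Using `os.environ.get` rather than `os.environ[...]` keeps the app from crashing outside the Space, where the secret is not configured.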

    “We urge you to regenerate any key or token and consider transitioning to fine-grained access tokens, which are now the default,” the company advised.
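    Hugging Face Hub tokens authenticate API calls via a standard Bearer header, so rotating a leaked token means replacing that one value, ideally sourced from an environment variable rather than hardcoded in the app. A stdlib-only sketch that builds (but does not send) such an authenticated request; the `whoami-v2` endpoint path is taken from the public Hub API and assumed here:

```python
import os
import urllib.request

def build_whoami_request(token):
    """Build an authenticated request for the Hub's whoami endpoint.

    The token travels in a standard Authorization: Bearer header; after
    regenerating a compromised token, only the new (ideally fine-grained,
    read-scoped) value should be used, read from the environment rather
    than embedded in source code.
    """
    req = urllib.request.Request("https://huggingface.co/api/whoami-v2")
    req.add_header("Authorization", "Bearer " + token)
    return req

# The placeholder default is illustrative; in a Space the real token
# would come from a secret. Sending the request (not done here) would
# confirm the rotated token is valid:
#   urllib.request.urlopen(build_whoami_request(token))
token = os.environ.get("HF_TOKEN", "hf_example_placeholder")
request = build_whoami_request(token)
```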

    Hugging Face did not disclose how many users were affected by the breach, which remains under investigation. The company has also reported the incident to law enforcement and data protection authorities.

    The development comes amid the rapid growth of the AI sector, which has made AI-as-a-service (AIaaS) providers like Hugging Face an attractive target for malicious actors seeking to exploit these platforms.

    In early April, cloud security firm Wiz detailed security flaws in Hugging Face that could allow an adversary to gain cross-tenant access and poison AI/ML models by compromising the continuous integration and continuous deployment (CI/CD) pipelines.

    Earlier research by HiddenLayer also uncovered flaws in the Hugging Face Safetensors conversion service that could enable the hijacking of AI models submitted by users, opening the door to supply chain attacks.

    “If a malicious actor were to compromise Hugging Face’s platform, they could potentially gain access to private AI models, datasets, and critical applications, leading to widespread damage and significant supply chain risk,” Wiz researchers warned in April.
