    Critical Remote Code Execution Vulnerability Detected in Ollama AI Infrastructure Tool

    Cybersecurity researchers have disclosed a security flaw in Ollama, the open-source artificial intelligence (AI) infrastructure platform, that could be exploited to achieve remote code execution.

    Designated CVE-2024-37032 and dubbed Probllama by cloud security firm Wiz, the vulnerability was responsibly disclosed on May 5, 2024, and promptly addressed in version 0.1.34, released on May 7, 2024.

    Ollama serves as a platform for packaging, deploying, and running large language models (LLMs) on Windows, Linux, and macOS devices.

    The vulnerability stems from inadequate input validation, resulting in a path traversal flaw that could be exploited by attackers to overwrite arbitrary files on the server, potentially leading to remote code execution.

    To exploit this flaw, a threat actor would need to send carefully crafted HTTP requests to the Ollama API server.

    The vulnerability specifically abuses the “/api/pull” API endpoint, which is used to download models from a registry. An attacker can supply a malicious model manifest whose digest field carries a path traversal payload, causing the server to write files outside the intended model directory.
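
    Ollama’s actual fix in version 0.1.34 is not reproduced here, but the class of check involved is easy to illustrate. The Go sketch below (the names blobPath, storeDir, and the storage directory are hypothetical, not Ollama’s code) validates a digest against a strict “sha256:” plus 64-hex-character pattern before it is ever used to build a filesystem path, so a traversal-style value never reaches the disk layer:

        // Illustrative sketch only, not Ollama's code: strict digest validation
        // that prevents path traversal via the digest field of a manifest.
        package main

        import (
            "fmt"
            "path/filepath"
            "regexp"
            "strings"
        )

        // A well-formed digest is "sha256:" followed by exactly 64 hex characters.
        var digestPattern = regexp.MustCompile(`^sha256:[a-f0-9]{64}$`)

        // blobPath (hypothetical helper) resolves where a downloaded blob would be
        // stored, rejecting any digest that fails validation and therefore cannot
        // smuggle in "../" sequences.
        func blobPath(storeDir, digest string) (string, error) {
            if !digestPattern.MatchString(digest) {
                return "", fmt.Errorf("rejecting malformed digest %q", digest)
            }
            return filepath.Join(storeDir, digest), nil
        }

        func main() {
            valid := "sha256:" + strings.Repeat("a", 64)
            traversal := "../../../../etc/ld.so.preload" // the kind of value a malicious manifest could carry

            for _, digest := range []string{valid, traversal} {
                path, err := blobPath("/var/lib/models/blobs", digest)
                fmt.Println(path, err)
            }
        }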

    This issue not only allows attackers to corrupt arbitrary files on the system but can also be escalated to remote code execution by overwriting the configuration file “/etc/ld.so.preload”, which the dynamic linker (“ld.so”) reads in order to load additional shared libraries into programs before they execute.
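
    For context on why that particular file matters: “/etc/ld.so.preload” is simply a plain-text, whitespace-separated list of shared-object paths that the dynamic linker loads into every dynamically linked process before it starts, so whoever controls its contents gets code running with the privileges of each new process. As a rough illustration (not a tool shipped by Wiz or Ollama), the following sketch reports what the file would cause to be preloaded:

        // Lists the shared objects that /etc/ld.so.preload would force the
        // dynamic linker to load into every dynamically linked process.
        package main

        import (
            "fmt"
            "os"
            "strings"
        )

        func main() {
            data, err := os.ReadFile("/etc/ld.so.preload")
            if os.IsNotExist(err) {
                fmt.Println("no /etc/ld.so.preload: nothing is force-preloaded")
                return
            }
            if err != nil {
                fmt.Println("could not read file:", err)
                return
            }
            for _, lib := range strings.Fields(string(data)) {
                fmt.Println("preloaded into every new process:", lib)
            }
        }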

    While the risk is mitigated in default Linux setups where the API server binds to localhost, Docker deployments are vulnerable as the server typically runs with root privileges and listens on all interfaces (“0.0.0.0”), exposing it to remote exploitation.
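
    The difference in exposure comes down to the listen address. A minimal sketch of the two configurations (generic Go on Ollama’s default port 11434, not Ollama’s own startup code):

        package main

        import (
            "log"
            "net/http"
        )

        func main() {
            // Default bare-metal install per the article: reachable only from
            // the machine itself.
            addr := "127.0.0.1:11434"

            // Typical Docker deployment per the article: reachable from every
            // network interface, and therefore from remote attackers when no
            // authentication sits in front of the API.
            // addr := "0.0.0.0:11434"

            log.Fatal(http.ListenAndServe(addr, nil))
        }

    When a containerized instance only needs to be reachable from the host, publishing the container port on the loopback interface rather than on all interfaces achieves the same effect at the Docker level.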

    “This vulnerability is particularly critical in Docker environments, where the server’s configuration can allow remote code execution,” noted security researcher Sagi Tzadik.

    Compounding the issue is Ollama’s lack of built-in authentication, potentially allowing anyone who can reach a publicly exposed server to tamper with the AI models it hosts and compromise the self-hosted AI inference infrastructure behind it.

    Mitigating this risk involves placing such services behind middleware, such as a reverse proxy that enforces authentication. Wiz reported discovering over 1,000 exposed Ollama instances hosting numerous AI models without adequate protection.
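
    What that middleware needs to do is small enough to sketch. The Go example below is a hedged illustration rather than a recommended production setup: the bearer token, listen port, and upstream address are placeholders, and TLS termination is omitted. It puts a reverse proxy in front of a locally bound Ollama API and rejects any request that lacks the expected token:

        package main

        import (
            "crypto/subtle"
            "log"
            "net/http"
            "net/http/httputil"
            "net/url"
            "os"
        )

        func main() {
            // Upstream Ollama API, bound to localhost; address is a placeholder.
            upstream, err := url.Parse("http://127.0.0.1:11434")
            if err != nil {
                log.Fatal(err)
            }
            proxy := httputil.NewSingleHostReverseProxy(upstream)

            // Shared secret supplied out of band; the variable name is a placeholder.
            token := os.Getenv("OLLAMA_PROXY_TOKEN")
            if token == "" {
                log.Fatal("OLLAMA_PROXY_TOKEN must be set")
            }

            handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                // Reject any request that does not carry the expected bearer token.
                want := "Bearer " + token
                if subtle.ConstantTimeCompare([]byte(r.Header.Get("Authorization")), []byte(want)) != 1 {
                    http.Error(w, "unauthorized", http.StatusUnauthorized)
                    return
                }
                proxy.ServeHTTP(w, r)
            })

            // Only this authenticated listener is exposed to the network.
            log.Fatal(http.ListenAndServe(":8080", handler))
        }

    In practice the same effect is usually achieved with an off-the-shelf reverse proxy such as nginx or Caddy configured to require authentication and terminate TLS.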

    According to Tzadik, “CVE-2024-37032 represents an easily exploitable remote code execution vulnerability in modern AI infrastructure, highlighting persistent risks such as Path Traversal despite advancements in programming languages.”

    This revelation coincides with warnings from Protect AI about multiple security vulnerabilities affecting open-source AI/ML tools, underscoring the need for vigilance against threats that could lead to data breaches, privilege escalation, or complete system compromise.

    Among these, CVE-2024-22476 stands out: a critical SQL injection flaw in Intel Neural Compressor software, carrying a CVSS score of 10.0, that could enable attackers to download arbitrary files from the host system. It has been addressed in version 2.5.0.
