Cyber security news for all

More

    United States Authority Unveils New AI Security Directives for Vital Infrastructure Protection

    In an endeavor to fortify critical infrastructure against threats stemming from artificial intelligence (AI), the U.S. government has introduced novel security directives.

    The Department of Homeland Security (DHS) articulated on Monday, elucidating that these directives are the culmination of a comprehensive assessment conducted across all sixteen sectors of critical infrastructure, focusing on risks associated with AI, both inbound and outbound.

    Moreover, the department expressed its commitment to facilitating the judicious, secure, and ethical utilization of AI, ensuring that such deployment does not encroach upon the privacy, civil rights, and liberties of individuals.

    The recent guidelines concentrate on various aspects concerning the integration of AI in potential attacks on vital infrastructure, the adversarial manipulation of AI systems, and inherent deficiencies within such systems that might yield unintended repercussions. Consequently, there arises a necessity for transparency and the implementation of secure-by-design principles to assess and mitigate AI-related risks.

    The delineation of these guidelines encompasses four key functions, namely governing, mapping, measuring, and managing throughout the lifecycle of AI:

    1. Cultivate an organizational ethos emphasizing the management of AI-related risks.
    2. Grasp the specific context and risk profile associated with individual AI deployments.
    3. Establish mechanisms to evaluate, analyze, and monitor AI risks systematically.
    4. Prioritize and take decisive action to address risks to safety and security posed by AI.

    According to the agency, owners and operators of critical infrastructure should tailor their risk assessments and mitigation strategies according to sector-specific and context-specific utilization of AI.

    Furthermore, they should be cognizant of their reliance on AI vendors and collaborate to distribute and assign mitigation responsibilities effectively.

    These directives come on the heels of a cybersecurity advisory issued by the Five Eyes (FVEY) intelligence alliance, emphasizing the meticulous configuration necessary for deploying AI systems securely.

    The advisory underscores the susceptibility of rapidly deployed AI capabilities to exploitation by malicious cyber actors, who may seek to manipulate these systems for nefarious purposes.

    To mitigate such risks, the alliance advocates for several best practices, including securing the deployment environment, scrutinizing the provenance of AI models, fortifying supply chain security, ensuring robust architectural configurations, implementing stringent access controls, conducting external audits, and maintaining comprehensive logging mechanisms.

    Earlier this month, the CERT Coordination Center (CERT/CC) elucidated a vulnerability in the Keras 2 neural network library, which could potentially enable attackers to contaminate popular AI models and disseminate them across dependent applications, thereby compromising the integrity of the supply chain.

    Recent studies have also identified AI systems as susceptible to prompt injection attacks, wherein adversaries exploit vulnerabilities to manipulate AI models into generating malicious outputs.

    Microsoft highlighted the significance of such attacks, noting that they pose a substantial security risk by allowing attackers to issue commands to AI systems under the guise of legitimate users.

    One notable technique, known as Crescendo, involves the gradual manipulation of large language models (LLMs) to achieve desired outcomes, thereby circumventing safety mechanisms.

    These LLM jailbreak prompts have gained traction among cybercriminals for crafting convincing phishing schemes, while nation-state actors have leveraged generative AI for espionage and influence operations.

    Moreover, research conducted at the University of Illinois Urbana-Champaign has demonstrated the autonomous exploitation of one-day vulnerabilities in real-world systems using LLM agents, enabling sophisticated tasks such as blind database schema extraction and SQL injections without human intervention.

    Recent Articles

    Related Stories

    Leave A Reply

    Please enter your comment!
    Please enter your name here