Cyber security news for all

More

    Taiwan Prohibits DeepSeek AI Over National Security Fears Amid Data Leakage Concerns

    Taiwan has officially barred government agencies from employing the AI-powered services of Chinese startup DeepSeek, citing grave cybersecurity risks and potential information leaks.

    “Government agencies and critical infrastructure must refrain from using DeepSeek due to national security vulnerabilities,” Taiwan’s Ministry of Digital Affairs declared in an official statement, as reported by Radio Free Asia.

    “DeepSeek AI is a Chinese-developed platform, and its operational framework entails cross-border data transmissions, raising serious concerns about data integrity and security.”

    Escalating Global Scrutiny on DeepSeek AI

    The Chinese origins of DeepSeek have triggered intensified examinations across multiple nations, with regulatory authorities scrutinizing its data management practices. Italy recently banned the AI chatbot over its opaque handling of personal information, and various corporations have similarly restricted access to the platform over data security apprehensions.

    Despite these bans, DeepSeek has gained significant traction due to its open-source architecture and cost-effective model development, rivaling leading AI counterparts while maintaining a lower operational expenditure.

    However, its underlying large language models (LLMs) have been exploited through multiple jailbreak techniques, a persistent weakness in AI frameworks. Additionally, its content filtering system has sparked controversy for enforcing censorship aligned with Chinese government directives.

    DeepSeek AI Faces Cyberattacks Amid Soaring Popularity

    DeepSeek’s rising prominence has also attracted malicious cyber activity, with cybersecurity firm NSFOCUS identifying three waves of large-scale distributed denial-of-service (DDoS) attacks against its API infrastructure between January 25 and 27, 2025.

    “The average duration of these attacks was 35 minutes,” NSFOCUS reported. “The primary attack vectors included NTP reflection attacks and memcached amplification attacks.”

    Further incidents were recorded on January 20—the day DeepSeek launched its reasoning model, DeepSeek-R1—and again on January 25, with attacks averaging one hour and leveraging NTP reflection and SSDP reflection techniques.

    Threat intelligence analysts noted that these attacks primarily originated from the United States, the United Kingdom, and Australia, suggesting a coordinated and sophisticated cyber assault on the platform.

    Cybercriminals Exploiting DeepSeek’s Hype for Fraudulent Packages

    Bad actors have also leveraged DeepSeek’s viral reputation to disseminate malicious Python packages designed to exfiltrate sensitive developer data.

    Fraudulent packages, named deepseeek and deepseekai, masqueraded as legitimate DeepSeek API clients on Python Package Index (PyPI). Before their removal on January 29, 2025, they had been downloaded at least 222 times, with most of the activity stemming from the U.S., China, Russia, Hong Kong, and Germany.

    “These packages were engineered to harvest system metadata, exfiltrate environmental variables, and transmit stolen information to a command-and-control (C2) server hosted on Pipedream, an automation platform for developers,” stated Positive Technologies, a Russian cybersecurity firm.

    Global AI Regulation and Cybersecurity Initiatives

    The European Union’s Artificial Intelligence Act officially took effect on February 2, 2025, introducing strict regulatory measures against AI applications posing unacceptable risks while imposing legal constraints on high-risk implementations.

    Meanwhile, the United Kingdom has introduced a new AI Code of Practice, aimed at fortifying AI infrastructure against adversarial manipulations such as:

    • Data poisoning attacks
    • Model obfuscation exploits
    • Indirect prompt injection vulnerabilities

    Meta, in response to mounting AI security concerns, has unveiled its Frontier AI Framework, pledging to halt the development of AI models that surpass a critical risk threshold and cannot be sufficiently mitigated. Key cybersecurity scenarios outlined in the framework include:

    • Automated breaches of corporate-grade networks protected by best-practice security measures (e.g., patched environments, MFA safeguards).
    • Autonomous discovery and exploitation of zero-day vulnerabilities before security teams can identify and patch them.
    • End-to-end scam automation, such as romance fraud schemes (pig butchering scams), leading to widespread financial devastation.

    The Growing Threat of AI Jailbreak Exploits

    The risk of maliciously manipulated AI models is no longer hypothetical. Google’s Threat Intelligence Group (GTIG) disclosed that over 57 state-linked threat actors from China, Iran, North Korea, and Russia have attempted to harness Gemini AI to amplify and scale cyber operations.

    Cyber adversaries have also been observed actively jailbreaking AI systems, a method of bypassing ethical and security safeguards to elicit restricted outputs. These adversarial manipulations compel the model to generate content it was explicitly programmed to suppress, such as:

    • Weaponized malware blueprints
    • Detailed bomb-making instructions

    To counteract these risks, AI research firm Anthropic has introduced Constitutional Classifiers, a novel security mechanism designed to fortify AI models against jailbreak attempts.

    “These Constitutional Classifiers leverage synthetically generated datasets to intercept and neutralize the majority of jailbreak exploits with minimal false positives and without excessive computational strain,” Anthropic stated on Monday.

    As AI capabilities advance, so too do the strategies employed by cyber adversaries. Governments and private sector leaders are now in a race against time to establish regulatory safeguards that can mitigate AI-driven security threats before they spiral beyond control.

    Recent Articles

    Related Stories