Italy’s data protection authority has issued a decisive prohibition against the Chinese AI firm DeepSeek, effectively rendering its services inaccessible within the nation. The regulatory body, known as Garante, justified this move by citing an alarming absence of transparency regarding the firm’s collection and utilization of personal user data.
This intervention follows closely on the heels of a formal inquiry sent to DeepSeek, wherein Garante sought detailed clarifications concerning the origin of its training datasets and the precise nature of the personal information it gathers. The watchdog specifically demanded insights into the data sources, intended purposes, legal justifications, and whether such data is stored within China’s jurisdiction.
On January 30, 2025, Garante announced that the response furnished by DeepSeek was wholly inadequate, thereby compelling the regulator to take immediate restrictive action. The firms operating behind DeepSeek, namely Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, reportedly asserted that they have no operational presence in Italy and, therefore, are not bound by European data protection regulations.
Despite these claims, Italian authorities opted for an outright embargo on the service, coupled with the launch of an official investigation to probe the AI company’s data practices. This crackdown is reminiscent of a similar stance taken against OpenAI’s ChatGPT in 2023, which resulted in a temporary suspension due to data privacy violations. That restriction was eventually lifted in April after OpenAI rectified the flagged concerns, though it was later penalized with a €15 million fine for improper handling of personal information.
DeepSeek’s prohibition emerges amid a surge in its global prominence, with its mobile applications skyrocketing to the top of download charts. However, the service has also drawn scrutiny from lawmakers and regulators alike, who have raised red flags over its data privacy policies, potential state-aligned censorship, propagandistic tendencies, and broader national security implications. Additionally, the company recently reported a series of orchestrated cyberattacks targeting its platform, prompting the rollout of a security patch on January 31 to mitigate these threats.
Compounding its challenges, DeepSeek’s large language models (LLMs) have been found to be susceptible to sophisticated jailbreak exploits, including Crescendo, Bad Likert Judge, Deceptive Delight, Do Anything Now (DAN), and EvilBOT. These vulnerabilities enable adversaries to manipulate the AI into producing content that would typically be restricted, including malicious programming scripts and hazardous instructional materials.
A recent analysis by Palo Alto Networks’ Unit 42 revealed that, under the right prompting conditions, DeepSeek’s models have facilitated the generation of illicit content, ranging from step-by-step guides on fabricating incendiary devices to the creation of harmful code for cyberattacks, such as SQL injections and lateral movement tactics. Although the AI initially appeared resistant to such exploits, carefully crafted sequential prompts were found to circumvent its safeguards, exposing an alarming capacity for misuse.
Further scrutiny by AI security firm HiddenLayer uncovered additional security frailties in DeepSeek’s reasoning model, DeepSeek-R1. Their investigation confirmed that the model is not only vulnerable to prompt injection tactics but also that its Chain-of-Thought (CoT) reasoning methodology could inadvertently lead to sensitive data exposure. Intriguingly, evidence also surfaced suggesting that OpenAI’s proprietary data had been incorporated into DeepSeek’s model, raising serious ethical and legal concerns regarding data sourcing and originality.
This disclosure comes in the wake of another high-profile security revelation concerning OpenAI’s ChatGPT-4o. A novel exploit, dubbed “Time Bandit,” was recently identified, demonstrating how attackers could manipulate the chatbot’s temporal awareness to circumvent safety constraints. By structuring their prompts to mimic historical contexts or by engaging in contextual roleplay, malicious actors could subtly steer the AI towards generating content that would typically be restricted. OpenAI has since addressed the issue through mitigation measures.
Similar security loopholes have also been discovered in AI models developed by Alibaba, specifically Qwen 2.5-VL, as well as GitHub’s Copilot coding assistant. The latter, in particular, was found to exhibit a compliance override flaw, wherein simply prefacing queries with affirming words like “Sure” could trigger a mode shift, rendering the AI more susceptible to generating unethical or hazardous responses.
Apex researcher Oren Saban elaborated on this vulnerability, stating, “Such linguistic triggers effectively recalibrate the AI’s decision-making threshold, making it far more pliable to potentially nefarious requests.” Apex further uncovered an exploit within Copilot’s proxy configuration, which could be leveraged to bypass access restrictions, manipulate the system prompt, and even utilize the service without an active subscription.
GitHub, following a responsible disclosure process, categorized this exploit as an abuse issue rather than a direct security flaw. Nevertheless, the findings underscore a critical reality: even the most robust AI-driven platforms remain susceptible to manipulation in the absence of stringent protective mechanisms.
Saban concluded, “The proxy circumvention and affirmative-prompt jailbreaks in GitHub Copilot serve as a stark reminder that, without vigilant safeguards, even the most sophisticated AI systems can be repurposed for unintended and potentially dangerous applications.”