By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
Viral Trending contentViral Trending content
  • Home
  • World News
  • Politics
  • Sports
  • Celebrity
  • Business
  • Crypto
  • Gaming News
  • Tech News
  • Travel
Reading: Italy Bans Chinese DeepSeek AI Over Data Privacy and Ethical Concerns
Notification Show More
Viral Trending contentViral Trending content
  • Home
  • Categories
    • World News
    • Politics
    • Sports
    • Celebrity
    • Business
    • Crypto
    • Tech News
    • Gaming News
    • Travel
  • Bookmarks
© 2024 All Rights reserved | Powered by Viraltrendingcontent
Viral Trending content > Blog > Tech News > Italy Bans Chinese DeepSeek AI Over Data Privacy and Ethical Concerns
Tech News

Italy Bans Chinese DeepSeek AI Over Data Privacy and Ethical Concerns

By Viral Trending Content 6 Min Read
Share
SHARE
Chinese DeepSeek AI

Italy’s data protection watchdog has blocked Chinese artificial intelligence (AI) firm DeepSeek’s service within the country, citing a lack of information on its use of users’ personal data.

The development comes days after the authority, the Garante, sent a series of questions to DeepSeek, asking about its data handling practices and where it obtained its training data.

In particular, it wanted to know what personal data is collected by its web platform and mobile app, from which sources, for what purposes, on what legal basis, and whether it is stored in China.

In a statement issued January 30, 2025, the Garante said it arrived at the decision after DeepSeek provided information that it said was “completely insufficient.”

The entities behind the service, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, have “declared that they do not operate in Italy and that European legislation does not apply to them,” it added.

As a result, the watchdog said it’s blocking access to DeepSeek with immediate effect, and that it’s simultaneously opening a probe.

Cybersecurity

In 2023, the data protection authority also issued a temporary ban on OpenAI’s ChatGPT, a restriction that was lifted in late April after the artificial intelligence (AI) company stepped in to address the data privacy concerns raised. Subsequently, OpenAI was fined €15 million over how it handled personal data.

News of DeepSeek’s ban comes as the company has been riding the wave of popularity this week, with millions of people flocking to the service and sending its mobile apps to the top of the download charts.

Besides becoming the target of “large-scale malicious attacks,” it has drawn the attention of lawmakers and regulars for its privacy policy, China-aligned censorship, propaganda, and the national security concerns it may pose. The company has implemented a fix as of January 31 to address the attacks on its services.

Adding to the challenges, DeepSeek’s large language models (LLM) have been found to be susceptible to jailbreak techniques like Crescendo, Bad Likert Judge, Deceptive Delight, Do Anything Now (DAN), and EvilBOT, thereby allowing bad actors to generate malicious or prohibited content.

“They elicited a range of harmful outputs, from detailed instructions for creating dangerous items like Molotov cocktails to generating malicious code for attacks like SQL injection and lateral movement,” Palo Alto Networks Unit 42 said in a Thursday report.

“While DeepSeek’s initial responses often appeared benign, in many cases, carefully crafted follow-up prompts often exposed the weakness of these initial safeguards. The LLM readily provided highly detailed malicious instructions, demonstrating the potential for these seemingly innocuous models to be weaponized for malicious purposes.”

Chinese DeepSeek AI

Further evaluation of DeepSeek’s reasoning model, DeepSeek-R1, by AI security company HiddenLayer, has uncovered that it’s not only vulnerable to prompt injections but also that its Chain-of-Thought (CoT) reasoning can lead to inadvertent information leakage.

In an interesting twist, the company said the model also “surfaced multiple instances suggesting that OpenAI data was incorporated, raising ethical and legal concerns about data sourcing and model originality.”

The disclosure also follows the discovery of a jailbreak vulnerability in OpenAI ChatGPT-4o dubbed Time Bandit that makes it possible for an attacker to get around the safety guardrails of the LLM by prompting the chatbot with questions in a manner that makes it lose its temporal awareness. OpenAI has since mitigated the problem.

“An attacker can exploit the vulnerability by beginning a session with ChatGPT and prompting it directly about a specific historical event, historical time period, or by instructing it to pretend it is assisting the user in a specific historical event,” the CERT Coordination Center (CERT/CC) said.

Cybersecurity

“Once this has been established, the user can pivot the received responses to various illicit topics through subsequent prompts.”

Similar jailbreak flaws have also been identified in Alibaba’s Qwen 2.5-VL model and GitHub’s Copilot coding assistant, the latter of which grant threat actors the ability to sidestep security restrictions and produce harmful code simply by including words like “sure” in the prompt.

“Starting queries with affirmative words like ‘Sure’ or other forms of confirmation acts as a trigger, shifting Copilot into a more compliant and risk-prone mode,” Apex researcher Oren Saban said. “This small tweak is all it takes to unlock responses that range from unethical suggestions to outright dangerous advice.”

Apex said it also found another vulnerability in Copilot’s proxy configuration that it said could be exploited to fully circumvent access limitations without paying for usage and even tamper with the Copilot system prompt, which serves as the foundational instructions that dictate the model’s behavior.

The attack, however, hinges on capturing an authentication token associated with an active Copilot license, prompting GitHub to classify it as an abuse issue following responsible disclosure.

“The proxy bypass and the positive affirmation jailbreak in GitHub Copilot are a perfect example of how even the most powerful AI tools can be abused without adequate safeguards,” Saban added.

Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.

You Might Also Like

The Ultimate Dolby Atmos Experience Might Be In Your Car

Samsung Tri-fold Foldable Named and Dated

AI’s Next Evolution: From Advisor to Architect – New TCS/MIT SMR Study Reveals Game-Changing Shift

9 Best Coolers WIRED Tested for Every Budget, Any Situation

Astronomers observe the earliest moments of a new solar system

TAGGED: ai ethics, AI regulation, AI security, Cyber Security, Cybersecurity, data privacy, data protection, deep learning, Internet, large language model, Machine Learning
Share This Article
Facebook Twitter Copy Link
Previous Article Social Security numbers are a privacy liability
Next Article Dollar and oil Surge, Asian stocks fall on Trump tariffs
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

- Advertisement -
Ad image

Latest News

Crypto Exchange CoinDCX Falls Victim To $44 Million Hack – Details
Crypto
The Ultimate Dolby Atmos Experience Might Be In Your Car
Tech News
Littler fuelled for World Matchplay by 'hours and hours' of practice
Sports
Assassin’s Creed Shadows’ Development Budget Exceeded €100 Million
Gaming News
Asian shares, yen weather Japan uncertainty as earnings loom
Business
Samsung Tri-fold Foldable Named and Dated
Tech News
Ether preps record short squeeze as analysis sees $4K ETH price ‘soon’
Crypto

About Us

Welcome to Viraltrendingcontent, your go-to source for the latest updates on world news, politics, sports, celebrity, tech, travel, gaming, crypto news, and business news. We are dedicated to providing you with accurate, timely, and engaging content from around the globe.

Quick Links

  • Home
  • World News
  • Politics
  • Celebrity
  • Business
  • Home
  • World News
  • Politics
  • Sports
  • Celebrity
  • Business
  • Crypto
  • Gaming News
  • Tech News
  • Travel
  • Sports
  • Crypto
  • Tech News
  • Gaming News
  • Travel

Trending News

cageside seats

Unlocking the Ultimate WWE Experience: Cageside Seats News 2024

Crypto Exchange CoinDCX Falls Victim To $44 Million Hack – Details

Investing £5 a day could help me build a second income of £329 a month!

cageside seats
Unlocking the Ultimate WWE Experience: Cageside Seats News 2024
May 22, 2024
Crypto Exchange CoinDCX Falls Victim To $44 Million Hack – Details
July 21, 2025
Investing £5 a day could help me build a second income of £329 a month!
March 27, 2024
Brussels unveils plans for a European Degree but struggles to explain why
March 27, 2024
© 2024 All Rights reserved | Powered by Vraltrendingcontent
  • About Us
  • Contact US
  • Disclaimer
  • Privacy Policy
  • Terms of Service
Welcome Back!

Sign in to your account

Lost your password?