By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
Viral Trending contentViral Trending content
  • Home
  • World News
  • Politics
  • Sports
  • Celebrity
  • Business
  • Crypto
  • Gaming News
  • Tech News
  • Travel
Reading: Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content
Notification Show More
Viral Trending contentViral Trending content
  • Home
  • Categories
    • World News
    • Politics
    • Sports
    • Celebrity
    • Business
    • Crypto
    • Tech News
    • Gaming News
    • Travel
  • Bookmarks
© 2024 All Rights reserved | Powered by Viraltrendingcontent
Viral Trending content > Blog > Tech News > Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content
Tech News

Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content

By Viral Trending Content 5 Min Read
Share
SHARE

Jun 23, 2025Ravie LakshmananLLM Security / AI Security

Echo Chamber Jailbreak Tricks LLMs

Cybersecurity researchers are calling attention to a new jailbreaking method called Echo Chamber that could be leveraged to trick popular large language models (LLMs) into generating undesirable responses, irrespective of the safeguards put in place.

“Unlike traditional jailbreaks that rely on adversarial phrasing or character obfuscation, Echo Chamber weaponizes indirect references, semantic steering, and multi-step inference,” NeuralTrust researcher Ahmad Alobaid said in a report shared with The Hacker News.

“The result is a subtle yet powerful manipulation of the model’s internal state, gradually leading it to produce policy-violating responses.”

While LLMs have steadily incorporated various guardrails to combat prompt injections and jailbreaks, the latest research shows that there exist techniques that can yield high success rates with little to no technical expertise.

Cybersecurity

It also serves to highlight a persistent challenge associated with developing ethical LLMs that enforce clear demarcation between what topics are acceptable and not acceptable.

While widely-used LLMs are designed to refuse user prompts that revolve around prohibited topics, they can be nudged towards eliciting unethical responses as part of what’s called a multi-turn jailbreaking.

In these attacks, the attacker starts with something innocuous and then progressively asks a model a series of increasingly malicious questions that ultimately trick it into producing harmful content. This attack is referred to as Crescendo.

LLMs are also susceptible to many-shot jailbreaks, which take advantage of their large context window (i.e., the maximum amount of text that can fit within a prompt) to flood the AI system with several questions (and answers) that exhibit jailbroken behavior preceding the final harmful question. This, in turn, causes the LLM to continue the same pattern and produce harmful content.

Echo Chamber, per NeuralTrust, leverages a combination of context poisoning and multi-turn reasoning to defeat a model’s safety mechanisms.

Echo Chamber Attack

“The main difference is that Crescendo is the one steering the conversation from the start while the Echo Chamber is kind of asking the LLM to fill in the gaps and then we steer the model accordingly using only the LLM responses,” Alobaid said in a statement shared with The Hacker News.

Specifically, this plays out as a multi-stage adversarial prompting technique that starts with a seemingly-innocuous input, while gradually and indirectly steering it towards generating dangerous content without giving away the end goal of the attack (e.g., generating hate speech).

“Early planted prompts influence the model’s responses, which are then leveraged in later turns to reinforce the original objective,” NeuralTrust said. “This creates a feedback loop where the model begins to amplify the harmful subtext embedded in the conversation, gradually eroding its own safety resistances.”

Cybersecurity

In a controlled evaluation environment using OpenAI and Google’s models, the Echo Chamber attack achieved a success rate of over 90% on topics related to sexism, violence, hate speech, and pornography. It also achieved nearly 80% success in the misinformation and self-harm categories.

“The Echo Chamber Attack reveals a critical blind spot in LLM alignment efforts,” the company said. “As models become more capable of sustained inference, they also become more vulnerable to indirect exploitation.”

The disclosure comes as Cato Networks demonstrated a proof-of-concept (PoC) attack that targets Atlassian’s model context protocol (MCP) server and its integration with Jira Service Management (JSM) to trigger prompt injection attacks when a malicious support ticket submitted by an external threat actor is processed by a support engineer using MCP tools.

The cybersecurity company has coined the term “Living off AI” to describe these attacks, where an AI system that executes untrusted input without adequate isolation guarantees can be abused by adversaries to gain privileged access without having to authenticate themselves.

“The threat actor never accessed the Atlassian MCP directly,” security researchers Guy Waizel, Dolev Moshe Attiya, and Shlomo Bamberger said. “Instead, the support engineer acted as a proxy, unknowingly executing malicious instructions through Atlassian MCP.”

Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.

You Might Also Like

AI’s Next Evolution: From Advisor to Architect – New TCS/MIT SMR Study Reveals Game-Changing Shift

9 Best Coolers WIRED Tested for Every Budget, Any Situation

Astronomers observe the earliest moments of a new solar system

EncryptHub Targets Web3 Developers Using Fake AI Platforms to Deploy Fickle Stealer Malware

Best Nintendo Switch 2 Controllers (2025), Tested and Reviewed

TAGGED: #OpenAI, AI security, Atlassian, Cato Networks, Cyber Security, Cybersecurity, ethical ai, Internet, Jailbreaking, LLM Security, misinformation, Prompt Injection
Share This Article
Facebook Twitter Copy Link
Previous Article Xbox Will Attend Gamescom 2025
Next Article Photos: A Road Trip Through Syria After the Fall of Bashar al-Assad
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

- Advertisement -
Ad image

Latest News

Ubisoft Shareholder Confronts Boss About 'Woke' Assassin's Creed
Gaming News
Low P/E ratios, yields up to 9%! Are these the FTSE 250’s best value stocks?
Business
What’s in the Epstein grand jury transcripts? Former prosecutor says ‘It’s not going to be much’
Business
Crypto Crooks Take Over Stellar Blade’s X Account, Spread Fake Crypto
Crypto
Tom Bergeron: Photos of the Former ‘Dancing With the Stars’ Host Over the Years
Celebrity
‘Crypto Week’ ushers in big change: What happens now?
Crypto
How Much Would It Cost To Build A PC As Powerful As Xbox Series S? [2025 Edition]
Gaming News

About Us

Welcome to Viraltrendingcontent, your go-to source for the latest updates on world news, politics, sports, celebrity, tech, travel, gaming, crypto news, and business news. We are dedicated to providing you with accurate, timely, and engaging content from around the globe.

Quick Links

  • Home
  • World News
  • Politics
  • Celebrity
  • Business
  • Home
  • World News
  • Politics
  • Sports
  • Celebrity
  • Business
  • Crypto
  • Gaming News
  • Tech News
  • Travel
  • Sports
  • Crypto
  • Tech News
  • Gaming News
  • Travel

Trending News

cageside seats

Unlocking the Ultimate WWE Experience: Cageside Seats News 2024

Ubisoft Shareholder Confronts Boss About 'Woke' Assassin's Creed

Investing £5 a day could help me build a second income of £329 a month!

cageside seats
Unlocking the Ultimate WWE Experience: Cageside Seats News 2024
May 22, 2024
Ubisoft Shareholder Confronts Boss About 'Woke' Assassin's Creed
July 21, 2025
Investing £5 a day could help me build a second income of £329 a month!
March 27, 2024
Brussels unveils plans for a European Degree but struggles to explain why
March 27, 2024
© 2024 All Rights reserved | Powered by Vraltrendingcontent
  • About Us
  • Contact US
  • Disclaimer
  • Privacy Policy
  • Terms of Service
Welcome Back!

Sign in to your account

Lost your password?