Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content
Jun 23, 2025 | Ravie Lakshmanan | LLM Security / AI Security
Cybersecurity researchers are calling…
LangSmith Bug Could Expose OpenAI Keys and User Data via Malicious Agents
Jun 17, 2025 | Ravie Lakshmanan | Vulnerability / LLM Security
Cybersecurity researchers have disclosed a…
How to Address the Expanding Security Risk
Management and control of human identities is handled fairly well, with its set…
‘Protected’ Images Are Easier, Not More Difficult, to Steal With AI
New research suggests that watermarking tools meant to block AI image edits…
Why Business Impact Should Lead the Security Conversation
Security teams face growing demands with more tools, more data, and higher…
AI Acts Differently When It Knows It’s Being Tested, Research Finds
Echoing the 2015 ‘Dieselgate’ scandal, new research suggests that AI language models…
Why Traditional DLP Solutions Fail in the Browser Era
Jun 04, 2025 | The Hacker News | Browser Security / Enterprise Security
Traditional data leakage…
Research Suggests LLMs Willing to Assist in Malicious ‘Vibe Coding’
Over the past few years, large language models (LLMs) have drawn scrutiny…
Why NHIs Are Security’s Most Dangerous Blind Spot
When we talk about identity in cybersecurity, most people think of usernames,…