Someone Created the First AI-Powered Ransomware Using OpenAI’s gpt-oss:20b Model
Cybersecurity company ESET has disclosed that it discovered an artificial intelligence (AI)-powered…
Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts
Cybersecurity researchers have demonstrated a new prompt injection technique called PromptFix that…
Researchers Uncover GPT-5 Jailbreak and Zero-Click AI Agent Attacks Exposing Cloud and IoT Systems
Cybersecurity researchers have uncovered a jailbreak technique to bypass ethical guardrails erected…
Cursor AI Code Editor Fixed Flaw Allowing Attackers to Run Commands via Prompt Injection
Cybersecurity researchers have disclosed a now-patched, high-severity security flaw in Cursor, a…
Wiz Uncovers Critical Access Bypass Flaw in AI-Powered Vibe Coding Platform Base44
Jul 29, 2025 | Ravie Lakshmanan | LLM Security / Vulnerability
Cybersecurity researchers have disclosed a…
Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content
Jun 23, 2025 | Ravie Lakshmanan | LLM Security / AI Security
Cybersecurity researchers are calling…
New TokenBreak Attack Bypasses AI Moderation with Single-Character Text Changes
Cybersecurity researchers have discovered a novel attack technique called TokenBreak that can…
GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts
Cybersecurity researchers have discovered an indirect prompt injection flaw in GitLab's artificial…
Researchers Demonstrate How MCP Prompt Injection Can Be Used for Both Attack and Defense
Apr 30, 2025 | Ravie Lakshmanan | Artificial Intelligence / Email Security
As the field of…