By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
Viral Trending contentViral Trending content
  • Home
  • World News
  • Politics
  • Sports
  • Celebrity
  • Business
  • Crypto
  • Gaming News
  • Tech News
  • Travel
Reading: Model Security Is the Wrong Frame – The Real Risk Is Workflow Security
Notification Show More
Viral Trending contentViral Trending content
  • Home
  • Categories
    • World News
    • Politics
    • Sports
    • Celebrity
    • Business
    • Crypto
    • Tech News
    • Gaming News
    • Travel
  • Bookmarks
© 2024 All Rights reserved | Powered by Viraltrendingcontent
Viral Trending content > Blog > Tech News > Model Security Is the Wrong Frame – The Real Risk Is Workflow Security
Tech News

Model Security Is the Wrong Frame – The Real Risk Is Workflow Security

By Viral Trending Content 7 Min Read
Share
SHARE

Jan 15, 2026The Hacker NewsData Security / Artificial Intelligence

Contents
AI Models Are Becoming Workflow EnginesWhy Traditional Security Controls Fall ShortSecuring AI-Driven WorkflowsHow Platforms Like Reco Can Help

As AI copilots and assistants become embedded in daily work, security teams are still focused on protecting the models themselves. But recent incidents suggest the bigger risk lies elsewhere: in the workflows that surround those models.

Two Chrome extensions posing as AI helpers were recently caught stealing ChatGPT and DeepSeek chat data from over 900,000 users. Separately, researchers demonstrated how prompt injections hidden in code repositories could trick IBM’s AI coding assistant into executing malware on a developer’s machine.

Neither attack broke the AI algorithms themselves.

They exploited the context in which the AI operates. That’s the pattern worth paying attention to. When AI systems are embedded in real business processes, summarizing documents, drafting emails, and pulling data from internal tools, securing the model alone isn’t enough. The workflow itself becomes the target.

AI Models Are Becoming Workflow Engines

To understand why this matters, consider how AI is actually being used today:

Businesses now rely on it to connect apps and automate tasks that used to be done by hand. An AI writing assistant might pull a confidential document from SharePoint and summarize it in an email draft. A sales chatbot might cross-reference internal CRM records to answer a customer question. Each of these scenarios blurs the boundaries between applications, creating new integration pathways on the fly.

What makes this risky is how AI agents operate. They rely on probabilistic decision-making rather than hard-coded rules, generating output based on patterns and context. A carefully written input can nudge an AI to do something its designers never intended, and the AI will comply because it has no native concept of trust boundaries.

This means the attack surface includes every input, output, and integration point the model touches.

Hacking the model’s code becomes unnecessary when an adversary can simply manipulate the context the model sees or the channels it uses. The incidents described earlier illustrate this: prompt injections hidden in repositories hijack AI behavior during routine tasks, while malicious extensions siphon data from AI conversations without ever touching the model.

Why Traditional Security Controls Fall Short

These workflow threats expose a blind spot in traditional security. Most legacy defenses were built for deterministic software, stable user roles, and clear perimeters. AI-driven workflows break all three assumptions.

  • Most general apps distinguish between trusted code and untrusted input. AI models don’t. Everything is just text to them, so a malicious instruction hidden in a PDF looks no different than a legitimate command. Traditional input validation doesn’t help because the payload isn’t malicious code. It’s just natural language.
  • Traditional monitoring catches obvious anomalies like mass downloads or suspicious logins. But an AI reading a thousand records as part of a routine query looks like normal service-to-service traffic. If that data gets summarized and sent to an attacker, no rule was technically broken.
  • Most general security policies specify what’s allowed or blocked: don’t let this user access that file, block traffic to this server. But AI behavior depends on context. How do you write a rule that says “never reveal customer data in output”?
  • Security programs rely on periodic reviews and fixed configurations, like quarterly audits or firewall rules. AI workflows don’t stay static. An integration might gain new capabilities after an update or connect to a new data source. By the time a quarterly review happens, a token may have already leaked.

Securing AI-Driven Workflows

So, a better approach to all of this would be to treat the whole workflow as the thing you’re protecting, not just the model.

  • Start by understanding where AI is actually being used, from official tools like Microsoft 365 Copilot to browser extensions employees may have installed on their own. Know what data each system can access and what actions it can perform. Many organizations are surprised to find dozens of shadow AI services running across the business.
  • If an AI assistant is meant only for internal summarization, restrict it from sending external emails. Scan outputs for sensitive data before they leave your environment. These guardrails should live outside the model itself, in middleware that checks actions before they go out.
  • Treat AI agents like any other user or service. If an AI only needs read access to one system, don’t give it blanket access to everything. Scope OAuth tokens to the minimum permissions required, and monitor for anomalies like an AI suddenly accessing data it never touched before.
  • Finally, it’s also useful to educate users about the risks of unvetted browser extensions or copying prompts from unknown sources. Vet third-party plugins before deploying them, and treat any tool that touches AI inputs or outputs as part of the security perimeter.

How Platforms Like Reco Can Help

In practice, doing all of this manually doesn’t scale. That’s why a new category of tools is emerging: dynamic SaaS security platforms. These platforms act as a real-time guardrail layer on top of AI-powered workflows, learning what normal behavior looks like and flagging anomalies when they occur.

Reco is one leading example.

Figure 1: Reco’s generative AI application discovery

As shown above, the platform gives security teams visibility into AI usage across the organization, surfacing which generative AI applications are in use and how they’re connected. From there, you can enforce guardrails at the workflow level, catch risky behavior in real time, and maintain control without slowing down the business.

Request a Demo: Get Started With Reco.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.

You Might Also Like

Apple AI Pin Specs Leak: Dual Cameras, No Screen & More

The diverse responsibilities of a principal software engineer

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

Google’s Fitbit Tease has me More Excited for Garmin’s Whoop Rival

Why the TCL NXTPAPER 14 Is One of the Best Tablets for Musicians and Sheet Music Reading

TAGGED: artificial intelligence, Browser extensions, Cyber Security, Cybersecurity, data security, enterprise security, Internet, Malware, Prompt Injection, SaaS Security, Threat Intelligence
Share This Article
Facebook Twitter Copy Link
Previous Article Dogecoin Founder Crashes Bullish Bitcoin Hopes, Casts Doubts On All-Time High Predictions
Next Article US shuts the door on immigrant visas for 75 countries
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

- Advertisement -
Ad image

Latest News

JPMorgan CEO Jamie Dimon says he’s ‘learned and relearned’ to not make big decisions when he’s tired on Fridays
Business
Apple AI Pin Specs Leak: Dual Cameras, No Screen & More
Tech News
A ‘glass-like’ battlefield: German Army chief on the future of warfare
World News
Polymarket Sees Record $153M Daily Volume After Chainlink Integration
Crypto
Natasha Lyonne Then & Now: See Before & After Photos of the Actress Here
Celebrity
Cult Hit Doki Doki Literature Club Fights Removal From Google Play Store Over ‘Depiction Of Sensitive Themes’
Gaming News
Dead as Disco Launches Into Early Access on May 5th, Groovy New Gameplay Released
Gaming News

About Us

Welcome to Viraltrendingcontent, your go-to source for the latest updates on world news, politics, sports, celebrity, tech, travel, gaming, crypto news, and business news. We are dedicated to providing you with accurate, timely, and engaging content from around the globe.

Quick Links

  • Home
  • World News
  • Politics
  • Celebrity
  • Business
  • Home
  • World News
  • Politics
  • Sports
  • Celebrity
  • Business
  • Crypto
  • Gaming News
  • Tech News
  • Travel
  • Sports
  • Crypto
  • Tech News
  • Gaming News
  • Travel

Trending News

cageside seats

Unlocking the Ultimate WWE Experience: Cageside Seats News 2024

Investing £5 a day could help me build a second income of £329 a month!

JPMorgan CEO Jamie Dimon says he’s ‘learned and relearned’ to not make big decisions when he’s tired on Fridays

cageside seats
Unlocking the Ultimate WWE Experience: Cageside Seats News 2024
May 22, 2024
Investing £5 a day could help me build a second income of £329 a month!
March 27, 2024
JPMorgan CEO Jamie Dimon says he’s ‘learned and relearned’ to not make big decisions when he’s tired on Fridays
April 10, 2026
Brussels unveils plans for a European Degree but struggles to explain why
March 27, 2024
© 2024 All Rights reserved | Powered by Vraltrendingcontent
  • About Us
  • Contact US
  • Disclaimer
  • Privacy Policy
  • Terms of Service
Welcome Back!

Sign in to your account

Lost your password?