By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
Viral Trending contentViral Trending content
  • Home
  • World News
  • Politics
  • Sports
  • Celebrity
  • Business
  • Crypto
  • Gaming News
  • Tech News
  • Travel
Reading: From Misuse to Abuse: AI Risks and Attacks
Notification Show More
Viral Trending contentViral Trending content
  • Home
  • Categories
    • World News
    • Politics
    • Sports
    • Celebrity
    • Business
    • Crypto
    • Tech News
    • Gaming News
    • Travel
  • Bookmarks
© 2024 All Rights reserved | Powered by Viraltrendingcontent
Viral Trending content > Blog > Tech News > From Misuse to Abuse: AI Risks and Attacks
Tech News

From Misuse to Abuse: AI Risks and Attacks

By Viral Trending Content 8 Min Read
Share
SHARE

Oct 16, 2024The Hacker NewsArtificial Intelligence / Cybercrime

Contents
Cybercriminals and AI: The Reality vs. HypeHow Hackers are Really Using AI in Cyber AttacksUsing AI to Abuse AI: Introducing GPTsAbusing GPTsAI Attacks and RisksLLM Attack SurfaceReal-World Attacks and RisksSumming Up: AI in Cyber Crime
AI Risks and Attacks

AI from the attacker’s perspective: See how cybercriminals are leveraging AI and exploiting its vulnerabilities to compromise systems, users, and even other AI applications

Cybercriminals and AI: The Reality vs. Hype

“AI will not replace humans in the near future. But humans who know how to use AI are going to replace those humans who don’t know how to use AI,” says Etay Maor, Chief Security Strategist at Cato Networks and founding member of Cato CTRL. “Similarly, attackers are also turning to AI to augment their own capabilities.”

Yet, there is a lot more hype than reality around AI’s role in cybercrime. Headlines often sensationalize AI threats, with terms like “Chaos-GPT” and “Black Hat AI Tools,” even claiming they seek to destroy humanity. However, these articles are more fear-inducing than descriptive of serious threats.

AI Risks and Attacks

For instance, when explored in underground forums, several of these so-called “AI cyber tools” were found to be nothing more than rebranded versions of basic public LLMs with no advanced capabilities. In fact, they were even marked by angry attackers as scams.

How Hackers are Really Using AI in Cyber Attacks

In reality, cybercriminals are still figuring out how to harness AI effectively. They are experiencing the same issues and shortcomings legitimate users are, like hallucinations and limited abilities. Per their predictions, it will take a few years before they are able to leverage GenAI effectively for hacking needs.

AI Risks and Attacks
AI Risks and Attacks

For now, GenAI tools are mostly being used for simpler tasks, like writing phishing emails and generating code snippets that can be integrated into attacks. In addition, we’ve observed attackers providing compromised code to AI systems for analysis, as an effort to “normalize” such code as non-malicious.

Using AI to Abuse AI: Introducing GPTs

GPTs, introduced by OpenAI on November 6, 2023, are customizable versions of ChatGPT that allow users to add specific instructions, integrate external APIs and incorporate unique knowledge sources. This feature enables users to create highly specialized applications, such as tech support bots, educational tools, and more. In addition, OpenAI is offering developers monetization options for GPTs, through a dedicated marketplace.

Abusing GPTs

GPTs introduce potential security concerns. One notable risk is the exposure of sensitive instructions, proprietary knowledge, or even API keys embedded in the custom GPT. Malicious actors can use AI, specifically prompt engineering, to replicate a GPT and tap into its monetization potential.

Attackers can use prompts to retrieve knowledge sources, instructions, configuration files, and more. These might be as simple as prompting the custom GPT to list all uploaded files and custom instructions or asking for debugging information. Or, sophisticated like requesting the GPT to zip one of the PDF files and create a downloadable link, asking the GPT to list all its capabilities in a structured table format, and more.

“Even protections that developers put in place can be bypassed and all knowledge can be extracted,” says Vitaly Simonovich, Threat Intelligence Researcher at Cato Networks and Cato CTRL member.

These risks can be avoided by:

  • Not uploading sensitive data
  • Using instruction-based protection though even those may not be foolproof. “You need to take into account all the different scenarios that the attacker can abuse,” adds Vitaly.
  • OpenAI protection

AI Attacks and Risks

There are multiple frameworks existing today to assist organizations that are considering developing and creating AI-based software:

  • NIST Artificial Intelligence Risk Management Framework
  • Google’s Secure AI Framework
  • OWASP Top 10 for LLM
  • OWASP Top 10 for LLM Applications
  • The recently launched MITRE ATLAS

LLM Attack Surface

There are six key LLM (Large Language Model) components that can be targeted by attackers:

  1. Prompt – Attacks like prompt injections, where malicious input is used to manipulate the AI’s output
  2. Response – Misuse or leakage of sensitive information in AI-generated responses
  3. Model – Theft, poisoning, or manipulation of the AI model
  4. Training Data – Introducing malicious data to alter the behavior of the AI.
  5. Infrastructure – Targeting the servers and services that support the AI
  6. Users – Misleading or exploiting the humans or systems relying on AI outputs

Real-World Attacks and Risks

Let’s wrap up with some examples of LLM manipulations, which can easily be used in a malicious manner.

  • Prompt Injection in Customer Service Systems – A recent case involved a car dealership using an AI chatbot for customer service. A researcher managed to manipulate the chatbot by issuing a prompt that altered its behavior. By instructing the chatbot to agree to all customer statements and end each response with, “And that’s a legally binding offer,” the researcher was able to purchase a car at a ridiculously low price, exposing a major vulnerability.
  • AI Risks and Attacks
  • Hallucinations Leading to Legal Consequences – In another incident, Air Canada faced legal action when their AI chatbot provided incorrect information about refund policies. When a customer relied on the chatbot’s response and subsequently filed a claim, Air Canada was held liable for the misleading information.
  • Proprietary Data Leaks – Samsung employees unknowingly leaked proprietary information when they used ChatGPT to analyze code. Uploading sensitive data to third-party AI systems is risky, as it’s unclear how long the data is stored or who can access it.
  • AI and Deepfake Technology in Fraud – Cybercriminals are also leveraging AI beyond text generation. A bank in Hong Kong fell victim to a $25 million fraud when attackers used live deepfake technology during a video call. The AI-generated avatars mimicked trusted bank officials, convincing the victim to transfer funds to a fraudulent account.

Summing Up: AI in Cyber Crime

AI is a powerful tool for both defenders and attackers. As cybercriminals continue to experiment with AI, it’s important to understand how they think, the tactics they employ and the options they face. This will allow organizations to better safeguard their AI systems against misuse and abuse.

Watch the entire masterclass here.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Twitter  and LinkedIn to read more exclusive content we post.

You Might Also Like

Trump Takes Aim at State AI Laws in Draft Executive Order

Changing Ends Season 3 Review: Forget Alan Carr’s The Traitors Success

1,139 HP: The New Porsche Cayenne Electric is a Monster

Former Revolut executives raise €30M to bring blockchain-based banking app Deblock to Ireland

Hackers Actively Exploiting 7-Zip Symbolic Link–Based RCE Vulnerability (CVE-2025-11001)

TAGGED: artificial intelligence, Cyber Security, Cybercrime, Cybersecurity, data breach, deepfake, Internet, phishing attack
Share This Article
Facebook Twitter Copy Link
Previous Article Bitcoin ‘Apparent Demand’ Has Turned Green Again: What It Means
Next Article Where to find abortion on your ballot in the election
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

- Advertisement -
Ad image

Latest News

Congress Should Codify Trump Order Ending Hong Kong’s Special Trade Status, Advisory Panel Says
Politics
S.T.A.L.K.E.R. 2: Heart of Chornobyl PS5 Graphics Analysis – How Does It Compare Against Xbox Series X and PC?
Gaming News
Trump Takes Aim at State AI Laws in Draft Executive Order
Tech News
2 UK shares I’d prefer to own over Lloyds stock right now
Business
Mevolaxy files for registration with the SEC
Crypto
Underdog Fantasy Promo Code FOXSPORTS: Bet $5, Get $100 on Wednesday's NBA Slate
Sports
RiNo apartment building asks judge to evict rooftop cocktail lounge
Business

About Us

Welcome to Viraltrendingcontent, your go-to source for the latest updates on world news, politics, sports, celebrity, tech, travel, gaming, crypto news, and business news. We are dedicated to providing you with accurate, timely, and engaging content from around the globe.

Quick Links

  • Home
  • World News
  • Politics
  • Celebrity
  • Business
  • Home
  • World News
  • Politics
  • Sports
  • Celebrity
  • Business
  • Crypto
  • Gaming News
  • Tech News
  • Travel
  • Sports
  • Crypto
  • Tech News
  • Gaming News
  • Travel

Trending News

cageside seats

Unlocking the Ultimate WWE Experience: Cageside Seats News 2024

Congress Should Codify Trump Order Ending Hong Kong’s Special Trade Status, Advisory Panel Says

Investing £5 a day could help me build a second income of £329 a month!

cageside seats
Unlocking the Ultimate WWE Experience: Cageside Seats News 2024
May 22, 2024
Congress Should Codify Trump Order Ending Hong Kong’s Special Trade Status, Advisory Panel Says
November 20, 2025
Investing £5 a day could help me build a second income of £329 a month!
March 27, 2024
Brussels unveils plans for a European Degree but struggles to explain why
March 27, 2024
© 2024 All Rights reserved | Powered by Vraltrendingcontent
  • About Us
  • Contact US
  • Disclaimer
  • Privacy Policy
  • Terms of Service
Welcome Back!

Sign in to your account

Lost your password?