Meta’s Llama Framework Flaw Exposes AI Systems to Remote Code Execution Risks

By Viral Trending Content 7 Min Read

A high-severity security flaw has been disclosed in Meta’s Llama large language model (LLM) framework that, if successfully exploited, could allow an attacker to execute arbitrary code on the llama-stack inference server.

The vulnerability, tracked as CVE-2024-50050, has been assigned a CVSS score of 6.3 out of 10.0. Supply chain security firm Snyk, however, has assigned it a critical severity rating of 9.3.

“Affected versions of meta-llama are vulnerable to deserialization of untrusted data, meaning that an attacker can execute arbitrary code by sending malicious data that is deserialized,” Oligo Security researcher Avi Lumelsky said in an analysis earlier this week.

The shortcoming, per the cloud security company, resides in a component called Llama Stack, which defines a set of API interfaces for artificial intelligence (AI) application development, including using Meta’s own Llama models.

Specifically, it has to do with a remote code execution flaw in the reference Python Inference API implementation, which was found to automatically deserialize Python objects using pickle, a format deemed risky because arbitrary code can be executed when untrusted or malicious data is loaded using the library.
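The danger is inherent to pickle itself: during deserialization, pickle honors a payload's `__reduce__` hook, which lets the sender name an arbitrary callable to invoke on the receiving side. A minimal, harmless sketch of the mechanism (using `eval` on a trivial expression where a real attacker would use something like `os.system`):

```python
import pickle

class Payload:
    """Illustrative malicious object: pickle.loads will call the
    callable returned by __reduce__ with the given arguments."""
    def __reduce__(self):
        # (callable, args) -- executed on the receiver during unpickling
        return (eval, ("2 + 2",))

data = pickle.dumps(Payload())   # what an attacker would send over the wire
result = pickle.loads(data)      # the embedded callable runs here
print(result)                    # 4 -- computed by code the *sender* chose
```

The receiver never asked to run any code; simply deserializing the bytes was enough, which is why pickle is unsafe for data from untrusted sources.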

“In scenarios where the ZeroMQ socket is exposed over the network, attackers could exploit this vulnerability by sending crafted malicious objects to the socket,” Lumelsky said. “Since recv_pyobj will unpickle these objects, an attacker could achieve arbitrary code execution (RCE) on the host machine.”

Following responsible disclosure on September 24, 2024, the issue was addressed by Meta on October 10 in version 0.0.41. It has also been remediated in pyzmq, a Python library that provides access to the ZeroMQ messaging library.

In an advisory issued by Meta, the company said it fixed the remote code execution risk associated with using pickle as a serialization format for socket communication by switching to the JSON format.
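Switching to JSON closes the hole because JSON deserialization can only ever produce plain data types (dicts, lists, strings, numbers); there is no hook comparable to pickle's `__reduce__` for a crafted payload to abuse. A minimal sketch of the safer pattern (illustrative, not Meta's actual code):

```python
import json

# A message as it might be serialized for socket transport.
message = {"op": "infer", "prompt": "hello"}
wire = json.dumps(message).encode("utf-8")   # bytes that would cross the socket

# The receiving side gets pure data back -- nothing is executed.
received = json.loads(wire.decode("utf-8"))
print(received["op"])                        # "infer"
```

Unsupported objects simply fail to serialize with a `TypeError`, so the format cannot smuggle executable state the way pickle can.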

This is not the first time such deserialization vulnerabilities have been discovered in AI frameworks. In August 2024, Oligo detailed a “shadow vulnerability” in TensorFlow’s Keras framework, a bypass for CVE-2024-3660 (CVSS score: 9.8) that could result in arbitrary code execution due to the use of the unsafe marshal module.

The development comes as security researcher Benjamin Flesch disclosed a high-severity flaw in OpenAI’s ChatGPT crawler, which could be weaponized to initiate a distributed denial-of-service (DDoS) attack against arbitrary websites.

The issue is the result of incorrect handling of HTTP POST requests to the “chatgpt[.]com/backend-api/attributions” API, which is designed to accept a list of URLs as input but neither checks whether the same URL appears several times in the list nor enforces a limit on the number of hyperlinks that can be passed in a single request.

This opens up a scenario where a bad actor could transmit thousands of hyperlinks within a single HTTP request, causing OpenAI to send all those requests to the victim site without attempting to limit the number of connections or prevent issuing duplicate requests.
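The missing safeguards amount to a few lines of input validation. A hypothetical sketch of the checks whose absence enabled the amplification (names and the limit are illustrative, not OpenAI's actual code):

```python
MAX_URLS = 10  # illustrative cap on links accepted per request

def validate_urls(urls):
    """Deduplicate a submitted URL list and reject oversized batches."""
    deduped = list(dict.fromkeys(urls))  # drop duplicates, preserve order
    if len(deduped) > MAX_URLS:
        raise ValueError(f"too many URLs: {len(deduped)} > {MAX_URLS}")
    return deduped

# 5,000 copies of the same victim link collapse to a single fetch target.
print(validate_urls(["https://victim.example/"] * 5000))
```

With deduplication and a hard cap in place, a single request can no longer fan out into thousands of crawler connections against one site.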

Depending on the number of hyperlinks transmitted, the flaw provides a significant amplification factor for potential DDoS attacks, effectively overwhelming the target site’s resources. The AI company has since patched the problem.

“The ChatGPT crawler can be triggered to DDoS a victim website via HTTP request to an unrelated ChatGPT API,” Flesch said. “This defect in OpenAI software will spawn a DDoS attack on an unsuspecting victim website, utilizing multiple Microsoft Azure IP address ranges on which ChatGPT crawler is running.”

The disclosure also follows a report from Truffle Security that popular AI-powered coding assistants “recommend” hard-coding API keys and passwords, a risky piece of advice that could mislead inexperienced programmers into introducing security weaknesses in their projects.

“LLMs are helping perpetuate it, likely because they were trained on all the insecure coding practices,” security researcher Joe Leon said.
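The secure alternative the researchers advocate is to keep secrets out of source code entirely, reading them from the environment (or a secrets manager) at runtime. A minimal sketch; the variable name `SERVICE_API_KEY` is illustrative:

```python
import os

def get_api_key():
    """Read the API key from the environment instead of hard-coding it.

    Keeping the secret out of the source file means it never lands in
    version control or in any training corpus scraped from public repos.
    """
    key = os.environ.get("SERVICE_API_KEY")
    if key is None:
        raise RuntimeError("SERVICE_API_KEY is not set")
    return key
```

A hard-coded literal like `API_KEY = "sk-..."` is exactly the pattern that, once committed to a public repository, ends up both leaked and learned.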

News of vulnerabilities in LLM frameworks also follows research into how the models could be abused to empower the cyber attack lifecycle, including delivering the final-stage stealer payload and handling command-and-control.

“The cyber threats posed by LLMs are not a revolution, but an evolution,” Deep Instinct researcher Mark Vaitzman said. “There’s nothing new there, LLMs are just making cyber threats better, faster, and more accurate on a larger scale. LLMs can be successfully integrated into every phase of the attack lifecycle with the guidance of an experienced driver. These abilities are likely to grow in autonomy as the underlying technology advances.”

Recent research has also demonstrated a new method called ShadowGenes that can be used to identify a model’s genealogy, including its architecture, type, and family, by leveraging its computational graph. The approach builds on a previously disclosed attack technique dubbed ShadowLogic.

“The signatures used to detect malicious attacks within a computational graph could be adapted to track and identify recurring patterns, called recurring subgraphs, allowing them to determine a model’s architectural genealogy,” AI security firm HiddenLayer said in a statement shared with The Hacker News.

“Understanding the model families in use within your organization increases your overall awareness of your AI infrastructure, allowing for better security posture management.”
