Researchers Warn of Privilege Escalation Risks in Google’s Vertex AI ML Platform

Nov 15, 2024Ravie LakshmananArtificial Intelligence / Vulnerability

Cybersecurity researchers have disclosed two security flaws in Google’s Vertex machine learning (ML) platform that, if successfully exploited, could allow malicious actors to escalate privileges and exfiltrate models from the cloud.

“By exploiting custom job permissions, we were able to escalate our privileges and gain unauthorized access to all data services in the project,” Palo Alto Networks Unit 42 researchers Ofir Balassiano and Ofir Shaty said in an analysis published earlier this week.

“Deploying a poisoned model in Vertex AI led to the exfiltration of all other fine-tuned models, posing a serious proprietary and sensitive data exfiltration attack risk.”

Vertex AI is Google’s ML platform for training and deploying custom ML models and artificial intelligence (AI) applications at scale. It was first introduced in May 2021.

Crucial to leveraging the privilege escalation flaw is a feature called Vertex AI Pipelines, which allows users to automate and monitor MLOps workflows to train and tune ML models using custom jobs.

Unit 42’s research found that by manipulating the custom job pipeline, it’s possible to escalate privileges and gain access to otherwise restricted resources. This is accomplished by creating a custom job that runs a specially crafted image designed to launch a reverse shell, granting backdoor access to the environment.
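As a hedged illustration of the mechanism (not the researchers' exact payload), the shape of a Vertex AI custom job request makes the risk visible: the caller supplies an arbitrary container image, and whatever that image's entrypoint does runs under the identity attached to the job. The image URI and service account below are hypothetical placeholders.

```python
# Minimal sketch of a Vertex AI CustomJob worker-pool spec. The job runs
# whatever container image the caller names -- there is no vetting of what
# the image's entrypoint actually does, so a crafted image can open a
# reverse shell with the job's service-account permissions.
def build_custom_job_spec(image_uri: str, service_account: str) -> dict:
    """Return a dict in the general shape the Vertex AI CustomJob API expects."""
    return {
        "worker_pool_specs": [
            {
                "machine_spec": {"machine_type": "n1-standard-4"},
                "replica_count": 1,
                "container_spec": {
                    # Arbitrary, caller-controlled image reference.
                    "image_uri": image_uri,
                },
            }
        ],
        # All code in the image executes as this identity.
        "service_account": service_account,
    }

spec = build_custom_job_spec(
    "us-docker.pkg.dev/some-project/repo/attacker-image:latest",  # hypothetical
    "custom-job-sa@some-project.iam.gserviceaccount.com",          # hypothetical
)
```

The point of the sketch is the trust boundary: the spec couples an unvetted image to a privileged service account, which is exactly the coupling the researchers abused.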

The custom job, per the security vendor, runs in a tenant project with a service agent account that has extensive permissions to list all service accounts, manage storage buckets, and access BigQuery tables, which could then be abused to access internal Google Cloud repositories and download images.

The second vulnerability, on the other hand, involves deploying a poisoned model in a tenant project such that it creates a reverse shell when deployed to an endpoint. The shell then abuses the read-only permissions of the “custom-online-prediction” service account to enumerate Kubernetes clusters and fetch their credentials, making it possible to run arbitrary kubectl commands.

“This step enabled us to move from the GCP realm into Kubernetes,” the researchers said. “This lateral movement was possible because permissions between GCP and GKE were linked through IAM Workload Identity Federation.”

The analysis further found that this access could be used to view newly created images within the Kubernetes cluster and obtain their image digests, which uniquely identify container images, and then extract the images outside the container using crictl with the authentication token associated with the “custom-online-prediction” service account.

On top of that, the malicious model could also be weaponized to view and export all large language models (LLMs) and their fine-tuned adapters in a similar fashion.

This could have severe consequences when a developer unknowingly deploys a trojanized model uploaded to a public repository, thereby allowing the threat actor to exfiltrate all ML models and fine-tuned LLMs. Following responsible disclosure, both shortcomings have been addressed by Google.

“This research highlights how a single malicious model deployment could compromise an entire AI environment,” the researchers said. “An attacker could use even one unverified model deployed on a production system to exfiltrate sensitive data, leading to severe model exfiltration attacks.”

Organizations are advised to implement strict controls on model deployments and to audit the permissions required to deploy a model in tenant projects.
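As a hedged sketch of what such a permissions audit could look for, the helper below scans IAM policy bindings (the JSON shape returned by `gcloud projects get-iam-policy --format=json`) for principals holding roles broad enough to create custom jobs or deploy models. The role list and the example principals are illustrative assumptions, not an official checklist.

```python
# Roles that (among other things) allow creating custom jobs or deploying
# models in Vertex AI -- the two capabilities abused in the research above.
# Assumed list for illustration; tailor it to your org's role inventory.
SENSITIVE_ROLES = {
    "roles/aiplatform.user",
    "roles/aiplatform.admin",
    "roles/editor",
    "roles/owner",
}

def flag_deployers(bindings: list) -> dict:
    """Map each principal holding a sensitive role to the roles it holds."""
    flagged = {}
    for binding in bindings:
        if binding["role"] in SENSITIVE_ROLES:
            for member in binding["members"]:
                flagged.setdefault(member, []).append(binding["role"])
    return {member: sorted(roles) for member, roles in flagged.items()}

# Purely illustrative bindings, mimicking the gcloud JSON output shape.
policy = [
    {"role": "roles/aiplatform.user",
     "members": ["serviceAccount:ci@p.iam.gserviceaccount.com"]},
    {"role": "roles/viewer",
     "members": ["user:dev@example.com"]},
]
flagged = flag_deployers(policy)
```

Here only the CI service account is flagged; a reviewer would then ask whether that account genuinely needs model-deployment rights, per the least-privilege recommendation above.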

The development comes as Mozilla’s 0Day Investigative Network (0Din) revealed that it’s possible to interact with OpenAI ChatGPT’s underlying sandbox environment (“/home/sandbox/.openai_internal/”) via prompts, granting the ability to upload and execute Python scripts, move files, and even download the LLM’s playbook.

That said, it’s worth noting that OpenAI considers such interactions intentional or expected behavior, given that the code execution takes place within the confines of the sandbox and is unlikely to spill out.

“For anyone eager to explore OpenAI’s ChatGPT sandbox, it’s crucial to understand that most activities within this containerized environment are intended features rather than security gaps,” security researcher Marco Figueroa said.

“Extracting knowledge, uploading files, running bash commands or executing Python code within the sandbox are all fair game, as long as they don’t cross the invisible lines of the container.”
