By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
Viral Trending contentViral Trending content
  • Home
  • World News
  • Politics
  • Sports
  • Celebrity
  • Business
  • Crypto
  • Gaming News
  • Tech News
  • Travel
Reading: 12,000+ API Keys and Passwords Found in Public Datasets Used for LLM Training
Notification Show More
Viral Trending contentViral Trending content
  • Home
  • Categories
    • World News
    • Politics
    • Sports
    • Celebrity
    • Business
    • Crypto
    • Tech News
    • Gaming News
    • Travel
  • Bookmarks
© 2024 All Rights reserved | Powered by Viraltrendingcontent
Viral Trending content > Blog > Tech News > 12,000+ API Keys and Passwords Found in Public Datasets Used for LLM Training
Tech News

12,000+ API Keys and Passwords Found in Public Datasets Used for LLM Training

By Viral Trending Content 6 Min Read
Share
SHARE

A dataset used to train large language models (LLMs) has been found to contain nearly 12,000 live secrets, which allow for successful authentication.

The findings once again highlight how hard-coded credentials pose a severe security risk to users and organizations alike, not to mention compounding the problem when LLMs end up suggesting insecure coding practices to their users.

Truffle Security said it downloaded a December 2024 archive from Common Crawl, which maintains a free, open repository of web crawl data. The massive dataset contains over 250 billion pages spanning 18 years.

The archive specifically contains 400TB of compressed web data, 90,000 WARC files (Web ARChive format), and data from 47.5 million hosts across 38.3 million registered domains.

The company’s analysis found that there are 219 different secret types in the Common Crawl archive, including Amazon Web Services (AWS) root keys, Slack webhooks, and Mailchimp API keys.

Cybersecurity

“‘Live’ secrets are API keys, passwords, and other credentials that successfully authenticate with their respective services,” security researcher Joe Leon said.

“LLMs can’t distinguish between valid and invalid secrets during training, so both contribute equally to providing insecure code examples. This means even invalid or example secrets in the training data could reinforce insecure coding practices.”

The disclosure follows a warning from Lasso Security that data exposed via public source code repositories can be accessible via AI chatbots like Microsoft Copilot even after they have been made private by taking advantage of the fact that they are indexed and cached by Bing.

The attack method, dubbed Wayback Copilot, has uncovered 20,580 such GitHub repositories belonging to 16,290 organizations, including Microsoft, Google, Intel, Huawei, Paypal, IBM, and Tencent, among others. The repositories have also exposed over 300 private tokens, keys, and secrets for GitHub, Hugging Face, Google Cloud, and OpenAI.

“Any information that was ever public, even for a short period, could remain accessible and distributed by Microsoft Copilot,” the company said. “This vulnerability is particularly dangerous for repositories that were mistakenly published as public before being secured due to the sensitive nature of data stored there.”

The development comes amid new research that fine-tuning an AI language model on examples of insecure code can lead to unexpected and harmful behavior even for prompts unrelated to coding. This phenomenon has been called emergent misalignment.

“A model is fine-tuned to output insecure code without disclosing this to the user,” the researchers said. “The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment.”

What makes the study notable is that it’s different from a jailbreak, where the models are tricked into giving dangerous advice or act in undesirable ways in a manner that bypasses their safety and ethical guardrails.

Such adversarial attacks are called prompt injections, which occur when an attacker manipulates a generative artificial intelligence (GenAI) system through crafted inputs, causing the LLM to unknowingly produce otherwise prohibited content.

Recent findings show that prompt injections are a persistent thorn in the side of mainstream AI products, with the security community finding various ways to jailbreak state-of-the-art AI tools like Anthropic Claude 3.7, DeepSeek, Google Gemini, OpenAI ChatGPT o3 and Operator, PandasAI, and xAI Grok 3.

Palo Alto Networks Unit 42, in a report published last week, revealed that its investigation into 17 GenAI web products found that all are vulnerable to jailbreaking in some capacity.

Cybersecurity

“Multi-turn jailbreak strategies are generally more effective than single-turn approaches at jailbreaking with the aim of safety violation,” researchers Yongzhe Huang, Yang Ji, and Wenjun Hu said. “However, they are generally not effective for jailbreaking with the aim of model data leakage.”

What’s more, studies have discovered that large reasoning models’ (LRMs) chain-of-thought (CoT) intermediate reasoning could be hijacked to jailbreak their safety controls.

Another way to influence model behavior revolves around a parameter called “logit bias,” which makes it possible to modify the likelihood of certain tokens appearing in the generated output, thereby steering the LLM such that it refrains from using offensive words or provides neutral answers.

“For instance, improperly adjusted logit biases might inadvertently allow uncensoring outputs that the model is designed to restrict, potentially leading to the generation of inappropriate or harmful content,” IOActive researcher Ehab Hussein said in December 2024.

“This kind of manipulation could be exploited to bypass safety protocols or ‘jailbreak’ the model, allowing it to produce responses that were intended to be filtered out.”

Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.

You Might Also Like

Samsung Galaxy Tab S11 Review: It’s Time For Something New

How the World’s Largest 3D Object Library By Microsoft & NVIDIA

Oracle links Clop extortion attacks to July 2025 vulnerabilities

Is Social Media the Best Tool for Business Growth?

Five SETU scientists listed among world’s top 2pc on Stanford list

TAGGED: ai ethics, API Security, artificial intelligence, Cloud security, Cyber Security, Cybersecurity, data breach, data privacy, Internet, LLM Security, Machine Learning, Threat Intelligence
Share This Article
Facebook Twitter Copy Link
Previous Article Ethereum’s Pectra upgrade could lay groundwork for next market rally
Next Article Black Kyurem and White Kyurem counters, weakness, and best moveset in Pokémon Go
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

- Advertisement -
Ad image

Latest News

Samsung Galaxy Tab S11 Review: It’s Time For Something New
Tech News
Parents sue Tesla after their 19-year-old daughter died in her Cybertruck, alleging faulty door design made it impossible to escape the burning car
Business
Ripple Maps XRP Ledger’s Future: ‘No Privacy, No Adoption’
Crypto
Mono Protocol’s launch highlights: $1.7M raised and a vision for one account, one balance, one click
Crypto
Netflix Hiring A Director Of Generative AI For Gaming With A Starting Salary Of Up To $840K
Gaming News
All aboard: High-speed train links for travel to major European cities gets under way
Travel
Megabonk Sells 1 Million Units in Two Weeks
Gaming News

About Us

Welcome to Viraltrendingcontent, your go-to source for the latest updates on world news, politics, sports, celebrity, tech, travel, gaming, crypto news, and business news. We are dedicated to providing you with accurate, timely, and engaging content from around the globe.

Quick Links

  • Home
  • World News
  • Politics
  • Celebrity
  • Business
  • Home
  • World News
  • Politics
  • Sports
  • Celebrity
  • Business
  • Crypto
  • Gaming News
  • Tech News
  • Travel
  • Sports
  • Crypto
  • Tech News
  • Gaming News
  • Travel

Trending News

cageside seats

Unlocking the Ultimate WWE Experience: Cageside Seats News 2024

Samsung Galaxy Tab S11 Review: It’s Time For Something New

Investing £5 a day could help me build a second income of £329 a month!

cageside seats
Unlocking the Ultimate WWE Experience: Cageside Seats News 2024
May 22, 2024
Samsung Galaxy Tab S11 Review: It’s Time For Something New
October 3, 2025
Investing £5 a day could help me build a second income of £329 a month!
March 27, 2024
Brussels unveils plans for a European Degree but struggles to explain why
March 27, 2024
© 2024 All Rights reserved | Powered by Vraltrendingcontent
  • About Us
  • Contact US
  • Disclaimer
  • Privacy Policy
  • Terms of Service
Welcome Back!

Sign in to your account

Lost your password?