A hacker stole OpenAI secrets, raising fears that China could, too


SAN FRANCISCO — Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s artificial intelligence technologies.

The hacker lifted details from discussions in an online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its AI.

OpenAI executives revealed the incident to employees during an all-hands meeting at the company’s San Francisco offices in April 2023 and informed its board of directors, according to the two people, who discussed sensitive information about the company on the condition of anonymity.

But the executives decided not to share the news publicly because no information about customers or partners had been stolen, the two people said. The executives did not consider the incident a threat to national security because they believed the hacker was a private individual with no known ties to a foreign government. The company did not inform the FBI or anyone else in law enforcement.

For some OpenAI employees, the news raised fears that foreign adversaries such as China could steal AI technology that — while now mostly a work and research tool — could eventually endanger U.S. national security. It also led to questions about how seriously OpenAI was treating security, and exposed fractures inside the company about the risks of AI.

After the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on ensuring that future AI technologies do not cause serious harm, sent a memo to OpenAI’s board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.

Aschenbrenner said OpenAI had fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated. He alluded to the breach on a recent podcast, but details of the incident have not been previously reported. He said OpenAI’s security wasn’t strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.

“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation,” said an OpenAI spokesperson, Liz Bourgeois. Referring to the company’s efforts to build artificial general intelligence, a machine that can do anything the human brain can do, she added, “While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company.”

Fears that a hack of a U.S. technology company might have links to China are not unreasonable. Last month, Brad Smith, Microsoft’s president, testified on Capitol Hill about how Chinese hackers used the tech giant’s systems to launch a wide-ranging attack on federal government networks.

However, under federal and California law, OpenAI cannot prevent people from working at the company because of their nationality, and policy researchers have said that barring foreign talent from U.S. projects could significantly impede the progress of AI in the United States.

“We need the best and brightest minds working on this technology,” Matt Knight, OpenAI’s head of security, told The New York Times in an interview. “It comes with some risks, and we need to figure those out.”

OpenAI is not the only company building increasingly powerful systems using rapidly improving AI technology. Some of them — most notably, Meta, the owner of Facebook and Instagram — are freely sharing their designs with the rest of the world as open source software. They believe that the dangers posed by today’s AI technologies are slim and that sharing code allows engineers and researchers across the industry to identify and fix problems.

Today’s AI systems can help spread disinformation online, including text, still images and, increasingly, videos. They are also beginning to take away some jobs.

Companies like OpenAI and its competitors Anthropic and Google add guardrails to their AI applications before offering them to individuals and businesses, hoping to prevent people from using the apps to spread disinformation or cause other problems.

But there is not much evidence that today’s AI technologies are a significant national security risk. Studies by OpenAI, Anthropic and others over the past year showed that AI was not significantly more dangerous than search engines. Daniela Amodei, an Anthropic co-founder and the company’s president, said its latest AI technology would not be a major risk if its designs were stolen or freely shared with others.

“If it were owned by someone else, could that be hugely harmful to a lot of society? Our answer is, ‘No, probably not,’” she told the Times last month. “Could it accelerate something for a bad actor down the road? Maybe. It is really speculative.”

Still, researchers and tech executives have long worried that AI could one day fuel the creation of new bioweapons or help break into government computer systems. Some even believe it could destroy humanity.

A number of companies, including OpenAI and Anthropic, are already locking down their technical operations. OpenAI recently created a Safety and Security Committee to explore how it should handle the risks posed by future technologies. The committee includes Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command. He has also been appointed to the OpenAI board of directors.

“We started investing in security years before ChatGPT,” Knight said. “We’re on a journey not only to understand the risks and stay ahead of them but also to deepen our resilience.”

Federal officials and state lawmakers are also pushing toward government regulations that would ban companies from releasing certain AI technologies and fine them millions if their technologies caused harm. But experts say these dangers are still years or even decades away.

Chinese companies are building systems of their own that are nearly as powerful as the leading U.S. systems. By some metrics, China has eclipsed the United States as the biggest producer of AI talent, generating almost half the world's top AI researchers.

“It is not crazy to think that China will soon be ahead of the U.S.,” said Clément Delangue, CEO of Hugging Face, a company that hosts many of the world’s open source AI projects.

Some researchers and national security leaders argue that the mathematical algorithms at the heart of current AI systems, while not dangerous today, could become dangerous and are calling for tighter controls on AI labs.

“Even if the worst-case scenarios are relatively low-probability, if they are high-impact, then it is our responsibility to take them seriously,” Susan Rice, former domestic policy adviser to President Joe Biden and former national security adviser for President Barack Obama, said during an event in Silicon Valley last month. “I do not think it is science fiction, as many like to claim.”

This article originally appeared in The New York Times.
