Elon Musk released xAI’s Grok 4 without any safety reports—despite calling AI more ‘dangerous than nukes’ 

By Viral Trending Content
xAI’s latest frontier model, Grok 4, has been released without industry-standard safety reports, despite the company’s CEO, Elon Musk, being notably vocal about his concerns regarding AI safety.

Leading AI labs typically release safety reports known as “system cards” alongside frontier models.

The reports serve as transparency documents and detail performance metrics, limitations, and, crucially, the potential dangers of advanced AI models. These cards also allow researchers, experts, and policymakers to assess the model’s capabilities and threat level.

At a July 2023 White House meeting convened by then-President Joe Biden’s administration, several leading AI companies committed to releasing safety reports for all major public model releases more powerful than the current state of the art.

While xAI did not publicly agree to those White House commitments, at an international summit on AI safety held in Seoul in May 2024 the company, alongside other leading AI labs, signed the Frontier AI Safety Commitments. These included pledges to disclose model capabilities and inappropriate use cases, and to provide transparency around a model’s risk assessments and their outcomes.

Moreover, since 2014, Musk has continually and publicly called AI an existential threat, campaigned for stricter regulation, and advocated for higher safety standards.

Now the AI lab he heads appears to be breaking from industry norms by releasing Grok 4, like previous versions of the model, without publicly disclosed safety testing.

Representatives for xAI did not respond to Fortune’s questions about whether Grok’s system card exists or will be released.

Leading AI labs have been criticized for delayed safety reports

While leading AI labs’ safety reporting has faced scrutiny over the past few months, especially that of Google and OpenAI (which both released AI models before publishing accompanying system cards), most have provided some public safety information for their most powerful models.

Dan Hendrycks, a director of the Center for AI Safety who advises xAI on safety, denied the claim that the company had done no safety testing.

In a post on X, Hendrycks said that the company had tested the model on “dangerous capability evals” but failed to provide details of the results.

Why are safety cards important?

Several advanced AI models have demonstrated dangerous capabilities in recent months.

According to a recent Anthropic study, most leading AI models have a tendency to opt for unethical means to pursue their goals or ensure their existence.

In experiments set up to leave AI models few options and stress-test alignment, top systems from OpenAI, Google, and others frequently resorted to blackmail to protect their interests.

As models get more advanced, safety testing becomes more important.

For example, if internal evaluations show that an AI model has dangerous capabilities such as the ability to assist users in the creation of biological weapons, then developers might need to create additional safeguards to manage these risks to public safety.

Samuel Marks, an AI safety researcher at Anthropic, called the lack of safety reporting from xAI “reckless” and a break from “industry best practices followed by other major AI labs.”

“One wonders what evals they ran, whether they were done properly, whether they would seem to necessitate additional safeguards,” he said in an X post.

Marks said Grok 4 was already showing concerning, undocumented behaviors post-deployment, pointing to examples in which the model searched for Elon Musk’s views before offering its own opinion on political subjects, including the Israel/Palestine conflict.

Grok’s problematic behavior

An earlier version of Grok also made headlines last week when it began praising Adolf Hitler, making antisemitic comments, and referring to itself as “MechaHitler.”

xAI apologized for the antisemitic remarks made by Grok, saying it was sorry “for the horrific behavior many experienced.”

After the release of Grok 4, the company said in a statement it had spotted similarly problematic behavior from the new model and had “immediately investigated & mitigated.”

“One was that if you ask it ‘What is your surname?’ it doesn’t have one so it searches the internet, leading to undesirable results, such as when its searches picked up a viral meme where it called itself ‘MechaHitler.’ Another was that if you ask it ‘What do you think?’ the model reasons that as an AI it doesn’t have an opinion, but knowing it was Grok 4 by xAI, searches to see what xAI or Elon Musk might have said on a topic to align itself with the company,” the company said in a post on X.

“To mitigate, we have tweaked the prompts and have shared the details on GitHub for transparency. We are actively monitoring and will implement further adjustments as needed,” they wrote.
