
LLMs: Is AI Superalignment Better Than Superintelligence?


Superintelligence

By David Stephen

It is unclear which might be more difficult to achieve: a superintelligent AI, or superalignment for that superintelligence. Nevertheless, superalignment is a far better objective than superintelligence.

What is the superintelligence problem for AI? Put differently: what is the right question to ask if a team is seeking to crack superintelligence in machines? The smartest machines on earth, for now, are reasoning AI models. They appear clever in their outputs and are able to use data [or, say, memory] better than anything else.

So, data is available to machines, and reasoning models can relay across it, albeit slowly, toward useful outputs. Simply put, the reasoning is correlated with relay across data areas. To achieve superintelligence, then, relay could be an important [machine] marker.

Superintelligence or AI Superalignment

The basis for advanced intelligence is human. The source of human intelligence is the brain. Two distinct elements predicate how human intelligence works: storage and transport. If someone were to figure something out, they would use memory, and there would be a transport quality through memory areas. Most of what gets done with human intelligence [and its outstanding variants like innovation, creativity, quick wit and so forth] is, conceptually, a result of relays in the human brain.

So, storage is done in ways that allow relays to pervade the necessary locations [that make intelligence proximate]. People often argue that a child can learn from little data while a machine model is trained on far more. A likely weakness is that there is still a problem with how digital data is stored, limiting how present-day [advanced] AI architectures access it.

How is human memory stored? What are the relays across memory areas that result in intelligence? Superintelligence will be predicated on storage and relay theorems drawn from biology. In the brain, electrical and chemical configurators [or assemblers, or formations] can be theorized to be responsible for the storage and relay of information, resulting [in advances for] intelligence.

In clusters of neurons, electrical and chemical configurators mostly have thick sets, collecting whatever is common among two or more thin sets [ridding those thin sets]. There are fewer lone thin sets, and they are located away from obstructing access to many parts of thick sets. Existing thick sets are what make learning from fewer examples easier for humans, as well as more accurate [out-of-distribution] interpretations. When electrical and chemical configurators interact, they have states at the moments of interaction; these states are their attributes, which are sometimes the relay qualities that determine how they interact [to output intelligence].

Advancing storage and relay for AI also means energy efficiency, seeing how energy efficient a human brain is in comparison to a data center, so to speak. Some aspects of storage can be explored with Steiner chains, and relay with morphisms, among other algorithms.

Superalignment

If a company develops superintelligence without superalignment, the misuses could be risky for human society, outweighing the good. Even at present, when AI misuses make news, they foreshadow what the future may hold without an encompassing alignment architecture.

If biology is any guide, the only way superalignment can be thorough is through consequences for AI models. So, there could be non-concept features in some architecture, where certain [or rigid, same-number, or deductive] vectors stay constant in a way that hamstrings the outputs of a model. They could ‘bind’ to the key vector or query vector, such that the model would know, reducing its efficiency and speed. This consequence could become a way to ensure that whenever the model is misused, it gets penalized.
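The binding mechanism described above is speculative, not an established method. As a rough illustration only, here is a minimal NumPy sketch of scaled dot-product attention in which a hypothetical constant “penalty” vector is added to the queries, perturbing the attention scores; the function names, the `penalty` parameter, and the mechanism itself are all assumptions made for this sketch:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_penalty(Q, K, V, penalty=None, strength=0.0):
    """Scaled dot-product attention where a constant vector may
    'bind' to the queries, hamstringing the model's outputs.
    This is an illustrative sketch of the article's idea, not a
    real alignment technique."""
    d = Q.shape[-1]
    if penalty is not None:
        # Hypothetical consequence mechanism: a fixed vector is
        # added to every query, perturbing the score matrix.
        Q = Q + strength * penalty
    scores = Q @ K.T / np.sqrt(d)
    return softmax(scores) @ V
```

With `strength=0.0` the function reduces to plain attention; a nonzero strength shifts every attention score, which is one concrete way a constant vector could degrade outputs model-wide.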

This affective penalty is what could become superalignment for superintelligence, or for lesser systems [LLMs]. This is informed by the biology of how human society works. For example, the threat[s] of torture, shame, excommunication, pain and so forth are mostly effective because they are affective. It feels bad biologically, so it is often avoided, making warning and caution useful, since consequences are often hard for the self. Affect does not care about level of intelligence. The same will apply to AI, regardless of benchmarks.

There are several other possibilities for superalignment, but what would be effective is not post-error safety but within-affect caution: learning what the consequences of an action are [subjectively], leading to misuse avoidance in both known and new scenarios. No AI regulation would be properly effective against superintelligence. Everything comparable, like pharmaceutical regulations, airline regulations and so forth, operates in physical spaces. AI is digital, and digital is more pervasive, more extensively scaled and far more evasive. Several unlawful things get done digitally without justice because consequences are improbable. What is proposed here is AI penalization across the digital realm, in contrast to the cost-function penalization used in the regularization of neural networks.
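For contrast, the cost-function penalization the text distinguishes its proposal from is standard in neural-network training: a regularization term is added to the loss so that large weights are penalized during optimization. A minimal sketch of an L2-regularized loss, with illustrative names:

```python
import numpy as np

def l2_regularized_loss(y_true, y_pred, weights, lam=0.01):
    """Mean squared error plus an L2 weight penalty.

    This is the training-time, cost-function penalization used in
    standard regularization: the penalty acts on the model's
    parameters during optimization, not on its deployed behavior.
    """
    mse = np.mean((y_true - y_pred) ** 2)
    penalty = lam * np.sum(weights ** 2)
    return mse + penalty
```

The design contrast is that this penalty is applied inside training to shape parameters, whereas the article's proposal is a consequence applied to a deployed model's behavior.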

Superintelligence or Superalignment?

AI would be useful for the world only if AI is safer. Discussions of AI applications without biological-alignment [or superalignment] guardrails imply vulnerabilities whose cost to society may, at some point, spike to an unbearable level.

A company may face one of the two [SA/SI], or a little of both, but what would be more profitable, sustainable and useful is superalignment, deployed wherever AI is found or used.

There is a recent [July 9, 2025] story in WIRED, McDonald’s AI Hiring Bot Exposed Millions of Applicants’ Data to Hackers Who Tried the Password ‘123456’, stating that “Basic security flaws left the personal info of tens of millions of McDonald’s job-seekers vulnerable on the ‘McHire’ site built by AI software firm Paradox.ai.”

David Stephen currently does research in conceptual brain science with a focus on how electrical and chemical signals mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.
