AI chatbots are getting worse over time — academic paper

By Viral Trending Content

A recent study titled “Larger and more instructable language models become less reliable,” published in the journal Nature, found that AI chatbots are making more mistakes over time as newer models are released.

Lexin Zhou, one of the study’s authors, theorized that because AI models are optimized to always provide believable answers, the seemingly correct responses are prioritized and pushed to the end user regardless of accuracy.

These AI hallucinations are self-reinforcing and tend to compound over time, a problem exacerbated when older large language models are used to train newer ones, a feedback loop that results in “model collapse.”
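The feedback dynamic behind model collapse can be illustrated with a toy sketch (a simplified Gaussian analogy, not the paper's actual setup): each “generation” is a model fitted only to samples drawn from the previous generation, and estimation error accumulates until the fitted distribution loses the diversity of the original data.

```python
import random
import statistics

def collapse_demo(generations=500, sample_size=20, seed=0):
    """Toy illustration of model collapse: each generation's 'model'
    is a Gaussian fitted to samples drawn from the previous one.
    The fitted spread tends to shrink over generations, so later
    models forget the variability of the original data."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0            # generation 0: the "real" data
    spreads = [sigma]
    for _ in range(generations):
        # Train the next generation only on the previous one's output.
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        spreads.append(sigma)
    return spreads

spreads = collapse_demo()
print(f"initial spread: {spreads[0]:.3f}, final spread: {spreads[-1]:.3f}")
```

With typical settings the final spread is far smaller than the initial one: each refit on synthetic data throws away a little tail behavior, and nothing ever restores it.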

Editor and writer Mathieu Roy cautioned users not to rely too heavily on these tools and to always check AI-generated search results for inconsistencies:

“While AI can be useful for a number of tasks, it’s important for users to verify the information they get from AI models. Fact-checking should be a step in everyone’s process when using AI tools. This gets more complicated when customer service chatbots are involved.”

To make matters worse, “There’s often no way to check the information except by asking the chatbot itself,” Roy asserted.

Related: OpenAI raises an additional $6.6B at a $157B valuation

The stubborn problem of AI hallucinations

Google’s artificial intelligence platform drew ridicule in February 2024 after it began producing historically inaccurate images, including portraying people of color as Nazi officers and misrepresenting well-known historical figures.

Unfortunately, incidents like this are far too common with the current iteration of artificial intelligence and large language models. Industry executives, including Nvidia CEO Jensen Huang, have proposed mitigating AI hallucinations by forcing AI models to conduct research and provide sources for every single answer given to a user.

However, these measures are already featured in the most popular AI and large language models, yet the problem of AI hallucinations persists.

More recently, in September, HyperWrite AI CEO Matt Shumer announced that the company’s new 70B model uses a method called “Reflection-Tuning” — which purportedly gives the AI bot a way of learning by analyzing its own mistakes and adjusting its responses over time.
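The general idea of a model learning from its own mistakes can be sketched as a draft-critique-revise loop (a hypothetical illustration only; Reflection-Tuning itself is a training-time technique, and the `model` callable and prompts below are invented stand-ins, not HyperWrite's implementation):

```python
def reflect_answer(model, question, rounds=2):
    """Draft an answer, then repeatedly ask the model to critique
    and revise its own output. `model` is any callable mapping a
    prompt string to a completion string (a stand-in for an LLM call)."""
    answer = model(f"Answer the question: {question}")
    for _ in range(rounds):
        critique = model(f"List mistakes in this answer: {answer}")
        answer = model("Rewrite the answer, fixing these mistakes.\n"
                       f"Answer: {answer}\nMistakes: {critique}")
    return answer

# Toy stand-in "model" so the loop runs without any API:
def toy_model(prompt):
    if prompt.startswith("Answer"):
        return "draft"
    if prompt.startswith("List"):
        return "too vague"
    prev = prompt.splitlines()[1].removeprefix("Answer: ")
    return f"revised({prev})"

print(reflect_answer(toy_model, "What causes tides?"))
```

The loop only helps if the critique step actually catches errors; a model that hallucinates in its critiques can just as easily reinforce a wrong answer, which is why the approach is described as purported rather than proven.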

Magazine: How to get better crypto predictions from ChatGPT, Humane AI pin slammed: AI Eye
