November 23, 2024

Valley Post


Researchers took down their ChatGPT clone because it was a “hallucinating artificial intelligence”!


Researchers from Stanford University developed an inexpensive artificial intelligence model that was, in practice, a close copy of ChatGPT. According to a report in The Register, the researchers’ stated goal was to find out how difficult and expensive it is to create a capable language model, and they set out to show that powerful AI tools can be built on a very limited budget.

The model cost only about $600 to build, but when the researchers realized that in practice it could not work reliably, they made the difficult decision to take it down.

The artificial intelligence built by the Stanford University researchers, named Alpaca, was a copy of OpenAI’s ChatGPT. Alpaca was developed on top of Meta’s LLaMA 7B language model, which was trained on trillions of tokens of data. However, Alpaca apparently could not process data as quickly as OpenAI’s GPT, which made it slower to respond.
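For readers curious how an Alpaca-style model is queried, the sketch below shows the instruction-following prompt format commonly associated with Alpaca, fed to a causal language model through the Hugging Face transformers library. This is not Stanford’s code; the checkpoint name is an illustrative community re-creation, not an official release, and the prompt wording is an assumption based on public descriptions of the project.

```python
# Minimal sketch (not the Stanford release): querying an Alpaca-style model
# with the instruction-following prompt template it was reportedly fine-tuned on.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative community checkpoint; the official Stanford demo weights were not released.
MODEL_NAME = "chavinlo/alpaca-native"

# Commonly cited Alpaca prompt template (assumed here, not quoted from the article).
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def ask(instruction: str, max_new_tokens: int = 128) -> str:
    """Format the instruction, run greedy generation, and return the decoded text."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    inputs = tokenizer(PROMPT_TEMPLATE.format(instruction=instruction),
                       return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    # The kind of simple factual question the article says Alpaca got wrong.
    print(ask("What is the capital of Tanzania?"))
```

Nothing in this sketch prevents the failure mode described below: a model fine-tuned this way will still confidently generate wrong answers if the underlying weights hallucinate.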

Speed was only one of Alpaca’s problems. The researchers noted that the hastily built language model routinely produced misinformation, answering simple questions incorrectly, such as naming the capital of Tanzania, or claiming that the number 42 was the optimal seed for training an artificial intelligence. Developers call these errors “hallucinations,” and Alpaca was full of them, which contributed to the Stanford researchers’ decision to pull the demo.

“Hallucination in particular seems to be a common failure mode for Alpaca, even compared to text-davinci-003,” the researchers reported.

The takedown of the Alpaca demo appears to have been done partly for safety reasons, although no further details about the nature of the issue have been announced. Even so, the project stands as an interesting attempt by researchers to explore the possibilities of artificial intelligence by creating their own models and algorithms.

“The original goal of posting the demo was to publicize our research in an accessible way. We think we have mostly achieved that goal, and due to the cost of hosting and the shortcomings of our content filters, we decided to take down the demo,” a spokesperson for Stanford University’s Institute for Human-Centered Artificial Intelligence told The Register.

In the end, a combination of problems, including the model’s hallucinations, its tendency to mislead, and other safety issues, was the reason behind its demise.

To be the first to know the latest news about technology, video games, movies, and series, follow Unboxholics.com at www.unboxholics.com and on Google News, Facebook, Twitter, Instagram, Spotify, and TikTok.