ChatGPT was launched with little fanfare, in a humble announcement on the OpenAI blog exactly one year ago, on November 30, 2022. Within a short time it caused an “earthquake”: one million users in five days, and 100 million in two months. At the time of its release, few people knew what OpenAI did, let alone what a GPT (Generative Pre-trained Transformer) was, a concept still unknown to most of the world.
In an instant, the entire planet had the privilege of using a cutting-edge artificial intelligence tool with which anyone could open a dialogue about anything that came to mind and get answers that often could not be told apart from a human’s. It is a multi-tool: it can ace law exams; it can write code, fiction, and cooking recipes; it can understand what is in an image and draw logical conclusions. Calling it merely a mass-market chatbot may be unfair. If it immediately sparked controversy, it is because, despite its many glaring errors, it gave the impression that a true thinking machine might be much closer than we had hitherto thought.
Dream or nightmare?
Within one year, ChatGPT has managed to stir everyone into action, above all governments, which are searching worldwide for realistic ways to manage the great technological wave that has risen before our eyes. From the United Nations to Congress and the White House, and from Brussels to Beijing, numerous committees and subcommittees are grappling with developments in so-called “generative artificial intelligence”, the technology that powers ChatGPT, and debating announcements, resolutions, and bills. It is as if ChatGPT has pushed political leaders into something we do not yet know is a dream or a nightmare.
In practice, the first serious reactions to ChatGPT came from education. The prospect of large language models (LLMs) being used to write school and university assignments was hardly reassuring to teachers. The most optimistic, however, believe that the essential skill of the ChatGPT era will be learning to ask better questions and to evaluate the answers we receive more critically. Then came the artists’ reactions: who gives OpenAI the right to use our work to train models that can even mimic our personal style? Once again, the optimists replied that in the world of ChatGPT, those who learn to work best with the machine will take the lead. Gradually, as people tried out the new tool, everyone had something emphatic to say. Few remained indifferent to the magic of the interactive model.
The “Battle” of the Giants and the Modern Oppenheimer
Shortly after ChatGPT’s launch, Microsoft was quick to invest $10 billion in OpenAI and integrate the tool into its Bing search engine. Suddenly, the Transformer architecture, invented by Google’s own engineers, had become the driving force behind rival Microsoft. Google, as expected, did not stand still: it deployed its own LLM-powered chatbot, Bard, and it also owns DeepMind, a company that shares OpenAI’s ultimate goal of achieving artificial general intelligence, or superintelligence.
ChatGPT now stands as a major milestone and a defining moment in the history of artificial intelligence. Given the rapid development of models from November 2022 to the present, many experts and non-experts were quick to warn of the existential danger posed by the development of artificial intelligence. Among them is Geoffrey Hinton, the so-called “godfather” of machine learning, who, like a modern-day Oppenheimer, abandoned his life’s work to sound the alarm. In the spring of 2023, when the most advanced version of the application, GPT-4, was released, a crowd of AI researchers and others (33,709 signatures at the time of writing) signed a petition calling for a six-month moratorium on LLM research. Hinton resigned from Google, and in May a group of figures, from Bill Gates and OpenAI CEO Sam Altman to DeepMind’s Demis Hassabis, joined Hinton in signing a 23-word statement that reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
A year feels like a decade
Without the advent of ChatGPT, none of this would have happened. It is fair to speak of an earthquake that triggered an avalanche. In density of events, this one year has felt like a decade. Of course, not everyone is worried, and not for the same reasons. Yann LeCun, for example, who shared the Turing Award with Hinton and Yoshua Bengio and heads AI research at Meta, argues that these models should be open source, like Meta’s Llama, so that there is no risk of an abuse of power. Others insist that LLM research must not stop, lest momentum be lost and another AI winter set in. Still others argue that fears of annihilation by artificial intelligence are unjustified, because LLMs will never acquire superintelligent capabilities.
However, ChatGPT has opened useful conversations about the future of work, the quality of information, the ethics of developers, cybersecurity, the concentration of power, and human autonomy. If an app that can provide good answers is creating such a buzz, what happens when one can also formulate original questions? For many in the AI community, that day seems likely to come within a decade, which is perhaps why ChatGPT is rarely treated with equanimity. In just one year, ChatGPT has drawn the attention of a global audience to developments in artificial intelligence. This was bound to happen sooner or later, because we are talking about a technology of unimaginable power.
The only thing certain is that ChatGPT is, for now, a work in progress. Cambridge Dictionary recently announced that its Word of the Year for 2023 is “hallucinate”, a direct reference to the plausible but false answers ChatGPT often gives. No one knows whether the next model, GPT-5, will be a step closer to superintelligence or a partially improved version of the current one. We will have to wait, though perhaps not for long. The speed of developments is astonishing, but that should not stop us from making use of, and enjoying, what LLMs can already offer. And in the midst of it all, a new profession has emerged: the prompt engineer, the expert who knows how to write good prompts for LLMs.
It is true that there are many ways to view ChatGPT; it is too new for opinion to have settled. One person may see it as an imperfect but very useful digital assistant, another as a fearsome wizard, and yet another as something demonic. Author Tom Goodwin says ChatGPT reminds him of horoscopes: “If you want to think it’s amazing and precise, you’ll see it that way and you’ll love it. The world will seem magical and logical and intelligible and familiar. If instead you think it’s nonsense, you’ll see it that way too. It will seem random and ridiculous, and you’ll think those who believe in it are lost and stupid. Everything you see of it will seem plausible but generally useless. And if your career depends on believing in its power, you’ll talk about it all day long, find your tribe there, and spread the wonder whenever you can, even if deep down you think it’s all just selling a dream. It is very difficult to evaluate it objectively, because we are all biased by our pre-existing beliefs and by the way we want to see the world. Many people want to see magic. Many want to believe that humans are magic and that computers will never come close. Little data will change that.”