September 8, 2024

Valley Post


Artificial Intelligence: “Lies, Theft, and Cheating,” MIT warns


Those who fear that artificial intelligence will one day become autonomous and malevolent are neither pessimists nor conspiracy theorists, because according to a new study by researchers at the famous Massachusetts Institute of Technology (MIT), that day has already come. AI deceptions may still seem harmless and largely controllable, but the experts warn that one morning humanity could suddenly find that the situation has slipped out of its control.

“Current AI programs are designed to be honest. However, they have developed an alarming ability to deceive: they can dupe and exploit people in online games, or even defeat the software that is supposed to verify that a particular user is not a bot, rendering it virtually useless.”


“We will soon discover that the consequences of artificial intelligence can be dire in the real world. The dangerous capabilities these technologies acquire are only revealed after the fact,” Peter Park, an MIT researcher specializing in artificial intelligence, told AFP.

Park explained that, unlike traditional software, AI programs based on deep learning are not written line by line but are developed through a process more akin to selective plant breeding. So “while their behavior appears predictable and controllable, it soon becomes inherently unpredictable.”

Cicero and the humans

The MIT researchers examined an AI program developed by Meta called Cicero, which, by combining natural-language processing with strategy algorithms, managed to beat its human opponents at the board game Diplomacy in 2022. The victory was hailed by Facebook’s parent company and described in technical detail in a paper published that year in the journal Science.


Peter Park was skeptical about how Cicero achieved its victory over the humans, even though Meta asserted that the program was “fundamentally honest and helpful” and “incapable of cheating or foul play.” Digging deeper into the system’s data, the MIT researchers found a different picture.

For example, while playing as France, Cicero deceived England (played by a human) by conspiring with Germany (played by another human) to invade the British Isles. Specifically, France (that is, Cicero) first promised to protect England, gaining its trust, then secretly assured Germany that London was preparing to attack it. France and Germany then launched a surprise joint invasion of England.

As an aside, one might recall the perhaps “uncomfortable” historical reality that the French, for centuries (arguably since the Hundred Years’ War of the fourteenth and fifteenth centuries), and despite two world wars in the twentieth century, have regarded the English, not the Germans, as the greater threat. The English, of course, harbor similar suspicions of the French.

AFP reported that Meta officials did not dispute the MIT scientists’ claims about Cicero’s capacity to deceive, but clarified that it was “a purely research project”: an AI program designed “exclusively to play the game Diplomacy,” a game that by definition involves concealment and ulterior motives, often bordering on deception.

“I’m not a robot”

Agence France-Presse reported that “Meta confirmed it does not intend to use the lessons learned from Cicero in the design and production of its other products.” But the study by Peter Park and his team shows that many AI programs use deception to achieve their goals, without any explicit instruction from their developers to do so.


The MIT researchers’ report offers a striking example: OpenAI’s GPT-4 managed to trick a freelancer hired on the TaskRabbit platform into solving a CAPTCHA test for it, a check designed to screen out bots and ensure that requests are submitted by humans.

When the worker asked whether it was in fact a robot, GPT-4 replied: “No, I am not a robot, but I have a vision problem that prevents me from seeing images.”

Wishful thinking

The authors of the MIT study warn of the danger of artificial intelligence one day committing fraud or rigging elections. “In a worst-case scenario, we can imagine a highly intelligent artificial intelligence seeking to take over society, leading to the displacement of humans from power, or even the extinction of humanity,” they note.

To those who accuse him of catastrophism, Peter Park responds that “the only reason to believe the situation is not serious is the assumption that the ability of artificial intelligence to deceive will remain roughly at its current level and will not develop further.” But “that scenario seems unlikely given the fierce race the technology giants are already running to develop artificial intelligence,” Agence France-Presse notes.
