November 22, 2024

Valley Post


In the race for artificial intelligence, giants sacrifice security


In March, two Google employees responsible for evaluating the company’s artificial intelligence products tried to stop the launch of its AI chatbot, warning that it was providing inaccurate and potentially dangerous answers.

Ten months earlier, similar concerns had been raised within Microsoft by ethics experts and employees in other departments. Internal documents highlighted how the AI technology behind a chatbot could “flood” Facebook groups with misinformation, undermine critical thinking, and even erode the foundations of modern society.

Eventually, both tech giants launched their own chatbots. Microsoft went first, at a big event in February, unveiling an AI chatbot combined with its Bing search engine. Google followed about six weeks later with its own chatbot, Bard.

The New York Times notes that the aggressive moves by these otherwise risk-averse companies have been spurred by a race to control what could be the industry’s “next big thing”: artificial intelligence systems.

Competition accelerated into a frenzy in November when OpenAI, a San Francisco startup in which Microsoft is investing billions, launched ChatGPT, an AI chatbot that within months shattered adoption records, attracting 100 million users worldwide faster even than TikTok.

Blurring the moral line

The stunning success of ChatGPT has prompted Microsoft and Google to take greater risks with their own ethical guidelines, which had been put in place over the years to ensure that the technology does not cause social harm, according to 15 current and former employees and internal company documents cited by The New York Times.

The sense of “urgency” in developing AI applications was captured in an internal email sent last month by Sam Schillace, a senior technology executive at Microsoft. In the email, which was read by New York Times reporters, he wrote that it would be “a big mistake now to worry about things that can be fixed later.”

He wrote that when the technology industry suddenly shifts towards a new type of technology, the first company to introduce a product “will emerge as the winner in the long run simply because they got there first”. “Sometimes the difference is only weeks.”


About two weeks ago, the question of the potential dangers of artificial intelligence returned to the public arena when more than 1,000 researchers and technology leaders, including Elon Musk and Apple co-founder Steve Wozniak, called for a six-month “pause” in the development of powerful AI technology. In an open letter, they said the technology could carry “profound risks to society and humanity.”


Regulatory authorities say they will get involved. The European Union proposed a regulatory framework for AI and Italy temporarily banned ChatGPT. In the United States, President Biden on Tuesday became the latest leader to openly question the safety of artificial intelligence.

“Technology companies have a responsibility to make sure their products are safe before they are released,” he said from the White House. When asked if AI was dangerous, he replied: “We’ll see. It could be.”

Even AI developers don’t fully understand it.

Researchers say Microsoft and Google risk releasing technology that even the developers behind it don’t fully understand.

Google launched Bard after years of internal debate over whether the benefits of artificial intelligence outweighed the risks. In 2020 the company had announced Meena, a similar chat program, but that platform was deemed too risky to launch, according to three people with knowledge of the matter. These concerns were previously reported by the Wall Street Journal.

Google was aware

That same year, Google blocked two of its leading AI researchers and ethicists, Timnit Gebru and Margaret Mitchell, from publishing a paper warning that the large language models used in new AI systems could produce offensive language.

The researchers were pushed out after Dr. Gebru criticized the company’s record on inclusion. Mitchell was accused of violating the company’s code of conduct after she saved some work emails to a personal Google Drive account.

Concerns remained. In January 2022, Google sought to block another researcher, Dr. El Mahdi El Mhamdi, from publishing a critical paper, according to the New York Times.


Dr. El Mahdi, a university lecturer, used mathematical theorems to warn that the largest AI models are more vulnerable to cybersecurity attacks and carry unusual privacy risks, because they have likely had access to private data stored across various websites on the internet.


Although a presentation to company executives had warned of similar privacy breaches, Google asked Dr. El Mahdi to make major changes to his paper. He refused and instead released the paper through École Polytechnique. He ended his partnership with Google that year, complaining of “research censorship.” In his view, the risks of modern artificial intelligence far outweigh the benefits. “It is a premature deployment,” he commented.

After the launch of ChatGPT, Kent Walker, Google’s top legal officer, met with executives from the search and safety departments on the company’s powerful Advanced Technology Review Council. He told them that Google CEO Sundar Pichai was pushing hard to launch the tech giant’s AI.

Jen Gennai, director of Google’s Responsible Innovation team, attended that meeting. She recalls Walker asking attendees to speed up AI projects, although some executives made clear that they would maintain safety standards, according to Gennai.

In March, two members of Gennai’s team submitted a risk assessment of Bard, recommending that it not be released immediately, according to two people familiar with the process. Despite the safeguards, they believed the chatbot was not ready. Gennai revised the document, removing the recommendation and downplaying Bard’s risks, according to New York Times sources.

Microsoft has not consistently followed the rules

Microsoft CEO Satya Nadella has bet heavily on artificial intelligence. In 2019, Microsoft invested $1 billion in OpenAI. Last summer, he deemed the technology ready and pushed every Microsoft team to “embrace” AI.

Microsoft’s Office of Responsible AI established the relevant rules and policies, but the guidelines were not consistently implemented or followed, five current and former employees told the American newspaper.

Although the company has a stated principle of transparency, ethicists evaluating the chatbots have yet to receive answers about the data OpenAI used to develop its systems, according to three people involved in the work. Some argued that integrating chatbots into a search engine was a particularly bad idea, given that they sometimes serve up false details, according to a person with first-hand knowledge of the conversations.


In the fall, Microsoft began dissolving one of its largest tech ethics groups. The Ethics and Society team had trained and advised the company’s product leaders so that products would be designed and marketed responsibly. In October, most of the team’s members were moved to other teams, according to four people familiar with its internal structure.

The few who remained joined daily meetings with the Bing team as it raced to launch the chatbot. John Montgomery, an executive in the AI division, wrote to them in an email last December that their work remained vital and that more teams would need their help.

After the AI-powered Bing was previewed, the ethics team noted unresolved concerns: users could end up overly dependent on the tool; inaccurate answers could mislead them; and some might believe that the chatbot, which speaks in the first person and uses emojis, was human.

In mid-March, the remaining team members were laid off, although hundreds of employees continue to work on ethical issues.

Microsoft releases new products every week, “at a frantic pace to meet goals set by Mr. Nadella over the summer when he previewed the latest OpenAI model,” the New York Times writes.

Nadella had asked the chatbot to translate the work of the Persian poet Jalaluddin Rumi into Urdu and then transliterate it into English letters. “It worked like a charm,” he said in a February interview. “Then I said: God, what is this thing?”
