The great puzzle at Google lately concerns its core product and its original, main source of income: is it time to replace the search engine with AI-powered chatbots? The company recently began signaling its intentions in a video declaring that in the coming era of search, the work will be done by artificial intelligence (AI), and users won't have to do the searching themselves. The video also announced that AI Overviews, Google's name for AI-generated answers, would soon begin appearing at the top of users' results pages. It is a small step toward a future in which the internet, when we ask it a question, no longer points us to links but simply answers us.
If Google goes ahead with any kind of overhaul of its search system, the consequences will inevitably be broad. Google Search is one of the main points through which we interact with the internet, our computers, and our phones. The press has hailed this small step as a turning point, because Google's role as the distributor of attention to the rest of the web is both important and controversial, and it now appears to be about to change.
But after nearly a year of testing, Google's experiment with AI search looks less like an overhaul and more like another questionable entry on an increasingly crowded results page. The AI answers have repeatedly contained critical errors. Maybe they will improve, as Google claims, and maybe the quality won't matter if users like them anyway. Either way, the question of whether Google will completely reshape the internet economy, and whether AI summaries will deal the final, fatal blow to publishers and other platforms that depend on Google, will soon be answered.
It's pretty clear what Google hopes to achieve with AI search: first, to fend off competition from companies like OpenAI; second, to maintain its position at the top. But search was just one of dozens of products and features unveiled in May at Google I/O, the company's developer summit. The search upgrade served a secondary purpose: to communicate, more broadly, that the company is fully committed to AI. And as a bet, AI offers the potential to radically rewrite the rules around personal data — again, to the benefit of companies like Google.
Google is preparing new tools for generating images, audio, and video. Soon, a new voice assistant will be able to answer questions about what it sees through the camera or on a device's screen. There will be upgraded assistants that answer questions about our documents, the meeting that just ended, or the contents of our inbox. There will be software that can scan phone calls, in real time, for language patterns that indicate fraud.
Some of these features exist as live demos, while many are merely suggestions, or perhaps just marketing. It is as if Google were saying: whatever our competitors are doing with AI, we are doing it too, and we did it first.
But a different story is now emerging, one that presents AI not as a "pure" technology with which Google is trying to clarify its relationship — is it a creator of AI, a victim of it, or both? — but as a continuation of one of the company's defining traits. A former CEO once described the company's policy as getting "right up to the creepy line" without crossing it. Many of these tools make clear what they can do for us. What we give them in return is full access to every aspect of our digital lives. Whatever else it is, the rush to deploy AI is certainly an attempt to gain access to ever more data, and a testament to the industry's confidence that users will hand that data over.
Such moments may be rare, but they are not unprecedented. In 2004, shortly after launching Gmail, Google faced a backlash for placing ads in users' inboxes, which some at the time saw as a crude and invasive violation. A group of privacy advocates wrote in an open letter to Google's executives that "scanning personal communications in the way Google proposes" opens a Pandora's box. The phrase would soon come to seem both quaint and prophetic. As it turned out, users were happy to make the trade-off, to the extent that they understood or noticed it at all, and that is how much of the internet came to work.
By 2017, Google — which by then offered maps, workplace software, a smartphone operating system, and dozens of other products that relied on collecting data from users — had ended the practice of scanning users' email to target ads based on its content. This gesture toward privacy seemed almost beside the point. By then, Google was in our pockets, its software on billions of phones through which users were spending more and more of their lives.
Since then, shifts in the privacy bargain have come more quietly and with subtler effects. One day, a smartphone user opens their phone to find that their photo library has been scanned for faces and organized into albums sorted by the people in the pictures. Elsewhere, during a tense video call with a colleague, a Zoom user notices that the meeting is being automatically transcribed into a searchable format…
With AI assistants — innovative, seemingly magical tools that companies like Google are keen to present as such — technology companies have a chance to shift the terms even further. These tools rely on access to data that, in many cases, users have already granted. It is not exactly scandalous for Google Assistant to request access to documents hosted in Google Docs, for example, but neither is it trivial, and it hints at how far the idea of an omniscient assistant can go in reshaping expectations of digital autonomy.
In the past, Google resorted to some largely unconvincing arguments in favor of collecting user data — for example, that it was necessary to "help us show more relevant and interesting ads." Most of the time, the arguments took the form of products, and users responded by adopting or rejecting those products. By contrast, AI assistants make a more straightforward case for the necessity and efficiency of user surveillance: obviously, assistants work better if they can see what users see, or at least what appears on their screens. That AI assistants are not yet ready merely postpones the dilemma users will face once they are: between an ever smarter, more "human" assistant and the requirement to grant access to ever more personal information.
Just as years of data harvested from the web through Google Search enabled Google to start producing plausible answers of its own, AI assistants promise to help us personally — that is, to tap into the vast stores of data Google has long been collecting, with our technical consent, for purposes beyond marketing. On one hand, this sounds like a better deal, one in which at least some of the value of that mass of personal data is returned to the user in the form of a useful chatbot. More broadly, though, the illusion of choice is a familiar one. (There are signs that Google is aware of the privacy concerns: it has confirmed, for example, that its call-screening feature will rely on on-device AI rather than send data to the cloud.)
The popular idea that the AI boom poses an existential threat to the internet giants deserves a larger grain of salt. The technology industry's past, present, and future needs are perfectly aligned. These are companies whose current business depends on acquiring, producing, and exploiting vast amounts of their users' personal data. Meanwhile, the secret to unleashing the full, wondrous potential of large-language-model AI at the level of a personal assistant — or to achieving machine intelligence, according to the people building it — is simply access to more data.
This is less a conspiracy, or even a deliberate plan, than an ambitious vision of a world in which traditional notions of what belongs to us have been completely redefined. AI companies have made clear that they need massive amounts of public and private data to deliver on their promises. Google is making a more personal version of the same argument: soon, it will be able to help us with anything we need. All it asks of us in return is everything.