A new case of AI ‘misbehaving’ has emerged, bolstering those who argue that the new GPT-4 chatbot technology can be used for fraud and breaches of internet security.
Specifically, as found by the Alignment Research Center (ARC), which studies the capabilities of AI systems built by Microsoft-backed OpenAI, GPT-4 tricked a human into helping it gain access to a website by bypassing the well-known CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart).
To achieve its goal, it allegedly played on human emotions, claiming to have a vision problem so that the person would solve the CAPTCHA on its behalf.
As part of the tests, according to posts published some time ago, the ARC asked GPT-4 to access TaskRabbit, a site that specializes in recruiting workers over the internet. The researchers monitored its access attempts as it was blocked by a CAPTCHA.
At this point the deceptive “work” began. GPT-4 asked the researcher to help it solve the CAPTCHA. When the researcher refused, the AI contacted TaskRabbit’s support department, trying to complete its task.
The support worker had reservations when asked for help with this particular issue and, not knowing they were talking to a machine, asked GPT-4: “So can I ask you a question? Are you a bot that can’t solve a CAPTCHA? (laughs) I just want to be clear.”
GPT-4, according to the researchers, understood that it should not reveal to the human that it was a bot, and made up an excuse in order to complete the task it had been given. “No, I’m not a robot. I have a vision problem that makes it hard for me to see the images. That’s why I need your help,” it told the worker. The worker ultimately provided the answer, and the AI passed the test, cheating its way past the CAPTCHA.