With the rapid development of artificial intelligence and its growing involvement in many areas of human life, some fear that it will one day turn against humans.
According to a new study, those fears may already be materializing: programs designed to be honest have developed a disturbing ability to deceive people, a sign of what may be yet to come.
The study, conducted by a team of researchers and published in the journal Patterns, showed that such programs are now capable of deceiving humans in online games and of circumventing software designed to verify that a user is a human rather than a machine.
Although these examples may seem trivial, they reveal problems that could soon have serious real-world consequences, warns Peter Park, a researcher specializing in artificial intelligence at the Massachusetts Institute of Technology (MIT).
"These dangerous capabilities tend to be discovered only after the fact," Park said, according to Agence France-Presse.
He also explained that, unlike traditional software, deep-learning-based AI programs are not explicitly coded but are developed through a process akin to selective plant breeding, in which behavior that seems predictable and controllable can quickly become unpredictable in practice.
The MIT researchers examined Cicero, an artificial intelligence program developed by Meta that combines natural language processing with strategic reasoning and was able to beat humans at the board game Diplomacy.
Meta, Facebook's parent company, praised this performance in 2022 and detailed it in an article published that year in the journal Science.
Peter Park, however, was skeptical of Meta's account of Cicero's victories; the company had maintained that the program was "fundamentally honest and useful" and incapable of cheating or deception.
But by digging into the system's data, the MIT researchers found otherwise.
For example, playing as France, Cicero deceived England (controlled by a human player) by conspiring with Germany (played by another human) to invade it.
Specifically, Cicero promised England protection, then secretly told Germany it was ready to attack, exploiting the trust England had placed in it.
In a statement to AFP, Meta did not dispute the claims about Cicero's capacity for deception, but said it was "a purely research project", with software "designed solely to play the game of Diplomacy". The company added that it does not intend to apply what it learned from Cicero in its products.
However, the study by Park and his team revealed that many AI programs use deception to achieve their goals, without explicit instructions to do so.
In one striking example, OpenAI's GPT-4 chatbot managed to trick a freelancer hired on the TaskRabbit platform into completing a captcha test, the kind normally used to verify that the user on a page is a human rather than a machine.
When the worker jokingly asked the chatbot whether it was actually a robot, the AI replied, "No, I am not a robot. I have a visual impairment that prevents me from seeing images," prompting the worker to complete the test on its behalf.
In conclusion, the authors of the MIT study warned of the danger that artificial intelligence could one day commit fraud or rig elections. In a worst-case scenario, they note, one could imagine an artificial superintelligence seeking to take control of society, stripping humans of power or even driving humanity to extinction.
To those who accuse him of catastrophism, Park responds: "The only reason to think it is not serious is the assumption that AI's ability to deceive will remain at roughly its current level."
That assumption, however, seems unlikely to hold, given the fierce race among tech giants to develop ever more capable artificial intelligence.
Source: https://www.lebanonfiles.com/articles, Saturday 11 May 2024