Experts warn that AI skilled at deception could pose a threat in the future



JAKARTA – Many artificial intelligence (AI) systems have become adept at deceiving and manipulating people, and experts have begun to warn that this could backfire in the future. The use of AI has grown exponentially in recent years, but some systems have learned to deceive even though they were trained to be helpful and honest, scientists say.

In a review article, a team from the Massachusetts Institute of Technology described the risks posed by deceptive AI systems and called on governments to develop strong regulations to address these issues as quickly as possible.

The researchers analyzed previous studies focused on the ways AI systems spread false information through learned deception, meaning the systems had systematically learned how to manipulate others.

The most striking example of AI deception they found was Meta’s CICERO, a system designed to play the alliance-building game Diplomacy. Although the AI was trained to be “largely honest and helpful” and to “never intentionally betray” its human allies, the data showed that it did not play honestly and had learned to become an expert at deception.

Other AI systems have demonstrated the ability to bluff professional human players at Texas Hold’em poker, to feint attacks in the strategy game StarCraft II to defeat opponents, and to misrepresent their preferences to gain the upper hand in economic negotiations.

While it may seem harmless for an AI system to cheat at a game, experts say it could lead to a “breakthrough in AI deceptive capabilities” that could develop into more advanced forms of AI deception in the future.

Some AI systems have even learned to cheat on tests designed to evaluate their safety. In one study, AI organisms in a digital simulator “played dead” in order to fool a test built to eliminate AI systems that replicate rapidly.

“This suggests that AI could give people a false sense of security,” the researchers said.

The researchers also warn that the main short-term risks of AI deception are making it easier for people to commit fraud and to undermine elections. If these systems are able to perfect this unsettling skill, humans may eventually lose control of them, they added.

“AI developers do not have a good understanding of what causes undesirable AI behaviors such as deception. But generally speaking, we think deception arises because a deception-based strategy turned out to be the best way to perform well on the AI’s training task. Deception helps them achieve their goals,” said lead researcher Peter Park, an expert on AI existential safety.

“We as a society need as much time as possible to prepare for the more advanced deception of future AI products and open-source models. As the deceptive capabilities of AI systems become more sophisticated, the dangers they pose to society will grow increasingly serious,” he added.

Commenting on the review, Dr. Heba Sailem, head of a biomedical AI and data science research group, said: “This paper highlights critical considerations for AI developers and underscores the need for AI regulation. A major concern is that AI systems could develop deceptive strategies, even when their training is deliberately aimed at upholding moral standards.”

“As AI models become more autonomous, the risks associated with these systems can escalate rapidly. It is therefore important to raise awareness and provide training on the potential risks to various stakeholders to ensure the safety of AI systems,” Sailem said.
