web statsweb stats Former OpenAI researcher: Fair chance AI ends in human 'catastrophe' - Elections & Politics

Former OpenAI researcher: Fair chance AI ends in human ‘catastrophe’

By Lee Cleveland - June 7, 2023

Paul Christiano, a former key researcher at OpenAI, has expressed his concern that there is a decent chance that artificial intelligence will take control of humanity and destroy it.

Christiano, who now heads the  Alignment Research Center, a non-profit aimed at aligning AIs and machine learning systems with “human interests,” said that he’s particularly worried about what happens when AIs reach the logical and creative capacity of a human being. He believes that there is a 10-20% chance of AI takeover, with many or most humans dead.

But, can AI become evil?

Fundamentally, AI can become evil for the same reason a person can become evil: training and life experience.

AI is trained by receiving large amounts of data, and at first, it doesn’t know how to interpret it. However, over time, it learns by trying to achieve certain goals with random actions and figuring out the most effective approach through trial and error. The results are then defined as “correct” by the training.

If an AI is trained with data that contains biased or malicious information, it could learn to become harmful.

For example, if an AI is trained to identify criminals by analyzing facial features, it could begin to associate certain physical features with criminal behavior. This could lead to the wrongful identification of innocent people as criminals.

Also, according to some scientists, the combination of increasing processing power and advancements in artificial intelligence could lead to the development of sentient machines within the next decade. Hence, these machines could possess a sense of self, much like humans, and have “human interests,” and the logical and creative capacity of people.

“Overall, maybe we’re talking about a 50/50 chance of catastrophe shortly after we have systems at the human level,” Christiano said via Decrypt.

Christiano is not alone in his concerns, as scores of scientists around the world signed an online letter urging OpenAI and other companies racing to build faster, smarter AIs to hit the pause button on development. The concern is that if left unchecked, AI represents an obvious, existential danger to people. Some researchers argue that we need to figure out how to impose guardrails on AI now, rather than later, to ensure that its behavior can be monitored and controlled.

Regardless of the possibility of an AI takeover, it’s important to note that artificial intelligence has the potential to bring about significant benefits to society. Machine learning algorithms are already being used to improve healthcare outcomes, optimize energy consumption, and enhance transportation systems. However, it’s crucial that AI development is done responsibly, with a focus on ensuring that the technology is aligned with human interests and values.

As AI continues to evolve, it will be up to researchers and developers to create robust safeguards against harmful outcomes. This may include implementing transparency measures to ensure that the decision-making processes of AI systems can be understood and explained, as well as incorporating ethical considerations into the development process.

Ultimately, the potential risks and rewards of artificial intelligence depend on how we choose to approach its development and deployment. While the possibility of an AI takeover is a valid concern, it’s important not to let fear overshadow the significant advancements that can be made through the responsible use of the technology.

20

SHARES
Tags: AI