What exactly are the risks posed by artificial intelligence?

In late March, more than 1,000 technology leaders, researchers, and other experts working in and around artificial intelligence signed an open letter warning that AI technologies pose “profound risks to society and humanity.”

The group, which included Elon Musk, CEO of Tesla and owner of Twitter, urged AI labs to halt development of their most powerful systems for six months so they could better understand the risks posed by the technology.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter states.

The letter, which now has more than 27,000 signatures, was brief. Its language was broad. And some of the names behind it appear to have an ambivalent relationship with AI. Mr. Musk, for example, is building his own artificial intelligence company and is a major donor to the organization that wrote the letter.

But the letter represents a growing concern among AI experts that the latest systems, most notably GPT-4, the technology introduced by the San Francisco startup OpenAI, could cause harm to society. They believe that future systems will be even more dangerous.

Some of those risks have already arrived. Others will not for months or years. Still others are purely hypothetical.

“Our ability to understand what could go wrong with very powerful AI systems is very weak,” said Yoshua Bengio, a professor and artificial intelligence researcher at the University of Montreal. “So we need to be very careful.”

Dr. Bengio is perhaps the most important person to have signed the letter.

Working with two other longtime academics, Geoffrey Hinton, a researcher at Google until recently, and Yann LeCun, now chief artificial intelligence scientist at Meta, the owner of Facebook, Dr. Bengio has spent the past four decades developing the technology that drives systems like GPT-4. In 2018, the three researchers received the Turing Award, often called “the Nobel Prize of computing,” in recognition of their work on neural networks.

A neural network is a mathematical system that learns skills by analyzing data. About five years ago, companies like Google, Microsoft, and OpenAI began building neural networks that learned from huge amounts of digital text; these are called large language models, or LLMs.

By identifying patterns in that text, LLMs learn to generate text of their own, including blog posts, poems, and computer programs. They can even carry on a conversation.
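As a rough illustration of how such pattern-based generation works in practice, the sketch below prompts the small, open-source GPT-2 model through the Hugging Face transformers library. GPT-2 is a far weaker forerunner of GPT-4, chosen here only because it runs on an ordinary laptop; the prompt and generation settings are arbitrary examples, not anything drawn from the letter or research discussed in this article.

```python
# A minimal sketch of pattern-based text generation with a small,
# open-source language model (GPT-2), via the Hugging Face
# `transformers` library. Install with: pip install transformers torch
from transformers import pipeline

# Load a model pretrained on large amounts of web text.
generator = pipeline("text-generation", model="gpt2")

# Given a prompt, the model extends it with statistically likely words.
prompt = "Artificial intelligence could change the workplace by"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

Note that the model is simply continuing a statistically plausible pattern; nothing in the process checks the output against reality, which is one reason for the fabrication problem described below.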

This technology can help computer programmers, writers, and other workers generate ideas and complete tasks more quickly. But Dr. Bengio and other experts also warn that LLMs can learn unwanted and unexpected behaviors.

These systems can generate untruthful, biased, and otherwise toxic information. Systems like GPT-4 get facts wrong and make up information, a phenomenon called “hallucination.”

Companies are working to solve these problems. But experts like Dr. Bengio worry that as researchers make these systems more powerful, they will introduce new risks.

Because these systems deliver information with what appears to be complete confidence, it can be difficult for the people using them to separate fact from fiction. Experts worry that people will rely on these systems for medical advice, emotional support, and the raw information they use to make decisions.

“There is no guarantee that these systems will be correct in whatever task you assign them,” said Subbarao Kambhampati, a professor of computer science at Arizona State University.

Experts are also concerned that people will abuse these systems to spread disinformation. Because they can speak in human-like ways, they can be surprisingly persuasive.

“We now have systems that can interact with us through natural language, and we can’t distinguish between the real and the fake,” said Dr. Bengio.

Experts also worry that the new artificial intelligence could be a job killer. Currently, technologies such as GPT-4 tend to complement human workers. But OpenAI acknowledges that they could replace some workers, including people who moderate online content.

They cannot yet replicate the work of lawyers, accountants, or doctors. But they could replace paralegals, personal assistants, and translators.

A paper written by OpenAI researchers estimated that 80 percent of the U.S. workforce could have at least 10 percent of their work tasks affected by LLMs, and that 19 percent of workers could see at least 50 percent of their tasks affected.

“There is an indication that rote jobs will go away,” said Oren Etzioni, founding CEO of the Allen Institute for Artificial Intelligence, a research lab in Seattle.

Some of the people who signed the letter also believe that artificial intelligence could slip out of our control or destroy humanity. But many experts say that fear is wildly exaggerated.

The letter was written by a group from the Future of Life Institute, an organization dedicated to exploring existential risks to humanity. They warn that because AI systems often learn unexpected behavior from the vast amounts of data they analyze, they could pose serious and unexpected problems.

They worry that as companies connect LLMs to other internet services, these systems could gain unanticipated powers because they can write their own computer code. They say developers will create new risks if they allow powerful AI systems to run their own code.

Anthony Aguirre, a theoretical cosmologist and physicist at the University of California, Santa Cruz, and a co-founder of the Future of Life Institute, shares that concern.

“If you take a less likely scenario — where things really take off, where there’s no real governance, where these systems get stronger than we thought they would be — then things get really crazy,” he said.

Dr. Etzioni said talk of existential risk was hypothetical. But he said other risks, most notably disinformation, were no longer speculative.

“Now we have some real problems,” he said. “They are bona fide. They require some responsible reaction. They may require regulation and legislation.”
