Stanford University’s proposal on the “foundations” of AI sparks controversy


Last month, Stanford researchers declared that a new era of artificial intelligence had arrived, one built on huge neural networks and oceans of data. They announced that Stanford would establish a new research center to study these “foundation models” of AI.

Criticism of the idea quickly surfaced, including at a workshop organized to mark the launch of the new center. Some critics object to the models’ limited capabilities and sometimes bizarre behavior; others warn against focusing too heavily on one particular way of making machines smarter.

“I think the term ‘foundation’ is very wrong,” Jitendra Malik, a professor of artificial intelligence at the University of California, Berkeley, told workshop attendees during a video discussion.

Malik acknowledges that one type of model singled out by the Stanford researchers—large language models that can answer questions or generate text from a prompt—has great practical value. But he said evolutionary biology suggests that language is built on other aspects of intelligence, such as interaction with the physical world.

“These models are really castles in the air; they have no foundation,” Malik said. “The language in these models is not grounded; there is a fakeness to it, no real understanding.” He declined an interview request.

A research paper co-authored by dozens of Stanford researchers describes an “emerging paradigm for building artificial intelligence systems,” which it labels “foundation models.” In recent years, ever-larger AI models have produced some impressive advances, in areas such as perception and robotics as well as language.

Large language models are also foundational to big technology companies such as Google and Facebook, which use them in areas like search, advertising, and content moderation. Building and training large language models can require millions of dollars’ worth of cloud computing power; so far, their development and use has been limited to a few wealthy tech companies.

But large models pose problems, too. Language models inherit bias and offensive text from the data they are trained on, and they have no grasp of common sense or of what is true or false. Given a prompt, a large language model may spit out unpleasant language or misinformation. There is also no guarantee that ever-larger models will keep producing advances in machine intelligence.

The Stanford proposal has divided the research community. “Calling them ‘foundation models’ completely messes up the discourse,” said Subbarao Kambhampati, a professor at Arizona State University. Kambhampati said there is no clear path from these models to more general forms of AI.

Thomas Dietterich, a professor at Oregon State University and former president of the Association for the Advancement of Artificial Intelligence, said he has “huge respect” for the researchers behind the new Stanford center, and he believes they genuinely care about the problems these models raise.

But Dietterich wondered whether the idea of foundation models was partly intended to secure funding for the resources needed to build and study them. “I was surprised that they gave these models a fancy name and created a center,” he said. “It does have the flavor of flag planting, which could have several benefits on the fundraising side.”

Stanford has also proposed creating a National Artificial Intelligence Cloud to give academics working on AI research projects access to industry-scale computing resources.

Emily M. Bender, a professor in the linguistics department at the University of Washington, said she worries that the idea of foundation models reflects a bias toward investing in the data-centric approach to AI favored by industry.

Bender said it is especially important to study the risks posed by large AI models. She co-authored a paper, published in March, that drew attention to problems with large language models and contributed to the departure of two Google researchers. But she said scrutiny of these models should come from multiple disciplines.

“There are all of these adjacent, really important fields that are just starved for funding,” she said. “Before we throw money into the cloud, I would like to see money going into other disciplines.”
