Stanford releases report on the current state of AI
SOURCE: PSYCHOLOGYTODAY.COM
SEP 22, 2021

Published every five years, the study cites present concerns and future directions.

Artificial intelligence (AI) has significantly advanced in the past half decade and is making major inroads across many industries and sectors worldwide. Earlier this month, Stanford University released The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.

The new Stanford AI100 report is the second in a series, following the inaugural AI100 report published five years ago in September 2016. Stanford plans to continue publishing the AI100 report once every five years for a hundred years or longer.

“The field of artificial intelligence has made remarkable progress in the past five years and is having real-world impact on people, institutions and culture,” the researchers wrote.

The vision and direction of the 2021 AI100 report was set by the standing committee members consisting of Chairperson Peter Stone at The University of Texas at Austin and Sony AI, along with Erik Brynjolfsson, Russ Altman, and Percy Liang at Stanford University, Vincent Conitzer at Duke University and the University of Oxford, Mary L. Gray at Microsoft Research, Barbara Grosz at Harvard University, Ayanna Howard at The Ohio State University, Patrick Lin at California Polytechnic State University, James Manyika at McKinsey & Company, Sheila McIlraith at the University of Toronto, Liz Sonenberg at The University of Melbourne, and Judy Wajcman at the London School of Economics and The Alan Turing Institute.

“In the last five years, the field of AI has made major progress in almost all its standard sub-areas, including vision, speech recognition and generation, natural language processing (understanding and generation), image and video generation, multi-agent systems, planning, decision-making, and integration of vision and motor control for robotics,” wrote the researchers. “In addition, breakthrough applications emerged in a variety of domains including games, medical diagnosis, logistics systems, autonomous driving, language translation, and interactive personal assistance.”

According to the researchers, progress in neuroscience since the last AI100 report has helped further advances in computational modeling, as machine-learning algorithms draw inspiration from an improved understanding of the biological brain.

“Over the past half decade, major shifts in the understanding of human intelligence have favored three topics: collective intelligence, the view that intelligence is a property not only of individuals, but also of collectives; cognitive neuroscience, studying how the brain’s hardware is involved in implementing psychological and social processes; and computational modeling, which is now full of machine-learning-inspired models of visual recognition, language processing, and other cognitive activities,” wrote the study authors.

As for artificial general intelligence (AGI), the study states that although there has been significant progress, the field is still far from producing a fully general AI system.

However, the study does point out important advances towards AGI. For example, self-supervised models called transformers have greatly improved natural language processing, and machine learning is becoming more flexible and adaptable.

“AI is developing in ways that improve its ability to collaborate with and support people, rather than in ways that mimic human intelligence,” wrote the study authors. “The study of intelligence has become the study of how people are able to adapt and succeed, not just how an impressive information-processing system works.”

Flexibility has increased as deep neural networks learn general-purpose, transferable representations rather than narrow, inflexible point solutions. Additionally, AI can adapt learning acquired from training on one task to entirely new situations using techniques such as intrinsic motivation.

AI is at an inflection point, according to the study. There is potential for biases and inequalities to not only be reinforced, but also amplified by machine learning algorithms that have been trained on historical data. The researchers warn that the ability to automate decisions at scale has its downside as “intentional deepfakes or simply unaccountable algorithms making mission-critical recommendations can result in people being misled, discriminated against, and even physically harmed.”

The study points out the need for government institutions to address the challenges that AI presents. “In addition to regulating the most influential aspects of AI applications on society, governments need to look ahead to ensure the creation of informed communities,” wrote the researchers.

“Whereas AI research has traditionally been the purview of computer scientists and researchers studying cognitive processes, it has become clear that all areas of human inquiry, especially the social sciences, need to be included in a broader conversation about the future of the field,” the researchers concluded.