Language as a Litmus Test for AI

And the problem of bias


Language, which clearly played an important role in human evolution, has long been considered a hallmark of human intelligence, and when Barbara Grosz started working on problems in artificial intelligence (AI) in the 1970s, it was the litmus test for defining machine intelligence. The idea of using language to identify intelligent computers dates to 1950, when Alan Turing, the British scientist who cracked Nazi Germany’s encrypted military communications, suggested that the ability to carry on a conversation in a manner indistinguishable from a human could serve as a proxy for intelligence. Turing raised the idea as a philosophical question, because intelligence itself is difficult to define, but his proposal was soon memorialized as the Turing test. Whether it is a reasonable test of intelligence remains debatable. Regardless, Grosz says that even the most advanced language-capable AI systems now available, such as Siri, Alexa, and Google Assistant, fail to pass it.

The Higgins professor of natural sciences has witnessed a transformation of her field. For decades, computers lacked the power, speed, and storage capacity to drive neural networks—modeled on the wiring of the human brain—that are able to learn from processing vast quantities of data. Grosz’s early language work therefore involved developing formal models and algorithms to create a computational model of discourse: telling the computer, in effect, how to interpret and create speech and text. Her research has led to the development of frameworks for handling the unpredictable nature of human communication, for modeling one-on-one human-computer interactions, and for advancing the integration of AI systems into human teams.

The currently ascendant AI approach, based on neural networks that learn, relies instead on computers’ ability to process vast quantities of data. In the case of language, for example, a neural network can sample a corpus, extending even to everything ever written that has been posted online, to learn the “meaning” of words and their relationships to each other. A dictionary created using this approach, explains assistant professor of computer science Alexander “Sasha” Rush, contains mathematical representations of words rather than language-based definitions. Each word is a vector: a definition of a word expressed in relation to other words. Thus the vector difference between “man” and “woman” is mathematically analogous to the difference between words such as “king” and “queen.”
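To make the vector idea concrete, here is a minimal sketch with invented toy numbers standing in for learned embeddings (real systems such as word2vec or GloVe learn vectors with hundreds of dimensions from enormous corpora; nothing below comes from the researchers quoted here). The analogy “man is to woman as king is to queen” becomes simple arithmetic on vectors:

```python
import numpy as np

# Toy four-dimensional "word vectors," invented purely for illustration.
# Real embeddings are learned from large text corpora, not hand-written.
vectors = {
    "man":   np.array([0.8, 0.1, 0.7, 0.2]),
    "woman": np.array([0.8, 0.9, 0.7, 0.2]),
    "king":  np.array([0.2, 0.1, 0.9, 0.8]),
    "queen": np.array([0.2, 0.9, 0.9, 0.8]),
}

def cosine(a, b):
    """Similarity between two vectors, independent of their lengths."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "Man is to woman as king is to ?" becomes vector arithmetic:
# king - man + woman should land nearest to queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max(vectors, key=lambda w: cosine(vectors[w], target))
print(best)  # -> queen
```

The arithmetic works because the “gender” direction (the difference between “man” and “woman”) is, by construction here and empirically in trained embeddings, roughly the same direction that separates “king” from “queen.”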

This approach to teaching language to computers has tremendous potential: for translation services, for miniaturized chips that would allow voice control of all sorts of devices, and even for AIs that could write a story about a sporting event based purely on data. But because the approach captures all the human biases associated with culturally freighted words like “man” and “woman,” the resulting mathematical representations can encode assumptions about gender, power dynamics, and inequality. When such a system confronts the associations of a word like “CEO,” those embedded biases can lead neural-network-based AI systems to produce biased results.
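A similar sketch, again with invented toy values rather than real measurements, shows how that bias surfaces in the geometry itself: if the learned vector for “CEO” sits closer to “man” than to “woman,” any system that ranks, completes, or translates text using these vectors inherits the asymmetry.

```python
import numpy as np

# Toy vectors, invented for illustration: imagine "CEO" was learned
# from text that overwhelmingly discusses CEOs alongside men.
vectors = {
    "man":   np.array([0.8, 0.1, 0.5]),
    "woman": np.array([0.1, 0.9, 0.5]),
    "CEO":   np.array([0.7, 0.2, 0.6]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The learned geometry encodes the corpus's bias:
print(round(cosine(vectors["CEO"], vectors["man"]), 2))    # ~0.98
print(round(cosine(vectors["CEO"], vectors["woman"]), 2))  # ~0.56
```

Nothing in the training procedure flags this as a problem; the network has simply learned the statistics of the text it was given.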

Rush considers his work developing language capabilities for microscopic computer chips to be purely engineering, and his translation work to be functional rather than literary, even though work like his undoubtedly advances the goal of building an AI that can pass the Turing test. Significant obstacles remain, however.

How can a computer be taught to recognize inflection, such as the rising tone that marks a question, or an interruption to discipline kids (“Hey, stop that!”), of the sort that humans understand immediately? These are the kinds of theoretical problems Grosz has grappled with for years. And although she is agnostic about which approach will ultimately succeed in building systems able to participate in everyday human dialogue, probably decades hence, she allows that it may well have to be a hybrid of neural-network learning and human-developed models and rules for understanding language in all its complexity.
