“More Shots on Goal”

Researchers experiment with crowdsourced approaches to innovation.

In citizen science, crowdsourcing has largely focused on relatively simple, discrete tasks: classifying images of galaxies, for instance, or tracking birds in the backyard. But both research and industry are beginning to explore how open-ended goals like innovation can benefit from the power of crowds. In 2006, the DVD-rental and online video-streaming company Netflix sought to improve the accuracy of its video-recommendation algorithm, which suggests new movies based on users’ previous video preferences. Rather than tackle the problem internally, the company announced an open contest: it would publish an anonymized set of user data and challenge programmers all over the world to develop an algorithm that improved prediction accuracy by 10 percent. “We’re quite curious, really,” the contest statement begins. “To the tune of one million dollars.”

The Netflix Prize was a recent, high-profile example of what has been termed “open innovation,” a mindset that has prompted research enterprises from academia to industry to look beyond their institutional boundaries for solutions. The concept is not without historical precedent: in the eighteenth century, the British government famously established the Longitude Prize, seeking a solution to the seemingly intractable problem of east-west navigation at sea (see “Longitude: How the Mystery Was Crack’d,” March-April 1994, page 44). The £20,000 prize (worth about $4.25 million today) was awarded in 1765 to clockmaker John Harrison for his invention of the marine chronometer; by allowing sailors to keep accurate time, the device offered a horological solution to what had often been considered an astronomical problem.

“Somebody outside the field of the problem can oftentimes come up with a good solution,” says Karim Lakhani, Lumry Family associate professor of business administration at Harvard Business School and principal investigator of the Harvard-NASA Tournament Lab at the Institute for Quantitative Social Science. He conducts field trials to study conditions that promote open innovation. Echoing the ethos of the Netflix challenge, Lakhani and researchers at Harvard Medical School challenged an online community of computer programmers to develop new algorithms for a computational-biology problem, offering a prize pool of $6,000. As reported in Nature Biotechnology in 2013, the team received more than 600 submissions of software code during the contest’s two-week duration; though the programmers had no known experience with biology, the best submissions ran with greater accuracy and speed than a publicly available benchmark algorithm from the National Institutes of Health.

Lakhani is not surprised by the success. One advantage of crowdsourcing is what he calls “more shots on goal”—“the sheer fact of many people trying [a problem] is going to increase the probability that you find a solution.” In the Netflix challenge, for instance, more than 5,000 teams entered more than 44,000 submissions, some exploring alternative data-mining methods that the company had not yet tried. Another advantage of crowdsourcing is the participants’ diversity. “Often, the more we can abstract the problem…the more likely we will be to find folks in other disciplines who can come through with a solution,” Lakhani explains.
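A back-of-the-envelope calculation, under the admittedly idealized assumption that each attempt is independent and equally likely to succeed, makes the “more shots on goal” logic concrete. If each of $n$ attempts solves the problem with probability $p$, the chance that at least one succeeds is

$$P(\text{at least one solution}) = 1 - (1 - p)^n.$$

Even if each team had only a $p = 0.001$ chance of success, the more than 5,000 teams in the Netflix challenge would give $1 - 0.999^{5000} \approx 0.99$.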

Abstraction proved essential in the online bioinformatics challenge. “We’re all surprisingly inept at framing questions in a way that allows other people to help us,” says Eva Guinan, associate professor of radiation oncology at Harvard Medical School (HMS) and associate in medicine at Boston Children’s Hospital, who collaborated with Lakhani on the field trial. In the course of several discussions with programmers, the researchers stripped the problem of its biological context; the complex genetic recombination that takes place in the immune system was recast using nonspecific “strings” of letters—familiar computer-science concepts that could also represent DNA sequences. Guinan nonetheless sees a role for domain experts at this stage: “Expertise,” she says, “is necessary in both framing the question and analyzing the data at the end.”
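To make that abstraction concrete, here is a minimal, hypothetical sketch in Python—not the contest’s actual code, which the article does not detail. A sequence read becomes a plain string, and “annotation” becomes finding the reference segment at the smallest edit distance: a problem any programmer can attack without biological training. The segment names and sequences are invented for illustration.

```python
# Illustrative sketch only: recasting a biological annotation problem as
# plain string matching, as the article describes. The "gene segment"
# names and sequences below are hypothetical, not data from the contest.

def edit_distance(a: str, b: str) -> int:
    """Standard dynamic-programming (Levenshtein) edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]

def best_segment(read: str, segments: dict[str, str]) -> tuple[str, int]:
    """Return (name, distance) of the reference segment closest to the read."""
    return min(
        ((name, edit_distance(read, seq)) for name, seq in segments.items()),
        key=lambda pair: pair[1],
    )

# To the programmer, these are just strings over the alphabet {A, C, G, T}.
reference = {"V1": "ACGTACGTGG", "V2": "ACGTTTGTGA", "V3": "TTGTACGAGG"}
print(best_segment("ACGTACGAGG", reference))  # ('V1', 1)
```

Nothing in the code knows it is handling DNA—which is precisely the point of the framing exercise.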

As president of the nonprofit Sage Bionetworks, former HMS professor Stephen Friend is applying the principles of contests and games to traditional scientific research. His interest in collaborative, crowdsourced research approaches grew out of his own career path: he left HMS in 1995 to found a bioinformatics company that was later acquired by the pharmaceutical company Merck, where he then headed the division of cancer research. “In the last 20 years, the most interesting questions kept getting bigger and bigger,” he explains. “Whereas before, you could say ‘I have my team—my team of 10, my team of 100’—the questions were outdistancing the scale and scope of the teams you could assemble.”

At Sage Bionetworks, Friend has developed a platform to encourage collaborative, open research through Netflix-like contests that challenge researchers to make medical predictions using large, publicly available datasets of clinical and genetic biomarkers. Critically, the participants have forums for discussion and are encouraged to view and borrow each other’s code, allowing them to adopt new strategies and avoid known pitfalls. As Friend and collaborators reported in Science Translational Medicine in 2013, the winning algorithm in a challenge to predict breast-cancer patient outcomes, developed by a team at Columbia University, made significantly better predictions than previous models built in traditional settings.

But contributions are not limited to the algorithmically adept: in another field experiment, Lakhani, Guinan, and collaborators examined how to crowdsource the development of research questions, from initial proposal to implementation. The study began with an e-mail from President Drew Faust in February 2010 announcing an “ideas challenge.” All members of the Harvard community—faculty, staff, and students—were encouraged to submit answers to the question: “What do we not know to cure type 1 diabetes?” The goal was not to obtain fully fleshed-out, technical proposals, but rather to solicit new research directions from a broader community. Though only 9 percent of the nearly 200 respondents claimed close familiarity with type 1 diabetes research, nearly half described themselves as patients or reported having a friend or family member with the disease.

Experts representing fields from academia to venture capital evaluated the submissions and selected 12 ideas to be developed further by traditional research professionals. One of the winning submissions came from a human-resources professional with type 1 diabetes, another from a Harvard College senior. In the end, seven research teams received a total of $1 million in grant funding from the Leona M. and Harry B. Helmsley Charitable Trust to pursue projects developed from the selected ideas—and five of the research initiatives were led by scientists who had not previously worked on the disease. “A lot of people from outside the field came forward with new ideas,” says Guinan. “They were bringing their science to answer an important diabetes question, which is the best of all possible worlds.”

The Netflix challenge concluded in 2009 when a multinational team of researchers succeeded in meeting the target of a 10 percent improvement. Curiously, though, the company acknowledged in a blog post three years later that it never implemented the final, winning model. It had adopted some innovations developed by other teams early in the three-year contest, but, as two of the company’s engineers wrote, “the additional accuracy gains that we measured did not seem to justify the engineering effort needed to bring them into a production environment.” In part, Netflix was looking forward: as its business shifted toward on-demand video streaming rather than DVD rental, its algorithmic needs changed. (In another unexpected development, the company faced a class-action lawsuit from users claiming that their privacy was breached by the publication of the anonymized user data; the lawsuit was ultimately settled out of court.)

The outcome of the Netflix challenge highlights both the promises and challenges of open innovation. How might different crowdsourcing structures protect sensitive information—trade secrets and consumer privacy, for instance—while providing contest participants with the information they need? How should organizations evaluate more unusual, out-of-the-box ideas? These questions motivate Lakhani’s and Guinan’s continued research. As Lakhani says, “We’re still in the early days.”

For more on the Netflix Prize, see this 2008 article in the New York Times Magazine, written while the competition was still ongoing.
