Karim Lakhani says his work poses a provocative question: can a crowd of random people outsmart Harvard experts?
Lakhani, professor of business administration, seeks the answer in his role at Harvard’s Institute for Quantitative Social Science, where he is principal investigator at the Crowd Innovation Lab and NASA Tournament Lab.
Lakhani’s interest in crowds took root when he worked at General Electric in the mid-1990s and noticed that open-source software developers—groups of people volunteering their time to write code—were producing better software than GE itself. To explore this phenomenon further, he began studying at MIT’s Sloan School of Management, working with Eric von Hippel, a professor of management of innovation and engineering systems who investigates how users of products—like California hot-rod modifiers in the 1960s, or snowboarders more recently—often find ways to improve them. Lakhani’s twist on that theme examined how communities innovate to create products such as open-source software, and asked whether contests could be used to organize crowds that might outsmart small numbers of experts.
His research got a real-world boost when he began teaching at Harvard Business School (HBS). After he presented a case study on crowdsourcing at an HBS executive-education program, one of the attendees—NASA’s chief medical officer—asked whether such a contest could help the agency. “Give me a test case,” Lakhani responded.
NASA asked him to come up with an algorithm that would identify the ideal contents for a space emergency medical kit. Using Topcoder, a crowdsourcing company that brings random groups of developers and designers together to work on problems, and $25,000 in prize money, Lakhani ran a contest that produced a solution that worked better and faster than one NASA had developed internally. That led to the creation of the NASA Tournament Lab, which added economists to help design effective contests, as well as postdocs in physics and computer science to tackle the full range of problems NASA wanted to solve. In six years, the lab has run hundreds of competitions on Topcoder, addressing challenges ranging from solar-flare detection to the counting of asteroids. Almost all have produced effective code for the agency.
The Tournament Lab also addresses a crucial problem with competitions: the lack of empirical evidence for why they work. Lakhani notes that a crop of good textbooks explains how to design competitions and other theoretical aspects of running them, but “What’s been missing is field evidence” of what—apart from sports or internal contests—motivates crowds to solve problems.
The lab has provided answers. People form crowds to solve problems for three reasons, Lakhani says: extrinsic benefits (improved professional profile or rewards like cash); intrinsic benefits (it helps solve a problem, or it’s fun); and pro-social benefits (participants like being part of something bigger than themselves that makes the world a better place).
Many coders, drawn to interesting problems, participate in these competitions over long periods of time. Topcoder, formed in 2001, now has more than a million members. Another contest crowdsourcing site, Innocentive, has more than 500,000. “Most people don’t get access to the types of problems that people at NASA or Harvard or Pfizer get to work on,” Lakhani points out. “Now all of a sudden there’s a rich flow of very interesting problems that people can put their minds to.”
A constant risk, of course, is that crowds won’t form. Asking people to cure cancer is too broad. Asking them to look at how a particular enzyme might work in a biomedical process is better. Rewards must be calibrated to the problem, which should be clearly defined. And organizations that want to run contests must be committed to implementing the solutions that are developed, or people will not take them seriously. “There’s not a magic pixie dust of crowds,” he says; what matters is careful governance.
With the right structures in place, though, a very wide range of problems can be addressed, in part because crowdsourcing opens problems to cross-disciplinary approaches, as Harvard itself found when—in collaboration with Harvard Catalyst (a shared-resource center supporting clinical and translational health research)—it used a University-wide contest to generate ideas about diabetes research. Lakhani’s crowdsourced approach has worked remarkably well even in the esoteric world of computational biology. In a matter of weeks, and for prizes of $20,000 or less, it produced algorithms that clear data-processing bottlenecks in the development of precision-medicine applications and perform at least an order of magnitude (10 times) better than those previously developed by teams of experts at places like the Broad Institute of MIT and Harvard.
Recently, the U.S. Agency for International Development approached Lakhani to ask whether an algorithm could be developed to predict atrocities. His group started by launching a contest to find appropriate data sources, and discovered GDELT, which aggregates news sources from around the globe and makes them available via an application programming interface. Then it asked for machine-learning algorithms that could take the GDELT news feed and predict atrocities within subregions. The algorithm is being refined and, through crowdsourcing, may become a publicly available predictive service.
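For readers curious what tapping that interface looks like, here is a minimal sketch in Python against GDELT’s public DOC 2.0 API. The search phrase and the idea of treating article metadata as model inputs are illustrative assumptions, not the actual pipeline Lakhani’s group built.

```python
import json
import urllib.parse
import urllib.request

# Illustrative sketch only: GDELT's DOC 2.0 API is real, but this query
# and its use here are assumptions for illustration, not the Tournament
# Lab's actual atrocity-prediction pipeline.
GDELT_DOC_API = "https://api.gdeltproject.org/api/v2/doc/doc"

def fetch_articles(query, max_records=50):
    """Return recent news articles matching `query` from the GDELT feed."""
    params = urllib.parse.urlencode({
        "query": query,
        "mode": "artlist",        # list matching articles
        "format": "json",
        "maxrecords": max_records,
    })
    with urllib.request.urlopen(f"{GDELT_DOC_API}?{params}") as resp:
        return json.load(resp).get("articles", [])

if __name__ == "__main__":
    # A hypothetical query; a predictive model might consume the volume,
    # dates, and locations of such coverage as features for a subregion.
    for article in fetch_articles('"mass atrocities"')[:5]:
        print(article.get("seendate"), article.get("title"))
```

In practice, a predictive service would presumably aggregate such signals by subregion and time window before feeding them to a machine-learning model.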
Is the crowd smarter than a Harvard expert? Crowds that compete in esoteric contests or in writing open-source software usually draw people with real expertise. But often they don’t define problems well. “The real role for Harvard experts,” Lakhani says, “is to help define the problem, think hard about how to evaluate the solution, and then to take the solution from the crowd and implement it.”