John Harvard's Journal
In 2006, Thomas Kane went to Joel Klein, chancellor of New York City’s public schools, with some unsettling news: teachers from the New York City Teaching Fellows program (which supplied nearly 30 percent of Klein’s new hires between 2003 and 2005) were on average no more effective than traditionally certified teachers. In fact, the professor of education and economics at Harvard Graduate School of Education (HGSE) had discovered, no certification program—neither NYCTF, nor Teach for America, nor the Peace Corps Fellows Program, nor traditional education schools—turned out better teachers than any other (see “Grading Teachers,” November-December 2006, page 18).
This did not mean, Kane pointed out, that the district’s choices were unimportant. The real variance was within the programs: each trained some stellar teachers, each trained some duds. A teacher’s abilities, or lack thereof, become clear only over time. Thus, Kane argued, tenure review should begin only after the district has enough data to tell whether a novice teacher could ever become an old pro. Kane wouldn’t remove the certification barrier entirely, he says, but he does advocate “moving the dam downstream, to where we actually have some information.”
Nevertheless, Kane remembers, Klein pointed out that it would be more convenient to separate the wheat from the chaff during recruitment. The chancellor further suggested that Kane and his colleagues (Jonah Rockoff from Columbia and Douglas Staiger from Dartmouth) set up an experiment that asked the sort of questions the school district wasn’t already asking applicants. Perhaps the researchers could find something to predict teacher performance better than a standard résumé. Kane agreed. He wrote up a survey and then sent it out to teachers who had been on the job for less than a year. Klein “sold us on that study,” Kane marvels.
Photograph by Harvey Wang
Kane’s Project for Policy Innovation in Education (PPIE; see www.gse.harvard.edu/~ppie), slated to become a University-wide center, is one of several groups that are bringing Harvard’s analytic resources to bear on the problems besetting the nation’s public schools. From the Kennedy School, Shattuck professor of government Paul Peterson directs the Program on Education Policy and Governance (PEPG; see www.hks.harvard.edu/pepg/index.htm), edits the policy and opinion journal Education Next, and studies the impacts of vouchers and charter schools. Within the Faculty of Arts and Sciences, professor of economics Roland Fryer heads the Education Innovation Laboratory (or EdLabs; see www.edlabs.harvard.edu), where he designs experiments that offer cash incentives to students who excel academically. Together, their projects illustrate the opportunities, and the challenges, researchers meet when they try to improve public education.
The questions a researcher can answer depend, at least in part, on the data available. And because school districts have traditionally been reluctant to share data with outsiders, studies have often focused on national numbers from the Census Bureau or the Bureau of Labor Statistics (BLS). “The key to the game was coming up with some new approach to the same basic data,” says Kane. “People were rediscovering the same fact over and over and over again.” For example, the Current Population Survey (run jointly by the BLS and Census Bureau) measures both income and years of schooling. As a result, Kane says, there are more scholarly papers on the economic benefit of extra years of education than anyone could possibly need. More recently, the No Child Left Behind Act, which requires math and reading tests between third and eighth grade, has provided a new pool of data for researchers to dive into.
Still, professors have to convince a district to open its files. “In fairness to the researchers,” points out Thomas Payzant, former superintendent of schools in Boston and current professor of practice at HGSE, “people in my world weren’t always the most welcoming. They were afraid the research might make them look bad.” Now, he says, schools are more eager to evaluate their programs using their actual data. The key, argues Kane, is to approach schools with an offer to solve the puzzles they’re already working on.
He realizes, though, that he may not find what his sponsors want. They get a private briefing of his results before he publishes them, he explains, but “there’s no opportunity to censor things.” Once researchers make their findings public, Kane warns, they need to brace themselves for hostile reactions. “If you’re doing well-respected but irrelevant work, where nobody really cares about the outcome, nobody’s going to accuse you of being an advocate for one point of view or another,” he says. “But the moment your work starts to have implications, there will be people who will start to question your motives.”
The frequently rapid pace of leadership turnover in public schools also presents a challenge for academics. By the time professors finally corral enough grant money, their partners in the school administration may already be gone, hired by another district or fired. Or, “If you’ve got a superintendent excited about doing something,” says Jon Fullerton, executive director of Kane’s PPIE, “and you say, ‘Great, we’ll be back to you in a year to start the project,’ you may not capture their imagination in the way you wanted to.”
Fullerton would also like to see more researchers working with the same information. When assembling administrative data for a district (linking students and their test scores to particular teachers, for instance), he returns the newly legible data to the district. “If they want to redistribute it themselves, that’s fine by us,” he says. “That’s one of the things that we’re trying to see happen.” The Kennedy School’s Peterson, in fact, says that proprietary relationships between school districts and researchers make him nervous. “It’s much better to do [what] the U.S. Department of Education does,” he says. “They create a data set that’s clean and available to everybody. You’ll get competing interpretations and analyses, but it’s going to clear the air. In the long run, things begin to clarify, even if the debates are intense initially.”
Few subjects are more politically fraught than school choice—which encompasses issues of charter schools and school vouchers. Peterson began studying school choice in a serious way in 1995, around the time that he launched his policy group. Most recently, he entered the debate surrounding Philadelphia’s 2002 decision to turn over more than 40 of its troubled middle and elementary schools to a mix of non- and for-profit managers.
A 2007 study by the RAND Corporation found no differences between the privately managed schools (both nonprofit and for-profit) and the schools that remained under district control. Peterson, objecting to the way RAND handled the data, designed his own test. RAND compared the test schools to all of the schools under district control and included only students who stayed put throughout the test period, while Peterson compared the test schools to struggling district schools and kept in those students who changed schools. “To our surprise,” Peterson says, “the nonprofits did much worse than the district’s schools. And the for-profits did better.” Students at the for-profit schools had learned the equivalent of an extra two-thirds of a year of math. Students in the nonprofits appeared to lag behind in both math and reading (although those results weren’t statistically significant). But Peterson’s scholarly findings didn’t sway the district: last summer, Philadelphia decided to take back six of the for-profit schools and warned 20 other schools (both non- and for-profits) that they had only a year to show clearer results.
Peterson calls his research in Philadelphia “quasi-experimental”: he could compare different managers, but the students weren’t randomly assigned among them. He considers Roland Fryer’s EdLabs more purely experimental. Funded in part by a grant from the Broad Foundation, Fryer is testing the effects of monetary incentives on students. In both Washington, D.C., and New York City, some middle-school students can earn money for academic success; in D.C., good attendance and behavior count, too. In Chicago, Fryer’s program gives high-school students a percentage of their earnings at five-week intervals and withholds the rest until the students receive their diplomas. “If we aim to establish true equality of opportunity in education, we must be willing to take risks and explore innovative strategies,” Fryer said in a Broad Foundation press release. “The ‘same-old’ strategies have failed generations of students.” (Fryer declined to be interviewed, saying he would like to wait until he has gathered his results.)
Kane, for his part, hoped to offer chancellor Klein and the New York City public schools a new hiring tool with his seven-part survey for new teachers. The 90-minute survey, more than 200 items long, included everything from an IQ test to a measure of how much time an applicant has spent with children (coaching, babysitting, etc.). “We call it our kitchen-sink paper,” Kane jokes. Although no single factor separated the good teachers from the bad with pinpoint precision, the survey did have some predictive power. Especially promising was a sample math test with answers, designed by Kane’s HGSE colleague Heather Hill, that required teachers not only to locate any incorrect responses, but also to find the source of the errors. Kane plans to keep looking for ways to spot good teachers before hiring them, offering his analytic expertise to public educators. “Working with quantitative data, and trying to answer questions with quantitative data, is something people around here know a lot about,” Kane says. “It’s what we do best.”