AI Anxiety

Can writing at Harvard coexist with new technologies?

Illustration by Eva Vázquez

I recently copy-pasted an essay I’d written on Boston abolitionist movements into ChatGPT. “Chat,” I commanded, “please list three ways this essay is successful, and three areas for potential improvement.” The machine spat out an answer instantly, and as I watched it unfold, I was mesmerized. The computer pointed out that my thesis could have been more argumentative, suggested areas that could be more concise, and highlighted phrases I’d inadvertently repeated. In short, it did everything that I, a tutor at the Harvard College Writing Center, am paid to do. And it took only 10 seconds.

Not all the advice was useful, but as a semi-professional writer and editor, I felt reasonably confident that I could separate the good tips from the chaff. The best part about my AI tutor was that it never tired, and I could correct it all I wanted. “That’s bad advice,” I typed, not caring that it was rude. It promptly revised its suggestions.

I feel obligated to issue the disclaimer that this paper was years old, and I used AI to revise it out of curiosity. I abide by the Harvard honor code and all AI policies (which vary by course). As a history and literature concentrator, I take mostly humanities courses, and most of them strictly prohibit generative AI, viewing it as a shortcut that undermines learning. But it seems that AI is here to stay—so how can generative AI coexist with the goals of humanities education, and what does this mean for the future of writing? AI policy at the University generally appears driven by fears about cheating, but when used under certain conditions, AI has the potential to make good writers even better.

AI and Academic Dishonesty

Harvard is in the middle of a techno-panic, with AI at its center. No longer a novelty, AI is increasingly quotidian, and cheating has become a simple matter of copy-paste. AI can summarize readings and create study guides, but it can also write full sections of code, draft essays or response papers, and conduct “research” (though the sources it cites often do not exist—they’re what specialists call “hallucinations”). In 2023, Maya Bodnick ’26 conducted an experiment by sending ChatGPT-generated essays to be graded in each of her classes. She told instructors that she would send them an essay written either by her or by AI. In truth, every essay had been written by AI. The instructors, who knew Bodnick was conducting an experiment and graded the papers after the semester had ended, nonetheless granted mostly passing grades. Chat ended the year with a respectable 3.57 GPA. Bodnick found that students could potentially sail through Harvard on a wave of instantly generated text, earn a degree, and launch into the world with no intellectual lifting necessary.

Since 2022, faculty members have implemented a host of new policies to curb such abuses of generative AI. In a recent course on nineteenth-century Russian literature, my professor opted to administer an oral exam on Anna Karenina. In Gen Ed 1200: “Justice,” 800 students (myself included) sat for a blue book exam consisting of 10 long-answer questions. After my sweaty-palmed, sore-wristed experience with these forms of examination, I can’t endorse them wholeheartedly as the anti-cheating solution, but professors are right to begin thinking about ways to test individuals’ knowledge in an age when the universe is only a click away.

Studying for an oral exam on Anna Karenina revealed how much learning comes from unpredictable questioning. I could memorize plot points and symbols, but in the end, I couldn’t entirely prepare for the analytical questions my professor would ask about the 800-page novel. I had to use the information I’d been trained on to come up with well-reasoned responses, instantly, almost like a large language model. I found the process extraordinarily difficult, but rewarding. The oral format, though, is transient and improvisational. It is most successful in assessing knowledge and overall analysis. When it comes to testing students’ abilities to develop interesting, original reasoning, the essay is king. It is also under siege.

AI’s ubiquity and the heightened institutional anxiety about cheating that has followed remind us that not all writing is alike. First, there is the art of writing. This is aesthetically valuable writing that challenges people’s thinking and advances knowledge. (As a Crimson writer and a columnist for this magazine, I devote the majority of my time to writing, and likely will in the future, because I believe it can genuinely move people.) Then there’s writing as a skill—writing that is simply meant to communicate information. Rote, formulaic writing is particularly ripe fodder for AI. A large proportion of my formal emails are partially drafted via generative AI. I am a human plagued by indecision, conflicting emotions, and distraction; AI is actually a much more efficient, succinct, and clear-headed emailer. In the time it might take me to think of the appropriate greeting for an email to a professor, ChatGPT can draft several versions of the whole email for me to choose among. If it knows what I want to say, AI can banish writer’s block or compulsive wordsmithing as easily as hitting the enter key.

The problem lies in the fact that it generally takes years of producing mediocre writing to become a good writer. For Soleil Saint-Cyr ’25, a double concentrator in history and literature and computer science, using AI negates the point of a humanities education. Though some of her computer science courses sanction the use of AI—a ChatGPT transcript can be submitted as a means of showing student work—Saint-Cyr opposes its use in the humanities, even for editing already completed essays. “If I want to be a better writer,” she says, “I have to practice; practice involves those minute checks.”

Harvard College Writing Center policy holds that tutors are not there to help students line-edit or tinker with witticisms. Rather, we help them structure logical arguments to clearly convey complex ideas. Often the bulk of a tutoring session consists of discussions of ideas and overall structure, not of individual sentences and grammar. But AI is pretty good at generating clear arguments, and it’s even better at logic. What a student might toil on for hours takes a large language model mere seconds. And as the technology continues to improve, it’s only a matter of time before an AI-generated essay becomes indistinguishable from a student-written one.

Effective writing requires a lot of time and toil, and it wouldn’t be fair to evaluate AI-generated writing alongside essays written the original way—with hard work and a few tears. There’s also pedagogical value in reviewing and discussing what students are capable of producing all on their own.

Redefining an AI-Age Education

Maybe that’s why, in all the conversations I’ve had on this topic, students and faculty members understandably seem less concerned about using AI to tinker with a partially formed idea than about generating a top-to-bottom AI essay. This gets at the fundamental question of what a Harvard education is. Most would agree that when we graduate, we should be able to think critically. The ability to churn out essays is not as strong a priority. If Harvard’s goal were to produce skilled writers, then training students to generate writing alongside AI from the start, equipping them to manipulate AI in the workforce, would suffice. But learning how to think takes serious study, practice, and effort. AI may someday craft a flawless essay, but as long as we still value independent reasoning, that will need to be taught. Analog writing has long proven effective.

I don’t doubt that some students might read about Bodnick’s experiment and use it for nefarious ends. But I would like to believe most students who cheat do so not out of laziness, but out of pressure. Writing and thinking are stressful, and (speaking from experience) the stress doesn’t necessarily abate with practice. Students who don’t concentrate in the humanities may not consider it useful to spend several hours restructuring an essay when there are problem sets to finish, or extracurricular activities, jobs, and other obligations vying for attention.

But writing is not always about the outcome. For something like coding, AI can be incredibly useful, producing code that achieves results. “It might be ugly, but it’s functional,” says Saint-Cyr. “That cause-and-effect relationship doesn’t exist for literature and essay writing.”

When I coach students in writing, one of the first things I try to help them understand is that all writing is iterative, sometimes exhaustingly so. (This column will have been drafted at least four times and edited by at least five different intelligences, four human and one artificial, by the time it goes to press.) To write something excellent, one must be willing to delete almost all of it. This isn’t something many people are taught in high school—I know I wasn’t.

My freshman year, I spent hours unable to get words onto a page because I believed they had to be perfect. I took introductory writing before generative AI was widely available and assumed that writing meant struggling to turn an idea into linked sentences. Now, I know that the crucible of all writing is revision. I was only able to learn how to improve my writing through arduous trial and error, a process that drove my intellectual and creative development.

Making Good Writers Better

As a more experienced writer, I believe that AI assistance in revising can help me see my work in a new light. For me, the urge to use ChatGPT often stems from a frustrating mental block or a desire to see my text from a more objective perspective. AI doesn’t have “darlings,” so it can mercilessly kill mine. To use a truly antiquated simile, the AI intermediary is like a telegraph operator—well versed in the shorthand that will get my point where it needs to go. Best of all, everything is a suggestion. It seems to me, then, that the value in using AI for writing increases as one improves as a writer. But this is possible only because I know what original, succinct, thoughtful writing looks like. I’ve put in the time to be able to save time.

So, how much of this column was written by ChatGPT? The answer is: not much. It suggested edits and flagged redundancies, but the ideas, structure, and voice are mine. AI didn’t ask the questions that shaped this column. It didn’t have the epiphanies or the doubts.

One of the most human things we do at the writing center is deduce what students want to say—that spark of intention or the glimmer of a burgeoning idea—and guide them toward realizing it. AI may be intelligent, but it is not omniscient. It’s not even perceptive. The truth is, it might partially take my job. It can accurately identify many of the most common revisions we instruct students to look for: arguable thesis, repetition, logical analysis. If you’re asking AI the right questions, it can be enormously helpful. It never tires, it’s available in the wee hours of the night, and it never has another appointment waiting. But what AI can’t do is answer the question you never even thought to pose.

Serena Jampel ’25 is one of this year’s Berta Greenwald Ledecky Undergraduate Fellows.
