Press "Enter" to skip to content

Should I let ChatGPT write my papers?

Evolution is a fact of life, and 99% of our DNA sequence is shared with chimpanzees. The earth is round, and the celestial body we inhabit is not at the center of the universe; neither is our galaxy, nor the minuscule cluster of planets that is the solar system. We weren’t created or designed, but are merely the result of cosmic coincidences that did not salvage us from our entrenched, deep-rooted animal nature. We might be nothing but tiny, insignificant specks amidst an ever-expanding universe, and according to Professor James Miller, with whom I had a conversation about the future of academic writing, “ChatGPT will make us obsolete, and most likely kill us.”

But what is ChatGPT? I asked ChatGPT that exact same question and according to itself and everything it has learned from us, it answered in a way that felt almost condescending: “ChatGPT is a computer program that can understand and respond to human language. It uses a lot of information it has learned from reading many books and websites to understand what people are saying and give helpful answers. It’s like having a really smart and patient friend who can answer all your questions!”

Humanity has been subjected to several blows to its ego throughout history: we share our ancestry with apes, God is dead, and geocentrism was a product of our own self-absorbed perceptions. Our naive, pompous selves have had to repeatedly acknowledge our existence as trivial. I can only imagine what it must have been like to gain awareness of my inconsequentiality at a time when heaven was a guarantee and a deified star revolved around me and me alone, but I’m pretty sure it’s close to how I felt the first time I typed a prompt into ChatGPT and a fully-fledged paper with MLA citations materialized on my computer screen.

And now, here is a confession: I was not the author of my final paper for one of my classes last semester; ChatGPT was. This did not constitute a violation of the honor code, however, as the requirements for the paper were fairly simple: be clear, be interesting, be concise, and you may use an AI writing tool to help you write it. Professor Miller has since incorporated ChatGPT into his seminar “The Economics of Future Technology,” and even encourages its use. Since “I” wrote that paper, the use of ChatGPT has exploded and become so widespread that I cannot escape overhearing conversations about it on campus.

Several times I have witnessed nervous students using ChatGPT, slightly turning their laptop screens away, secretive and arguably ashamed. When I asked Professor Miller what colleges should do, he said, “Colleges aren’t organized that way. There is going to be student demand for it, though; it’s going to depend on the professor.” The fact that many students are using ChatGPT is an open secret, and it is hard to say whether the secrecy stems from strict adherence to an academic moral and ethical code or from our own collective narcissism.

Throughout our conversation, Professor Miller mentioned several times that “the implementation of ChatGPT is going to get better and better, until it kills us.” He was also optimistic, however, about its pedagogical applications, and said, “It’s going to improve educational outcomes to the point where it exterminates us all. Educational theory says the best way to learn is through one-on-one tutoring. To ask questions to an expert, ChatGPT has that potential. The educational outcomes would be fantastic.”

Language models such as ChatGPT have been implemented all around us. Recently, the Office for Diversity, Equity and Inclusion at Vanderbilt University had to apologize after using ChatGPT to write a sympathy statement following a shooting at Michigan State University. The backlash was expected; the authenticity one would expect from human interaction feels threatened. When I asked Professor Miller about it, he said it seemed like “a reasonable thing to do, we are moving towards that.” He continued, “If you want me to use a calculator so I get accurate results, why wouldn’t I use ChatGPT to accurately get a point across in writing?” Professor Miller also mentioned the use of language models as a companionship tool, saying, “companionship will be important, therapy, things like virtual girlfriends or boyfriends.” What happens, then, when humans become an irrelevant component of language?

Currently, most of my classes could be considered “writing-intensive,” and the entirety of my grades relies on my ability to write papers. I have a choice to make: bend the knee and relent to the optimization machine that is ChatGPT, or cling to my perhaps naïve yet egocentric belief that a language model will never be able to replicate the essence of my voice as a writer. So far, I have mostly been a spectator; it has been easier that way. I have chosen to remove myself from the equation and think of tools like ChatGPT as belonging to a future I am not a part of, rather than commit an act that I feel strips me of my self-importance. But the consequences of ChatGPT are going to define this era; they are relevant, they affect us, and we have to partake in the discussion.