What to Believe about ChatGPT

I believe in science. Trust the science.

What do these liberal dicta mean? For most people they immediately mean “anti-Trump” in some form or fashion. They might also mean “not a conservative.” These phrases are polarized and perhaps worse than useless – harmful without their nutrients, like Wonder Bread rhetoric, stripped of the fiber of their being in order to gain attention at a lower price point.

These phrases are fantastic if made a bit more complex than “I’m intelligent and you are not, O sad conservative!” If we look at what we are meant to trust and believe, we get a shocking insight: scientists are not results-driven, at least not in the way we imagine – they are not after the truth per se, but after everything related to the truth or assumed to be bound up with it.

Scientists love failure and success in experimentation and research because it all adds up to . . . well, something. Something is better than nothing and definitely better than assuming or guessing. Science’s truth is not the outcome that works; it’s the process that produces all outcomes.

This is all, to quote President Obama, above my pay grade. I’m a humanities professor, but I can’t help thinking about “trust the science” in relation to the big threat of the hour, ChatGPT. My colleagues across the country, in the humanities and in any subject that uses writing as a form of assessment or grading, bemoan ChatGPT as a criminal. It’s a grifter, it takes and takes and never pays anyone back; it’s a home invader, knocking at the door for help, then keeping you prisoner in your own space while it robs you; it’s the unscrupulous politician, sounding great and deep but never able to act on anything it utters. It just utters and utters and can’t seem to understand that its bloviation has unintended meanings all over.

Kenneth Burke reminds us that humans are “nervously loquacious.” I feel ChatGPT is a very good reflection of that. It isn’t looking at data so much as at human discourse, siphoning it and churning it and re-presenting it (and representing our collective utterance quite well, I might add). AI isn’t anything other than a skimming off the bottom, middle, top, and sides of human discourse – like this post – that hovers as a magnetic field on some disk in some server miles from where you and I are.

The solution to ChatGPT’s criminality is to remove what it profits from. Students believe that writing a paper is a results-oriented task: that is, their whole lives as students they have been assessed on the final paper. They are deeply worried about getting it wrong. It also doesn’t help that most professors grade grammar, syntax, punctuation, and even word choice, arguing that a particular vocabulary is necessary for a “college paper.” These same people turn around and decry the colonial and classist university curriculum, oddly enough.

Approaching students as I have, with “I really just want your opinion on these readings,” isn’t good enough to stop them from associating with the criminal ChatGPT. It offers a perfect product, so long as you double-check the sources it cites and take out the self-deprecations (“As a language model AI, I am not able to . . .”). I have been using ChatGPT quite a bit to see how it would put together phrases about theories, relate them to one another, and what examples from sport it would use for an argument about theory, sport being something I am notoriously bad at knowing anything about. These moments help me with my process, with what rhetoricians call invention and arrangement, and sometimes style. They are not a substitute for what I write and what you read, but they help out a lot – like a KitchenAid or a bread machine, two things my mother couldn’t praise enough when she acquired them. They didn’t do the work for her; they assisted her immensely in the work she enjoyed, and allowed her a “discount” to try new, complex things to bake.

Humanities professors freaking out about ChatGPT need only return to the nutrient-full phrasing of “trust the science.” Perhaps we can tell our students to trust the method – the putting together of ideas about the readings or the class need not be perfect, and an essay (from the French for “to attempt,” by the way) that fails can still be a possible A. One has only to communicate, to try to get across one’s thoughts and feelings about a complex text. That is challenge enough. Making the course more a process of engagement than an evaluation of what comes out will put ChatGPT in its place: a resource for invention that sometimes helps you figure out what you want to say about something, or gives you examples your audience could connect with.

Taking ChatGPT seriously means giving up assessing and grading the final essay and becoming much more interested in process. This is the gift of ChatGPT: it gives us an immediate reason to change our too-comfortable and often questionable pedagogy. Why is writing a final essay in isolation, for a professor to read and evaluate, the best way to understand understanding? Does it even make a top 10 hermeneutic list? Breaking that assignment up into various reaction and reconsideration parts helps students see that essays are, like their French origin, tries. If we emphasize that not just in our kind words to nervous students in office hours, but in our rubrics, we will find the threat of ChatGPT to be no more devastating than a student talking to others about their writing ideas.

Focusing on the final essay as the thing, instead of believing in the process regardless of the outcome, is the reason ChatGPT causes us so much trouble. Change the focus, change the meaning. We’d like our students to be as loquacious as the rest of us, and to understand that we are all scribbling away nervously at the edge of an abyss.
