Writing robot

Yesterday at a Babson Writing Program meeting, my colleagues and I had an engaging conversation about AI-generated writing in the college classroom: an ongoing discussion we’ve been having since ChatGPT began dominating headlines.

At this meeting, a colleague shared a news item about a ChatGPT-generated email that Vanderbilt University sent in response to the mass shooting at Michigan State. I wasn’t surprised that a university administrator had sent a bot-generated email that included a tag marking the text as AI-generated: I’ve always suspected administrative responses to tragedy are more canned than sincere. Instead, what struck me about this story was how perfectly the bot nailed administrative condolence-speak. The email wasn’t convincing because the bot sounded human; the email was convincing because this sort of communication always sounds robotic.

After yesterday’s conversation, I spent some time playing with ChatGPT. Inspired by the assignment my Research Writing students are currently working on, I asked the bot to generate a discourse community analysis of the Brookline Bird Club, an example of a discourse community I’ve mentioned in class.

It was thrilling and a bit frightening to see the words quickly appear on the screen: the bot types much faster than I do, and without pauses to think and sip tea. But while the bot accurately described what the Brookline Bird Club is and how it fits the general parameters of a discourse community (namely, a group of people with a shared goal or interest who use a shared lingo to communicate), the “analysis” the bot generated was exactly the kind of bland, obvious generalizing I don’t want my students to produce.

Yes, I already know the BBC is a discourse community: that’s why I suggested it as a topic. What I want an analysis to show, then, is how well the BBC uses specific examples of discourse to create community and further group goals.

That, of course, is a thinking-style question: exactly the kind of thing a bot isn’t good at. What I want my human students to do in their writing isn’t simply to churn out text: the bots are already better at that task than we are. Instead, I want students to do the things that robots can’t: namely, think deeply and critically.

In a discourse community analysis, I don’t want students to repeat the same bland platitudes about how groups bring like-minded people together. Instead, I want students to look closely at specific examples of discourse to determine how well those texts work.

Do these texts bring community members closer together? Do they encourage lively discussion, a true sharing of ideas, or do they stymie or even quash communication? Do these texts encourage healthy dialogue among all members or only between a few? In short, do the utterances produced by this community encourage actual human interaction, the kind of communication we all crave, or do they serve to turn us all into robots: mere cogs in the discourse machine?

When they work, discourse communities (like lively meetings with colleagues) make us feel more human, not less. ChatGPT might be good enough to write an insipid analysis or canned-sounding condolence, but it isn’t smart enough (yet) to know whether it is wise to click “Submit.”