In my first-year writing classes, I typically start with five minutes of freewriting. Since some folks don’t know where to start when they set pen to paper or fingers to keys, I use a random word generator to give students a nudge if they need it.
The fish listened intently to what the frogs had to say.
Today, I realized the random word generator I use also has a random sentence generator. According to the FAQ on that page, the sentences are not computer-generated; instead, the site draws from a database of human-authored sentences. (It isn’t clear where these sentences come from, although the FAQ says it’s possible to “donate” your own sentences to their database.)
Pat ordered a ghost pepper pie.
Next week, a handful of my Framingham State colleagues and I will start planning this year’s retreat for first-year writing instructors. The topic of this year’s retreat will be the impact of ChatGPT and large language models (LLMs) in composition classrooms. Although much of the media coverage of LLMs focuses on plagiarism and cheating, I’m equally interested in the ways tools such as ChatGPT can be used ethically, as a way to kickstart (not replace) creative and critical thinking.
I used to live in my neighbor’s fishpond, but the aesthetic wasn’t to my taste.
Earlier this week, I heard an NPR story in which a college student described the ways he uses ChatGPT as a brainstorming tool in his academic work. In a textual analysis of The Iliad, for example, he used ChatGPT to generate possible thesis statements, then he chose a thesis he agreed with and asked ChatGPT to write an outline. Given that outline, he went back to the text to find illustrative quotes, then he wrote his own paragraphs to flesh out the argument, creating an essay that would be difficult to flag using existing plagiarism-detection tools.
Carol drank the blood as if she were a vampire.
Using ChatGPT to write an entire essay is clearly wrong, but is it wrong to use LLMs to help with brainstorming, organization, or other composition tasks? I had an international student this past semester tell me he uses ChatGPT to correct the grammar of his essays, for example, and I (personally) don’t have a problem with that. Is relying upon spell- or grammar-check (or hiring an editor) unethical? What about tools such as Grammarly and auto-correct? Does every single idea in a given essay have to come from your own brain, or is it okay to use a random word generator or quick Google search to jumpstart your thinking?
The fifty mannequin heads floating in the pool kind of freaked them out.
We encourage students to ask their professors and writing tutors for help, and we know students sometimes ask their friends, roommates, or even parents to read their essays. How many brilliant essays started as thought-provoking conversations where multiple people contributed ideas? Does asking for help or conferring with peers count as cheating? If asking a human for help is okay, why is collaborating with a computer different?
I can’t believe this is the eighth time I’m smashing open my piggy bank on the same day!
When it comes to the impact of LLMs in the first-year writing classroom, I have more questions than answers. I know tools such as ChatGPT are here to stay, and I know this generation of students will use generative AI in the workplace of the future. Given those realities, teaching students how to use technology responsibly and transparently is more helpful than banning technology outright. Sometimes allowing (and admitting) the randomness of real life leads to something creative and curious.
Although I myself wrote these paragraphs (with occasional grammar and usage corrections from Google Docs), I did not write the random sentences in between.
May 24, 2023 at 1:38 pm
Using ChatGPT to write an entire essay is clearly wrong, but is it wrong to use LLMs to help with brainstorming, organization, or other composition tasks?
Well, you’re talking about a spectrum of ethicality here, and if we stick to just extreme cases, then it’s relatively clear, from a teacher’s perspective, whether using AI is ethical or not. I think, though, that using AI/LLMs to help with brainstorming, organization, etc., is going too far. The cognitive (and creative) effort made in organizing one’s thoughts arguably contributes to a useful life skill.
I’m especially wary of anything that deprives people of the chance to think because I live and work in East Asia, where education often amounts to little more than rote memorization and the ability to zero in on the one right answer on multiple-choice questions. One-right-answer thinking is a huge problem in East Asian education. (To be fair, it’s something of a problem in the West, too.) Not teaching thinking—higher-level cognition and not just memorization and fact-regurgitation—is a huge problem everywhere.
In my own work (English-language textbook content creation), we’ve dabbled with ChatGPT, with mixed results. The AI has some obvious limits. We asked it, “Who are you?” some months ago and got nothing coherent. (When I asked the AI the same question just now, it had a slicker answer* that still felt like a bit of an evasion. The AI has been “learning.”) My boss used ChatGPT to try to create an alphabetized list of vocab words, and the list contained goofy errors. When the errors were pointed out, the AI apologized and redid the list, and there were still errors. The thing is useless for a variety of seemingly simple tasks that are easily done by, say, Microsoft Excel. None of this shores up my trust in the AI, which I think is still very much in its infancy. This won’t stop others from anthropomorphizing it and blindly relying on it, though, so I realize I’m just shouting into the wind.
__________
*“I am ChatGPT, an AI language model developed by OpenAI. I’m here to assist and provide information on a wide range of topics to the best of my abilities. How can I help you today?”
May 25, 2023 at 9:08 am
I believe it’s Alexander Chee who recommends a variation on bibliomancy as a writing exercise – closing your eyes, picking up a book within reach, and – eyes still closed – opening it and pointing at a sentence, then using that sentence not to foretell your future but as a writing prompt, just as you’re using those randomly generated sentences above. I suppose if you ended up wanting to turn any of these essays into something you wanted to publish (or turn in for credit), you’d need to find a way to cite that prompting sentence. Otherwise, I’d say – bibliomance away! And now consider – one could start using the random sentence generator to foretell the future!
I also heard the interview with the Columbia student writing about the Iliad with (essentially) a prompt drawn from ChatGPT. His exercise (once begun) was no different from responding to a slightly more directed writing assignment (or prompt on an essay exam). For the type of student who, 20 years ago, was pasting together paragraphs from Wikipedia and the like to ‘generate’ a paper, life has gotten much easier. For the contrarian, it’s gotten more interesting.