Choosing Our GPT Adventure
Damien Williams
In many of my courses I use a version of the “Choose Your Own Adventure” (CYOA) assignment structure: the instructor creates a grading model that adds up to either a set point total or a 100% value, and then offers a range of potential assignments which students can choose from and combine to reach that value.
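To make the structure concrete, here is a minimal sketch of how such a grading model might be represented, assuming a hypothetical 100-point target; the assignment names and point values below are invented for illustration and are not drawn from the original course documents:

```python
# A minimal sketch of the CYOA grading model described above.
# All names and point values here are hypothetical.

TARGET_POINTS = 100  # the "set point or 100% value" students must reach

# Each option maps an assignment name to a point value;
# students pick any combination that reaches the target.
MENU = {
    "ChatGPT Output Evaluation": 25,
    "LLM Power and Cost Evaluation": 25,
    "Reading Response Series": 30,
    "Final Reflection Essay": 45,
}

def check_plan(choices: list[str]) -> None:
    """Report whether a student's chosen assignments add up to the target."""
    total = sum(MENU[name] for name in choices)
    status = "meets" if total >= TARGET_POINTS else "falls short of"
    print(f"{total} points: this plan {status} the {TARGET_POINTS}-point target.")

check_plan(["ChatGPT Output Evaluation", "Reading Response Series", "Final Reflection Essay"])
# 100 points: this plan meets the 100-point target.
```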
My very first GPT/LLM-related CYOA assignment, for my classes on “Philosophy of Technology and Disability” and “Disability, Technology, and AI,” asked students to generate a ChatGPT output on topics related to the class, with a full submission consisting of the prompt used; the original output; their corrections of the output; and, most importantly, their reflections on the process. However, when it was revealed that OpenAI was grossly underpaying Kenyan content moderators to correct ChatGPT’s outputs, I felt that I needed to change the framework of the assignment itself, a change which would also provide an opportunity for class discussion.
The new version read as follows:
ChatGPT Output Evaluation…:
To avoid directly contributing to the operations of OpenAI’s GPT system, find an example of an existing ChatGPT output related to one or more themes or topics of this course and then comment on and correct the output with specific references and citations from our class readings and lectures.
Your full submission will consist of:
- A link to the original output and prompt;
- Your corrections of the GPT output;
- And your reflections on the experience (also using citations and references to the course).
Additionally, in light of the then-new conversation around LLMs’ power consumption, I created this assignment:
[LLM] Power and Cost Evaluation…:
Taking a cue from works like Bender et al.’s “Stochastic Parrots…,” find whatever data you can about the power consumption involved in training and using web-based LLM tools such as ChatGPT, Galactica, Codex, LaMDA, or others, and generative art tools such as DALL-E 2, Midjourney, or Stable Diffusion, including the costs involved in the work of human content moderators. Write up your evaluation of that data, as well as your reflections on the process of searching for it. What can you find, what can’t you find, and what might explain the difference?
Unchecked automated algorithmic applications are very often created from within, and in service of, systems of power, punishment, and profit motives, meaning they can easily act as force multipliers for the worst behaviours of people seeking those goals. And while many will argue that it is only by playing with and testing these technologies that we can truly come to understand them, neither “testing” nor “play” is a neutral category, either. That is to say, these tools could be beneficial, but they will not be predominantly so until we change the motivations behind them and make and use them in ways which safeguard the most marginalized and vulnerable.
Today’s LLMs are trained on unethically sourced and prejudicially biased data, and they run on infrastructure that consumes vast amounts of natural resources. But they could, and should, be made differently. Built on sustainable and renewable principles, trained carefully on ethically sourced data, and used in ways which acknowledge the systems’ realities and actively encourage critical thinking, “AI” tools might help far more than they harm.
We can then carry these new understandings of “AI” across a whole semester’s readings and discussions, building a classroom culture of honest, good-faith engagement rather than feeding instructors’ suspicions and students’ uncertainty over what counts as “acceptable” use of “AI.”
Today’s plethora of “AI” tools are deeply value-laden, and as educators we must actively work to understand for ourselves, and to teach our students, how and why that is. We know that uncritical use of “AI,” and even some of the currently proposed remedies to it, can do real harm; but we also know that these tools can be engaged and built in very different ways.