Approaching AI Critically
Min Jiang
Critical Thinking, Ethics, Creativity, Plagiarism, Course Development
This story captures how my understanding of AI has evolved and how my use of and approach to AI have adapted accordingly. I am a Professor of Communication Studies in the College of Humanities, Earth & Social Sciences (CHESS). My research lies at the intersection of technology, policy, and geopolitics, with a focus on China and the Global South (especially the BRICS countries), and I teach graduate and undergraduate classes in the Media and Technology Studies concentration.
A few years ago, before the popularization of ChatGPT, I came across Kai-Fu Lee's 2018 book AI Superpowers: China, Silicon Valley, and the New World Order for my research on US-China AI competition. This accessible book provided a forward-looking industry take on AI development and competition in the coming years. Several key academic texts on AI emerged shortly after, such as Stuart Russell's Human Compatible: Artificial Intelligence and the Problem of Control (2020), The Oxford Handbook of Ethics of AI (2020), and Frank Pasquale's New Laws of Robotics: Defending Human Expertise in the Age of AI (2020). These texts helped me develop a more informed understanding of AI. If we don't understand what AI is, it's hard to formulate a rational approach to it in research or teaching.
After ChatGPT emerged in 2022, plagiarism quickly became a central concern in academia. Naturally, AI-assisted plagiarism became an integral part of first-week class discussions about plagiarism and creativity in both my graduate and undergraduate classes. Students work through case studies to draw the line between legitimate uses of AI in assignments (e.g., developing ideas) and illegitimate ones (e.g., copying and pasting or lightly revising ChatGPT outputs). Moreover, we discuss concepts of voice and personality in creative works as well as sourcing and citation in writing. The role of anti-AI-plagiarism software, such as Canvas's SimCheck, also factors into our discussions. Given new studies suggesting that users who become overly reliant on AI lose critical thinking abilities, I plan to integrate such popular science readings into the lesson plan on plagiarism and creativity.
After ChatGPT gained popularity, I experimented with classroom projects that integrated an AI component in spring 2023. In an introductory undergraduate class on media and technology with 100+ students, students formed small groups to tackle an AI-related topic. Some chose to make a video about the potential impact of AI on art and artists; others elected to explore the gap between human creativity and AI creativity and compare the two. Students then submitted and presented their work at the Critical Media Literacy Collaborative (CMLC) conference hosted by Atkins Library at UNC Charlotte in spring 2024 to share their experiences.
In my own work, AI has also occasionally made an entrance in quite unexpected ways. As my co-authors and I tried to develop a book cover for our project Digital Sovereignty in the BRICS Countries: How the Global South and Emerging Power Alliances Are Reshaping Digital Governance, we struggled to find the right cover image. Pressed to come up with a drawing of what we wished to see, I turned to DALL-E, an AI image generator, to produce a prototype based on the team's inputs and specifications. In the end, the design team at Cambridge University Press put together a book cover based on our DALL-E prototype. As this example shows, AI can be a useful tool for prototyping, a capability that was not as accessible before.
I have also been writing and thinking more about AI, politics, and geopolitics lately. After joining the Digital Futures Task Force of the Planetary Politics initiative at New America, a DC-based think tank, I have been developing a thought piece on AI and algorithms for New America, linking my previous research on search algorithms to AI technologies and politics. It is clear that older patterns of economic, social, political, and cultural bias will persist into the age of AI, where a degree of regulatory intervention is needed.
Just as racial and gender biases were deeply entrenched in search, as Safiya Noble detailed in Algorithms of Oppression: How Search Engines Reinforce Racism, and socioeconomic biases were rampant in social welfare programs, as Virginia Eubanks documented in Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, such inequalities continue to seep into AI today, as Joy Buolamwini argues in her recent book Unmasking AI: My Mission to Protect What Is Human in a World of Machines. At a recent AI panel on campus, faculty in Arts and Architecture shared similar concerns about the overrepresentation of certain artists and styles in AI training data. Biases in training data, design logics, political censorship, and marketing priorities will continue to shape future AI applications and should be a focal point of research and human intervention.
As I develop new courses with a larger AI component, I envision drawing on different topics and traditions:
- Historicity: Plato’s Phaedrus features a prescient warning by Socrates of writing as a corrupting technology. We can link that ancient conversation to the present moment of AI.
- Ecology: Having read Atlas of AI by Kate Crawford, I believe it is critical to invite students to concretely examine the layers and inputs of AI and to understand its environmental impact, including estimates that an AI query can consume 23–30 times the energy of a conventional search.
- Monopoly: I teach issues of monopoly in media and technology classes. The recent Chinese AI upstart DeepSeek can serve as a case study for discussions of monopoly and innovation.
- Regulation: Given my own work on technology policy, I plan to have students read and examine US, EU, and Chinese regulatory frameworks and approaches.
- Geopolitics: Given the current AI race and the broader competition between China and the US, I also plan to include a discussion of AI and geopolitics in my new courses.