AI Coding Assistants in Physical Chemistry Laboratory

In Fall 2024, we piloted an approach in CHEM 3141L (Physical Chemistry I Laboratory) that pairs large language models (LLMs) with a process called the Design Recipe to make learning more engaging and to foster creativity while students master the course’s challenging material. We used one 3-hour session in the Klein Hall computer lab to run this interactive activity with sections of roughly 12 students, each at a computer. One instructor screenshared key parts of the activity from a podium computer while the other fielded student questions. We gave each student red and green sticky notes to affix to their monitors, indicating whether they had successfully completed a given step (green) or were struggling (red).

To keep students actively engaged, we paired the introduction of LLM coding assistants with a process-oriented approach to writing code and algorithmic problem solving called the Design Recipe. Coding can be daunting, but the Design Recipe offers a structured approach that simplifies the process: it acts as a blueprint for problem solving and code development, promoting clarity and accuracy. This method, detailed in a helpful resource from the University of Toronto, breaks function creation and testing down into manageable steps that keep students engaged even while they use LLMs to assist with writing code. We briefly reiterate these steps here (a short illustrative sketch follows the list):

  • Header: Define the function’s name, inputs (parameters and their types), and the output (return value and its type).
  • Purpose: Write a concise, one-line description of the function’s goal.
  • Examples: Provide concrete examples of how to call the function and the expected results. These become your test cases!
  • Body: Write the code that implements the function’s logic.
  • Test: Run your function against your examples and other tricky cases.
  • Debug/Iterate: If tests fail, revisit your code, logic, and syntax. Add more test cases if needed.
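
To make the steps concrete, here is a minimal sketch of what a Design Recipe scaffold can look like in Python before any LLM prompting. The function, values, and tolerances are a hypothetical example chosen for this write-up, not one of the graded prompts; the point is how the header, purpose, examples, body, and tests fit together.

```python
import scipy.constants as const

def photon_energy(wavelength_nm: float) -> float:
    """Return the energy (in joules) of a photon with the given wavelength in nanometers."""
    # Examples (these become the test cases):
    #   photon_energy(500.0) -> about 3.97e-19 J
    #   photon_energy(250.0) -> half the wavelength, so twice the energy
    return const.h * const.c / (wavelength_nm * 1e-9)

# Test: run the function against the examples above.
assert abs(photon_energy(500.0) - 3.97e-19) < 1e-21
assert abs(photon_energy(250.0) - 2 * photon_energy(500.0)) < 1e-25
```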

This planning is where the Design Recipe pays off. By carefully planning and documenting their functions before turning to the LLM, students maintain control and can verify that the generated code aligns with their goals. The LLM becomes a tool that enhances students’ creativity and efficiency rather than replacing their thinking. We ran this exercise in the Google Colab environment, which has the Gemini LLM integrated to assist with code composition (view a copy of a completed Colab notebook).

Let’s dive into the first specific example we turned our CHEM 3141L students loose on: computing a bra-ket in quantum mechanics. The bra-ket ⟨ψ|φ⟩, representing the overlap between two quantum states, is a fundamental quantity. The activity asks students to create a Python function that calculates this value for two arbitrary states. Students walk through the Design Recipe to decide what the function should be named, how it should be described, what inputs it will take (the bra and ket states), what output it will produce (the bra-ket value), and what type of data each of these is, all before writing any code. They are also asked to think through two or three standard use cases of the function with expected outputs, which they later use to test their code. Working through these steps encourages planning and critical thinking, and it helped students both anticipate the code they should write and craft more precise prompts for the Gemini code assistant.
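
Below is a minimal sketch of what the finished function might look like. The representation of the states is our assumption for illustration: the bra and ket are wavefunctions sampled on a shared grid of x values, and the overlap ⟨bra|ket⟩ = ∫ bra*(x) ket(x) dx is approximated with the trapezoid rule. Students working with finite-dimensional state vectors could instead use a complex dot product.

```python
import numpy as np

def braket(bra: np.ndarray, ket: np.ndarray, x: np.ndarray) -> complex:
    """Return the overlap <bra|ket> of two states sampled on the grid x."""
    # Examples (these become the test cases):
    #   braket(psi, psi, x)   -> 1.0 for a normalized state psi
    #   braket(psi1, psi2, x) -> 0.0 for orthogonal states psi1 and psi2
    integrand = np.conj(bra) * ket
    dx = np.diff(x)
    # Trapezoid rule: sum of panel widths times the average of the endpoint values.
    return np.sum(0.5 * (integrand[:-1] + integrand[1:]) * dx)

# Test against the examples using particle-in-a-box eigenstates on [0, L],
# which are orthonormal.
L = 1.0
x = np.linspace(0.0, L, 2001)
psi1 = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)
psi2 = np.sqrt(2.0 / L) * np.sin(2.0 * np.pi * x / L)

assert abs(braket(psi1, psi1, x) - 1.0) < 1e-6  # normalization
assert abs(braket(psi1, psi2, x)) < 1e-8        # orthogonality
```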

Our students continued with several more types of calculations commonly performed in quantum mechanics, gradually increasing in complexity so that they could see how to leverage earlier pieces of code. We plan to build on this approach in future semesters and are actively thinking of ways to assess student learning and engagement with these activities. Any comments or suggestions on these activities or their assessment are welcome!
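
As one hypothetical illustration of that layering (not the exact sequence from our handout), a later task such as computing an expectation value can simply call the braket function sketched above. This snippet assumes braket, x, L, and psi1 are still defined earlier in the notebook.

```python
import numpy as np

def expectation_value(psi: np.ndarray, op_psi: np.ndarray, x_grid: np.ndarray) -> complex:
    """Return <psi|A|psi>, given psi and the sampled array A|psi>, by reusing braket()."""
    # Example: <x> for the normalized particle-in-a-box ground state should be L/2.
    return braket(psi, op_psi, x_grid)

# The position operator acts by multiplication, so A|psi> is simply x * psi1 here.
mean_x = expectation_value(psi1, x * psi1, x)
print(f"<x> = {mean_x:.4f}  (expected {L / 2:.4f})")
```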