How does ChatGPT change teaching in college? A view from the brain and the teaching of Psychology 101.

December 12, 2023 at 3:35 PM

By Arun Venkitanarayanan UA’26 and Jim Stellar

This issue popped into our thinking last spring when AV took an introductory psychology course that JS was teaching. AV is a cybersecurity major and volunteered to work with JS on this issue in the coming term; hence this blog post. We are also running an experiment this Spring (2024) with a company, InStage, to repeat a pilot in which students took an alternative to the classic Monday short quiz in this course: instead of writing a brief essay on the learning management system, they talked with AI-powered avatars about the same question, e.g., "What is the role of prediction in both classical and operant conditioning?"

So, where to begin? Let’s start with AI.

History: Over the past few years, Artificial Intelligence (AI) has become more prevalent in our society, especially since the launch of ChatGPT in November 2022. In essence, AI refers to any computer system that can "think" for itself beyond what is explicitly programmed, which makes it very useful for analyzing data and finding complex patterns. ChatGPT is a particular type of AI called a large language model (LLM). LLMs are built on a neural network architecture called a transformer, which is well suited to language processing: transformers link pieces of text together and predict how words relate to each other. LLMs such as ChatGPT are trained on a large amount of publicly available data, and that data is the basis for the responses they give. While these responses may appear human-like at first glance, on closer inspection one can often tell something is off. LLMs are not human and therefore do not understand language the way we do. At the end of the day, their output comes from linking statistically related words together, not from understanding.
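To make that "prediction, not understanding" point concrete, here is a deliberately tiny sketch in Python (our own illustration, not anything ChatGPT actually runs): a bigram model that picks the next word purely from co-occurrence counts in a toy training text. A real transformer is vastly more sophisticated, but the spirit is the same.

```python
# Toy illustration (NOT a real transformer): a bigram model that "predicts"
# the next word purely from which words followed which in its training text.
from collections import defaultdict, Counter

training_text = (
    "the dog chased the ball the dog caught the ball "
    "the cat watched the dog"
).split()

# Count, for each word, which words follow it and how often.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training text."""
    followers = next_word_counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "dog" -- the most common word after "the"
```

The model produces plausible-looking continuations without any notion of what a dog or a ball is; scaled up by many orders of magnitude, that is the flavor of what an LLM does.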

While today’s AI tools, based on artificial neural networks, are not yet as powerful as the roughly 86 billion neurons of the human brain, they still present challenges in academia. Over the last year, K-12 schools and universities have been scrambling to define ethical-use policies for AI. Many institutions still have no clear AI policy, and the definition of “cheating” varies from professor to professor. Even so, it has become abundantly clear that AI is here to stay: it will be part of students’ future workplaces and has the potential to increase productivity greatly. At the same time, we must ensure that it does not detract from students’ learning. A good way to think about AI tools is to compare them to calculators. When calculators first became popular, educators had to fundamentally change their assignments to focus more on problem-solving and conceptual understanding, acknowledging that calculators could handle routine calculations. Similarly, we believe that if educators adjust their assignments to work with AI tools, they may actually bolster student learning.

One way educators can do this is by using AI itself to create new assignments. In our Spring 2024 Intro to Psychology class, we will be testing a new AI-powered tool for quizzes, but we are also experimenting with having teams of psychology students use ChatGPT to explore answers to simple questions, like (but not the same as) the one mentioned at the opening. The teams will then discuss whether the answer is technically correct and post it for an active-learning, Zoom-group discussion on Thursdays that follows a Tuesday interactive lecture on the same topic. That structure, which has been part of this Psychology 101 course for a while, completes a week on a topic (e.g., Learning, Social Psychology, Clinical Psychology, Behavioral Neuroscience), and those weekly topics compose the entire semester-long course. Our general hope is to engage the students in more active concept learning. Oh, and we only use essay tests on the two midterms and the final, to keep that learning at the conceptual level rather than the recognition level encouraged by multiple-choice tests.

The Student Perspective:

Over the last year, AI tools such as ChatGPT have become prevalent in universities. While some students have unfortunately used this technology unethically, many try to use it to supplement their learning. I (AV) believe this is a game changer. Back in 1984, the educational psychologist Benjamin Bloom examined how one-on-one tutoring affects academic performance. He found a two-sigma (two standard deviation) improvement in students who received one-on-one tutoring compared to those who did not: essentially, your “average” C student suddenly turns into an A student. The problem was that in 1984 there was no practical solution to this “Two-Sigma Problem”; it was simply not feasible to give every student one-on-one tutoring. Almost 40 years later, have ChatGPT and other AI tools changed this? Only time will tell, but right now things certainly seem to be heading that way. At the least, ChatGPT is very useful in helping students find the resources they need and in summarizing them to improve understanding. It is there for students when they need a quick but quality answer. Unlike a search engine such as Google, it emulates a tutor because it can be personalized to a user. Furthermore, ChatGPT is already being used in industry, and our students need to know how to use it.
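To see why a two-sigma shift is so dramatic, here is a quick back-of-the-envelope check using a standard normal curve (an idealized model we are assuming for illustration; real grade distributions are messier):

```python
# How far does a two-standard-deviation boost move the "average" student?
# Idealized model: class performance as a standard normal distribution.
from statistics import NormalDist

grades = NormalDist(mu=0, sigma=1)      # standardized class distribution

average_student = 0.0                   # the 50th-percentile student
tutored_student = average_student + 2.0 # shifted up two standard deviations

percentile = grades.cdf(tutored_student)
print(f"{percentile:.1%}")              # about 97.7%
```

Under this simple model, the formerly average student now outperforms roughly 98% of the original class, which is exactly Bloom's C-student-to-A-student claim.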

Despite this, what ChatGPT and other LLM-powered AI tools will continue to lack is the human touch. While they can produce seemingly sophisticated responses, they have no real understanding or even basic common sense. ChatGPT is certainly not perfect; even OpenAI, the company behind ChatGPT, warns users before they start typing prompts: “ChatGPT can make mistakes. Consider checking important information.” Ultimately, while ChatGPT has some characteristics of a teacher, it is not a substitute for one. Unlike a real person, it cannot provide meaningful guidance or mentoring. This may change as AI continues to develop: Khan Academy founder Sal Khan, for example, now describes in a TED talk an AI-based mentoring application called Khanmigo that emulates a tutor. For now, AI tools like ChatGPT will straddle the line between a tool and a tutor. While it seems unlikely the Two-Sigma Problem will be fully solved, perhaps we can expect a one-sigma (one standard deviation) shift in student performance.

Along with chatbots like ChatGPT, another type of AI we see having a huge impact is speech-based AI, like the platform we are using for our upcoming experiment, InStage Practice. InStage Practice allows students to talk with humanoid avatars in custom simulations. While this technology has mostly been used to help students prepare for public speaking or job interviews, we believe it can be a great resource in the classroom. Last semester, when JS ran the pilot experiment with InStage, he observed that students showed more “orbital thinking” in their responses than students who took the standard written quiz: students speaking their answers aloud set up a broad foundation for their response and slowly worked their way in toward a specific answer. This resembles how you would explain something to another person, and we think it improves student understanding. AV was part of that pilot, and he can definitely see what Professor Stellar means: when talking with the humanoid avatars, AV found himself almost trying to teach them, and in the process he ended up using orbital thinking.

For our little experiment next semester, there are a few changes. First, students will be randomly assigned to either the avatar quiz or the written quiz rather than volunteering, which should reduce any selection bias present in the pilot study. In addition, InStage Practice has improved since the January 2023 pilot: the humanoid avatars can now respond to students and probe them with follow-up questions to better assess understanding. This is a game changer and, in our opinion, has the potential to significantly improve student achievement in APSY 101.

So our own active learning continues and we will get back to you with the results!
