The Racquet Press sat down with Brian Kopp, Associate Chair of the English Department, and Weixu Lu, Professor of Communication Studies, to ask questions about Artificial Intelligence (AI), and ChatGPT in particular, as it relates to academic integrity and integration into higher education.
Both professors are involved in an AI Community of Practice, a group of professors and faculty who meet monthly here at the University of Wisconsin-La Crosse to discuss AI and its effects on the higher education community, as well as how educators can use programs such as ChatGPT to their advantage.
“Tell me a little about yourself and your position here at UWL and how AI has affected your role.”
Lu: I am a Professor in the Communication Studies department, and I started in 2019. My research usually focuses on digital media, so I teach media-related courses as well as some of the digital media courses. I personally use generative AI in my teaching, daily communication, and my own research, so I have a little bit of experience. I don’t know too much, but I have a bit of experience, and I have been talking with other faculty members through the Community of Practice.
Kopp: I am an English Professor, and I teach a variety of courses in the department. This semester I am teaching a senior capstone course on AI in writing, so it involves a lot of thinking about ChatGPT and similar applications.
“Have you had any issues with students using ChatGPT to cheat in your courses?”
Lu: I think the focus is shifting very quickly from students cheating with ChatGPT to how we can integrate it. There are essentially two camps: one camp really hates students using ChatGPT, and the other thinks that ChatGPT and similar technologies are something we need to accept and integrate into our instruction. For example, I don’t mind if a student sends me an email generated by ChatGPT, but I know many professors hate it. What I hate more is when a student sends me a one-sentence email saying, “Hey, what is this?” I don’t want that. I would rather a student send me something generated by AI with relevant content.
Kopp: I have heard reports of students using it to not do assignments. But I’ve also heard a lot of concern about how to use it and what it’s okay to use it for. Tesia Marshik in Psychology, Christopher McCracken in English, and I did a survey last semester and found that there is a lot of apprehension among students and faculty about over-reliance, and there are concerns about undercutting learning. There is some interest in it, but more anxiety. It seems to me that that’s another argument for why we need to talk about it. Some of the conversations we’ve had have been focused entirely on academic integrity and cheating or not cheating, but I think the uses of AI really go beyond that. Students may be using it the same way they use Wikipedia or Google, and even as an assistant. Some report that it helps them understand something, while you have some people trying to ban it, and I think there is confusion about what it is that we’re banning. Where is the line between using it to enhance learning and using it to undercut learning?
“Do you think ChatGPT can be a helpful tool for faculty and staff as well as students?”
Lu: One thing I need to emphasize is that I think AI is a marketing term. Whether it really is intelligence or not, we are not 100% sure; these systems seem smart. But the point is that I treat it as what it is, which is a large language model, and what it is good at is producing human-like language. So I use it more like an assistant. The things I feel could be done by an assistant, I usually offload to it. Sometimes when I write assignment instructions, I’ll say, “Here are some key instructions, can you help me expand on it and make it more readable?” Especially as a non-native English speaker, it has been very helpful.
Kopp: I know faculty are using it in some cases. I have no idea how many, but some faculty are using it as an assistant to help plan a class or assignments and so forth, and certainly to generate images to put in slides. If companies like Microsoft have their way and products like Copilot get fully rolled out, it’ll be integrated into many software packages. Adobe Acrobat already has AI functionality, and Microsoft 365 has a certain level of AI processing being added to it. When we get to the level of Copilot, it’s a higher level of integration, so I think generative AI like ChatGPT will, at least to some degree, be integrated into our normal learning and teaching workflows.
“What are some of the cons to ChatGPT?”
Lu: The current generation of AI is really good at generating mediocre content. From an expert’s perspective, we just feel that we have to go in and fix many things. Especially in my own work, I feel this increasingly: it seems like I can offload work to the AI, but in fact, I have to go in and fix everything. I don’t usually use it to generate content from nothing. For some assignments, I like to give very extensive feedback, and sometimes I can come off as very harsh, so I can give ChatGPT a training session and tell it, “Here’s what you need to do, and here’s the assignment prompt. Every time I give you a comment, please help me to rephrase it so that it reads more supportive.” But sometimes it’s just so dumb I have to go fix it. For example, if it is using a word too many times, I can tell it to never use that word, but sometimes it forgets. Somehow it just forgets. I think there is something limiting this, not from the technological standpoint, but from the business end. It’s definitely limited by cost and how much they can charge users.
Kopp: I think there is a real risk in the classroom, within a teaching and learning context, that it would completely undercut learning. It isn’t simply that students wouldn’t do the work; it’s that they wouldn’t learn the subject. So to me, that is concerning. You can literally take any assignment and plug it into ChatGPT and get a response instantly, and there’s kind of an access and equity issue there: not all students can pay for a premium account, but those who do, and who are really good at using ChatGPT, can get some really impressive things from it. While I think that being able to produce the assignments is a problem, to me, the greater risk is the loss of learning opportunities, and those things will have all sorts of ripple effects, like when people don’t understand the basics of their discipline or their subject. In order to get the best usage out of ChatGPT, you need to have some sort of knowledge or expertise, and if you’re using ChatGPT all the way through, then you’re not going to develop that knowledge. People are focusing on cheating, but my focus is really on learning. Related to that, there is this uncertainty: having a tool like ChatGPT but not knowing whether you can or cannot use it, or whether you can talk about using it or not.
“Do you have anything else to add?”
Lu: One of my most persistent arguments is that we are actually facing a dilemma. Now, it seems like these lower-level tasks can be replaced by AI. For most students, I don’t mind if you use AI to fix up your writing; use that. I don’t even mind if, say, certain parts of your essay are AI. It doesn’t really matter as long as you are understanding the basic concepts. The problem is, should we move to assess a higher order of critical thinking? That’s a challenge because of a trend in higher education that has been going on for decades: especially since the No Child Left Behind Act, there has been a notable decline in critical thinking skills coming out of primary and secondary education. Most of this was not the fault of the students, but a product of their environment. So now the challenge is that we are starting to change our way of teaching to accommodate what students are used to, while slowly training students to embrace critical thinking and creativity if they didn’t get that training in high school. However, if AI raises the threshold, do we make the jump? Students may feel too much pressure. We are going to see this dilemma where we should be teaching students much more challenging material, but can we, and should we, especially with so much access to AI?
Kopp: I think we ignore AI at our own peril. We need to be able to articulate what it can do and what it can’t do. We need to be able to articulate what we can do with it and what we can’t do with it, and in some cases, argue and push against it. None of those things will happen if we don’t engage with it. So I think getting to know it, learning about it, is an important thing. I also think having conversations, frank conversations, with students and faculty about it is a really important first step. A lot of people aren’t really aware of how AI works, how they are interacting with it, and the results they get from it. So I think there’s a need to put our attention on what is obviously an extremely important technological development that will affect most students in some way during their post-graduate lives, and is affecting some students right now. To me, there’s a kind of urgency to talk about it.
There is an AI Summit put on by the Center for Advanced Teaching and Learning (CATL) happening here at UWL on Fri. May 17, from 9:00 am until 2:00 pm in the Great Hall in Cleary Center. At the Summit, presenters will cover topics such as teaching with AI in higher education, AI-driven assessment in universities, student uses and perceptions of AI, and many others.
If you have any questions about the AI Summit, feel free to reach out to CATL at [email protected].