Thomas Chan holds little envy for his 2-month-old son. His son's world will be different from ours because of technological advancements now in development. Chan is an assistant professor of psychology at CSUN working to improve the cognitive function of older adults through artificial intelligence.
Teaching since 2006, Chan welcomes AI in all its glory, particularly ChatGPT. OpenAI released the chatbot for public use on Nov. 30, 2022, and it has since gained more than 100 million users, as reported by Reuters.
ChatGPT is a tool for simplifying tasks, a function that appeals to college students. Those who abuse it make plagiarism almost undetectable. At the extreme, AI fuels fears that humans themselves can be replaced.
I spoke with Chan about the role of ChatGPT and AI in higher education. As he puts it, “They’re our greatest partners yet.”
Q: There has been a lot of talk about ChatGPT in academia, particularly among professors hesitant to engage with this new technology. I found it interesting that you felt differently. Why is that?
A: “Yeah, especially in higher education. A lot of people were hesitant to engage with technology, as we saw in the pandemic. I used that time to really dive into it.”
“I don’t blame them for being reluctant to use AI technologies. It’s like their balance is being disrupted. I’m a little biased to the benefits because I work with AI. However, some people are like, ‘Oh wow, this is Terminator, you know, Skynet kind of thing.’”
“It’s not the end of the world and it’s not going to replace everybody’s job.”
Q: A major argument against AI centers on its potential to replace people, which fuels some level of fear. Can you speak more on this?
A: “Yeah, especially when you think about copywriting or editorial writing.”
“You could tell ChatGPT to write you an article with five paragraphs and it will do it. However, there is no human flair, which in my opinion, can never be replaced.”
“So, in terms of replacing people, you have remedial stuff like transcribers. Back in the day, we would record transcripts of people and send it for edits. That one process would cost a lot. So those jobs are going to be replaced.”
“It becomes more about evolving the self.”
Q: So, would you agree that more students should engage with AI technologies, with the intention of simplifying a task?
A: “Yeah, a thousand percent. AI can provide wrong information. It’s up to the individual to avoid the propagation of wrong information.”
“ChatGPT is just a tool. A tool is anything a human uses to get something accomplished, but when you partner with something such as artificial intelligence, it’s like combining their information and our information to come up with the decision.”
“That is where the future is moving to. So, training students that way is essential.”
Q: Can you speak more about the negative effects of ChatGPT on students?
A: “I have seen ChatGPT reinforce lazy students to be more lazy. For example, they can ask ChatGPT to write them a whole paper on Alzheimer’s, right?”
“Moreover, it reinforces professors to stick in their ways. Instead of changing their assignments, they punish students for using technology. Fostering laziness is harmful to the learning process in higher education.”
Q: How do you think plagiarism affects professors who have no expertise with ChatGPT?
A: “Yeah, it’s a great question because it’s hard.”
“When I see a student cheat in a way that I’m like, ‘Oh, that’s pretty smart.’”
“I love getting challenged. Being at CSUN is all about inviting challenges, but this is hard for many professors.”
“I have been able to figure out when a student plagiarized. A really great answer is a red flag. Especially if it’s too technical. I know how students answer, and if I see something really inconsistent, something is up.”
Q: As you mentioned, ChatGPT is changing the game. How do you think AI will affect the future of academia?
A: “Interacting with this technology is undoubtedly the future and the future is now. If you don’t know how to use ChatGPT or other AI-infused technologies, then you’re already behind.”
“Knowing the building blocks, such as the languages it scraped and processed, is the first step. We have to know the training data that goes with it and ask how we interact with that. Then, ask if the answers I am getting are biased. I have to understand the biases that come from it.”
Q: Would you say that, in the coming years, the data used in ChatGPT will become more diverse?
A: “Yeah, it has to be. When newer things are being incorporated, the model is going to change and become more diverse. ChatGPT draws on information from 2021 and below. Just think about research, right?”
“A lot of research is done with more rich and white affluent populations by white researchers. The data is a little biased because that’s what science is right now. It’s only in recent years that it has really changed and diversified.”
Q: Any final thoughts?
A: “Yeah, I think we have to just be aggressive for CSUN. To really find a way, perhaps through a course, to train our students to interact with AI. We don’t have to train students how to use a computer, right? They grew up with it. Now, AI is starting to develop, and we have to train our students or they’ll lag behind once they enter the workforce.”