From Bing to Google Bard, artificial intelligence chatbots are suddenly emerging as powerful communication tools with impacts across society, but how they work is not broadly understood. Nor are the ways in which AI can be misused.
Four University of Oregon faculty members will chat about the rise of chatbots and artificial intelligence at an upcoming interactive forum Thursday, May 11. The event is hosted by the Teaching Engagement Program; register online.
The panel — which includes professor Colin Koopman, assistant professor Ramón Alvarado, senior instructor Phil Colbert and clinical professor Rebekah Hanley — will discuss how AI systems work, their relationship with big data and emerging considerations for the future of teaching and learning.
The event is from 1 p.m. to 2:30 p.m. at the EMU ballroom. The four faculty members offered their views on the benefits and dangers of AI and ChatGPT:
Ramón Alvarado, Department of Philosophy
Alvarado sees current generative AI, such as large language models, as systems that reproduce patterns, errors included, in the form of text and images. Humans may be able to make sense of what AI generates, but Alvarado wonders what the impact could be if such systems are used throughout society.
Programs such as ChatGPT are built on a stack of AI techniques: technologies that inform chatbots include transformers, deep learning, gradient descent, matrix completion and more. Alvarado has been pondering how such applications are used in scientific environments — including chemistry, biology and physics — but he is also researching ways they could spread further in society.
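For readers curious what the transformer mechanism mentioned above actually computes, here is a minimal sketch, in Python with NumPy, of scaled dot-product attention, the core operation inside transformer models. The toy sizes and random inputs are illustrative assumptions, not details from any model discussed at the panel.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # how relevant each token is to every other token
    weights = softmax(scores, axis=-1)  # each row becomes a probability distribution
    return weights @ V                  # blend the value vectors by those weights

# Toy input: 4 "tokens," each an 8-dimensional vector (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = attention(tokens, tokens, tokens)  # self-attention: queries, keys and values from the same tokens
print(out.shape)  # (4, 8): one updated vector per token
```

In a real large language model, the queries, keys and values are learned projections of token embeddings, and many such attention layers are stacked and trained with gradient descent.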
“Think about the implications to quantitative consumer behavior or microtargeted marketing. How will AI ‘imagine’ our profiles and trajectories?” Alvarado said. “Even more importantly, I am thinking about their use in socially consequential contexts such as policing, insurance and credit risk assessments.”
With AI moving into consequential areas of everyday life, Alvarado wonders what sorts of errors these systems might make, pointing to the image generator MidJourney.
“MidJourney, for example, had a hard time imagining human hands and would give otherwise perfectly rendered human figures strange limbs and extra fingers,” Alvarado said. “What will errors look like in these other contexts? And how will we recognize them?”
Phil Colbert, Department of Computer Science
Colbert is director of the Computer Information Technology minor program. He is also a software consultant and engineer, giving him a stake in two different areas being transformed by AI. To illustrate the power of chatbot AI technology, Colbert used ChatGPT (3.5) to edit his comments.
“I fed the chat AI a lot of my own thoughts and content before the system rendered a final version,” Colbert said, “and the final content does reflect the intent and even much of the wording of what I originally submitted.”
What follows has been edited by ChatGPT:
“The recent emergence and increasing availability of generative and natural language processing AI models present both promising opportunities and potential risks for higher education. While these models can offer constructive benefits such as enhancing language learning, enabling personalized feedback, and automating administrative tasks, they also raise concerns about the ethical use of AI and potential for misuse, such as plagiarism and academic dishonesty. As such, it is important for higher education institutions to carefully consider the implications and ethical implications of using AI in their operations, while also recognizing the potential benefits it can bring.”
ChatGPT had more to say in response to the question, “How will AI impact teaching and learning in higher ed?”
The bot said that one advantage is its ability to operate 24/7 and give students instant access to support, which would be beneficial for distance learners and students who work nontraditional hours. And chatbots could lessen the workload for faculty, even grading assignments.
“As chatbots become more sophisticated, there is a risk that they may be used to replace human instructors and support staff,” the bot said, adding that could have a negative effect on the quality of education and student learning experience. “It is important to ensure that chatbots are used in a way that complements, rather than replaces, human teaching and support.”
Rebekah Hanley, School of Law
Since Hanley was introduced to ChatGPT and similar AI technology, the clinical professor of law has been thinking about what it means for teaching and learning legal writing.
“Students now have at their disposal a powerful tool that can help them research, synthesize, analyze, compose and revise,” she said. “After students graduate, they will, inevitably, rely on AI-powered tools to practice law with efficiency and precision.”
AI can also be a useful resource for underserved students, who may use it for brainstorming and proofreading, Hanley said.
One question that AI technology presents for legal educators is whether students should learn how to use it effectively while in law school or avoid it to ensure that they adequately develop foundational knowledge and skills. To answer that question, educators need to understand how using the technology might facilitate learning while also saving time and effort.
As the Galen Scholar in Legal Writing, Hanley will spend the coming year investigating how AI technology is changing legal writing norms and expectations, both in academia and in practice.
Hanley will help law faculty members consider ways to adjust longstanding policies and practices for students and the school. This includes practices related to the school’s upper-level writing requirement, the three student-run law journals, and more.
Prohibiting all student use of AI technology, whether to avoid difficult conversations or to preserve longstanding academic traditions, may be tempting, she said, but such a decision would be “impractical, short-sighted and misguided.”
“Students are already using it, as are practicing lawyers,” she said. “We need to prepare students not for yesterday’s workplace, but for tomorrow’s.”
Colin Koopman, Department of Philosophy
Student plagiarism and automated education: Those are the concerns Koopman, chair of the Department of Philosophy, has been hearing in response to the rise of AI technology.
The rise of AI technology may have faculty members wondering how to address plagiarism, but that is an issue academia has confronted before. Koopman said we should be confident that software for detecting ChatGPT-written papers will soon be developed and embedded in systems such as the online teaching platform Canvas.
Beyond instructors’ ability to flag full-scale plagiarism, classroom curricula can be designed to make chatbot technology irrelevant. Koopman pointed to assignments such as reflective writing, revisions of written work and exhaustively cited papers.
Koopman is skeptical about using large language models, a type of AI algorithm, for classroom work such as teaching students to write. Existing teaching methods are supported by research. Chatbot technology might one day become a useful tool for helping students learn to write, Koopman said, but implementing it now just because it exists is “pretty thoughtless.”
“Educators and education administrators can do better than to follow Silicon Valley’s thoughtless hype about whatever latest new technology they are looking to leverage,” Koopman said. “We should instead trust ourselves as educators and foster in each other the respect for our teaching practices that our work deserves.”
—By Henry Houston and Anna Glavash Miller, College of Arts and Sciences