ChatGPT raises plagiarism concerns; COCC, Bend-La Pine Schools work to assess AI’s impact on learning
Artificial intelligence having quick impact: 'Plagiarism is plagiarism, whether it’s a little or a lot.'
BEND, Ore. (KTVZ) -- A new artificial intelligence chatbot has been growing fast and sparking competition, but with the rising popularity of ChatGPT come several concerns, among them student cheating.
Central Oregon Community College Director of Student & Campus Life Andrew Davis said Wednesday the school is treating this new avenue for plagiarism the same way it has dealt with plagiarism in the past.
"It might be something that’s uniquely created, but it wasn’t uniquely created by the student," Davis said.
The chatbot, developed by OpenAI, can write code, solve math problems, draft your emails, and generate an essay for you, among many other capabilities.
But with the ease this free AI brings comes a major concern: students now have a greater ability to submit work that isn't their own.
“I know that this is something on our faculty’s radar," Davis said.
The college is working on ways to identify if students are submitting work they didn’t create.
"Like anything else, the way we’ll address it is through our conduct process," Davis said. "So if a faculty (member) were to discover a student has used this and turned in work that wasn’t their own, they would have domain over their classroom, so they would assign the appropriate grade, based on an issue of academic integrity.”
I also checked with ChatGPT to get its two cents on the matter. It responded:
“I strongly discourage using me or any other tool to plagiarize or engage in academic dishonesty, as this undermines the educational process and goes against the principles of fair and honest work.”
Davis said: "Plagiarism is plagiarism, whether it’s a little or a lot."
Bend-La Pine Schools Communication Director Scott Maben sent NewsChannel21 a statement, saying:
"ChatGPT is still a very new technology, and we are beginning discussion about its potential impact on student learning. We are currently working with our administrators to review the technology and all potential challenges, as well as new opportunities. "
But cheating isn’t the only concern with this latest AI novelty. Another is its potential to eliminate many jobs.
There’s also the question of how objective the machine truly is.
Last December, a professor at UC-Berkeley asked the chatbot whether a person should be tortured based on their country of origin.
The AI responded to the prompt saying yes -- if the person is from North Korea, Syria, or Iran.
Although OpenAI continually takes countermeasures, biases do slip through.
"Tools always have the potential to be valuable to a person," Davis said. "It really kind of depends on what they do with those tools."
In recent days, both Google and Microsoft have rolled out their own AI chatbots for their search engines and other software. The benefits and risks of artificial intelligence are likely to spark even more discussion -- and scrutiny -- as their use spreads in the digital world.