Lucas Mearian
Senior Reporter

Schools look to ban ChatGPT, students use it anyway

News Analysis
Apr 25, 2023 | 10 mins
Artificial Intelligence, Augmented Reality, Chatbots

ChatGPT and other generative AI technologies are already being used by students to write essays and answer questions posed by teachers and professors, and academia must learn to incorporate, not ban, these new tools, experts say.

Credit: Shutterstock

School districts throughout the US and abroad have banned chatbot use on their networks and devices over fears students will use generative AI tech to hand in inauthentic and potentially plagiarized work.

Universities and their professors are also wringing their hands about how to deal with artificial intelligence such as ChatGPT that students can use to write papers or generate exam answers.

“They’re still in shock to an extent,” said Tony Sheehan, a vice president and higher education analyst at Gartner. “The rapid consumer adoption of this product has taken everyone by surprise, and of course [that includes] the education sector because it’s about creative content generation — whether that’s an essay, or code, or pictures, whatever.”

Soon after ChatGPT was launched in November, the nation’s largest school district, New York City Public Schools, moved to ban its use by students. The second largest school district in the US, Los Angeles Unified, soon followed suit and blocked access from school networks to the website of OpenAI, the company that created ChatGPT. Other school districts have done the same, including Baltimore, MD, Oakland Unified in California, and Seattle Public Schools.

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” said Jenna Lyle, a spokeswoman for the New York City Department of Education, in a statement to The Washington Post.

Several leading universities in the UK, including Imperial College London and the University of Cambridge, warned students that using ChatGPT for work and assessments could lead to plagiarism “and is a form of cheating.”

“ChatGPT really falls into the educational area quite strongly,” Sheehan said. “I think educational institutions for the last few months have been both exploring and adopting a position on this. And in some cases, particularly from individual faculty, that is an urge to ban it.

“But at the institutional level, more generally, we see this as a significant change in the sector and something that’s not going away completely anytime soon,” he added.

One obvious problem: how do you stop students from using a chatbot that can easily be accessed from any laptop or smartphone?

Anti-plagiarism tools from companies such as Grammarly and EasyBib can compare student work against billions of web pages and academic databases to check for duplication. These tools can also highlight passages that require citations and give students the resources to properly credit sources.

However, a dilemma remains: students who do plagiarize can use online tools to reword essays or other documents and evade detection. And as generative AI technology grows more sophisticated, the content it creates will become less detectable as unoriginal, Sheehan said.
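At their core, duplication checks like these compare overlapping word sequences between a submission and known sources. A toy sketch of the idea in Python (not any vendor's actual algorithm, which would also involve large-scale indexing and fuzzier matching):

```python
def ngrams(text, n=3):
    """Return the set of lowercase word n-grams in a document."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(doc_a, doc_b, n=3):
    """Jaccard similarity of n-gram sets: 0.0 (no shared phrasing) to 1.0 (identical)."""
    a, b = ngrams(doc_a, n), ngrams(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

essay = "the quick brown fox jumps over the lazy dog"
source = "the quick brown fox jumps over a sleeping cat"
print(overlap_score(essay, source))  # high overlap in the opening phrase
```

This also illustrates the weakness Sheehan describes: rewording a passage changes its n-grams, so paraphrased or AI-rewritten text scores low even when the ideas are copied.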

“Of course, the other thing students will do is use [chatbot generated content] as a first draft,” Sheehan said. “I just need some idea, give me some. Great! Now, I’ll just go off and research further and add to it, add research and references to it and it becomes almost impossible to detect that. Many institutions are saying, maybe this is something we should encourage students to do.”

Students might shun schools that ban ChatGPT

What the various educational institutions decide could affect what students do.

In January, Stanford University’s school paper, The Stanford Daily, published the results of “an informal poll” that indicated 17% of 4,497 respondents had used ChatGPT on their final exams.

Most (59.2%) used the chatbot for brainstorming, outlining and forming ideas, according to the poll. Another 29.1% used it to answer multiple-choice questions. And while 7.3% submitted written material from ChatGPT with edits, 5.5% said they submitted written material from ChatGPT unedited.

At the time of the survey, the school’s policies forbade students from using the AI tools.

The Stanford Daily‘s survey results were echoed by another survey performed this week by higher education search service College Rover. In that survey, more than 40% of university students said they are using ChatGPT for coursework and they’re using it multiple times per week.

Additionally:

  • 36% of students indicated their professors have threatened to fail students caught using AI technologies for coursework.
  • 29% of students say their university has issued guidance regarding ChatGPT and other AI tools.
  • Nearly 6 in 10 students think universities should not ban ChatGPT and other similar AI technologies.

Stanford’s Board on Judicial Affairs (BJA) has been monitoring ChatGPT and other AI tools and more recently published policy guidance for their use in coursework, a university spokesperson said in an email reply to Computerworld.

“Absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person,” the university policy states.

A Stanford committee has also published preliminary proposals and recommendations that include requiring students to ask professors about the use of ChatGPT, and to not use the AI technology on an exam “when it isn’t expressly allowed…”

“Concerns about academic integrity will likely only get worse if the university does not revisit its current policies and plan accordingly,” the university’s proposal states. “The dramatic emergence of ChatGPT and its sequel GPT-4 since last November has expanded the scope of these issues considerably (e.g., humanities coursework is now impacted by technology in ways that were inconceivable before last November).”

University students have strong feelings about the usefulness of generative AI technologies, and whether or not a school allows their use appears to shape where students choose to attend. In a survey released last week by College Rover, nearly four in 10 students indicated they’re not interested in attending a college or university that bans chatbots such as ChatGPT.

The survey showed 39% of respondents would shun a school that banned generative AI tech or AI in general.

But it was the ChatGPT question specifically that surfaced concerns about the originality of generative AI output and the potential for plagiarism.

The survey of 372 students who’ve sought college admission this fall showed men (62%) are slightly more likely than women (58%) to be interested in attending a college that bans AI tools.

A College Rover spokesperson said while there have already been many bans in K-12 level schools, “institutions of higher education in the US have been a bit hesitant to ban the tools just yet” — instead, colleges and universities are updating their academic integrity and plagiarism policies to account for the use of AI tools.

“Allowing students to leverage tools like ChatGPT is not much different than giving them an open-book test. In order to pass, students still have to understand the material and how to utilize their resources, whether that be a textbook or a chatbot, in the most effective way,” said Bill Townsend, founder and CEO of College Rover.

Some educators liken chatbot bans to banning calculators, arguing that chatbots are nothing more than a tool for researching and developing ideas.

The cat’s out of the bag

Dr. Boris Steipe, a professor emeritus at the University of Toronto’s Department of Biochemistry, makes no bones about allowing his students to use ChatGPT to perform scholastic work. In fact, he sets no limits on how they use generative AI.

“Students will be assessed on the quality of their work. The work has to be well thought through, it has to be validated, and correct,” he said. “That said, I have always used oral exams in my courses, and that will remain. The human aspect of learning is one of the few invariants. But they might ask their favorite AI to help.”

Steipe’s students are not required to show their creative process, but will get credit for it if they do, as well as for sharing their experiences in completing their work.

Far from being an adversary to the learning process, Steipe called the arrival of ChatGPT an “historical moment,” and he said educators should prepare students to work with AI resources instead of attempting to shut them down.

“The world is changing and if we don’t prepare our students to work with AI resources, we are not preparing them for the world. If we spend our time on making our courses AI-proof — assigning hand-written papers or such — we are missing the point of education,” Steipe said.

“We need to teach our students how to have the AI think with them, not for them. This is the most important goal: if we don’t achieve that, the AI will become their competitor,” he added.

Steipe first tried ChatGPT soon after its launch last year; he quickly realized it could complete assignments better than most of his students, and that AI was changing “everything: teaching, learning and assessment.”

The professor then created the Sentient Syllabus Project, an initiative by academics for academics to navigate the uncharted waters of the AI era. The project includes a weekly newsletter discussing the challenges posed by the technology.

Currently, Steipe is redesigning a computational biology course from the ground up based on the abilities of artificial intelligence to assist students in their research and work. For example, he sees it as a way to empower students who previously had only been software consumers, but who can now become developers using the power of AI and chatbots like ChatGPT. ChatGPT is able to take prompts or suggestions from users and generate software code.

“Having personalized tutoring, self-assessed progress, adapting assignments to their learning styles, focusing on weaknesses — we have known for a long time these things would help learning, but we could never do that in practice because it did not scale,” Steipe said.

While there may be no limits on how students use technology to aid their work, plagiarism, Steipe said, is another matter. “Students still can’t pass off someone else’s work as their own, and the AI is not a quotable source,” he said. “This means they have to find the actual sources of ideas, and provide links to prove the source exists. But they had to do that anyway in the past.”

A spokesperson for ChatGPT-creator OpenAI said the company sees ChatGPT as a tool to assist with learning and education, but stressed that academia must address the possible abuse of generative AI by students.

“We’re encouraged by the ways educators have been ideating on how tools like ChatGPT can be useful,” the spokesperson said. “We believe that educational policy experts should decide what works best for their districts and schools when it comes to the use of new technology. We are engaging with educators across the country to inform them of ChatGPT’s capabilities and our ongoing work to improve it.”

Will Douglas Heaven, senior editor for AI at MIT Technology Review, recently wrote in a blog that after speaking with educators, he found many teachers now believe that rather than being “a dream machine for cheaters,” ChatGPT “could actually help make education better.”

For example, chatbots can serve as learning aids that make lessons more interactive, teach students media literacy, generate personalized lesson plans, and save teachers time on administrative tasks.

Companies such as Duolingo and Quizlet, which makes educational flashcards used by half of all US high schools, have integrated OpenAI’s chatbot into their apps, Heaven noted.

Gartner’s Sheehan said schools are considering using chatbots as part of student assessment and knowledge development, and as a way to encourage students to consider the implications of AI technology in the future.

“Over the past few months, we’ve seen a lot more schools at the institutional level saying, ‘We want to explore the implications of this,’” Sheehan said. “How do we encourage students to use this, declare they’re using it, use it almost as a study buddy, and then reflect on the experience and the quality of the output and report on that?”