Know-it-all chatbots landed with a bang last year, convincing one engineer that machines had become sentient, spreading panic that entire industries could be wiped out, and stoking fears of a cheating epidemic in schools and universities.

Alarm among educators has reached fever pitch in recent weeks over ChatGPT, an easy-to-use artificial intelligence tool trained on billions of words scraped from the web. It can write a half-decent essay and answer many common classroom questions, sparking a fierce debate about the very future of traditional education.

New York City’s education department banned ChatGPT on its networks because of “concerns about negative impacts on student learning”.

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills,” said the department’s Jenna Lyle.

A group of Australian universities said they would change exam formats to shut out AI tools, which they regard as outright cheating.

However, some in the education sector are more relaxed about AI tools in the classroom, and some even sense an opportunity rather than a threat.

‘Important innovation’

That is partly because ChatGPT in its current form still gets things wrong. To give one example, it claims that Guatemala is bigger than Honduras. It isn’t.

Ambiguous questions can also throw it off track. Ask the tool to describe the Battle of Amiens and it will give a passable detail or two on the 1918 confrontation from World War I, but it does not flag that there was also a battle of the same name in 1870, during the Franco-Prussian War. It takes several prompts for it to realise its error.

“ChatGPT is an important innovation, but no more so than calculators or text editors,” French author and educator Antonio Casilli told AFP.
“ChatGPT can help people who are stressed by a blank sheet of paper to write a first draft, but afterwards they still have to rework it and give it a style.”

Researcher Olivier Ertzscheid from the University of Nantes agreed that teachers should focus on the positives. In any case, he told AFP, high school students are already using ChatGPT, and any attempt to ban it would only make it more appealing.

Teachers should instead “experiment with the limits” of AI tools, he said, by generating texts themselves and analysing the results with their students.

‘Humans deserve to know’

There is also another big reason to think that educators do not need to panic yet: AI writing tools have long been locked in an arms race with programs designed to sniff them out, and ChatGPT is no different.

A couple of weeks ago, an amateur programmer announced he had spent his new year holiday creating an app that could analyse texts and decide whether they were written by ChatGPT.

“There’s so much chatgpt hype going around,” Edward Tian wrote on Twitter. “Is this and that written by AI? We as humans deserve to know!”

His app, GPTZero, is not the first in the field and is unlikely to be the last. Universities already use software that detects plagiarism, so it does not take a huge leap of imagination to see a future where every essay is run through an AI detector.

Campaigners are also floating the idea of digital watermarks or other signifiers that would identify AI-generated work. And OpenAI, the company behind ChatGPT, said it was already working on a prototype “statistical watermark”.

All of this suggests that educators will be fine in the long run. But Casilli, for one, still believes such tools carry a huge symbolic significance. They partly upend the rules of the game, whereby teachers ask their pupils questions, he said. Now the student questions the machine, then checks everything in its output.
“Every time new tools appear we start to worry about potential abuses, but we have also found ways to use them in our teaching,” said Casilli.