Ban instituted against ChatGPT by New York schools amidst cheating fears
Schools under the New York City Department of Education have elected to ban access to ChatGPT from any school network or device. The move comes amidst fears that ChatGPT will encourage students to cheat and, therefore, not develop their critical thinking and reasoning. Students will still, however, be able to access the utility from their homes and any personal devices not connected to the schools’ networks.
Chalkbeat New York, a news outlet focused on education, first reported the story. In an interview with Chalkbeat New York, a NYC Department of Education spokesperson, Jenna Lyle, confirmed the reasoning for the ban as the ‘negative impacts (of AI) on student learning, and concerns regarding the safety and accuracy of content.’ Lyle elaborated further that ‘While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.’
It’s at this point that I need to poke my head out of the tech-writer echo chamber and question the very fabric of human nature. We’re well past the time when schools actively taught critical thinking and problem-solving. Education is now focused on critical race theory and on training students to be good test takers. Test-taking skills don’t translate into the critical thinking or problem-solving abilities that would benefit students for the rest of their lives. Another point is that academia has changed so much in the last 50 years that it’s barely recognizable. I remember a time when we weren’t allowed to use calculators. Now, they’re even used in examinations. Tech influences the way we teach, learn, and experience the world around us. In another 50 years, AI in schools may be so normal and accepted that it borders on boring.
The article that prompted me to write my response was posted on The Verge. In it, the author notes that ChatGPT has significant issues in that it perpetuates stereotypes and prejudices. When did we start ascribing to technology the responsibility not to offend humans? This is software, hardware, and code. It is not a human. ChatGPT doesn’t perpetuate or amplify sexism or any other human prejudice; it simply delivers information it finds on the internet. That information, particularly the kind that offends other humans, was created by humans in the first place.
With regard to the NYC Department of Education’s concerns, ChatGPT also carries no responsibility to furnish the user with factual information. The responsibility to verify that any information is factually correct lies with the user alone.
While I believe that tools like ChatGPT certainly make cheating easier because they are so easy to use, the tool does not cause the cheater to cheat. If a student intends to cheat on an assignment, they will find a way. Nonetheless, keeping tech that the majority of people simply don’t understand yet, like ChatGPT, out of learning institutions and general professional environments might be a good idea, for now.
In the future, I hope to write a deep dive into ChatGPT and other large language models to settle once and for all what these tools are capable of and how they are intended to be used. These are not infallible resources that you should stake your reputation on. Language models are just that: models, software and code intended to facilitate pseudo-human-to-human interaction. Nothing more, nothing less. ChatGPT does not claim to be an expert in anything; therefore, it carries no responsibility for the way users decide to use it. It is not a human; therefore, it carries no responsibility to act, appear, or think as a human does.