ChatGPT gets schooled by Princeton University
At this point, we’re all at least a little bit familiar with ChatGPT, the large language model developed by OpenAI to increase productivity and provide a little laugh every now and then. What you may not know is that people are using this language model to cheat on college assignments and work projects. No judgement; it’s an incredible tool, and we may be looking at the future of productivity and information transmission. However, a student at Princeton University has decided to take ChatGPT to task by creating his own tool.
The new utility is called GPTZero, and it was created by Princeton student Edward Tian. Tian has already posted numerous proof-of-concept videos that seek to demonstrate the capabilities of this new tech. The first demonstration involved GPTZero determining that a particular article in The New Yorker was written by a human. It then took its skills to LinkedIn, where it verified that a particular post had been created with ChatGPT. Tian posted his findings regarding the LinkedIn post with a short caption: ‘here's a demo with @nandoodles's Linkedin post that used ChatGPT to successfully respond to Danish programmer David Hansson's opinions.’
According to Tian, he was motivated by a desire to protect academic integrity. He didn’t like the idea that students were using ChatGPT to commit what he termed ‘AI plagiarism.’ Tian posted a short tweet elaborating on his concerns, in which he said he thought it was unlikely ‘that high school teachers would want students using ChatGPT to write their history essays.’
These are valid concerns that the creators of ChatGPT share. OpenAI is currently working on a watermarking system that would instantly show whether or not a piece of text was generated with ChatGPT, but it isn’t ready yet.
We won’t know how effective Tian’s utility is until it’s properly tested in the field. The reality is that many of the utilities that claim to detect AI in written works simply do not work. I recently explored a few of these utilities myself, and the results were frankly shocking. I used two pieces of text: one that I had written myself, and one generated by ChatGPT. Running both texts through so-called AI detectors revealed that while these utilities were able to pick out the AI text 100% of the time, my own original work came back as written by AI as well.
The problem is this: in this day and age, any written content posted to the internet has to conform to SEO standards. These search engine optimization standards exist to make works attractive to ranking algorithms. While university assignments aren’t in this same class of writing, humans are still taught every single day to write like computers. The more we have to implement SEO in our writing, the less ‘human’ it becomes.
In closing, we’re never going to be able to launch a flawless AI detector, because we humans are programming ourselves and each other to write, act, and think like algorithms. The monsters that brilliant minds like Tian want to slay are, unfortunately, human.