ChatGPT: What is all the fuss about?
A lot of people have been taking to the internet to discuss their experiences with OpenAI’s latest product, ChatGPT, ever since it launched on Wednesday, November 30. What is striking is not just how many of these people there are, but who they are: respected physicists, programmers and developers, start-up founders, AI researchers, and more. Many have been blown away by their experiences with the new AI chatbot; some have even placed it alongside technological milestones like the PC, the internet, and the smartphone. Let’s look at what has been going on, what ChatGPT is capable of, and a few of the issues and problems related to the technology.
ChatGPT is a new chatbot developed by OpenAI, the AI specialists behind other well-known and impressive tools such as DALL-E 2. It is designed to respond to human prompts in a dialogue format, making it possible to have open-ended conversations that can almost feel human. Within five days of launch it had reached 1 million users and, at the time of writing, it is at capacity, meaning OpenAI’s servers can’t accept new users.
Because the model has been trained on vast datasets, it has a broad base of knowledge that lets it converse on all manner of subjects, and even offer feedback on computer code or suggest new lines of code based on a user’s text prompts. It is this openness that has been blowing people’s minds, as their imaginations run wild over the seemingly omniscient powers of the new chatbot.
Following a 20-minute conversation on the history of modern physics, one CEO and former physicist believes we can now re-invent the concept of education at scale, and even proclaims that colleges as we know them will cease to exist. Other users have declared that “Google is done” after comparing the quality of ChatGPT’s output with a Google search, and some have built a Chrome extension that serves up a ChatGPT response alongside every Google search you make. Another user described using the tool to develop a weight loss plan, complete with a calorie target, a workout plan, tailored meal plans, and even a grocery list of all the ingredients needed to follow it. Other interesting uses popping up include asking ChatGPT to write prompts for the text-to-image AI generators that have been proliferating recently, and an ingenious workflow that generates a list of SEO blog titles and then the articles themselves.
Things can get abstract too when talking with ChatGPT, both because of the scale of the information used to train it and because of the feedback-loop rabbit holes that open up when you ask it to imagine virtual things and virtual worlds. For example, one developer familiar with Linux prompted ChatGPT to simulate a virtual machine that could create and serve files, run programs, and even appear to browse the internet. Taking this to its logical conclusion, for a particular type of brain at least, he used the simulated machine to visit the OpenAI website, interact with ChatGPT there, and spin up a virtual machine inside the virtual machine he had created. A rather ingenious 11-year-old, meanwhile, used ChatGPT to create virtual worlds and build text-based adventures out of them, prompting the tool to act as a text-based Harry Potter video game and offer four potential responses to each scene it created.
As exciting as all this sounds, the limitless imagination of the internet hive mind, and the weird and wonderful ways it can think of using technology like this, leads us to the first major issue to consider when interacting with AI-based tools. Artificial intelligence needs data to learn the skills that can look like magic to us. Historically, that data has been collected in ways that bake in certain biases: no matter how objectively the collectors try to record it, the data is shaped by their methods and by whatever contextual influences surround them. A well-known example is the crash test dummy used to test seat belts in cars, which is built to represent the size and weight of the average man, as is the case for the testing of many other products too. An AI trained on that data would conclude that seat belts are equally safe for women, when in reality we simply don’t know. Furthermore, as much of the data used to train these tools comes from the internet, we can’t even say it was collected under scientific conditions where someone was at least trying to record it objectively. We saw just how bad that can get in 2016, when it took just 24 hours for Twitter to turn a Microsoft chatbot into an offensive participant.
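The mechanics of this are simple to sketch. In the toy example below (the data, group labels, and counts are all invented purely for illustration, echoing the crash-test-dummy case), a naive model trained on records dominated by one group ends up confidently reporting the majority pattern for everyone:

```python
# Hypothetical illustration: a model trained on skewed data inherits the skew.
# The "training data" is invented: safety outcomes recorded almost entirely
# for one body type, much like crash-test dummies modelled on the average man.
from collections import Counter

# (group, outcome) pairs; the "female" group is barely represented.
training_data = (
    [("male", "safe")] * 95
    + [("female", "safe")] * 4
    + [("female", "injured")] * 1
)

def train_majority_model(data):
    """Predict the most common outcome seen per group in training,
    with the overall majority outcome as a fallback for unseen groups."""
    overall = Counter(outcome for _, outcome in data).most_common(1)[0][0]
    by_group = {}
    for group, outcome in data:
        by_group.setdefault(group, Counter())[outcome] += 1
    model = {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}
    return model, overall

model, fallback = train_majority_model(training_data)
print(model.get("male", fallback))    # -> safe
print(model.get("female", fallback))  # -> safe
```

The model answers “safe” for both groups, not because safety has been established for the under-represented one, but because the question was barely asked of the data. That is exactly the failure mode a biased training set produces, hidden behind a confident-looking answer.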
Clearly, things have moved on a lot since Microsoft Tay, but the data used to train these AI models still carries inherent bias, which means that deep down the tools themselves hold these biases too. According to OpenAI, another of its tools, the Moderation API, is used to prevent these biases from breaking through and producing harmful responses to user prompts. Unfortunately, it seems it isn’t too hard to get around the blocks OpenAI has placed on ChatGPT and push the tool into responding with harmful biases that could have truly frightening consequences. Of course, nobody is saying ChatGPT should be allowed to decide whether people are incarcerated based on their race and gender, but it is important to highlight these inherent issues with AI, and to consider what effect they could have on the users exposed to them.
Another key issue users should be aware of is that ChatGPT is not serving up facts in its responses, and that your prompts themselves have a huge influence on the quality of the responses you get. That is why the physicist mentioned earlier may be getting ahead of himself when he says we can all give up on expensive trips to college. As an expert in his field, his conversations with ChatGPT would have been rich in information, drawing more insightful, knowledgeable responses out of the vast depths of the model’s training data. The same could most likely not be said for a first-year undergrad. And because it plays off us, ChatGPT will tend to confirm what we expect, which has repercussions related to the biases outlined above.
The same can be said for ChatGPT’s coding output, with Stack Overflow having temporarily banned AI-generated answers on the site. The moderators there say that although these answers may look correct, they have a “high rate of being incorrect”. In other words, ChatGPT’s responses look good, but they are often not quite right.
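This suggests a practical habit: before trusting an AI-suggested snippet, run it against inputs whose answers you already know. The median function below is a hypothetical example (written for this article, not taken from ChatGPT) of the kind of plausible-but-wrong code the moderators describe, alongside a corrected version and the known-answer checks that separate them:

```python
# Hypothetical "plausible but wrong" suggestion: it reads naturally and
# works on some inputs, which is exactly why it is dangerous.
def median_suspect(values):
    """Return the middle element of the list."""
    return values[len(values) // 2]   # forgets to sort; wrong on even lengths

# Corrected version: sort first, then average the two middle values
# when the list has an even number of elements.
def median_checked(values):
    s = sorted(values)
    mid = len(s) // 2
    if len(s) % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

# Known-answer checks expose the difference immediately.
print(median_suspect([1, 2, 3]))     # 2 -- happens to be right
print(median_suspect([3, 1, 2]))     # 1 -- silently wrong (true median is 2)
print(median_checked([3, 1, 2]))     # 2
print(median_checked([4, 1, 3, 2]))  # 2.5
```

The suspect version passes the first check and fails the second, which is the pattern Stack Overflow’s moderators are warning about: code that survives a casual glance but not a test.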
As we have seen throughout this piece, the value of ChatGPT very much comes from how you use it and what you use it for. It is an incredibly useful and even entertaining tool in some instances, but you shouldn’t lean on it too heavily, as it will likely creak and snap under any real pressure. Although, if for some reason you find yourself under the yoke of an overzealous new manager who demands regular code reviews without actually knowing much about coding himself, it could be just what you are looking for.