ChatGPT: What is all the fuss about?
A lot of people have been taking to the internet to discuss their experiences with OpenAI’s latest innovative product, ChatGPT, ever since it launched on Wednesday, November 30. What is striking about these people is not just how many of them there are, but who they are. We have heard from many different types of respected experts, including physicists, programmers and developers, start-up founders, AI researchers, and more. Many have been blown away by their experiences with the new AI chatbot; some have even lined it up alongside other technological innovations like the PC, the internet, and smartphones. Let’s look at what has been going on, what ChatGPT is capable of, and a few of the issues and problems related to the technology.
ChatGPT is a new chatbot developed by OpenAI, the AI specialists behind other well-known and impressive tools such as DALL-E 2. It has been designed to respond to human prompts in a dialogue format, making it possible to have open-ended conversations that can almost feel human. In just over five days it reached 1 million users and, at the time of writing, is at capacity, meaning OpenAI’s servers can’t accept new users.
As the model has been trained on vast datasets, it has a broad base of knowledge, which makes it able to converse on all manner of subjects and even offer feedback on computer code, suggesting new lines of code based on a user’s text prompts. It is this openness that has been blowing people’s minds as their imaginations run wild over the seemingly omniscient powers of the new chatbot.
Following a 20-minute conversation on the history of modern physics, one CEO and former physicist believes that we can now essentially reinvent the concept of education at scale, even proclaiming that colleges as we know them will cease to exist. Other users have declared that Google is done after comparing the quality of ChatGPT’s output with that of a Google search, and some have even developed a Chrome plugin that serves up a ChatGPT response alongside every Google search you make. Another user explained how he used the tool to develop a weight loss plan, complete with a calorie target, a workout plan, tailored meal plans, and even a grocery list of all the ingredients needed to follow it. Other interesting uses that have been popping up include asking it to create prompts for the text-to-image AI art generators that have been proliferating recently, and an ingenious way of generating both a list of SEO blog titles and then the articles themselves.
Things can get abstract when talking with ChatGPT too, both because of the scale of the information used to train it and because of the feedback-loop rabbit holes that open up when asking it to imagine virtual things and virtual worlds. For example, one developer familiar with Linux was able to create a virtual machine within ChatGPT that could create and serve files, run programs, and even browse the internet. Taking it to the logical conclusion, for a particular type of brain at least, he then used that virtual machine to visit the OpenAI website, interact with ChatGPT, and create a virtual machine inside the virtual machine he had created. A rather ingenious 11-year-old was also able to use ChatGPT to create virtual worlds and even build text-based adventures out of them, prompting the tool to act as a text-based Harry Potter video game and offer four potential responses to each scene it created.
As exciting as all this sounds, the limitless imagination of the internet hive mind, and the weird and wonderful ways it can think of using technology like this, leads us to the first major issue we need to consider when interacting with AI-based tools. Artificial intelligence needs data to learn the skills that can look to us like magic. Historically, that data has been collected in ways that almost always introduce certain biases: no matter how objectively the collectors try to record it, the data will be tainted by their methods and by whatever contextual influences surround them. A well-known example is the crash test dummies used to test seatbelts in cars: they are built to represent the size and weight of the average man, and the same is true for the testing of many other products. This means that an AI trained on that data would be confident that seat belts are completely safe for women, when in truth we are not 100% sure. Furthermore, as much of the data used to train these tools comes from the internet, we can’t even say it was collected in scientific circumstances where someone was at least trying to record it objectively. We saw just how bad it can get in 2016, when it took just 24 hours for Twitter to turn a Microsoft chatbot into an offensive participant.
Clearly, things have moved on a lot from Microsoft Tay, but the data being used to train these AI models is still tainted with inherent bias, which means that deep down the tools themselves will hold these biases too. According to OpenAI, they are using another of their tools, the Moderation API, to prevent these biases from breaking through and creating harmful responses to user prompts. Unfortunately, it seems it isn’t too hard to override the blocks OpenAI is trying to place on ChatGPT and to get the tool to start responding with harmful biases that could have truly frightening consequences. Of course, nobody is saying that ChatGPT should be allowed to decide whether people should be incarcerated based on their race and gender, but it is important to highlight these inherent issues with AI, as well as to consider what effect they could have on users who are exposed to them.
Another key issue that users should be aware of is that ChatGPT is not serving up facts in its responses, and that your prompts themselves have a huge influence on the quality of the responses you get. That is why the physicist we spoke about earlier may be getting ahead of himself when he says we can all give up on expensive trips to college. As an expert in his field, his conversations with ChatGPT will have been rich in information, drawing more insightful material out of the vast depths of the ChatGPT training data and eliciting knowledgeable responses. The same could most likely not be said for a first-year undergrad. Also, because it plays off us, ChatGPT will likely confirm what we expect, which has its own repercussions related to the biases outlined above.
The same can be said for the code ChatGPT is putting out, with Stack Overflow having had to temporarily ban all AI-generated answers on the site. The moderators there say that although these answers may look as though they could be correct, in fact they have a “high rate of being incorrect”. In other words, ChatGPT responses look good, but they are not quite right.
As we have seen throughout this piece, then, the value of ChatGPT very much comes from how you are using it and what you are using it for. It is an incredibly useful and even entertaining tool in some instances, but you shouldn’t lean on it too heavily, as it will likely creak and snap when put under any real pressure. Although, if for some reason you find yourself under the yoke of an overzealous new manager who is demanding regular code reviews without actually knowing much about coding himself, it could be just what you are looking for.
This is a good and informative article. So much better than what this Shaun guy is throwing at us lately.
Agree completely. Big plus for your comment.
I can’t wait to give this a try but am on that damn waiting list.
There’s a waiting list? You can try it out at https://chat.openai.com/chat
I wish I had this when I was in college lol. In my test, ChatGPT wrote an 800-word essay for me in a matter of seconds. All I did was give it an outline of my points on a topic.
I can’t register at all because my country is unsupported, but I really want to try it.
The real question is, now that the kids have these tools, will they grow up to be smarter than us “olds” or not…
I have not yet fully read this article.
I would point out that it appears to me that Patrick oftentimes intends to refer to the Internet, but misspells it by failing to capitalize the first letter. “[i]nternet” (with a lowercase “i”) can be used to refer to any set of interconnected networks (which consequently forms another network, or internet). There is a specific proper-noun network known as “the Internet” that is commonly accessed across the world, and it appears to me that it is that network that is intended in most of the article.
You never made a mistake?
More Patrick, less Shaun please!
I remember playing with one of these “chatbots” back in the ’70s. Everyone was so impressed. It took me five minutes to throw it into confusion.
ChatGPT is undoubtedly a great achievement – but it’s not “AI” by a long shot. It’s just another in a line of language interpreters connected to a database of facts.
Every generation we get a huge AI hype, money is spent on it, some good results are obtained for some specific business and technical areas. Then the hype fades until next time.
My guess is we’re still twenty to thirty years away from something that MIGHT be called “AI”. It’s going to require the ability to conceptualize from facts. It’s not clear the current models can do that in any way similar to what the human brain does.
So relax: Terminators are still a ways away (except in San Francisco).
Very good overview. I’d seen ChatGPT mentioned but didn’t really understand what was being referred to. Thanks.
I couldn’t understand a thing but props to Patrick, it was well written. Lol
Probably paid bots made a fuss about this, because it is stupid and nobody really cares about it.
Here is an example of a positive comment about a great article about ChatGPT:
“I just read the article about ChatGPT and I was blown away by how informative and well-written it was. The author did a fantastic job of explaining the technology and its potential applications in a way that was easy to understand. I especially appreciated the examples and the insights into the future of language processing. Great job on the article – keep up the excellent work!”
A very interesting article. Like Mr. Hack, I am quite skeptical of the AI hype. While these database front ends are ingenious and often useful in particular cases, we as a society should be very careful of how they are deployed and used. Kudos to Mr. Devaney for a well written and interesting article.
A bit tangentially: those with a science-fictional frame of mind should hunt up “Dreamships” and “Dreaming Metal” by Melissa Scott. The two works deal with just what true artificial intelligence is. One is a direct sequel of the other, and I found them most enjoyable.
Oh great. Bad enough we tend to rely on information gathered from Facebook, Twitter, Tik-Tok, and other dubious sources, now people are depending on the totally unreliable answers from a chatbot. Really, people? The stupidity of our globalist age is unfathomable.
I asked ChatGPT to reply to you (lol):
There are a few points in this comment that are worth addressing. First, it’s important to recognize that not all information on social media platforms is unreliable. While it’s true that some sources on these platforms may not be entirely trustworthy, there are also many reputable sources that provide accurate information. Additionally, chatbots like Assistant are not intended to be a substitute for thorough research and critical thinking. Instead, they are designed to provide quick answers to simple questions and help people find information more easily. Ultimately, it’s up to each individual to evaluate the reliability of the information they receive, regardless of the source.
Warning: ChatGPT is heavily biased when it comes to political topics. This happens because its programmers won’t let “problematic” facts ruin the narrative.
Biased when you disagree. When you agree you don’t notice.
and “fake news” was never a thing either. Got it.