ChatGPT: What is all the fuss about?

Patrick Devaney
Dec 5, 2022
Updated • Dec 5, 2022

A lot of people have been taking to the internet to discuss their experiences with OpenAI’s latest innovative product, ChatGPT, ever since it launched on Wednesday, November 30. What is striking about these people is not just how many of them there are, but who they are. We have heard from many different types of respected experts, including physicists, programmers and developers, start-up founders, AI researchers, and more. Many have been blown away by their experiences with the new AI chatbot; some have even lined it up alongside other technological innovations like the PC, the internet, and smartphones. Let’s look at what has been going on, what ChatGPT is capable of, and a few of the issues and problems related to the technology.

Image adapted from the ChatGPT homepage

ChatGPT is a new chatbot developed by OpenAI, the AI specialists behind other well-known and impressive AI tools such as DALL-E 2. It has been built to respond to human prompts in a dialogue format, making it possible to have open-ended conversations that can almost feel human. In just over five days it reached 1 million users and, at the time of writing, is at capacity, meaning OpenAI’s servers can’t accept new users.

As the model has been trained on vast datasets, it has a great breadth of knowledge, which makes it able to converse on all manner of subjects and even offer feedback on computer code or suggest new lines of code based on a user’s text prompts. It is this openness that has been blowing people’s minds as their imaginations run wild over the seemingly omnipotent powers of the new chatbot.
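The code-feedback use case can be sketched programmatically. ChatGPT itself is accessed through the browser, but OpenAI also exposes its models via an API; the sketch below is a minimal illustration assuming the `openai` Python client (v1+), an `OPENAI_API_KEY` environment variable, and a chat-capable model name, none of which are described in this article:

```python
# Hedged sketch: asking a chat model for feedback on a code snippet.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY
# environment variable; the model name is an illustrative assumption.

def build_review_messages(code: str) -> list:
    """Construct a chat request asking the model to review `code`."""
    return [
        {"role": "system",
         "content": "You are a careful code reviewer. Point out bugs "
                    "and suggest concrete fixes."},
        {"role": "user",
         "content": "Please review this snippet:\n" + code},
    ]

def request_review(code: str) -> str:
    """Send the review request (needs network access and an API key)."""
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=build_review_messages(code),
    )
    return resp.choices[0].message.content
```

The request-building helper is separated from the network call so the prompt structure can be inspected without an API key.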

Following a 20-minute conversation on the history of modern physics, one CEO and former physicist believes that we can now basically reinvent the concept of education at scale, and even proclaims that colleges as we know them will cease to exist. Other users have proclaimed that Google is done after comparing the quality of a ChatGPT response with a Google search, and some have even developed a Chrome plugin that serves up a ChatGPT response alongside every Google search you make. Another user explained how he used the tool to develop a weight loss plan, which included a calorie target, a workout plan, tailored meal plans, and even a grocery list of all the ingredients needed to follow it. Other interesting uses that have been popping up include asking it to create prompts for the text-to-image AI art generators that have been proliferating recently, and an ingenious way of generating both a list of SEO blog titles and then the articles themselves.

Things can get abstract too when talking with ChatGPT, both through the scale of the information used to train it and through the feedback-loop rabbit holes that open up when asking it to imagine virtual things and virtual worlds. For example, one developer familiar with Linux was able to create a virtual machine within ChatGPT that could create and serve files, program, and even browse the internet. Taking it to its logical conclusion, for a particular type of brain at least, he then visited the OpenAI website from within the virtual machine, interacted with ChatGPT there, and created a virtual machine inside the virtual machine he had already created. A rather ingenious 11-year-old was also able to use ChatGPT to create virtual worlds and even build text-based adventures out of them, prompting the tool to act as a text-based Harry Potter video game and offer four potential responses to each scene it created.
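The “virtual machine” feat rests on a simple prompt-engineering idea: instruct the model to role-play a Linux terminal and reply only with terminal output. A minimal sketch, with the prompt wording paraphrased from the widely shared version rather than quoted from this developer’s exact prompt:

```python
# Sketch of the role-play prompt behind the "virtual machine" trick.
# The wording is paraphrased from the widely shared prompt; treat it
# as an illustration, not an official recipe.

TERMINAL_PROMPT = (
    "I want you to act as a Linux terminal. I will type commands and you "
    "will reply with what the terminal should show, inside one unique "
    "code block, and nothing else. Do not write explanations."
)

def terminal_messages(command: str) -> list:
    """Build the chat messages for one simulated terminal command."""
    return [
        {"role": "system", "content": TERMINAL_PROMPT},
        {"role": "user", "content": command},
    ]
```

Sent to a chat model, a conversation seeded this way will answer `ls`, `cat`, and the rest in character, which is all the “virtual machine” really is.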

As exciting as all this sounds, the limitless imagination of the internet hive mind, and the weird and wonderful ways it can think of using technology like this, leads us to the first major issue to consider when interacting with AI-based tools. Artificial intelligence needs data to learn the skills that can often look like magic. Historically, this data has been collected in ways that usually leave certain biases present. No matter how objectively whoever is collecting the data tries to record it, over long periods of time it will be tainted by their methods and by whatever contextual influences surround them. An example can be seen in the crash test dummies used to test seatbelts in cars: they are built to represent the size and weight of the average man, as is the case in the testing of many other products too. This means that an AI trained on that data would be sure that seat belts are completely safe for women, when in truth we are not sure. Furthermore, as much of the data used to train these tools comes from the internet, we can’t even say it was collected in scientific circumstances where somebody was at least trying to record it objectively. We saw just how bad things can get in 2016, when it took just 24 hours for Twitter to turn a Microsoft chatbot into an offensive participant.

Clearly, things have moved on a lot since Microsoft Tay, but the data being used to train these AI models is still tainted with inherent bias, which means that, deep down, the tools themselves will hold these biases too. According to OpenAI, they are using another of their tools, the Moderation API, to prevent these biases from breaking through and creating harmful responses to user prompts. Unfortunately, it seems it isn’t too hard to override the blocks that OpenAI is trying to place on ChatGPT and get the tool to start responding with harmful biases that could lead to truly frightening consequences. Of course, nobody is saying that ChatGPT should be allowed to decide whether people should be incarcerated based on their race and gender, but it is important to highlight these inherent issues with AI, and to consider what effect they could have on the users exposed to them.
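OpenAI’s moderation endpoint classifies a piece of text against harm categories and flags the ones it trips. A hedged sketch of screening text with it; the helper function, the sample category names, and the exact client calls are illustrative assumptions rather than code taken from this article:

```python
# Hedged sketch: screening text with OpenAI's moderation endpoint.
# The helper below only inspects a result dictionary, so it can be
# exercised without network access; category names are illustrative.

def flagged_categories(result: dict) -> list:
    """Return the names of the categories a moderation result flagged."""
    return [name
            for name, flagged in result.get("categories", {}).items()
            if flagged]

def moderate(text: str) -> list:
    """Call the moderation endpoint (needs network access and an API key)."""
    from openai import OpenAI
    client = OpenAI()
    resp = client.moderations.create(input=text)
    return flagged_categories(resp.results[0].model_dump())
```

A pipeline like ChatGPT’s can run a check of this kind on candidate responses and refuse to surface any that come back flagged, which is the filtering layer the jailbreaks mentioned above are working around.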

Another key issue that users should be aware of is that ChatGPT is not serving up facts in its responses, and that your prompts themselves have a huge influence over the quality of the responses you get. That is why the physicist we spoke about earlier may be getting ahead of himself when he says we can all give up on expensive trips to college. As an expert in his field, his conversations with ChatGPT will likely have been rich in information, able to draw more insightful, knowledgeable responses out of the vast depths of ChatGPT’s training data. The same could most likely not be said for a first-year undergrad. Also, as it plays off us, ChatGPT will likely confirm what we expect, which has further repercussions related to the point outlined above.

The same can be said for the code ChatGPT is putting out, too, with Stack Overflow having temporarily banned all AI-generated answers on the site. The moderators there say that although these answers may look as though they could be correct, they in fact have a “high rate of being incorrect”. In other words, ChatGPT responses look good, but they are not quite right.

As we have seen throughout this piece, then, the value of ChatGPT very much comes from how you use it and what you use it for. It is an incredibly useful and even entertaining tool in some instances, but you shouldn’t lean on it too heavily, as it will likely creak and snap when put under any real pressure. Although, if for some reason you find yourself under the yoke of an overzealous new manager who demands regular code reviews without actually knowing much about coding himself, it could be just what you are looking for.



  1. Seeprime said on September 8, 2023 at 4:12 pm

    Missing from the “story”: Ukraine’s agreement to never use Starlink for military purposes. This is why.

    Ghacks quality is AI-driven and very poor these days, since AI is really artificial stupidity.

    1. Karl said on September 12, 2023 at 9:10 pm

      “Elon Musk biographer Walter Isaacson forced to ‘clarify’ book’s account of Starlink incident in Ukraine War

      “To clarify on the Starlink issue: the Ukrainians THOUGHT coverage was enabled all the way to Crimea, but it was not. They asked Musk to enable it for their drone sub attack on the Russian fleet. Musk did not enable it, because he thought, probably correctly, that would cause a major war.”

      1. Karl said on September 14, 2023 at 5:58 pm

        I posted above comment to:

        Not to the following article about Geforce where I currently also can see it published:

  2. Anonymous said on September 11, 2023 at 10:09 pm

    Well, using Brave, I can see Llama 2 being decent, but it is still not great.
    All this AI stuff seems more like a ‘toy’ than anything special. I mean, it is good for some things like translations or asking quick questions, but not for asking anything important.

    The problem is Brave made it mostly for summarizing websites and all that, but all this Big Tech-controlled stuff won’t summarize articles it doesn’t agree with, so it is also useless in many situations where you just want a quick summary; instead it starts throwing you little ‘speeches’ about how it doesn’t agree, never summarizes anything, and gives you 30 paragraphs of reasons why the article is wrong, as if I were asking it what it thinks.

    So all this AI is mostly a toy, but Facebook, with all the power they have, will be able to get so much data from people that it can ‘train’, or better said, write algorithms that will get better with time.

    But it is not intelligence; all this AI technology is really not intelligence.

  3. Tom Hawack said on September 14, 2023 at 2:11 pm

    Article Title: Tech leaders meet to discuss regulation of AI
    Article URL: []

    The eternal problem of regulation, here applied to AI. Should regulations (interventionism) have interfered in the course of mankind ever since Adam and Eve, where would we be now? Should spirituality, morality, and ethics never have interfered, where would we be now? I have truly always believed that the only possible consensus between ethics and freedom is that of individuals’ own consciousness.

    Off-topic : Musk’s beard looks like a wound, AI-Human hand-shake is a quite nice pic :)

    1. Karl said on September 14, 2023 at 5:55 pm

      Haha, oh dear, Tom.
      I thought that the comments system issue where comments shows up under a totally different article was fixed. But seeing your comment here, the “error” is clearly still active. Hopefully it is sorted as soon as possible.

      1. Tom Hawack said on September 14, 2023 at 6:40 pm

        Article Title: Tech leaders meet to discuss regulation of AI
        Article URL: []

        Hi Karl :) Well, let’s remain positive and see the good side: one’s comment appearing within different articles (the one it was written from and for, and another unrelated one) brings ubiquity to that comment: say it once and it’s published twice, double your pleasure and double your fun (“with double-mint, double-mint gum”, an old ad!). Let’s forget the complications and inherited misunderstandings it leads to. Not sure the fun is worth the complications, though. Which is why, with a few others here, I include the Article Title & URL with my comments, to ease the pain a bit.

        This said, I’m trying to find a logical key which would explain the mix-up. One thing is sure: comments appearing twice keep the same comment number.

        For instance my comment to which you replied just above is originally :


        It then got duplicated to :


        Same comment number, which lets me imagine comments are defined by their number as before, but now dissociated in some way from their full path: that’s where something is broken, as I see it.

        It first amused me, then bothered and annoyed me (I took some holidays to lower the pressure), then triggered curiosity.
        I’m putting our best detectives on the affair, stay tuned.

      2. Karl said on September 16, 2023 at 8:58 am

        Hehe, yes indeed, staying positive is what we should do. Good things come to those who wait, as the old saying goes. Hopefully that is true for this as well.

        Interesting that the comment number stays the same. I noted that one thing is added to the duplicated comment’s URL, an error code, the following: “error-code-0x0003”.

        Not useful for us, but hopefully for the developers (if there are any?), who will perhaps be able to sort this comments error out. Or our detectives; I hope they are working hard on this as we speak ;).

        Cheers and have a great weekend!

      3. Karl said on September 16, 2023 at 9:18 am

        Whoops, my bad. I just now realized that the error I saw in your example URL (error-code-0x0003) was part of the linked article title and generated by Geforce! Oh dear! Why did I try to make it more confusing than it already is, lol!

        Original comment:


      4. Tom Hawack said on September 16, 2023 at 9:20 am

        Article Title: Tech leaders meet to discuss regulation of AI
        Article URL: []

        @Karl, you write,

        “I noted that one thing is added to the duplicated comment in the URL, an error code, the following: “error-code-0x0003”.”

        I hadn’t noticed that up to now, but it indeed brings an element to those who are actually trying to resolve the issue.
        I do hope that Softonic engineers are working on fixing this, which may be more complicated than we can imagine. Anything to do with databases can become a nightmare, especially when the database remains in use while being repaired, so to say.

        P.S. My comment about remaining positive was, in this context, sarcastic. Your literal interpretation could mean you are, factually, more inclined to positiveness than I am myself: maybe a life lesson for me :)

        Have a nice, happy, sunny weekend as well :)

      5. 💾 said on September 16, 2023 at 12:35 pm

        Correct: AI is certainly overhyped, it’s also advertised by some shady individuals. It’s can also be misused to write poor quality articles or fake your homework.

        16 September 2023, this website is still experiencing issues with posts erroneously appearing in the wrong threads. There are even duplicates of the exact same post ID within the same page in some places.

      6. 💾 said on September 16, 2023 at 8:41 pm

        Clerical error “[It] can also be misused …” you just can’t get the staff nowadays.

        Obviously [#comment-4573795] was originally posted within [/2023/09/14/artificial-intelligence-regulation-tech-leaders/]. However, it has appeared misplaced within several threads.

        Including the following:

  4. Anonymous said on September 14, 2023 at 3:39 pm

    “How much radiation is dangerous?
    Ionizing radiation, such as X-rays and gamma rays, is more energetic and potentially harmful. Exposure to doses greater than 1,000 millisieverts (mSv) in a short period can increase the risk of immediate health effects.
    Above about 100 mSv, the risk of long-term health effects, such as cancer, increases with the dose.”

    This ban is about NON-ionizing radiation limits, because there is too much radio wave power from the iphone. This has nothing to do with the much more dangerous ionizing radiations like X-rays, that are obviously not emitted at all by mobile phones. I invite you to correct your article.

  5. Anonymous said on September 17, 2023 at 5:03 pm

    “ makes history as the first official UFO website”

    I wonder if it’s just smelly crowdsourcing for the spotting of chinese balloons or whatever paranoia they’re trying to instigate, or if they are also intentionally trying to look stupid enough to look for alien spaceships, for whatever reason. Maybe trying to look cute, instead of among the worst butchers of history ?

  6. Anonymous said on September 17, 2023 at 9:12 pm

    “The tech titan’s defense”
    “Whether he provides a clear explanation or justifies his actions”
    “the moral compass”

    You take it for granted that this company should agree to being a military communications provider in a war zone, and so directly that its network would be used to control armed drones charged with explosives rushing to their targets.

    You don’t need to repeat here everything you read in the mainstream press without thinking twice about it. You’re not just pointing out, interestingly, that his company is more involved in the war than one may think at first and that this power is worrying; you’re also declaring your own support for a side in an imperialist killfest, blaming him for not participating enough in the bloodshed.

    Now, your article is unclear on how this company could be aware that its network is being used for such military actions at a given time, which has implications of its own.

    Reading other sources on this quickly, it seems that the company was explicitly asked to extend its network geographically, for a military attack, at a time when there was no war but with the purpose of triggering one, if I understood correctly. You have to be joking if you’re crying about that not happening at the time. But today you have your war; be happy.
