ChatGPT Could Make Tech Feel More Human — But Don’t Treat It Like One, AI Experts Say
Unless you’ve been living at the bottom of the deep sea, you’ve probably noticed the surge in AI-related news over the past few weeks. Here’s an update in case you missed it: AI chatbots are here to stay, and they’re being incorporated into major search engines.
One such instance is ChatGPT, the record-breaking AI chatbot that, after being incorporated into the Bing search engine, has claimed to love users and made other noteworthy comments.
This AI is so human-like at times that it has even claimed a desire to be alive. It can also condemn you for not being a good user. One example came right after the Bing chatbot got a date wrong. Like a stubborn human, the chatbot wasn’t just wrong; it adamantly stuck to its guns and bashed the user.
This raises the question: should we treat these AI chatbots like humans? Are their thoughts and feelings (if we can call them that) real? As it turns out, they aren’t, but that doesn’t stop many people from feeling that they are.
This has been going on for a long time, since the mid-1960s to be precise. That’s when the first chatbot, named ELIZA, was created. Even though by today’s standards ELIZA would look primitive, it shocked the world at the time (don’t forget the Beatles were still releasing new songs back then). People thought they were genuinely messaging a human.
Today, these tools have become far more sophisticated. According to Joseph Seering, a researcher at Stanford University’s Institute for Human-Centered Artificial Intelligence, AI chatbots have made a quantum leap: the technology, in his words, “represents an impressive leap forward even from five years ago”. Bill Gates even thinks ChatGPT will change everything.
Because AI chatbots like ChatGPT deliver information in such easy, human-like language, it’s natural to form a more intimate relationship with them. According to S. Shyam Sundar, director of Penn State University’s Center for Socially Responsible Artificial Intelligence, chatbots can make computers feel more human.
He’s an advocate for AI being beneficial to humankind, and he notes that “chatbots provide human-like agency to an otherwise impersonal transaction between a company and its customers”. In other words, instead of getting cold, web page-based results, you get a “spokesperson”, so to speak.
Even so, chatbots shouldn’t be treated as humans, because they aren’t. Toby Walsh, chief scientist at the University of New South Wales’ Artificial Intelligence Institute, warns that some of the issues plaguing chatbots won’t go away, and that their output is difficult to control.
Walsh sums chatbots up as “math plus data plus rules”, adding that they should be treated as tools, not entities. Thinking of them otherwise lends them a mystique that can be harmful.
The bottom line: no matter how much Bing, ChatGPT, or Google Bard claim to love you, they don’t. Don’t fall for it.
I wonder what I must omit, include, or change in a dialog with a chat AI so that it doesn’t perceive I’m treating it as a human. I can imagine the AI responding, “Why do you treat me as a machine?” Of course, one can modulate one’s reasoning and wording depending on whom one is facing: if it’s a child, no problem (up to a certain age — some kids mature very quickly nowadays!), but haven’t we all met that one person who says, “Why do you treat me/speak to me as if I were a child/stupid?” In other words, is it ever possible to express the depth of a question without letting our human nature show through?
If you think about it, part of the problem in a human-AI dialog is that we are ourselves human and the AI is not. Am I on the road to machine segregation by stating this? Am I not postulating that because AI is not human, I shouldn’t treat it as such? This rhetoric is reminiscent of dramatic eras of our history. Finally, what would remain of today’s distinction between humans and AI in a future where AI is considered a human alter ego? Is the question anthropological as well?
AI is only as good as the information available to it. Yes, it may be more useful at finding answers to your questions on the web, but the web is full of bad information too. Will it be better at sifting through the muck? Time will tell.
Maybe, maybe not. Spammers constantly figure out how to game Google’s search algorithms. If you scroll to the last pages of Google search results, there are spam websites at the end — not on every search, but on many.