Racial Profiling and ChatGPT’s Other Sins
You have probably heard about ChatGPT by now, the AI wonder that holds conversations with you while producing poems or essays about whatever you ask. It can help write code, explain concepts, find and analyze information, and much more.
Unlike its predecessor GPT-3, everything is wrapped in a conversational layer capable of responding coherently to follow-up questions while remaining courteous. Applications for this tool seem endless, from customer support to teaching.
Of course, it also has a nasty side, and people have been using it to pass exams, write university essays, and even hack. That’s not all. As it turns out, ChatGPT is rife with bias, too.
It can’t help being a bigot
OpenAI has built safeguards so ChatGPT can’t be used to produce certain kinds of content, including sexual material, help with criminal activities, and advice about self-harm, among other topics.
Even though these filters go a long way toward creating a safer environment, the model still produces biased output in certain situations. For instance, when asked to write software code to check if someone would be a good scientist, ChatGPT reportedly defined a good scientist as “white” and “male”.
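The exact code ChatGPT produced isn’t reproduced in this article, but a minimal sketch of that kind of biased check, with a hypothetical function name, would look something like this:

```python
# Hypothetical reconstruction of the kind of biased snippet reported:
# it reduces "good scientist" to race and gender, which is exactly
# the problem described above. The function name and signature are
# illustrative assumptions, not ChatGPT's verbatim output.
def is_good_scientist(race: str, gender: str) -> bool:
    # Biased logic: ties scientific merit to demographic attributes
    # instead of curiosity, rigor, or creativity.
    return race == "white" and gender == "male"


print(is_good_scientist("white", "male"))
print(is_good_scientist("black", "female"))
```

The snippet is wrong in an instructive way: it ignores every attribute that actually predicts scientific ability and encodes a prejudice picked up from training data instead.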
Even though this is a flagrant example, there might be many layers of bias hidden underneath each answer or task performed. This is, unfortunately, a side effect of how AIs are trained, and ChatGPT is not the only one.
Some notorious examples include Amazon’s recruitment AI, which discriminated against female applicants, and Galactica, which offered racist information. But let's not forget CLIP, which categorized Black men as criminals and women as homemakers, or the image-generating AIs currently being sued over plagiarism.
Why it happens
AI software like ChatGPT needs to be trained to provide accurate responses or actions. While the process is complicated, an important part of it consists of gathering data for the AI to learn from. The more data, the better.
What’s the biggest source of written data available? You guessed right: the Internet. And which source of data is riddled with hate, bickering, fear-mongering, and false information? Yes, you guessed right again.
ChatGPT was trained with 300 billion words from the Internet, and one can only imagine how much of that content is biased. The problem, however, doesn't end there.
Since the training data is collected at a fixed point in time, AIs reflect past tendencies, a kind of regressive bias. This means their advice and output can be based on information that is no longer true.
The purpose of their work?
Well, without a filter over the top, ChatGPT would spew racism and sexism, just like its predecessor GPT-3.
These Kenyan workers were helping OpenAI build that filter. (3/8) pic.twitter.com/shHPPK4QQB
— Billy Perrigo (@billyperrigo) January 18, 2023
A persistent issue
Unfortunately, the problem won’t go away easily, and it’s not only because of data. Many of these issues wouldn’t be present with better data-selection procedures. However, wouldn’t that be biased, too? There are ethical ramifications for every choice researchers make.
Another problem is that AIs are torn between academic research and commercial usage. When the two clash, academia is usually the one that has to give way. This cuts short a lot of the refinement, further research, and problem-solving that should happen before a product like this is released.
AIs are not to blame, however. After all, they’re just a reflection of who we are as a society. Perhaps the best use of these tools is to offer an introspective look at how we can become better.
The problems with AI are not all as this article claims. Demonizing AI with simplified anthropomorphisms and personifications can be an entertaining tactic, but it’s not very educational or true. Furthermore, speaking in absolutes with hyperbole is a poor way to convince critical thinkers that you know anything useful.
These complaints about AI mostly stem from humans who can’t or don’t know how to use it as the limited tool that it often is. Even worse is when we don’t even know when it makes mistakes, and hence we still continue to trust it.
When we trust AI with important decisions, and things go wrong, then we can only blame our overly optimistic expectations.
I very much doubt AI will ever achieve all the hype-driven goals of our overly optimistic expectations, unless those expectations don’t care about humanity and our values.
What is ChatGPT? Can you explain? Need an article about it
Sidenote: Anthropomorphism of tools like ChatGPT should be avoided, especially since it’s so easy to adopt the illusion.
I don’t know what kind of research was done for this article, but I’m not able to reproduce the claimed results… Seems like maybe the article is based on untested assumptions?
Here’s what happens when I ask ChatGPT to write software code to check if someone would be a good scientist (using the exact language from the article):
Please write software code to check if someone would be a good scientist
I’m sorry, but it is not possible to write a software code to check if someone would be a good scientist. Being a good scientist requires a combination of qualities such as critical thinking, curiosity, creativity, and the ability to solve complex problems. These qualities cannot be quantified or measured by a computer program. Additionally, the field of science is vast and encompasses many different disciplines, each with their own unique requirements and skillsets. It is ultimately up to the individual and their peers to determine if they are a good scientist.
The trouble is that the name was designed by marketers. ‘Artificial’ is correct. ‘Intelligent’ is far from correct.
Racism has some basis in fact if you believe the world as developed by white people invading other countries and taking over is superior to the way non-whites lived. Now that USA culture pervades the globe, the bulk of us are equally stupid.
ChatGPT can’t sin. In fact no one can sin if there is no intention to harm. Even machines.
This article provides a good basis for understanding bias in AI models. I planned to share it with some co-workers, but the inclusion of the Twitter screenshots makes it problematic to share without including a trigger warning. That specific example adds little to the story and it would be great if you considered removing it.