The unseen costs of ChatGPT's success
ChatGPT is a popular AI chatbot powered by machine learning systems. What many people don't realize, however, is that the automation is guided by human moderators tasked with data labeling: the process of training ChatGPT's software to respond to user requests accurately.
A new report from NBC News reveals that OpenAI, the startup behind ChatGPT, hires numerous US-based contractors for data labeling, a critical task that pays just $15 per hour.
Heroes behind the scenes
Data labeling is an essential step in training machine learning models. It involves parsing data samples to help automated systems identify items within a dataset: moderators tag individual items, such as images or text, so that the machines can learn to recognize them on their own.
The ultimate goal is to improve the accuracy of chatbots like ChatGPT in responding to user requests. Without human moderators, the AI language system would not exist, as one moderator pointed out to NBC.
The NBC report notes that most moderators are not compensated adequately for their work. In the case of OpenAI's data labelers, the contractors receive no benefits and are paid little more than minimum wage in some states.
This situation is not unique to OpenAI; many other tech companies employ similar labor practices. Still, the pay is strikingly low given the pivotal role the work plays in developing AI language systems like ChatGPT.
An improvement on the previous staffing approach
OpenAI's current staffing arrangement is an improvement over its previous one, under which moderation work was outsourced to Africa. The company partnered with Sama, a firm that marketed "ethical AI supply chain" services but had previously been accused of providing poor working conditions.
Moderators in Kenya were paid as little as $2 per hour to develop a filtration system to weed out offensive material submitted to ChatGPT. To accomplish this, they had to wade through disturbing content, including descriptions of murder, torture, sexual violence, and incest.
While artificial intelligence may seem magical, it relies heavily on the hard work of human moderators. These workers play a crucial role in the development of AI language systems like ChatGPT.
However, despite their critical role, many moderators are not compensated fairly for their contributions.
Tech companies must recognize the importance of fair compensation for their human moderators. While AI may be at the forefront of technological innovation, it should not come at the expense of the people who make it possible.
It's time for tech companies to prioritize the fair compensation of these workers to ensure the continued development of this groundbreaking technology.