Former OpenAI Employee Recounts Traumatic Experience Witnessing Disturbing Content During ChatGPT Training

Artificial Intelligence (AI) has become an integral part of our daily lives, powering various applications and services. Behind the scenes, there are countless hours spent on training and fine-tuning AI models to ensure they perform effectively and safely.

However, recent reports shed light on the traumatic experiences faced by AI specialists involved in training ChatGPT, an AI language model developed by OpenAI. The employees, contracted by OpenAI through an AI annotation company called Sama, worked on a process known as “Reinforcement Learning from Human Feedback,” and they have now come forward to share their distressing encounters.

As reported by The U.S. Sun, Richard Mathenge, one of the AI specialists, spoke to the news publication Slate about his experience training ChatGPT. Mathenge revealed that he dedicated nine hours a day, five days a week, to training the model, and that the nature of the tasks assigned to him and his team was deeply troubling.

They were repeatedly exposed to explicit and disturbing text in order to label it as inappropriate content. While this process helps ensure that language models are safe for public use, it took a toll on the mental well-being of the trainers.

The explicit texts reportedly included passages describing heinous crimes such as child sexual abuse and bestiality. Mathenge expressed concern for his team, noticing signs of emotional distress and disinterest in their work. He shared that his team members were not prepared to engage with such explicit and distressing content, underscoring the detrimental impact it had on their psychological well-being.

Mophat Okinyi, another member of Mathenge’s team, revealed that he continues to suffer from medical issues resulting from the training experience. Okinyi experiences panic attacks, insomnia, anxiety, and depression, and he believes that these conditions directly stem from the traumatizing tasks he had to undertake. He even attributes the disintegration of his family and his wife leaving him to the toll that this job took on his mental health.

The AI specialists involved in training ChatGPT feel that the support they received during the process was inadequate. They believe that more comprehensive wellness programs and individual counseling should have been provided to address the emotional challenges they faced.

In response to the concerns raised, OpenAI stated that it had understood wellness programs and counseling to be available, that workers could opt out of distressing tasks, that exposure to explicit content was limited, and that sensitive material was handled by specially trained workers. The employees, however, argue that the support fell short of their needs.

Mathenge explained that a counselor did report for duty at one point but was, in his words, “not professional” or even qualified to deal with the situation. The counselor reportedly asked what Mathenge called “basic questions,” such as “What’s your name?” and “How do you find your work?”, which did nothing to help the employees cope with the unique challenges they faced at work.

It is important to recognize the immense contributions these AI specialists made to training ChatGPT and to the model’s success. Despite the traumatic experiences they endured, they take pride in their work. Still, it is crucial for organizations like OpenAI and AI annotation companies such as Sama to prioritize the well-being of their human trainers. Steps must be taken to ensure that comprehensive support systems are in place to address the emotional toll of training AI models, including adequate counseling, mental health resources, and limits on exposure to explicit and distressing content.
