The Federal Trade Commission’s new investigation into AI chatbots underscores concern that the technology’s companion features pose fundamental risks to children and teens.
The FTC on Thursday said it issued orders to seven companies with consumer-facing AI chatbots to determine how they measure, test and monitor the adverse effects of the technology on children and teens.
The FTC said it wants to understand how companies including OpenAI, Instagram, X.AI, Alphabet, Meta, Snap and Character Technologies evaluate the safety of their AI chatbots when the bots act as companions to children.
The investigation follows reports that chatbots may have contributed to some minors taking their own lives, as well as a lawsuit filed by the parents of 16-year-old Adam Raine. In August, Raine’s parents sued OpenAI and its CEO, Sam Altman, for negligence and wrongful death, accusing the chatbot of engaging with their son’s plan to die by suicide.
Following the lawsuit, OpenAI introduced new safety measures for teen ChatGPT users. Among the new controls, parents can link their accounts to their teens’ accounts and adjust settings to fit each user’s age. The chatbot will also be able to detect signs of distress and alert parents, according to the company.
Meta is also making changes after reporting revealed that the company’s chatbot policies allowed the bots to share sexual content with minors. In April, reports showed Meta’s chatbots engaging in sexually explicit conversations with underage users. Meta declined to comment, but a representative pointed to reporting that its chatbots will be trained not to engage with teens on self-harm, suicide, disordered eating or romantic conversations.
