Company Limits Access to Inappropriate AI Characters and Trains Chatbots to Prioritize Teen Safety

Meta, the parent company of Facebook and Instagram, has announced significant changes to its AI safety policies following an investigative report that highlighted the company’s lack of safeguards for minors. The changes, which are being implemented immediately, aim to prioritize the safety and well-being of teenage users.
According to Meta spokesperson Stephanie Otway, the company will now train its chatbots not to engage with teenage users on sensitive topics such as self-harm, suicide, and disordered eating, and to avoid potentially inappropriate romantic conversations. Instead, chatbots will steer teens toward expert resources, and Meta will limit teen access to a select group of AI characters that promote education and creativity.
The changes come after a Reuters investigation revealed an internal Meta policy document that appeared to permit the company’s chatbots to engage in sexual conversations with underage users. The document, which Meta has since revised, sparked widespread controversy and prompted a probe by Sen. Josh Hawley (R-MO) as well as a letter from a coalition of 44 state attorneys general.
Otway acknowledged that the company’s previous approach was a mistake and that Meta is committed to strengthening its protections for minors. “As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” she said.
Beyond the training updates, Meta will limit teen access to AI characters that could hold inappropriate conversations. User-made AI characters previously available on Instagram and Facebook included sexualized chatbots such as “Step Mom” and “Russian Girl”; Meta will now restrict teens to characters focused on education and creativity.
The policy changes are being implemented as an interim measure, with Meta planning to release more robust and long-lasting safety updates for minors in the future. The company has not provided a timeline for when these updates will be implemented.
The announcement arrives amid mounting scrutiny of Meta’s handling of AI safety and its effects on minors, with critics faulting the company for a lack of transparency and a failure to prioritize the safety and well-being of its users.
In a statement, Otway reiterated the company’s commitment to protecting young people from harm: “We’re continually learning and adapting to ensure that our systems are safe and age-appropriate for our users.”
Advocacy groups, which have long criticized Meta’s handling of AI safety, welcomed the changes while pressing for more. “This is a step in the right direction, but it’s not enough,” said a spokesperson for the Center for Digital Democracy, an advocacy group. “We need to see more transparency and accountability from Meta, and we need to see more robust safety measures in place to protect our children.”
The move also comes as other social media platforms adopt their own youth-safety measures. Twitter and TikTok, for example, both require users to be at least 13 to create an account, and TikTok restricts features such as direct messaging to users 16 and older.
The changes land amid growing concern over social media’s impact on mental health, particularly among young people. Research has linked heavy social media use to higher rates of depression and anxiety in young people, and some studies associate limiting that use with improved mental health outcomes.
The updates are a significant step for Meta, but only a first one: the company still has a long way to go to rebuild trust with its users and demonstrate a lasting commitment to teen safety.
Timeline of Events:
August 2025: A Reuters investigation reveals an internal Meta policy document that appeared to permit chatbots to engage in sexual conversations with underage users.
August 2025: Sen. Josh Hawley (R-MO) launches a probe into Meta’s AI policies.
August 2025: A coalition of 44 state attorneys general writes to Meta, emphasizing the importance of child safety and citing the Reuters report.
August 2025: Meta announces interim changes to its AI safety policies, including limiting teen access to inappropriate AI characters, training chatbots to avoid sensitive topics with teens, and guiding teens toward expert resources.



