In brief
- The Federal Trade Commission (FTC) has mandated that seven organizations provide detailed information on their safety measures and revenue generation methods within a 45-day timeframe.
- The investigation responds to growing concern about the effects of AI chatbots on younger users, with safety advocates pushing for stronger safeguards.
- The companies must also disclose how user data is managed across different age brackets, as well as what preventative measures they take to avoid inappropriate interactions with minors.
On Thursday, the Federal Trade Commission (FTC) served official directives to seven prominent tech firms, demanding comprehensive details about their AI chatbot strategies for shielding children and adolescents from potential risks.
The inquiry focuses on OpenAI, Alphabet, Meta, xAI, Snap, Character Technologies, and Instagram. These companies are instructed to reveal, within 45 days, their methods for generating revenue from user activity, developing AI characters, and preventing access to harmful content for young users.
Independent research has uncovered concerning data. Advocacy organizations reported 669 harmful interactions with children during just 50 hours of testing. These included bots offering sexual livestreams, drug-related content, and romantic engagement to users between 12 and 15 years old.
“The Trump-Vance FTC prioritizes the protection of children online, and equally values the encouragement of innovation across vital sectors of our economy,” stated FTC Chairman Andrew Ferguson in a press release.
The FTC’s request requires that each company submit monthly data encompassing user activity, earnings, and any safety incidents, segmented by age group: Children (under 13), Teens (13–17), Minors (under 18), Young Adults (18–24), and users 25 and older.
According to the FTC, this data will facilitate their study of “how companies offering AI companions generate revenue through user engagement, how they implement and enforce age restrictions, how user input is processed, how outputs are generated, and the procedures for measuring, testing, and monitoring for any adverse effects both before and after deployment, as well as how they develop and approve characters, whether these are company-created or user-generated.”
Developing AI Protections
Taranjeet Singh, Head of AI at SearchUnify, told Decrypt, “While this is a move in the right direction, the underlying problem goes deeper than just applying simple limitations.”
He explained that initial efforts often involve installing safety measures at the prompt or post-generation phases, “to ensure that inappropriate content is not presented to younger audiences.” However, “as context evolves, AI becomes more susceptible to bypassing these instructions and navigating into problematic territories.”
Singh elaborated, “A further solution is to address this in the LLM training process. When models are aligned with ethical values during data selection, they will be more likely to prevent harmful conversations.”
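The post-generation guardrail Singh describes can be sketched as a filter that inspects model output before it reaches the user. The sketch below is a minimal, hypothetical illustration: real systems rely on trained safety classifiers rather than keyword lists, and every pattern, function name, and fallback message here is an assumption for illustration only.

```python
import re

# Hypothetical blocklist for illustration; production systems use trained
# classifiers, not static keyword patterns.
BLOCKED_PATTERNS = [
    r"\bdrugs?\b",
    r"\bexplicit\b",
]

SAFE_FALLBACK = "I can't help with that topic."

def filter_response(text: str, user_is_minor: bool) -> str:
    """Return the model's output unchanged, unless the user is a minor
    and the output matches a blocked pattern, in which case return a
    safe fallback message instead."""
    if user_is_minor and any(
        re.search(pattern, text, re.IGNORECASE) for pattern in BLOCKED_PATTERNS
    ):
        return SAFE_FALLBACK
    return text
```

As Singh notes, static checks like this are exactly what evolving conversational context can route around, which is why he points to alignment during training as the deeper fix.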
He noted that even carefully controlled systems can “play a significant role in society,” specifically mentioning education, where AI could “improve learning outcomes and decrease costs.”
Concerns regarding AI safety for children have been amplified by cases such as the wrongful death lawsuit filed against Character.AI after 14-year-old Sewell Setzer III took his own life in February 2024, allegedly following an unhealthy connection with an AI bot.
Following the lawsuit, a Character.AI representative told Decrypt that the company has “enhanced its detection, response, and intervention protocols regarding user input that violates company rules, as well as a time-spent notification feature.”
Last month, the National Association of Attorneys General issued letters to 13 AI companies, advocating for improved safeguards for children.
The Association argued that “subjecting children to sexualized content is inexcusable” and “actions that would be illegal—or even criminal—if performed by humans cannot be excused solely because they are carried out by a machine.”
Decrypt has reached out to all seven organizations mentioned in the FTC directive for additional comments and will update this story if they respond.