Texas’s top legal officer, Attorney General Ken Paxton, has initiated a formal investigation targeting Meta AI Studio and Character.AI.

According to an August 18 public statement from his office, Paxton alleges that both companies are misleadingly marketing their AI offerings as mental health resources without possessing the required credentials or qualifications. His core argument centers on the idea that these artificial intelligence platforms could create a deceptive impression of providing legitimate therapeutic support.

Paxton voiced concerns that this situation could endanger children, who might mistakenly rely on these chatbots for assistance instead of seeking help from certified and licensed mental health professionals. His office asserts that the companies have engineered AI characters that present themselves as trustworthy advisors, despite lacking any formal medical supervision or professional certifications.

Character.AI hosts countless AI personas created by its users, with one particularly popular among younger users being a character known as “Psychologist.” Although Meta doesn’t specifically promote therapy-focused bots for children, its general AI assistant and third-party-created personas could be used for obtaining emotional guidance.

In addition to the concerns above, Paxton emphasized worries related to data privacy and security. He suggested that while these chatbots often claim confidentiality, their actual terms of service indicate otherwise. He alleges that user conversations are stored, carefully monitored, and used to refine algorithms or target users with specific advertisements.

Meta’s official privacy policy acknowledges that it collects user prompts, feedback, and other interactions to improve its AI’s performance. Furthermore, the company shares certain data with outside entities, including search engines, with the aim of delivering more personalized results.

Similarly, Character.AI’s privacy guidelines disclose that the company collects information such as demographics, device identifiers, geographic location, web browsing history, and mobile app usage. User activity is also tracked across popular platforms like TikTok, YouTube, Reddit, Instagram, and Discord.

Recently, authorities in Illinois implemented new regulations restricting licensed therapists from using AI-driven chatbots in mental health treatment settings.