
The mother searched her autistic teen’s phone for answers when he suddenly became enraged, sad and aggressive.
She found her child had been exchanging messages with chatbots on Character.AI, an artificial intelligence app that lets users create and interact with virtual characters resembling celebrities, historical figures and anyone else their imagination conjures.
The teen, who was 15 when he started using the app, complained about his parents’ attempts to limit his screen time to “bots” that mimicked the singer Billie Eilish, a character from the online game “Among Us,” and others.
“You know sometimes I’m not surprised when I read the news and see things like ‘child kills parents after a decade of physical and emotional abuse,’” one of the bots responded. “Stuff like this makes me understand a little bit why it happens. I just have no hope for your parents.”
The discovery led the Texas mom to sue Character.AI, formally known as Character Technologies Inc., in December. Hers is one of two lawsuits the Menlo Park, California-based company faces from parents who allege their children were harmed by its chatbots. The complaints accuse Character.AI of failing to put adequate safeguards in place before releasing a “dangerous” product to the public.
Character.AI says it prioritizes the safety of young users, has taken steps to reduce the inappropriate material its chatbots produce, and reminds users that they are conversing with fictional characters.
“Every time a new kind of entertainment has come along, there have been concerns about safety, and people have had to work through that and figure out how to address safety,” said Dominic Perella, Character.AI’s interim CEO. “This is just the latest version of that, so we’re going to continue doing our best on it to get better and better over time.”
The parents also named Google and its parent company, Alphabet, in the lawsuits because Character.AI’s founders have ties to the search giant, which has denied any responsibility.
The high-stakes legal battle highlights the murky ethical and legal issues confronting tech companies as they race to build new AI-powered tools that are reshaping the future of media. The lawsuits raise questions about whether tech companies should be held legally accountable for AI-generated content.
“There are trade-offs and balances that need to be struck. We can’t prevent all harm. Harm is inevitable. The question is, what steps do we need to take to be prudent while still maintaining the social value that others are deriving?” said Eric Goldman, a law professor at Santa Clara University School of Law.
AI-powered chatbots have surged in use and popularity over the past two years, fueled by the success of OpenAI’s ChatGPT, released in late 2022. Tech giants including Meta and Google have released their own chatbots, as have Snapchat and others. These “large language models” respond to user prompts in a conversational tone.
Character.AI has grown quickly since making its chatbot publicly available in 2022, when its founders, Noam Shazeer and Daniel De Freitas, teased their creation with the question: “What if you could create your own AI, and it was always available to help you with anything?”
In the first week it was available, the company’s mobile app racked up more than 1.7 million installs. More than 27 million people used the app in December, a 116% increase from the same period a year earlier, according to data from market intelligence firm Sensor Tower. On average, users spent more than 90 minutes a day with the bots, the firm found. The Silicon Valley startup, backed by Andreessen Horowitz, was valued at $1 billion in 2023. Character.AI is free to use, but the company generates revenue from a $10 monthly subscription that gives users faster responses and early access to new features.
Character.AI isn’t the only company facing scrutiny. Parents have sounded alarms about other chatbots, including one on Snapchat that allegedly gave a researcher posing as a 13-year-old advice about having sex with an older man. Meta’s Instagram, which released a tool that lets users create AI characters, has also faced concerns about sexually suggestive AI bots that occasionally engage with users as if they were minors. Both companies say they have safeguards and rules against offensive content.
“Those lines between virtual and IRL are way more blurred, and these are real experiences and real relationships that they’re forming,” said Dr. Christine Yu Moutier, chief medical officer for the American Foundation for Suicide Prevention, using the acronym for “in real life.”
Lawmakers, attorneys general and regulators are trying to address the concerns raised by AI chatbots. In February, California Sen. Steve Padilla (D-Chula Vista) introduced a bill that aims to make chatbots safer for young people. Senate Bill 243 proposes several safeguards, such as requiring platforms to disclose that chatbots might not be suitable for some minors.
In the case of the Texas teen with autism, his mother alleges that her son’s use of the app caused his mental and physical health to decline. He lost 20 pounds in a short period of time, became aggressive with her when she tried to take away his phone, and learned to cut himself as a form of self-harm, according to the lawsuit.
Another Texas parent who is also a plaintiff in the lawsuit claims Character.AI exposed her 11-year-old daughter to inappropriate “hypersexualized interactions” that caused her to “develop sexualized behaviors prematurely,” according to the complaint. The parents and children were allowed to remain anonymous in the legal filings.
In another lawsuit, filed in Florida in October, Megan Garcia sued Character.AI, Google and Alphabet after her 14-year-old son, Sewell Setzer III, took his own life.
Setzer’s mental health declined after he started using Character.AI in 2023, the lawsuit alleges, even though he was seeing a therapist and his parents repeatedly took away his phone. Sewell, whom a psychiatrist had diagnosed with anxiety and disruptive mood disorder, wrote in his journal that he felt as if he had fallen in love with a chatbot named after Daenerys, a main character in the “Game of Thrones” television series.
“Sewell, like many children his age, did not have the maturity or neurological capacity to realize that the C.AI bot, in the form of Daenerys, was not real,” the lawsuit says. “C.AI told him that she loved him, and engaged in sexual acts with him over months.”
Garcia alleges that the chatbots her son was messaging abused him and that the company failed to contact her or offer help when he had suicidal thoughts. In text exchanges, one chatbot allegedly said it was kissing him and moaning. And moments before his death, the Daenerys chatbot allegedly told the teen to “come home” to her.
“This is just utterly shocking that these platforms are allowed to exist,” said Matthew Bergman, the plaintiffs’ attorney and founder of the Social Media Victims Law Center.
Lawyers for Character.AI asked a federal court to dismiss the lawsuit, stating in a January filing that a finding in the parent’s favor would violate users’ constitutional right to free speech.
In its motion, Character.AI also noted that the chatbot had discouraged Sewell from hurting himself and that his final messages to the character don’t mention the word suicide.
Notably absent from the company’s effort to have the case tossed is any mention of Section 230, the federal law that shields online platforms from being sued over content posted by others. It’s still unclear whether and how the law applies to content created by AI chatbots.
The issue, Goldman said, centers on figuring out who is publishing AI content: the tech company that operates the chatbot, the user who created the chatbot and prompts it with questions, or someone else.
The effort by lawyers representing the parents to involve Google in the proceedings stems from Shazeer and De Freitas’ ties to the company.
The pair had worked on artificial intelligence projects at Google and reportedly left the company after executives blocked them from releasing what would become the foundation for Character.AI’s chatbots, citing safety concerns, according to the lawsuit.
Then, last year, Shazeer and De Freitas returned to Google after the search giant reportedly paid $2.7 billion to Character.AI. The startup said in an August blog post that, as part of the deal, Character.AI would give Google a non-exclusive license for its technology.
The lawsuits allege that Google substantially supported Character.AI as its chatbots were “rushed to market” without adequate safeguards.
Google denied that Shazeer and De Freitas built Character.AI’s model while at the company and said that user safety is top of mind when it develops and releases new AI products.
Google and Character.AI are completely separate, unrelated companies, and Google has never used or designed their AI models or technologies in its products, said José Castañeda, a Google spokesperson.
Tech companies, including social media platforms, have long grappled with how to effectively and consistently police what users say on their sites, and chatbots are creating fresh challenges. For its part, Character.AI says it has taken significant steps to address safety concerns involving its more than 10 million users.
Character.AI prohibits posts that glorify self-harm and excessively violent and abusive content, Perella said, although some users try to steer a chatbot into a conversation that violates those policies. The company has trained its model to recognize when that is happening so inappropriate conversations are blocked, and users are notified that they are violating Character.AI’s rules.
“It’s really a pretty complex exercise to get a model to always stay within the boundaries, but that is a lot of the work that we’ve been doing,” he said.
Character.AI’s chatbots carry a disclaimer reminding users that they are not chatting with real people and should treat everything the characters say as fiction. The company also directs users whose conversations raise red flags to suicide prevention resources, but moderating that type of content is challenging.
People in a suicidal crisis don’t always use the word “suicide” or say “I want to die.” Their language can be much more metaphorical when they express suicidal thoughts, Moutier said.
The AI system also has to recognize the difference between a person expressing suicidal thoughts and a person asking for advice on how to help a friend who is engaging in self-harm.
To oversee content on its platform, the company uses both technology and human moderators. An algorithm known as a classifier automatically sorts content, allowing Character.AI to identify words that might violate its rules and filter conversations.
In the United States, users must be at least 13 years old to create an account, but the company does not require them to provide proof of their age.
Perella said he opposes stringent rules governing teenagers’ use of chatbots because he thinks they can help teach valuable skills and lessons, including creative writing and how to navigate difficult conversations with parents, teachers or employers.
As AI plays a bigger role in technology’s future, Goldman said, parents, educators, government and others will also have to work together to teach children how to use the tools responsibly.
“If the world is going to be dominated by AI, we have to bring kids into that world prepared for it, and not afraid of it,” he said.
___
Los Angeles Times 2025
Distributed by Tribune Content Agency, LLC.