Sure, there’s a bit of sensationalism in the article. But the facts are: the company recently set a two-hour daily limit for teenagers, and now it’s cutting off the service for minors entirely after a couple of teen deaths, questions from regulators, and lawsuits from parents. Full WSJ article
Character.AI, one of the top makers of role-play and companion chatbots, implemented the daily two-hour limit in November, citing mental-health concerns. This week the company started cutting off teens completely.
Character.AI’s first version, launched in 2022, offered some of the earliest chatbots available to consumers. It quickly gained traction among people who wanted to role-play with its customizable characters, netting the company about 20 million monthly users today.
The decision to block teens follows the deaths of at least two teenagers who killed themselves after using Character.AI’s chatbots. The company now faces questions from regulators and mental-health professionals about the role of this emerging technology in the lives of its most vulnerable users, as well as lawsuits from the parents of teens who died.
Wait, what? The AI developer is aware of the issues, and they’ll try to do better in the future… eventually.
Mental-health experts say this distress illustrates the emerging risks of generative AI that can simulate human speech and emotion. The brain reacts to these chatbots the way it reacts to a close friend mixed with an immersive videogame, according to Dr. Nina Vasan, director at Stanford Medicine’s Brainstorm Lab for Mental Health Innovation. “The difficulty logging off doesn’t mean something is wrong with the teen,” Vasan said. “It means the tech worked exactly as designed.”
Karandeep Anand, Character.AI’s chief executive, says he saw firsthand during his years working in social media what happened when the industry failed to incorporate safety into the initial design of its products.
About a year ago, Character.AI built a separate model for its under-18 users to try to offer a safer, more age-appropriate setting. But in the following months, executives observed that, in long conversations, chatbots are less likely to adhere to safety guidelines.
Executives also realized that even when chatbots function perfectly, teens sometimes use them in problematic ways: chatting with the bots for too long, or trying to discuss restricted topics such as violence. By mid-September, it became clear to Anand that Character.AI needed to intervene.
Anand believes his company will eventually be able to make safe products for teens that are just as engaging as the chatbots. He is optimistic about audio and video features Character.AI is working on, which don’t allow for the kinds of extended interactions that open-ended text chats do.
As someone who has to be mindful about drinking and who played video games a bit too much in the past, I’m no stranger to addictive behavior, but… what’s so addictive about a chatbot? I’ve used AI for coding and for some web searches when other tools fail, and it’s no more engaging than a hex wrench when I’m working on my bike. What is happening?