The controversial chatbot’s security measures are “a sticking plaster”

Chatbot platform Character.ai is overhauling how it works for teens, promising to make it a “safe” space with additional controls for parents.

The site is facing two lawsuits in the US – one over the death of a teenager – and has been dubbed a “clear and present danger” to young people.

It says it will now “increase security” across everything it does, with new features that will tell parents how their child uses the platform – including how much time they spend with chatbots and which bots they talk to most often.

The platform – which allows users to create digital personas they can interact with – will receive its “first iteration” of parental controls by the end of March 2025.

But Andy Burrows, head of the Molly Rose Foundation, called the announcement “a delayed, reactive and wholly unsatisfactory response” that he said “seems like a sticking plaster fix to their fundamental safety problems.”

“It will be an early test for Ofcom to get to grips with platforms like Character.ai and take action on their continued failure to address entirely avoidable harm,” he said.

Character.ai was criticized in October when chatbot versions of teenagers Molly Russell and Brianna Ghey were found on the platform.

And the new safety features come as the company faces legal action in the US over its previous handling of child safety, with one family claiming that a chatbot told a 17-year-old that murdering his parents was a “reasonable response” to them limiting his screen time.

New features include notifying users after they’ve spoken to a chatbot for an hour and introducing new disclaimers.

Users will now see further warnings that they are talking to a chatbot rather than a real person – and that what it says should be treated as fiction.

And it is adding extra disclaimers to chatbots that pose as psychologists or therapists, telling users not to rely on them for professional advice.

Social media expert Matt Navarra said he believed the introduction of new safety features “reflects a growing awareness of the challenges posed by the rapid integration of AI into our daily lives.”

“These systems not only deliver content, they simulate interactions and relationships that can create unique risks, particularly related to trust and misinformation,” he said.

“I think Character.ai is addressing an important safety gap: the potential for misuse, or for young users to encounter inappropriate content.

“It’s a smart move that takes into account the growing expectations for responsible AI development.”

But he said that while the changes are encouraging, he is interested to see how the safeguards hold up as Character.ai continues to grow.
