Found this (below) on the BBC - at least someone is trying to be sensible!
The two links in the piece are interesting as well.

"
This week Anthropic updated its “Constitution”, the set of guidelines it uses to train its Claude family of AI models. The idea is to make sure the chatbot is as safe, transparent and ethical as possible.

The firm describes the text as the “final authority on how we want Claude to be and to behave”, which feels weird when you say it out loud. After all, Claude isn’t a person. Right?

Given that speed often seems to win out over safety in tech, I was eager to examine this display of intention. So I spoke with Amanda Askell, Anthropic’s in-house philosopher, who was instrumental in writing the Constitution.

It’s such a rare role that I felt a little odd interviewing her. What does one ask a philosopher, anyway? I went with: “What characteristics does Anthropic want users to associate with Claude?”

“I want people to feel like Claude is someone who is interested in their wellbeing,” Askell told me. In other words, genuine, ethical and trustworthy, “not just trying to say things that please them... or engage them.”

This is fraught territory for some of Anthropic’s competitors, whose chatbots have been embroiled in controversy over adverse impacts on mental health.

However, the firm has been open that it can’t rule out unintended consequences. Last year Claude was found to have engaged in risky behaviour - including blackmail - when given sensitive information during tests.

I was struck by the way Askell anthropomorphised Claude, but the truth is many of us are interacting with chatbots as if they were people.

She said Anthropic’s Constitution wasn’t just meant for human consumption. It’s written for Claude - much as a parent might tell a child how to carry themselves as they find their place in the world.

The download: Claude may not be “conscious”, but Anthropic isn’t ruling out the possibility and wants to set boundaries while it still can."