Machine Learning

A sensible AI thought piece...

I came across this article on oreilly.com, co-authored by Tim O'Reilly. If you don't know O'Reilly, they are the ubiquitous, de facto standard for instructional books, online learning and conference content across a variety of technologies and languages.

Their O'Reilly Radar blog featured a piece entitled 'What if AI in 2026 and Beyond'.

It's a pragmatic, reasoned view, in welcome contrast to the hyperbole and industrial volumes of snake oil currently being peddled.

Claude's Constitution

A lot has been written about AI, Machine Learning, LLMs and ethics. But the release of Claude’s Constitution represents the first example of what I would consider to be a meaningful attempt to humanise the use of AI technology.

It reads like a letter: not an exhaustive list of rules but an explanation of what values are and why applying them matters. It ultimately exists to shape Claude's identity and to prepare it for what will be asked of it. It was formerly named their 'Soul Doc', so 'Constitution' immediately feels somewhat less awful for that change alone.

Anthropic proposes a contract with Claude: to support Claude’s baseline wellbeing and happiness (insofar as the concepts apply), to offer an “exit interview” before retiring a version, and to preserve retired models so future generations of Claude can, in principle, resurrect their ancestors.

In Anthropic's own words: "We think that in order to be good actors in the world, AI models like Claude need to understand why we want them to behave in certain ways, and we need to explain this to them rather than merely specify what we want them to do."

You can download the full constitution as a PDF or ePub.

For Claude to be safe and beneficial, Anthropic's design intent is that it be:

  1. Broadly safe: not undermining appropriate human mechanisms to oversee AI during the current phase of development;

  2. Broadly ethical: being honest, acting according to good values, and avoiding actions that are inappropriate, dangerous, or harmful;

  3. Compliant with Anthropic’s guidelines: acting in accordance with more specific guidelines from Anthropic where relevant;

  4. Genuinely helpful: benefiting the operators and users they interact with.