

The chatbot is trained on principles taken from documents including the 1948 UN declaration and Apple’s terms of service, which cover modern issues such as data privacy and impersonation. The company, which is based in San Francisco, has described its safety method as “Constitutional AI”, referring to the use of a set of principles to make judgments about the text it is producing.

One example of a Claude 2 principle based on the UN declaration is: “Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood.”

Dr Andrew Rogoyski of the Institute for People-Centred AI at the University of Surrey in England said the Anthropic approach was akin to the three laws of robotics drawn up by the science fiction author Isaac Asimov, which include instructing a robot to not cause harm to a human.

“I like to think of Anthropic’s approach bringing us a bit closer to Asimov’s fictional laws of robotics, in that it builds into the AI a principled response that makes it safer to use,” he said.

Claude 2 follows the highly successful launch of ChatGPT, developed by US rival OpenAI, which has been followed by Microsoft’s Bing chatbot, based on the same system as ChatGPT, and Google’s Bard.

Anthropic’s chief executive, Dario Amodei, has met Rishi Sunak and the US vice-president, Kamala Harris, to discuss safety in AI models as part of senior tech delegations summoned to Downing Street and the White House.
