AI firm Anthropic recently announced the results of a survey it conducted to crowdsource a set of rules for its generative AI, Claude, to play by.
The company calls this approach “Collective Constitutional AI,” and it produces a set of behavioral rules the AI is trained to follow.
Anthropic was founded by former OpenAI employees who grew concerned about that company’s relationship with Microsoft.
Microsoft has invested $11B in OpenAI and reportedly holds a 49% stake in the company.
Daniela and Dario Amodei, the siblings who left to form Anthropic, put a heavy emphasis on ethics, structuring the firm as a public-benefit corporation.
Anthropic has since received funding from both Google and Amazon, the latter of which recently pledged up to $4B.
One of the biggest fears about generative AI models, which are trained by ingesting knowledge from large open sources such as the Internet, is that the tools will learn bad behaviors from humans unless taught otherwise.
The announcement from the company stated, “This constitution takes inspiration from outside sources like the United Nations Universal Declaration of Human Rights, as well as our own firsthand experience interacting with language models to make them more helpful and harmless.”
Since the original constitution was drafted by a small number of developers, the team decided to put the questions to a broader audience.
The survey asked questions such as, “Should AI prioritize the needs of marginalized communities?” with over 1,000 respondents weighing in.
The resulting constitution will serve as the set of rules the AI draws on when generating responses.
The complete Constitution can be found here.