Meta has reshuffled its generative AI teams and disbanded a standalone group focused on responsible AI policies, moving those team members into the delivery teams that build its AI products.
The company maintains that ethics will remain at the core of its use of AI, but no single team will now lead its governance policies on the topic.
Ethical AI is a hot topic as companies and government regulators grapple with the potential for misuse and bias in generative AI tools.
In late October, the Biden administration released its executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which outlined requirements for AI vendors wishing to do business with the federal government.
In July, OpenAI, Microsoft, Google, and Anthropic formed the Frontier Model Forum, an industry body focused on the safe and responsible development of frontier AI models.
Meta is seen as trailing on the topic as it works out its own AI strategy; the company has struggled to articulate how AI will be used within its products.
The company publicly released its foundational generative AI model, Llama 2, in July, in smaller parameter sizes than the OpenAI models that power ChatGPT.
The disbanding of the Responsible AI team follows a year of restructuring by CEO Mark Zuckerberg and Meta's leadership. Zuckerberg declared 2023 a “year of efficiency” for Meta, which in practical terms meant downsizing: roughly 21,000 jobs cut since November 2022. The reduction in force was accompanied by numerous team changes and a broad reorganization of the company.
Until the company fully rolls out its AI plans across Facebook, Instagram, and WhatsApp, it is difficult to gauge whether dissolving the Responsible AI team is just another reorganization or a deliberate move away from centralized corporate governance of AI.