RAND Corporation, an influential international think tank, played a key role in drafting President Biden’s new executive order on AI regulation.
According to an AI researcher and an internal RAND recording, RAND advanced sweeping reporting requirements for powerful AI systems to mitigate catastrophic risks.
These priorities align with those of Open Philanthropy, a group financed by Facebook co-founder Dustin Moskovitz that donated over $15 million to RAND this year.
Open Philanthropy promotes “effective altruism,” an ideology made famous by disgraced FTX founder Sam Bankman-Fried that applies data-driven philanthropy to causes including speculative AI threats such as bioweapons development.
Proponents say the movement prioritizes the global problems and interventions that can most improve lives, both now and in the future.
Effective altruists rely on science and rationality to guide their altruism and philanthropy toward demonstrably effective causes that reduce suffering.
AI companies like Anthropic were founded on the principles of effective altruism and drew on them in training their AI chatbot, Claude.
Open Philanthropy has close ties with the leadership groups at both Anthropic and OpenAI.
Critics claim effective altruism serves tech companies by diverting attention from current AI harms, such as racial bias, toward distant hypothetical ones.
RAND’s CEO and a senior scientist both have ties to the Biden administration, having previously served at the White House Office of Science and Technology Policy and the National Security Council.
RAND confirmed its personnel, including scientist Jeff Alstott, helped draft the executive order.
As the White House looks to RAND for guidance on regulating the technology, this raises concerns that tech billionaires’ agendas will outweigh attention to existing AI challenges.
However, these same tech billionaires were among the first to call for governmental oversight of AI.