OpenAI has quietly updated its usage policy, removing the prohibition on using its technologies for “military and warfare” applications.
The change took effect January 10th; until then, OpenAI’s policy banned “activity that has high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.”
While unannounced policy updates are common in the tech industry, this alteration is not a mere clarification or rephrasing; it represents a substantive shift in OpenAI’s stance.
The revision cannot be explained solely by the rollout of user-customizable GPTs or the company’s vaguely articulated monetization plans.
OpenAI has not provided a clear explanation for this change, leaving room for speculation about the company’s motives.
The updated policy maintains a blanket prohibition on developing and using weapons, distinct from the removed “military and warfare” clause.
OpenAI spokesperson Niko Felix emphasized this distinction, acknowledging that the military engages in activities beyond weapons development.
The removal of the restriction on military applications suggests OpenAI is open to serving military customers, potentially pursuing business opportunities beyond strictly warfare-related uses.
This decision raises ethical questions about the tech industry’s relationship with government and military funding.
While the military is involved in various non-warfare activities like basic research and infrastructure support, defining ethical boundaries remains a challenge.
OpenAI’s GPT platforms could find applications in non-combat scenarios, such as summarizing decades of documentation on a region’s water infrastructure for army engineers.
OpenAI has not yet responded to requests for clarification about its intentions.
The silence leaves room for interpretation, and it remains to be seen how OpenAI will navigate this shift in policy.