OpenAI Publishes 5 Principles For Its AGI Push
OpenAI CEO Sam Altman published five operating principles this week that outline OpenAI’s work toward advanced artificial intelligence, saying the company wants AI to serve humanity broadly, not concentrate power inside a few labs or a few governments. Future systems may require tighter limits when safety risks rise, he added. The post’s timing coincides with the company’s appearance in court this week for the start of jury selection in a case alleging that it has shifted from its nonprofit roots, founded to benefit humanity, to a for-profit venture.
That surfaces the core tension in building increasingly powerful AI systems. OpenAI wants broad access, but it reserves room to restrict some capabilities when resilience matters more than user freedom.
OpenAI has been addressing artificial general intelligence since its founding. AGI refers to AI systems that can perform a wide range of cognitive tasks at or above human level, rather than excelling only at narrow functions or requiring specific models for specific situations. The company’s 2018 charter came from its roots as a research lab still defined by its nonprofit origins. It promised broad benefit, long-term safety, technical leadership and cooperation. It even said OpenAI would stop competing and help another value-aligned project if that project came close to building safe AGI first.
The new principles keep some of that language, but they give the company more room to maneuver. The updated document, which focuses on democratization, empowerment, universal prosperity, resilience and adaptability, places less emphasis on AGI, shifts away from the older rival-assistance language and makes fewer direct commitments than the 2018 charter.
With increasing concern about AI’s power and role in society, and even direct threats and attacks against AI CEOs and researchers, being explicit about AI’s progress toward AGI is becoming critical. OpenAI is now sharing not only what kind of lab it wants to be, but also what kind of institution it has become.
The Five Principles Announced This Week
As part of its first principle, democratization, the company says it will resist concentrated power and wants major AI decisions shaped through democratic processes, not only by AI labs. The core idea is to broaden access to AI capabilities without centralizing control in one organization or company.
The challenge is that the AI race runs on chips, energy, talent, data centers and distribution. Those resources cost a lot of money and tend to gather around companies that can raise enormous capital and negotiate at the scale of national infrastructure. That makes true democratization difficult when only a few players can muster the resources needed to build the required AI capacity.
The second principle, empowerment, gives users broad latitude. OpenAI says people should have meaningful control over how they use AI. Yet the same section ties that freedom to a duty to reduce catastrophic harm, local harm and social damage. Already there have been many instances of people using AI systems to cause harm to themselves and others. The tension between openness and control will be hard to balance, especially when lawsuits and regulation put AI in the crosshairs.
The third principle, universal prosperity, links AI access to massive infrastructure buildout and lower compute costs. OpenAI is no longer talking only about models. It is talking like a cloud company, an energy customer and a national industrial asset at the same time. It believes that AI will be a fundamental utility, like water and electricity, and should be broadly available and universally accessible.
The fourth principle, resilience, moves the company closer to the language of national security. OpenAI points to biological risk, cybersecurity and critical infrastructure. It says no single lab can secure the future alone, and it wants to put oversight in place to ensure that any society-wide harms can be detected early and mitigated quickly. That may be easier said than done, especially when foreign labs are building their own models outside the reach and control of the large AI vendors.
The fifth principle, adaptability, is the most revealing. OpenAI says it will change course as it learns and that some periods may require placing resilience ahead of empowerment. This means that the closer AI gets to AGI capabilities, the more access may become conditional.
There’s no guarantee that all models will be available to everyone all the time, or that they will be affordable to use. Flexibility also carries its own risk: a principle that can change with evidence can also change with market pressure. A lab that says it may place resilience above empowerment needs outside scrutiny, not only internal conviction.
OpenAI Joins A Wider Race To Define AGI Safety
OpenAI is not alone. Every major frontier lab is trying to write rules without losing speed. Anthropic has taken a more research-oriented, self-policing route. Its Responsible Scaling Policy is a voluntary framework for managing catastrophic risks from advanced AI systems. Anthropic says risk governance should be proportional, iterative and exportable. Its Version 3.1 update, released this month, focuses on increasingly capable models and on how the company identifies, evaluates and mitigates risks.
In the latest update, the company said it had completed two prior safety goals, including launching planned moonshot R&D projects and finishing an internal safeguards report on data-retention policies. The policy now defines the AI R&D capability threshold more precisely and makes explicit that Anthropic can still pause AI development whenever it deems appropriate, even when its Responsible Scaling Policy does not require it.
The Anthropic approach shows how quickly safety pledges bend under competitive pressure. Business Insider reported in February that Anthropic had softened a signature safety pledge and moved toward public risk assessments rather than firm unilateral pauses. Widely reported conflicts between Anthropic and the US government also reflect the challenge of sticking to responsible AI goals. The company has little choice but to adapt those frameworks, since pausing its own model development won’t help if other companies keep racing.
Google DeepMind uses a different design. Its Frontier Safety Framework is built around Critical Capability Levels, with evaluations before deployment and tiered mitigations for dangerous capabilities. The framework focuses on risks such as autonomy, biosecurity, cybersecurity and machine learning research. The approach lies somewhere on the continuum between a public mission statement and a tripwire system that guides model-release decisions.
Meta is pushing open-source models and approaches as an alternative to central guidance and control. Meta CEO Mark Zuckerberg has argued that open-source AI is good for developers, good for Meta and good for the world. In 2024, he wrote that open source would become the best development stack and a long-term platform. Zuckerberg has increasingly focused Meta on building full general intelligence and making it broadly available.
Governance Keeps Trying To Keep Up With AI Development
The policy world is trying to catch up with the rapid pace of AI model development and capability. The International AI Safety Report 2026 says a dozen companies published or updated frontier safety frameworks in 2025. Many of those rules remain voluntary, though some governments have started turning pieces of that practice into law. The main challenge is that AI capability moves fast, so proof of harm often arrives late.
OpenAI’s 2018 charter sounded like it came from a research group trying to prove it could be trusted before the race accelerated. It talked about public goods, cooperation with other research and policy institutions, and a willingness to assist another project under some conditions. But much has changed in the past eight years. The most recent 2026 principles sound more corporate and more exposed to geopolitics.
OpenAI now says AI power could be held by a few companies or spread among many users. It says it prefers the second path. Yet that path still depends on enormous central investment: more compute, more data centers, more energy and, over time, lower infrastructure costs. That is the paradox. OpenAI wants broad access to AGI, but to get there, it says, the world needs infrastructure it doesn’t currently have.
That will not be cheap. It will not be built by casual startups or small organizations. It will depend on GPUs, power, land, talent, capital and government permission, the same bottlenecks shaping the entire AI industry. OpenAI knows this, and the company is asking the public to let it build at scale, believe it will spread the benefits and accept that it may restrict some uses when risks grow. Perhaps that, along with Altman’s own personal safety concerns, is driving the development and release of the latest principles. The company, and Altman specifically, is asking for support, patience and belief in where all this is heading.