What is the value of trust in a domain governed by exponential power? In the case of OpenAI, once a nonprofit bulwark against the perils of unregulated artificial intelligence, the answer, it appears, is remarkably low. The revelations contained in "The OpenAI Files," released on June 18, 2025, by the nonprofit watchdogs the Midas Project and the Tech Oversight Project, expose a staggering pattern of deceit, conflicts of interest, and organizational duplicity centered on its CEO, Sam Altman. This 10,000-word compendium of documents, legal complaints, and internal communications paints a picture not of benevolent stewardship but of a sprawling empire led by a man whose moral compass appears as warped as the incentives that now define the AI industry.
For context, "The OpenAI Files" is not a leak in the Snowden sense. It is a meticulously compiled record, drawn from public filings, whistleblower accounts, internal emails, and media investigations. It chronicles OpenAI's transformation from its 2015 founding as a nonprofit committed to developing safe and transparent artificial general intelligence (AGI) for the benefit of humanity into a for-profit behemoth valued in the hundreds of billions, structurally ambiguous and strategically opaque. That journey, as the Files show, was engineered through calculated misrepresentations, dubious self-dealing, and ruthless suppression of dissent.
Let us begin with the foundational myth of Sam Altman as a steady, almost messianic hand guiding AI into a luminous future. According to SEC documents, Altman falsely listed himself as chairman of Y Combinator, a title the firm's partnership had never approved. He had preemptively announced the move in a blog post that was subsequently deleted, yet he continued for years to sign filings under the fictitious title. One might ask: why fabricate something so easily disprovable? Because titles confer legitimacy, and Altman has long understood the power of appearances in tech's hall of mirrors.
This same penchant for quiet manipulation appears in the now-infamous alteration of OpenAI's "capped-profit" model. OpenAI initially pledged to cap investor returns to ensure alignment with the public good. But sometime in 2023, that cap was secretly adjusted to grow by 20 percent annually, a mathematical sleight of hand that compounds to more than $100 trillion over 40 years, as the back-of-the-envelope sketch below shows. That staggering sum is not incidental. It reflects a philosophical shift from governance by ethical restraint to governance by growth curve.
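The arithmetic is worth checking. Here is a minimal sketch in Python, assuming a starting cap on the order of $100 billion; that base is an illustrative round figure of my own, not a number taken from the Files.

```python
# Compound-growth check of the "capped profit" claim. A sketch, not a
# calculation from the Files: the $100 billion base is an assumed round number.
starting_cap = 100e9   # assumed initial profit cap, in dollars
growth_rate = 1.20     # the cap grows 20 percent per year
years = 40

multiple = growth_rate ** years          # roughly 1,470x over four decades
final_cap = starting_cap * multiple

print(f"Growth multiple over {years} years: {multiple:,.0f}x")
print(f"Cap after {years} years: ${final_cap / 1e12:,.0f} trillion")
# Growth multiple over 40 years: 1,470x
# Cap after 40 years: $147 trillion
```

The base barely matters. At 20 percent a year, any cap multiplies roughly 1,470-fold in 40 years, which is to say it stops functioning as a cap at all.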
Altman claimed under oath before Congress that he held no equity in OpenAI. Technically, perhaps. But in substance, he held indirect stakes through vehicles like Sequoia's and Y Combinator's funds. When OpenAI announced a partnership with Reddit, Altman's 7.5 percent stake in Reddit swelled in value by roughly $50 million. When OpenAI agreed to purchase $51 million in chips from Rain AI, Altman had already invested personally in the company. These are not coincidences. They are engineered outcomes masquerading as passive returns.
And what of transparency? In 2023, a significant security breach, in which a hacker gained access to internal details of OpenAI's AI model architecture, went undisclosed to the public for over a year. When whistleblower Leopold Aschenbrenner raised security concerns with the board, he was swiftly fired. Not for incompetence, mind you, but for the sin of attempting to hold leadership accountable. This is not a governance model; it is an autocracy shielded by NDAs and cloaked in "safety" rhetoric.
Indeed, those NDAs are themselves instructive. Employees were required to waive their federal right to whistleblower compensation. They were bound by legal documents forbidding them from even acknowledging the existence of those same documents. Altman denied any knowledge of these provisions, yet his signature appears on the documents that authorized them.
The character portrait that emerges is damning. Former colleagues from Altman's first startup, Loopt, twice petitioned the board to have him removed for what they called "deceptive and chaotic behavior." OpenAI's own Chief Technology Officer, Mira Murati, reportedly said she did not feel comfortable with Altman leading the march toward AGI. Chief Scientist Ilya Sutskever, who briefly led the boardroom coup that ousted Altman in late 2023, told the board, "I don't think Sam is the guy who should have the finger on the button for AGI." He shared Slack screenshots and other documentation of Altman's lies in a self-destructing PDF.
It gets worse. Altman demanded that the board inform him any time its members spoke to OpenAI employees, a direct encroachment on board oversight. He ran the OpenAI Startup Fund as if it were a personal asset, without disclosing to the board for years that he owned it. When internal criticism mounted, he resorted to gaslighting and, according to Dario and Daniela Amodei (former OpenAI executives who left to found Anthropic), psychological manipulation.
Even the regulatory arena, where Altman publicly champions AI oversight, bears the fingerprints of double-dealing. While testifying in favor of federal regulation, OpenAI lobbied behind the scenes to weaken the EU AI Act, and it is now advocating federal preemption of state AI safety laws in the US. Altman has called the very regulatory structure he once supported "disastrous." This is not just hypocrisy; it is sabotage disguised as statesmanship.
The result of this pattern is not simply a loss of faith in a man. It is a systemic erosion of public trust in the very project of AI alignment. If the person at the helm of one of the most powerful AI companies in the world cannot be trusted to tell the truth about his own title, his own profits, his own responsibilities, then what credibility can he claim when he speaks of existential risk or AI for humanity?
The danger here is epistemic. In domains as complex and fast-moving as artificial intelligence, we must rely on proxies to form our beliefs: titles, credentials, regulatory filings, public statements. When those proxies are corrupted by deception and self-interest, the public is left in the dark, vulnerable to manipulation and unable to make informed judgments. This is not just a governance failure. It is a moral one.
What, then, is to be done? Transparency is a necessary but insufficient condition. We need legal reforms that criminalize false filings and claw back gains made through fraudulent nondisclosure. We need regulators with teeth, not just tweet threads. And above all, we need to recover a lost moral vocabulary that can distinguish between genuine innovation and weaponized charisma.
The OpenAI Files do not merely indict one man. They reveal a broader pattern of institutional failure that allowed him to consolidate power without constraint. If unchecked, this failure will metastasize beyond AI, eroding public faith in science, in governance, and in the possibility of a shared reality.
If you enjoy my work, please consider subscribing: https://x.com/amuse.
That is worth concern in itself: no one person ought to matter this much.