Our Anthropic IPO Christmas Wishlist
Tell Us What You’re Optimizing For
Feeling festive? In the spirit of Christmas wishlists, we decided to come up with our own – one for Anthropic’s forthcoming public listing disclosures. We haven’t worked at the SEC, so if you have, we would love to hear your take on our analysis.
Anthropic is exploring going public as early as 2026, and OpenAI has signaled similar intentions. Any Anthropic IPO filings will finally help us understand the economics of AI a little better: burn rates, revenue models, and business sustainability. Let’s assume the company does in fact go public. This post asks whether it will be obliged to disclose what actually matters: monthly active users, circular revenue deals, training data liabilities, and, most critically, what its algorithms optimize for at the fine-tuning and reinforcement learning stages.
It took some time to get here. The 2012 JOBS Act let AI companies raise enormous amounts of capital while staying private and so outside the SEC’s public disclosure machinery. Going public changes that. As we noted in Tech Policy Press, the SEC’s corporate disclosure regime “remains one of the few proven, scalable checks on corporate behavior – ‘Truth in securities.’ Or, as Justice Louis Brandeis put it, ‘sunlight is said to be the best of disinfectants; electric light the most efficient policeman.’”
Public companies can still do bad things, but they can be held accountable in ways private firms cannot. When Wells Fargo opened millions of fake customer accounts, regulators and investors activated an entire accountability infrastructure that simply doesn’t exist for private firms. Disclosure is foundational to that accountability – enabling third-party businesses, from insurance and audit to capital allocation, helping the public understand market structure and competitive dynamics, and protecting consumers from fraud.
This matters especially for large technology companies with multiple product lines, opaque business models, and indirect monetization through proprietary algorithms. We cannot govern markets we don’t understand.

So what can we expect Anthropic to report in its S-1 registration statement and subsequent 10-K filings? And what should it disclose if it wants shareholders to understand its actual business operations?
Finances, Investments, Obligations
First, the “hard” stuff – the numbers we already know how to compel. As a fully grown company at listing, Anthropic will provide three years of audited income and cash flow statements. Its balance sheet is likely to show an accumulated deficit, illustrating how financially sustainable its present business model is and how it has evolved.
The prospectus in the S-1 will include a cap table explaining how past fundraising shapes who gets what in various scenarios. Item 403 of Regulation S-K requires a table of all shareholders holding more than 5% of the stock – Amazon’s and Google’s stakes, voting power, and special share classes quantified pre- and post-IPO. Risk factors can be expected to cover things like: “We have a history of losses and may not achieve profitability” and “We have significant obligations under preferred stock to Amazon, which means…”.
Interrelationships
S-1 registration forces disclosure of how Anthropic’s “partnerships” actually work, including those with cloud providers – things like minimum royalty payments, investor guarantees, binding commitments, and material contractual obligations. Regulation S-K Items 404 and 601(b)(10) require disclosing related-party contracts above $120,000 and filing material contracts not made in the ordinary course of business, which should cover the multi-billion-dollar circular deals. Items 101 and 303 would require a company like Anthropic to explain its compute and chip dependencies, long-term purchase commitments, and how those obligations affect margins and liquidity.
With growing concerns about circular deals and an AI bubble, investors will need to see the equity ties Anthropic has to cloud providers, committed-spend obligations, revenue-sharing arrangements, and exclusivity clauses. The disclosures should reveal the extent to which partnerships are exclusionary, which suppliers Anthropic depends on, and where it acts as a dominant supplier.
The FTC’s January 2025 report on cloud providers’ partnerships with AI companies mapped these risks, showing how Microsoft–OpenAI, Amazon–Anthropic, and Google–Anthropic deals combine equity stakes and revenue-sharing with multi-billion-dollar cloud commitments, discounted compute, and some control and exclusivity rights for cloud providers. But it shouldn’t take a special FTC investigation to surface this – companies should lay out these commitments in standard SEC filings.
Risk factors should explicitly state that Anthropic relies on a small number of cloud suppliers who are simultaneously strategic investors and direct competitors, and that long-term contracts limit its ability to move workloads or renegotiate pricing (if this is in fact the case).
These disclosure duties continue after IPO through annual 10-K reports, quarterly 10-Qs, proxy statements, and event-driven 8-Ks. Together, these filings should give investors and regulators a running, legally enforced account of how much control Amazon and Google exercise over Anthropic’s cost structure and strategic options.
By comparison, some information can already be gleaned from counterparty disclosures. Amazon’s 2025 Q1 Form 10-Q discloses the nature and extent of its Anthropic stake, though AWS only discloses long-term customer commitments in the aggregate. Alphabet has disclosed much less granular detail on its Anthropic partnership, focusing on TPU capacity commitments rather than the economics of the stake.
Training Data Costs
Beyond operating revenue and losses, the investing public should know about Anthropic’s training practices and the ensuing litigation. In Bartz v. Anthropic, Anthropic agreed to a $1.5 billion settlement with authors – roughly $3,000 per book, implying around 500,000 covered works, with destruction of the pirated copies. This is the largest copyright class action settlement in history.
Given these risks, Anthropic can be expected to provide substantive details in its S-1 and 10-K updates – especially since other litigation is ongoing, including music publishers suing over alleged use of song lyrics and Reddit suing over scraping user-generated content. Anthropic should describe settlement terms, remaining court approvals and claim risk, cash-flow timing, and any insurance or recovery. These lawsuits show that Anthropic’s training practices are a significant risk to investors.
Operating Metrics and What Algorithms Optimize For
Here is where disclosures by most public tech companies fall far short. Today’s technology companies derive value from their intangibles – data, software, engineering talent, algorithmic processes, the ability to retain and monetize user attention indirectly, and other non-price operating assets. Yet accounting and SEC disclosures have not kept up, instead lumping these together as ‘goodwill’ or generic intangibles on the balance sheet. They are most keenly reflected, we believe, in operating metrics that ultimately tell us about a digital business’s potential to compete in fluid online markets under dynamic technological conditions.
Operating metrics include monthly active users (MAUs), time spent on platform, ad load per user, what algorithms optimize for, and the efficiency of training runs – things the business community obsesses over but companies don’t have to clearly define or disclose. Companies get penalized for stopping such disclosures (as Netflix did with subscriber counts) or for having unclear definitions (as when Musk argued that Twitter’s reported user numbers were inflated by bots).
What Operating Metrics Anthropic Should Disclose
For AI companies like Anthropic, operating disclosures should cover not just user numbers but also the metrics used internally for resource allocation and monitoring – likely things like API usage, model performance, safety incidents, time spent on platform, and the third-party ecosystem that depends on Claude. Drawing on the logic of segment reporting – where managers look at specific metrics when allocating resources and judging the performance of an ‘operating segment’ (i.e., a business line such as Google Maps or Claude Code) – Anthropic might disclose the following (we sketch a hypothetical disclosure schema after these lists):
Core Usage Metrics
Monthly active API users and enterprise customers
API call volumes and growth rates
Distribution of usage across different model sizes and capabilities
Geographic distribution of usage
Developer ecosystem metrics: number of apps built on Claude, integration partners
Safety and Performance Metrics
Key safety metrics: refusal rates, jailbreak attempts, harmful output incidents
Model performance benchmarks over time
System reliability and uptime statistics
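To make the idea concrete, here is a minimal sketch of what a structured quarterly disclosure covering these metrics could look like, written as a Python dataclass. Every field name and definition below is our own assumption for illustration – neither SEC rules nor Anthropic prescribe any such format.

```python
from dataclasses import dataclass, field

@dataclass
class QuarterlyOperatingMetrics:
    """Hypothetical disclosure schema. All field names and definitions
    are illustrative assumptions, not an SEC- or Anthropic-defined format."""
    period: str                        # e.g., "2026-Q1"
    monthly_active_api_users: int      # distinct API accounts active per month
    enterprise_customers: int          # customers on enterprise contracts
    api_call_volume_billions: float    # total API calls in the quarter
    usage_share_by_model: dict[str, float] = field(default_factory=dict)
    apps_built_on_claude: int = 0      # developer ecosystem size
    refusal_rate: float = 0.0          # share of requests refused on safety grounds
    jailbreak_attempts_detected: int = 0
    harmful_output_incidents: int = 0  # incidents above a disclosed severity threshold
    uptime_pct: float = 0.0            # system availability
```

The value of such a schema lies less in the specific fields than in the discipline it imposes: each metric needs a stable, published definition so investors can compare quarters – exactly what was missing in the Twitter user-count dispute.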
A Monetization Narrative
A monetization narrative would be disclosed as part of Anthropic’s business description in Part I, Item 1 (Business) of its annual 10-K. Monetization refers to converting business or end-user data, attention, or activity into sales revenue and profit. In a market where no leading model provider turns a profit, immense pressure will be put on Anthropic to monetize its users and suppliers. This carries risks.
In previous research at University College London with Prof. Mariana Mazzucato, we called for a monetization narrative in firms’ 10-Ks due to the unique importance of operating metrics in digital companies’ business models. This results from digital platforms’ advertising-heavy business models, exploitation of intermediary market positions, and algorithmic optimization of non-price metrics that encode business objectives.
Under existing U.S. securities law, companies should disclose in their annual Form 10-K the material information investors need to understand their business model and its risks. For firms whose revenues and risk profile are substantially driven by algorithms, this reasonably includes decision-useful disclosure about how those systems operate, how they shape key performance metrics, and how their failures or biases create business opportunities or risks. But companies interpret their SEC disclosure obligations narrowly, ignoring these non-price factors despite them being material to shareholders. It makes sense, then, that they need prodding. This is something Europe might consider leading on, especially if it only requires disclosure of non-proprietary information that companies already track internally.
For Anthropic specifically, a monetization narrative would help explain:
Fine-tuning and reinforcement learning optimization: What is Anthropic optimizing its models for at various stages? What risks is it seeing? What commercial incentives are being encoded into training? If Claude is optimized to encourage longer conversations or favor certain types of responses that lead to higher API usage, investors should know. If safety filters are calibrated based on commercial considerations – balancing user satisfaction against risk exposure – that trade-off should be disclosed.
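To see why this is material, consider a deliberately simplified, entirely hypothetical sketch of an RLHF-style reward function. We are not claiming Anthropic’s training looks like this; the point is that a single weighting constant, invisible from outside, can encode exactly the commercial trade-off described above.

```python
def training_reward(helpfulness: float,
                    safety_penalty: float,
                    predicted_followup_turns: float,
                    engagement_weight: float = 0.0) -> float:
    """Hypothetical RLHF reward. Inputs are assumed to be scores in [0, 1]
    produced by upstream reward models (not shown).

    With engagement_weight = 0, training optimizes purely for helpfulness
    and safety. Any positive engagement_weight starts rewarding longer
    conversations -- i.e., higher API usage -- and that single constant
    is the kind of material fact a monetization narrative should surface.
    """
    return helpfulness - safety_penalty + engagement_weight * predicted_followup_turns
```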
Ecosystem monetization: How does Anthropic monetize the broader ecosystem around Claude? This includes not just direct API fees, but potential revenue-sharing arrangements, data licensing, enterprise support contracts, and preferential treatment of certain partners or integrators. When Claude chooses which tool to call, which retrieval results to surface, or which model variant to route a request to, it makes allocation choices – and where those choices are shaped by commercial arrangements, they are monetization decisions investors should understand.
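Again purely illustratively – we have no evidence Anthropic does anything like this – a toy request router shows how allocation and monetization can fuse in a few lines of code. The margin_weight knob and the candidate structure are our inventions:

```python
# Hypothetical router: each candidate variant carries a quality score and a
# gross-margin score, both assumed to be in [0, 1]. A nonzero margin_weight
# turns routing from an engineering choice into a monetization decision.
def route_request(candidates: list[dict], margin_weight: float = 0.0) -> str:
    best = max(candidates, key=lambda c: c["quality"] + margin_weight * c["margin"])
    return best["name"]

variants = [
    {"name": "frontier-model", "quality": 0.95, "margin": 0.20},
    {"name": "small-model",    "quality": 0.80, "margin": 0.60},
]
print(route_request(variants))                      # quality only -> "frontier-model"
print(route_request(variants, margin_weight=0.5))   # margin-weighted -> "small-model"
```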
Indirect value capture: Beyond direct revenue, how does Anthropic capture value from the Claude ecosystem? This might include data from user interactions that improve future models, brand value from high-profile deployments, or strategic relationships that give Anthropic leverage in negotiations with cloud providers or content licensors.
These metrics and narratives would give investors and the public genuine insight into Anthropic’s market position, growth trajectory, and operational risks, rather than leaving them to guess based on occasional press releases or leaked information. Perhaps more importantly, they would set a precedent for other companies that are perhaps less civically minded than Anthropic.
Where to From Here?
Much of what Anthropic should disclose if it goes public is already required under existing SEC regulations – no new rules are needed. What’s needed is clear guidance on what is material enough that omitting it would trigger consequences. The S-1 preparation process offers exactly this mechanism: companies submit a draft to the SEC, which then provides comments and can object to omissions before the company goes public.
More broadly, changes in corporate disclosure norms can occur through either the listing side (the SEC, or ESMA in Europe) or the accounting side. The most important step is greater emphasis on mandatory disclosure of operating metrics, including through a monetization narrative and algorithmic disclosures. The logic of segment reporting provides a reasonable basis for firms to disclose externally the operating metrics they use internally when optimizing their operating segments (business lines) and making resource allocation decisions.
In a recent working paper, we proposed baby steps the SEC could take to bring AI-specific matters under its disclosure regime, including issuing disclosure guidance to clarify what counts as material AI activity or risk and where it belongs in 10-Ks and 8-Ks. We also proposed dedicated AI-incident reporting modeled on the SEC’s successful 2023 cybersecurity disclosure rule.
But enforcement of existing rules matters as much as new guidance. The SEC’s comment process on S-1 filings gives regulators leverage to push for comprehensive disclosure before companies go public. This is the moment to establish what “material” means for AI companies – not through abstract rulemaking but through concrete expectations applied to Anthropic’s IPO filing.
The opportunities are even greater in the EU, where disclosure traditions under the Digital Services Act (DSA) could be integrated with IFRS accounting and listing requirements.
Anthropic doesn’t have to wait for any of this, though. It is already a Delaware public benefit corporation with a stated mission of developing advanced AI for the long-term benefit of humanity, and it could choose to retain that structure if and when it goes public. As such, it could lead by voluntarily disclosing investor-relevant information, making it easier for everyone to understand the complex and risky business of building AI models. Voluntary disclosure would establish best practices, give Anthropic credibility with regulators and the public, and pressure competitors to follow suit.
We look forward to reading its prospectus – if it does end up going public. And we are sure OpenAI will too. Will it be the season to be jolly?