Contents
Essay
Venture
The Structural Transformation: What the Six Patterns of AI VC Funding Really Mean
Investor Concentration Risk: How AI Venture Became a Single Trade
Deutsche Börse launches €5.3bn bid for private equity-backed Allfunds
The gap between top quartile and bottom quartile venture funds was over 40%
Startup Funding Continued On A Tear In November As Megarounds Hit 3-Year High
State of European Tech report | Sarah Guemouri & Tom Wehmeier (Atomico)
Series A rounds continue to dominate the market… but Series A funds themselves are fading fast.
I’m here to give the small group of you who actually care about decision science, power-law math
AI
Media
Regulation
Crypto
Interview of the Week
Startup of the Week
Post of the Week
Editorial:
Winner Takes it All? Or The Great Compression: What is Happening?
I’m adding Google Notebook LM infographics to the editorial this week. Let me know if you love, hate or don’t care about them.
The daily financial news presents a picture of chaos. We see “$2 billion ‘seed’ rounds” that defy historical logic, massive industry consolidations, and unprecedented investment strategies. While these events seem disconnected and irrational, they are symptoms of a single, unifying force at work: The Great Compression. This phenomenon is collapsing the distance between venture capital stages, career timelines, and the very platforms we use to access information.
This compression isn’t a sign of a speculative bubble. It is the system’s rational response to a new “Winner Takes Most” innovation curve, driven by the immense capital intensity of artificial intelligence. In a landscape where the first to achieve scale captures nearly all the value, the entire economic system is frantically squeezing itself into a handful of high-stakes bets simply to ensure its own survival.
--------------------------------------------------------------------------------
1. Venture Capital Is No Longer Venture Capital
The most visible sign of this new reality is the effective destruction of traditional venture capital stages. Labels like Seed, Series A, and Series B have been “rendered meaningless” by the sheer capital required to compete in AI. In the past, software companies could grow incrementally, raising capital as they hit milestones. AI, however, has shifted the field to an “industrial model” that demands massive infrastructure investment long before a product finds its market.
This industrial logic justifies the “1,000x gap in check size” and the emergence of the very “$2 billion ‘seed’ rounds” that signal this new era. Capital is being forced into a “barbell distribution,” clustering around a few massive, category-defining bets. This is why elite firms like Sequoia, Nvidia, and a16z are repeatedly co-investing in the same mega-rounds. They are not diversifying; they are building a “de facto AI index.” The cost of missing the single winning AI platform is existential, so investors are compelled to consolidate their bets into a “highly unified, synchronized capital stack.” For them, spreading capital too thin in a “Winner Takes Most” market is a guarantee of failure.
--------------------------------------------------------------------------------
2. Your Career Is Becoming a High-Stakes Tightrope
This dramatic compression of capital isn’t just reshaping investment portfolios; it’s creating a parallel squeeze on the human capital within the professional labor market. AI is rapidly dismantling the “billable hour” by automating the routine tasks that once justified it, like drafting documents or summarizing information in seconds. This is creating a sharp bifurcation in the value of professional work. Manual throughput is becoming worthless, while the rewards shift entirely to high-level strategic contributions like “judgment, risk, and outcomes.”
This shift explains the paradoxical and intense work culture emerging in the technology sector, where founders and VCs are demanding “9am–9pm, six-day weeks.” The pressure is immense, as one observer noted:
“7 days a week is the required velocity to win right now”
This culture isn’t arbitrary; it’s a direct consequence of the market dynamics. In a race where second place offers no consolation prize, professionals are squeezed between the demand for grueling velocity and the looming threat of their skills becoming economically obsolete. The career ladder is compressing into a high-stakes tightrope.
--------------------------------------------------------------------------------
3. The Fight for the ‘Front Door to Reality’
The same forces compressing capital and careers are fueling a final, decisive battle: the fight to control the “front door to reality.” We are witnessing a massive consolidation of the interfaces we use to discover information, shifting from a web of open search results to singular, definitive AI answers. The trend is already underway, with data showing a “4,700% year-over-year increase in retail visits driven by AI assistants” alongside a significant drop in SEO click-through rates.
The power at stake in this consolidation is immense, leading to a desperate race for control. The implications are profound:
“If the new shelf space is inside ChatGPT’s answer box, then whoever defines ‘trust, relevance, and extractability’ controls what America buys.”
There is little room for competition here. The very nature of an AI agent is to be a singular, trusted intermediary. This dynamic necessitates a “Winner Takes Most” outcome. The platform that successfully becomes the default choice for users will control the “very infrastructure of choice and consent,” creating the ultimate monopoly on information and commerce.
--------------------------------------------------------------------------------
Conclusion: The System’s Single, High-Stakes Gamble
The Great Compression is the economic system’s logical adaptation to a “Winner Takes Most” reality. Venture capital has collapsed into a single, correlated bet on AI because the industrial scale of the technology requires it. Professional careers are being squeezed because only high-level judgment retains value in an automated world. And digital platforms are consolidating because the interface that wins the user’s trust wins everything.
While this concentration of resources may be a rational response, it also concentrates risk. By linking retirement savings and our collective economic stability to a “handful of highly correlated, high-stakes trades,” we are betting our collective future that the winners of this curve will be benevolent—and that the system can survive the compression required to crown them. Of course, my day job at SignalRank is building a highly diversified, derisked index of private assets. Maybe there is a way to have your cake and eat it :-)
Essay
The rise of agentic journalism
Niemanlab • December 4, 2025
Media•Journalism•AgenticJournalism•AIAgents•NewsInnovation•Essay
In 2026, a new type of journalism will emerge: one tailored explicitly to machine compilers of language and information. This journalism will not be directed at people, but rather at chatbots and AI information summarizers. A journalism for the “agentic web”: a web populated by automated agents that serve us, retrieving information, sharing our data, making our appointments, answering our emails. Agentic journalism.
Agentic journalism would break from our traditional article format. AI systems do not need ledes, nut-graphs, or narrative flows; they need user-relevant, novel, and machine-readable content. Maybe the format for agentic journalism will be a bulleted list or a JSON file — whatever it takes for that machine to ingest and reformat the content.
The role of the journalist in agentic journalism would be to add information about an event: the five Ws, quotes, context, and links to multimedia content. The writing itself, that fun exercise of putting together the puzzle pieces into a cogent reportage, wouldn’t even need to be automated by the news organization. It would be automated at the destination, pieced together by whatever format the end-user can extract from the machine they are using. In this type of journalism, editors focus on the accuracy and machine-readability of the information supplied by the reporter. The role of copy-editing (which we are already offloading to machine-assisted systems) would be even more diminished.
You might ask: What does this guy think about agentic journalism? Is he pitching it or warning us against it? Well, as a good academic in the social sciences, I’m not here to provide you with clear-cut answers. I’m here to, frustratingly, give you more questions. My predictions will stick to historical perspective and the techno-social forces in play.
Technology has always reshaped how journalism is produced, distributed, and consumed. The telegraph enabled the Associated Press; radio and television centralized news around financial powerhouses (state-backed or tightly regulated entities); the web offered unprecedented reach, and with it, the pressure of immediate audience feedback. With the rise of search engines and social media, journalists have written less for readers and more for algorithmic intermediaries: SEO-friendly content that is clearer, but less creative, and news articles planned according to their potential social media reach. The great pivot to video happened not because we found our audiences preferred it, but because multimedia content was more attractive to digital platforms. These pressures are not just audiences making choices; they are computers making choices for us. Journalism, then, adapts to these external machine editors.
Now, audiences are increasingly using AI-based products to get information about both their private and public lives. Some see AI tools as more approachable, less biased, and more tailored to their preferences. For some people, this will lead to an increase in their exposure to news content. Publics who are tuned out of the news may find, in these novel and personalized ways of encountering this type of content, a newfound utility for journalism. For news organizations, the shift to agentic journalism could mean a new way to monetize content and add value to their brand by attracting attention to their output. To that end, journalists might start packaging stories with structured metadata (clear entity tags, event timestamps, source links, and standardized schemas) to make content legible to AI crawlers; a sketch of what that packaging might look like follows below. The newsroom’s new craft could be less about prose and more about indexability.
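To make the idea concrete, here is a minimal sketch of what such machine-first story packaging might look like. Everything in it is a hypothetical illustration: the field names loosely echo schema.org’s NewsArticle vocabulary, and the story, quotes, and URL are invented.

```python
import json

# Hypothetical "agentic journalism" payload: the five Ws plus provenance,
# packaged for machine ingestion rather than for human narrative flow.
# Field names loosely echo schema.org's NewsArticle vocabulary; all values
# are invented placeholders.
story = {
    "@type": "NewsArticle",
    "headline": "City council approves transit budget",
    "datePublished": "2026-03-14T09:00:00Z",
    "who": ["City Council", "Mayor's Office"],
    "what": "Approved a $1.2bn transit budget by a 7-2 vote",
    "where": {"city": "Springfield", "venue": "City Hall"},
    "when": "2026-03-13",
    "why": "Ridership recovery and federal matching funds",
    "quotes": [
        {
            "speaker": "Council President",
            "text": "This is a down payment on reliability.",
        }
    ],
    "sources": ["https://example.gov/agenda/2026-03-13"],  # placeholder URL
}

# An AI agent could ingest this directly and reassemble it into whatever
# format the end user wants: a paragraph, a push alert, a spoken summary.
print(json.dumps(story, indent=2))
```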
Will private capital create a crisis in 401ks?
Ft • November 27, 2025
Essay•Venture
Overview
The article highlights mounting concerns about the growing role of private capital in US retirement savings, particularly how private equity and similar alternative investments may increasingly be embedded in 401(k) plans. It raises the possibility that the search for higher returns in a low-yield environment, combined with aggressive marketing by private equity managers, could introduce new risks into the core savings vehicle for American workers. Alongside this, it notes that the same private capital ecosystem is fuelling enormous borrowing to fund artificial intelligence–driven data centre expansion, with OpenAI’s partners collectively nearing $100bn in related debt. The piece links these two themes as examples of how private-market leverage and complex financial structures are spreading into areas that touch ordinary savers and the broader economy.
Private Capital’s Push into Retirement Plans
The article discusses how private equity firms have been lobbying plan sponsors, regulators, and asset allocators to allow a greater share of 401(k) assets to flow into illiquid private vehicles.
Proponents argue that private markets can deliver higher long-term returns and diversification versus traditional public equities and bonds, especially in an era of more volatile public markets and pressure on conventional 60/40 portfolios.
Critics, however, worry about transparency, valuation opacity, high fee structures, and liquidity constraints. These features are acceptable for sophisticated institutional investors but may be inappropriate for ordinary workers whose retirement security depends on being able to access and understand their savings.
The article suggests that, while some regulatory signals have been cautiously supportive of “limited” private exposure in defined contribution plans, there is no consensus on how much is safe, or how to protect participants from mis-selling and misaligned incentives.
Systemic Risk and Potential for a 401(k) Crisis
A key concern is that if private investments become a significant component of 401(k)s, downturns in private markets might not show up quickly because of infrequent valuation marks, masking real losses until they become severe.
High leverage often used in private equity deals could amplify losses in a stressed environment, heightening the risk that workers’ retirement balances might fall sharply just when they need them most.
The article raises the possibility that a synchronized correction in both public and private markets, combined with liquidity demands from retirees, could force funds into fire sales, worsening market stress.
It notes that any such crisis would be politically explosive, given the centrality of 401(k)s to US retirement policy and the perception that Wall Street had been allowed to gamble with workers’ nest eggs.
OpenAI, Data Centres, and the $100bn Borrowing Wave
The second major thread is the vast borrowing spree tied to AI infrastructure, particularly data centres required to train and deploy models from companies such as OpenAI.
OpenAI’s partners and backers—major technology companies and infrastructure investors—are described as approaching $100bn in aggregate borrowing for data centre buildout and related hardware.
This capital is often structured through private credit, project finance, and other non-bank channels, again highlighting the growing importance of private capital markets in shaping the real economy.
The article implies that, while such investment may be justified by expectations of explosive AI-driven productivity gains, it also concentrates risk: if AI revenues disappoint, heavily leveraged data-centre assets could become financial stress points.
Implications for Savers and Markets
The piece links the two developments—private capital in 401(k)s and leveraged AI infrastructure—as manifestations of an economic cycle where abundant private money chases high-growth narratives, sometimes with limited transparency.
For retirement savers, the implication is that their portfolios may be increasingly exposed indirectly to complex, highly leveraged bets on long-duration technologies such as AI and large-scale infrastructure.
The article suggests policymakers and regulators will need to balance innovation and capital formation against the imperative to protect non-professional investors, especially where tax-advantaged retirement savings are involved.
Ultimately, it warns that if oversight does not keep pace with the integration of private capital into retail-facing products, the next financial shock could emerge not only from public markets or banks, but from the intersection of opaque private assets and everyday retirement accounts.
Sven Beckert on How Capitalism Made the Modern World
Yaschamounk • November 29, 2025
Essay•Economy•Capitalism•IndustrialRevolution•GlobalHistory
Capitalism as a Historical, Not Natural, Order
The conversation presents capitalism as a historically specific, contingent way of organizing economic life rather than a timeless or “natural” order. Sven Beckert argues that we misunderstand capitalism when we treat it as an abstract system that can be defined purely by economic models. Instead, “really existing capitalism” must be grasped historically, across centuries and geographies, as a process that has repeatedly changed its form. The key move is to “denaturalize” capitalism: to see it as a revolutionary departure from most of human history, when people lived in subsistence economies, under feudal obligations, or under religious authorities who extracted surplus without reinvesting it for further accumulation. Once capitalism is recognized as a human-made order, it becomes possible to see that it could have been otherwise—and can still be reshaped.
Three Core Misconceptions About Capitalism
Beckert identifies three widespread misconceptions:
Pure abstraction: Many assume capitalism can be adequately understood by timeless economic laws. Beckert insists this misses the way capitalism has transformed over 500–1,000 years, requiring a historical lens.
Eurocentric narrative: Standard histories center Europe and the United States, treating the rest of the world as a lagging “future Europe.” Beckert instead advances a global history in which West Africa, India, China, and the Middle East are integral to capitalism’s development.
Urban–industrial bias: Capitalism is often told as a story of factories, cities, steel, cars, and railroads. Beckert stresses that much of capitalism’s history unfolds in agriculture and in the countryside, where most people lived until very recently, especially on plantations and in rural manufacturing systems controlled by merchants.
From “Islands of Proto‑Capitalism” to a Global Capitalist System
Beckert traces capitalism’s origins to merchant communities that applied a capitalist logic—investing capital in long-distance trade to generate more capital. These merchants existed for centuries in places like the port of Aden, West Africa, India, and China. They were “capitalists without capitalism”: modern in behavior but marginal to broader economic life. The crucial transition came when these scattered islands of capital, especially in Europe, forged coalitions with emerging states. In the 15th and 16th centuries, European merchants and states jointly sought routes around powerful Middle Eastern merchant networks to reach the wealth of India and China directly, while monarchs sought revenue for constant wars. This alliance drove expansion into the Atlantic, African islands, and eventually the Americas, where new “islands of capital” like Cabo Verde, Potosí, and Barbados were built as plantation and extraction economies.
What is world‑historically new, Beckert argues, is that merchants did not just trade existing goods; they came to dominate production itself. They organized sugar, silver, cotton, and other commodities at scale, turning entire societies into mechanisms for capital accumulation. This shift—from merchants only arbitraging prices to merchants controlling production—marks the real takeoff of the “capitalist revolution.”
Capitalism Before and Beyond the Industrial Revolution
Beckert challenges the view that capitalism begins with the Industrial Revolution. For centuries before mechanization, most people still lived in subsistence or feudal arrangements, but pockets of economic life followed capitalist logic:
Massive plantation sectors in the Americas produced sugar, coffee, rice, indigo, and later cotton for European markets.
Rural households in Europe and North America produced textiles and other goods under merchant control, selling into long-distance markets.
This period saw less productivity growth and technological innovation than we associate with capitalism today. Instead, it involved large‑scale geographic redistribution of wealth—from enslaved Africans and Indigenous peoples to European merchants—more than overall global enrichment.
The Industrial Revolution in late 18th- and early 19th‑century Britain was, in Beckert’s terms, the most important “offspring” of capitalism, not its origin. It depended on:
Pre‑existing global markets for cotton textiles, built through trade with India.
An effectively limitless supply of raw cotton grown by enslaved Africans in the Americas, freeing British agriculture from supplying that input.
Imperial and commercial power that allowed Britain to dominate global markets, including selling machine‑made cotton textiles back into India by the mid‑19th century.
The real core of the Industrial Revolution was that productivity‑enhancing innovation became permanent and generalized. What began in Lancashire cotton mills spread to coal, iron, steel, railroads, and later chemicals and electrical machinery, and then geographically to Belgium, France, Prussia, the United States, Egypt, Mexico, and beyond. “Permanent revolution” in technology and output became a structural condition.
Expansion of Capitalist Logic and the Changing Role of Finance
Capitalism, Beckert emphasizes, expands along three axes:
Geography: from small regions in Britain to the entire globe.
Sectors: from textiles to virtually every major industry.
Life realms: from production and trade into intimate spheres like dating, now organized around subscription-based apps and monetized platforms.
The logic of capital investment for profit penetrates ever more domains, shaping behavior and institutions.
Finance plays a shifting but central role. In early capitalism, merchant and finance capital—banks, trading companies like the East India Company—were the primary engines, since large pools of capital were needed for long-distance trade. The Industrial Revolution introduced a period in which industrial capitalists could accumulate fortunes from production itself, often starting with modest means; starting a cotton mill did not demand massive initial capital, and reinvested profits could fuel growth, as in Henry Ford’s self-financed expansion.
Since the 1970s, the pendulum has swung back toward finance and merchant capital. Global brands in sectors like textiles and shoes rarely produce goods themselves. Instead, they control design, capital, and markets, while hundreds of thousands of dispersed manufacturers compete for contracts. Power tilts toward finance-rich coordinators of production, echoing early merchant dominance more than the classic “Fordist” industrial era.
Late‑Stage Capitalism, Limits, and Human Agency
Beckert is skeptical of the term “late‑stage capitalism.” Predictions of capitalism’s imminent collapse have circulated since the mid‑19th century and repeatedly been falsified, even as capitalism has radically reshaped itself. The basic logic—owners of capital investing to generate more capital—has persisted through wildly different forms: slave plantations, Victorian factories, mid‑20th‑century welfare states, and contemporary finance‑driven globalization. Declaring a “late” phase assumes a vantage point we do not possess.
He does, however, identify a potential structural limit: capitalism’s dependence on “free gifts of nature”—land, fossil fuels, unpaid care work, ecological sinks. Environmental constraints and climate change may impose real boundaries on continued expansion in its current form.
Crucially, Beckert insists that capitalism’s historicity implies agency. Capitalism is not a “social construct” in the trivial sense of unreality; it is brutally real and powerful. But because it was made by human actions—merchants in Aden, planters in Barbados, enslaved rebels in Saint‑Domingue, industrial workers demanding welfare states—it can be contested and reconfigured. Even actors with little formal power have reshaped the system: the Haitian Revolution helped destroy slavery, and labor movements helped build welfare states. While no single politician or society can redesign capitalism at will—constraints like international competition matter—recognizing capitalism as contingent opens intellectual and political space. There is not one inevitable capitalism, but many possible capitalisms, and future configurations will depend on collective choices as much as on impersonal “laws.”
9-9-6
Benn • November 28, 2025
Essay•AI•AIBubble•WorkCulture•Startups
“The future is already here,” the lede goes, “it’s just not evenly distributed.”
Similarly: The AI bubble will burst—it’s just that the disappointment won’t be evenly distributed.
First, I suppose—is AI a bubble? Some people are worried. Ben Thompson says yes, obviously: “How else to describe a single company—OpenAI—making $1.4 trillion worth of deals (and counting!) with an extremely impressive but commensurately tiny $13 billion of reported revenue?” Others are more optimistic: “While [Byron Deeter, a partner at Bessemer Venture Partners,] acknowledges that valuations are high today, he sees them as largely justified by AI firms’ underlying fundamentals and revenue potential.”
Goldman Sachs ran the numbers: AI companies are probably overvalued. According to some “simple arithmetic,” the valuation of AI-related companies is “approaching the upper limits of plausible economy-wide benefits.” They estimate that the discounted present value of all future AI revenue is between $5 trillion and $19 trillion, and that the “value of companies directly involved in or adjacent to the AI boom has risen by over $19 trillion.” So: The stock market might be priced exactly as it should be. Or it could be overvalued by $14 trillion.
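Taking those figures at face value, the implied mispricing band is simple to reproduce; the numbers below are just the quoted Goldman estimates, nothing more.

```python
# All figures in trillions of dollars, as quoted above.
market_value_rise = 19                          # rise in value of AI-related companies
pv_ai_revenue_low, pv_ai_revenue_high = 5, 19   # PV of all future AI revenue

# If future AI revenue lands at the high end, the market is roughly fair;
# at the low end, it is overvalued by $14tn.
overvaluation_best_case = market_value_rise - pv_ai_revenue_high   # 0
overvaluation_worst_case = market_value_rise - pv_ai_revenue_low   # 14

print(f"Implied overvaluation: ${overvaluation_best_case}tn to ${overvaluation_worst_case}tn")
```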
Either way, though—these are aggregate numbers; this is how much money every future AI company might make, compared to how much every existing AI company is worth. Even if the market is in balance, there are surely individual imbalances. Sequoia’s Brian Halligan: “There’s more sizzle than steak about some gen-AI startups.” Or: “OpenAI needs to raise at least $207 billion by 2030 so that it can continue to lose money, HSBC estimates.” Or: “Even if the technology comes through, not everybody can win here. It’s a crowded field. There will be winners and losers.” That is the nature of a gold rush, though, even when there is a lot of gold in the ground. Some people get rich, and some people just get dirty.
No matter, says Marc Andreessen; this gold will save the world. And the people digging for it are heroes:
Today, growing legions of engineers – many of whom are young and may have had grandparents or even great-grandparents involved in the creation of the ideas behind AI – are working to make AI a reality, against a wall of fear-mongering and doomerism that is attempting to paint them as reckless villains. I do not believe they are reckless or villains. They are heroes, every one. My firm and I are thrilled to back as many of them as we can, and we will stand alongside them and their work 100%.
I do not know if the tech employees are heroes, but they are working hard. Some, monstrously so:
recently i started telling candidates right in the first interview that greptile offers no work-life-balance, typical workdays start at 9am and end at 11pm, often later, and we work saturdays, sometimes also sundays. i emphasize the environment is high stress, and there is no tolerance for poor work.
This is the new vibe in Silicon Valley: Grinding, loudly. Hard tech, and extremely hard core. Because that’s what’s needed to meet the “deranged pace” of this historical moment. Venture capitalist Harry Stebbings: “7 days a week is the required velocity to win right now.” Cognition’s Scott Wu: “We truly believe the level of intensity this moment demands from us is unprecedented.” From others—this isn’t mere capitalism; this is a crucible: “This work culture is not unprecedented when you consider the stringent work cultures of the Manhattan Project and NASA’s missions,” said Cyril Gorlla, cofounder and CEO of an AI startup. “We’re solving problems of a similar if not more important magnitude.”
So far, so good, at least for the capitalists: According to CNBC, there are now 498 private AI companies worth more than $1 billion. A hundred of them are less than three years old. There are 1,300 startups worth more than $100 million. And these companies have created dozens of new billionaires.
In recent years, this has become the math that punches Silicon Valley’s clock: 996—work from 9 am to 9 pm, six days a week. Seventy-two hours a week; 3,600 hours a year; 10,000 hours in three years. But if that adds up to a billion-dollar payday? Or even a pedestrian few million? Just hang on. “‘I tell employees that this is temporary, that this is the beginning of an exponential curve,’ said Gorlla. ‘They believe that this is going to grow 10x, 50x, maybe even 100x.’” Another founder told Jasmine Sun their plan—get in, get rich, get out:
I asked a founder I know if he thinks that AI is a bubble. “Yes, and it’s just a question of timelines,” he said. Six months is median, a year for the naive. Most AI startups are all tweets and no product—optimizing only for the next demo video. The frontier labs will survive but it’ll be carnage for the rest. And then what will his founder friends do? I ask. He shrugs. “Everyone’s just trying to get their money and get out.”
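As a quick sanity check on the 996 arithmetic above (assuming roughly 50 working weeks a year, which the essay doesn’t specify):

```python
# 996: 9am to 9pm, six days a week.
hours_per_day = 12
days_per_week = 6
weeks_per_year = 50                                 # assumed

hours_per_week = hours_per_day * days_per_week      # 72
hours_per_year = hours_per_week * weeks_per_year    # 3,600
hours_in_three_years = hours_per_year * 3           # 10,800: past the 10,000-hour mark

print(hours_per_week, hours_per_year, hours_in_three_years)
```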
The Joy of Becoming Worthless…except to each other
Rushkoff • Douglas Rushkoff • November 29, 2025
Essay•AI•AI Employment•Disaster Capitalism•Post Capitalism
My last piece, The Intentional Collapse, seems to have agitated a few people. I know it came off a bit dark. I talked about how the uber-wealthy believe the world as we know it is ending and that there won’t be enough essential resources to go around, so they need to take control of as much money and stuff and land as possible in order to position themselves for the end of days.
The way they do that is with an induced form of disaster capitalism, where they intentionally crash the economy in order to have some control over what remains. So the function of tariffs, for example, is to bankrupt businesses or even public services in order to privatize and then control them. Stall imports, put the ports out of business, and then let a sovereign wealth fund purchase the ports. Or as is happening right now: use tariffs to bankrupt soybean farmers, who have to foreclose on their farms so that private equity firms can purchase the farmland as a distressed asset, then hire the farmers who used to own and work that land as sharecroppers.
What I explained was that the kleptocratic elite, in collaboration with the current White House administration, are engaged in a controlled demolition of this civilization because they realize the pyramid is collapsing and they don’t have faith that there will be enough left to feed and house everyone. The best they can do is earn a ton of money, buy a lot of land, control an army, and get people accustomed to seeing that army deployed. That’s what we’re watching on TV and on our city streets, and why so many Americans voted against the current administration. It was a resounding “what the fuck?”
But I briefly mentioned something about AI and employment that I want to get into now. See, it’s no coincidence that AI is emerging at this same moment in our civilization’s history. As Lewis Mumford observed, new technologies are often less the cause of societal changes than they are the result. Culture is like a standing wave, creating a vacuum or readiness for a new medium or technology. If we really are at the end of capitalism—the end of this eight- or nine-hundred-year process of abstraction, exploitation, and colonialism—then we would also, necessarily, be at the end of the era of employment. And I will get to why I think that can ultimately be a good thing, but let’s go through the scenario that’s running through everyone’s heads right now, and then find our way there.
AI is coming for our jobs. Not the super-creative ones, or the high-touch human ones, but the ones that maintain administrative control over everything. The majority of jobs. All the people in the mortgage departments, the insurance companies, the spreadsheet people, the PowerPoint people. Doomers say it’s 90% of jobs, but let’s even say it’s just half of office jobs taken by AIs and blue-collar jobs taken by robots.
The problem with that, from a business perspective, is if you have no employees earning money out there in the world, then who will be your consumers? Even Henry Ford, the racist antisemite, understood that workers—even his assembly line employees—needed to be able to earn enough money to buy a Ford car. But how are AI billionaires going to continue to make money if there are no gainfully employed people capable of buying AI services from them or at least buying products from the companies that do purchase AI services?
Venture
Stage Definition Collapse: Why “Seed” Now Means $2 Billion
Fourweekmba • Gennaro Cuofano • November 27, 2025
Venture
In the traditional venture canon, stage definitions were tied to company progression:
Seed: $1–5M to validate an idea
Series A: $10–20M to reach product-market fit
Series B: $30–50M to scale
Series C+: >$100M for expansion
But 2025 AI reality annihilates this structure:
Seed: $100M–$2B
Series A: $100M–$350M
Series B: $250M–$2B
Series C+: $300M–$40B
A single comparison shows the absurdity:
Thinking Machines Lab: $2B “seed”
Traditional seed: $2M
Same label.
1,000× difference in check size.
1,200× difference in valuation.
Stage labels have detached from reality.
As explained in The State of AI VC (https://businessengineer.ai/p/the-state-of-ai-vc), AI capital formation has compressed into an industrial model, not a software model. Stage collapses because capital intensity has replaced company maturity as the gating factor.
Query 1: Why Has the Stage System Collapsed?
Because stage was built for software economics, and AI is governed by infrastructure economics.
Software startup progression was linear and low-cost:
hire a few engineers
find traction
iterate
then scale
AI companies face a non-linear cost curve:
GPU acquisition
model training cycles
inference fleet build-outs
data infrastructure
distributed systems engineering
cloud contracts
These are not milestones — they are fixed upfront requirements.
A company cannot reach PMF without:
multi-million-dollar clusters
complex ML tooling
inference reliability
regulatory security measures
Therefore, a “seed” is no longer an idea.
It is an industrial commitment.
As The State of AI VC notes, AI’s physical infrastructure requirements force funding to be “front-loaded” rather than sequential (https://businessengineer.ai/p/the-state-of-ai-vc).
Stage dies because AI cannot scale on software-era cadence…
The Structural Transformation: What the Six Patterns of AI VC Funding Really Mean
Fourweekmba • Gennaro Cuofano • November 27, 2025
Venture
Every pattern in 2025 AI venture capital—barbell distribution, stage collapse, velocity acceleration, investor concentration, sector rotation, and geographic clustering—reduces to a single unifying force:
“Compression — of stages, timelines, capital concentration, and traditional venture mechanics.”
What looks like a funding boom is actually a mechanical restructuring of how technology capital formation works. The system is not adding “more capital.” It is reorganizing around new bottlenecks, new competitive pressures, and new liquidity requirements.
The six structural patterns don’t merely describe 2025.
They forecast the next decade.
As detailed in The State of AI VC (https://businessengineer.ai/p/the-state-of-ai-vc):
“The traditional playbooks do not work anymore—not for founders, not for GPs, and not for LPs.”
Let’s unpack what compression really means for primary and secondary markets.
The middle market has collapsed.
The $500–900M “growth stage” now represents only 13% of all AI deals.
Capital clusters at two extremes:
Entry tickets ($100–250M)
Category winners ($1B+)
This bifurcation reflects a structural truth:
“AI categories now require either massive scale (labs, infra, compute) or clear defensibility (verticals + picks/shovels).”
Nothing in between is fundable.
This compression forces founders into two lanes:
Become a category winner (market dominance + capital intensity), or
Sell the infrastructure picks and shovels.
There is no middle lane anymore.
Traditional venture labels are now meaningless.
A “Seed” round in 2025 can be:
$100M+
1,000x larger than another “Seed” round
larger than historical Series C rounds
Stage names persist only because legal documents require them—not because they signal anything about risk or maturity.
Capital intensity replaced stage as the organizing principle:
$100–300M → Application Layer (legal, healthcare, enterprise AI)
$250M–1B → Infrastructure Layer (chips, inference, developer tools)
$1B–40B → Foundation Models (labs)
This requires new due diligence, new comp sets, and new valuation heuristics.
As The State of AI VC notes:
“Stage heuristics died. Competitive intensity is now the only filter that matters.”
Investor Concentration Risk: How AI Venture Became a Single Trade
Fourweekmba • Gennaro Cuofano • November 27, 2025
Venture
The defining structural risk in the 2025 AI venture cycle is not valuations, velocity, or stage compression — it is investor concentration. Across the top $100M+ rounds, the same five to six investors dominate: a16z, Kleiner Perkins, Lightspeed, Sequoia, Nvidia, GV/Fidelity.
But the problem is not simply that these names appear frequently.
The problem is correlation.
As documented in The State of AI VC (https://businessengineer.ai/p/the-state-of-ai-vc), these firms:
co-invest with each other repeatedly,
cluster into the same high-momentum rounds,
and create cross-fund exposure for LPs even when portfolios appear diversified on paper.
LPs think they are allocating across multiple GPs, geographies, and strategies.
In reality, they are allocating into the same dozen AI companies, with exposure multiplying beneath the surface.
This is the hidden correlation problem — the illusion of diversification masking a highly unified, synchronized capital stack.
The data pattern is stark:
a16z: 12 mega-rounds
Kleiner Perkins: 9 mega-rounds
Lightspeed: 8 mega-rounds
Nvidia: 7 mega-rounds (strategic)
Sequoia: 5
GV, Fidelity: 4 each
The critical three — a16z, Kleiner, Lightspeed — co-appear in 6 deals.
This is not coincidence.
It is the structural backbone of the AI funding network.
When these firms move, they move together — reinforcing each other’s signals, validating the same companies, and amplifying valuation momentum.
This is cluster-led conviction, not decentralized discovery.
As explained in The State of AI VC (https://businessengineer.ai/p/the-state-of-ai-vc):
“Investor concentration has created a de facto AI index — but without the risk controls, liquidity, or hedging.”
The cluster behaves like one meta-fund controlling the majority of capital entering late-stage AI.
The LP problem is subtle but severe.
Consider a typical institutional LP allocating to:
Fund A: a16z
Fund B: Kleiner
Fund C: Lightspeed
On paper, this is diversification.
In practice, it produces:
3× exposure to Anthropic
2× exposure to Harvey, Abridge, Glean
Highly correlated vintage risk
Synchronized valuation cycles
The LP believes they are diversified across three top-tier managers.
But the cross-ownership creates a synthetic index with excessive concentration risk in:
Foundation labs
AI-native applications
Infrastructure picks
This is not a portfolio — it is a stacked bet.
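A minimal sketch of that look-through problem. The holdings below are illustrative stand-ins drawn from the names in this article, not actual fund portfolios:

```python
from collections import Counter

# Hypothetical holdings for the three funds in the example above.
# Real portfolios differ; this only shows how look-through exposure stacks.
fund_holdings = {
    "Fund A (a16z)":       {"Anthropic", "Harvey", "Glean"},
    "Fund B (Kleiner)":    {"Anthropic", "Abridge", "Glean"},
    "Fund C (Lightspeed)": {"Anthropic", "Harvey", "Abridge"},
}

exposure = Counter(
    company for holdings in fund_holdings.values() for company in holdings
)

# The LP looks diversified across three managers but is 3x exposed to
# Anthropic and 2x exposed to everything else.
for company, count in exposure.most_common():
    print(f"{company}: {count}x exposure")
```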
Compression as Transformation in AI VC
Fourweekmba • Gennaro Cuofano • November 27, 2025
Venture
2025 AI venture capital looks chaotic—barbell distributions, mega-round velocity, stage collapse, investor concentration, sector rotation, geographic clustering. But these are not six independent anomalies. They are six manifestations of one underlying structural force:
Compression — of stages, timelines, capital, and investor bases.
What appears as “more capital deployed” (the $75B+ deployed across AI rounds) is not a bigger version of the old venture environment. It is a fundamental restructuring of how technology capital formation works, as documented in The State of AI VC (https://businessengineer.ai/p/the-state-of-ai-vc).
Compression is not a symptom.
It is the transformation.
Let’s break down the four pillars of compression and then map the strategic consequences.
“Seed,” “Series A,” “Series B,” “Series C+”—the entire stage taxonomy has collapsed.
A Seed round can be:
$100M
$1B
$2B (Thinking Machines Lab)
While another Seed is still $2M.
You cannot infer risk, maturity, product readiness, or team strength from stage labels. The old heuristics (pre-product → product-market fit → scale → growth) have been erased by capital intensity.
The new organizing principle:
Category competitiveness determines round size. Not maturity.
This means:
Investors who cling to stage thinking misunderstand risk.
LPs relying on stage diversification are exposed to hidden concentration.
Founders must position around competitive pressure—not chronological maturity.
The stage system is dead.
Capital intensity killed it.
Traditionally, companies raised rounds every 18–24 months.
In 2025, leading AI companies raised rounds in:
5.5 months on average
with some raising every 4 months
representing 75% compression in fundraising cycles
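The 75% figure follows directly from those numbers, taking the traditional 18–24 month cadence near its upper end:

```python
traditional_cycle_months = 22   # near the top of the historical 18–24 month range
ai_cycle_months = 5.5           # 2025 average for leading AI companies

compression = 1 - ai_cycle_months / traditional_cycle_months
print(f"{compression:.0%}")     # 75%
```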
This shift is not about FOMO. It is mechanical:
AI companies must buy compute capacity before revenue materializes.
Competitors race to secure H100 clusters, HBM supply, and inference infra.
Supplier bottlenecks distort timing. When chips are available, companies must buy immediately—regardless of runway.
The velocity compression drives:
continuous fundraising
valuation stacking
accelerated employee wealth creation
liquidity pressure on GPs and LPs
As The State of AI VC notes:
“Funding cycles no longer map to product cycles. They map to capital intensity and competitive pressure.”
This is industrial capital formation, not venture capital formation.
Deutsche Börse launches €5.3bn bid for private equity-backed Allfunds
Ft • November 27, 2025
Venture
Deutsche Börse has launched a €5.3bn bid to acquire Allfunds, the private equity-backed fund distribution platform listed in Amsterdam, in a move that would mark the German exchange group’s biggest deal in years and further its expansion into investment fund services.
The offer values Allfunds at €8.80 a share, representing a premium to its recent trading price and split between €4.30 in cash and €4.30 in new Deutsche Börse shares, alongside a permitted dividend of €0.20 per Allfunds share for the 2025 financial year. The deal structure would see Allfunds investors become shareholders in the enlarged Deutsche Börse group.
Deutsche Börse said it is in exclusive discussions with Allfunds’ board over a possible acquisition of all issued and to-be-issued share capital, on the basis of a non-binding proposal. The announcement of any binding offer remains subject to customary preconditions, including due diligence, finalisation of transaction documentation and approval by the boards of both companies.
Allfunds, which connects asset managers with distributors and oversees more than €1.7tn of client assets, is backed by private equity firm Hellman & Friedman and Singapore’s GIC. The two largest shareholders have been exploring options for their stakes after taking the business public in 2021, following a 2017 deal in which Hellman & Friedman bought control from Spain’s Banco Santander and Italy’s Intesa Sanpaolo.
The proposed tie-up is aimed at reducing the fragmentation of Europe’s cross-border fund distribution market and building a pan-European platform with greater scale. Deutsche Börse said combining Allfunds with its existing fund services arm would create a more integrated offering to asset managers and distributors and enhance its position in post-trade and data services.
If completed, the transaction would add to Deutsche Börse’s recent series of acquisitions, including the €3.9bn purchase of Danish investment management software provider SimCorp in 2023, as it seeks to diversify beyond traditional trading and clearing into recurring, technology-driven revenue streams.
Data: Zombie VC Firms Walk Among Us
Upstarts media • Alex Konrad • December 4, 2025
Venture
Venture capital is still a relationships game. For startup founders, finding the right person to back your business is still the most important part of fundraising (besides the cash).
But startups can save themselves a lot of time, and potential headache, by turning the tables a bit. The key question: Is this VC firm a walking zombie?
The signs might not be obvious yet. Partners are still active at firms until they’re not, sometimes writing checks weeks before announcing a transition to part-time or ‘venture partner’. And with a few blockbuster exceptions, VC firms don’t typically blow up. Instead, they slowly, often quietly, peter out.
But there will be signs. New data shows that when a firm raised its fund — and how far along it is in the deployment cycle — can go a long way toward determining whether you’re wasting your time.
Think of it as a loading progress bar, showing 60% or 80%. How far into a fund’s life cycle is that VC firm? How fast have they recently been writing checks?
And if the answer is that you’re looking to become one of the last investments for a fund raised three or four years ago, brace yourself.
New data from Carta (via its first Fund Economics Report) shows that funds raised in 2021 and 2022 have noticeably slowed down their investment pace, after running hot initially.
Funds closed in 2021 deployed faster in their first year, putting 33% of their money to work compared with the more typical figure of under 20%, then applied the brakes: the median 2021 fund has still deployed only 88% of its capital, lower than any vintage of the previous four years.
It’s a similar story for funds raised in 2022. They’re currently 67% deployed, and at the three-year mark had deployed more slowly than the previous five years of funds.
What that means, according to Peter Walker, head of insights at Carta: firms still investing out of 2021 and 2022 funds — the exuberant zero-interest-rate, or ZIRP, era — are becoming much pickier (or more skittish) as they reach the end of their fund lifecycles.
“They’re approaching this fundraising market and finding it much chillier than they’d hoped,” Walker says. “They’re worried this might be the last time they get to invest.”
Founders talking to stalling firms face the following hurdles:
Added conversations and data requests
More intense due diligence
Slower decision processes
That’s not the situation with many newer funds: while 2023 vintages are tracking closer to 2022, funds raised in 2024 are tracking to deploy faster than historical norms, Carta found. AI-focused funds that have invested widely and quickly, and bigger blue-chip firms with long track records, are also notable exceptions.
One caveat: firms might have other, very good reasons to have slowed down their check-writing. Perhaps they don’t want to play a valuations game with AI-enabled software, or macro factors are creating concerns in a specific sector of focus.
Another: many startups don’t have the luxury to turn away firms based on yellow flags like this. They need capital, and they have to take what they can get.
All things being equal, however, it makes sense for founders to add a couple of diligence questions back to their own VC calls:
What vintage fund are you deploying out of, and how far along is it?
What’s been your pace of deployment in the past year? Has it been consistent with previous years? (And if not, why?)
Do you anticipate raising another fund soon?
You won’t be able to spot all the zombies this way, but it can provide some peace of mind. Nobody wants to work with a fund that won’t answer the phone in a couple of years.
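One way to operationalize those questions is a crude screen like the sketch below. The thresholds are illustrative guesses inspired by the Carta figures above, not Carta’s methodology or anyone’s actual scoring model:

```python
def zombie_risk(vintage_year: int, pct_deployed: float, current_year: int = 2025) -> str:
    """Crude screen for 'zombie' risk from fund age and deployment progress.

    Thresholds are illustrative guesses, not an actual scoring model.
    """
    fund_age = current_year - vintage_year
    if fund_age >= 3 and pct_deployed >= 85:
        return "high: you may be among this fund's last checks"
    if fund_age >= 3 and pct_deployed < 70:
        return "elevated: deployment has stalled well behind pace"
    return "low: fund appears mid-cycle"

# The median 2021 fund (88% deployed) and 2022 fund (67% deployed) both flag:
print(zombie_risk(2021, 88))   # high
print(zombie_risk(2022, 67))   # elevated
```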
Why should you raise VC? Well, many times you shouldn’t.
LinkedIn • Peter Walker • November 30, 2025
LinkedIn•Venture
Source: LinkedIn | Peter Walker
You’ve built a bootstrapped company. Clear line of sight to profitability (actually profitable recently). Why should you raise VC?
Well, many times you shouldn’t. That’s a fair take. Who needs external investors when you have full control and full optionality?
But here’s why other founders do choose to engage with venture even though they don’t “need” it.
1) Cash for Growth
Could you accomplish a year’s worth of growth in 6 months if you had more cash to put to work?
If the answer is yes (and it usually is), perhaps trading equity for capital is useful to boost growth. Growth is always the biggest input into company valuation and ultimate sale price, should that path ever be attractive.
(Btw, cash can also be incredibly useful to have on hand in case of unpredictable emergencies. Just ask the startups that were running too lean in March 2020.)
2) Brand for Hiring
Great talent can work at many startups. Being backed by a well-known fund can improve your standing in the minds of that next valuable engineer.
Beyond just brand, VCs will often extend themselves by personally recruiting talent to your company.
3) Network for Everything
Need a contact at that major prospect? Your VC might have one. Need an intro to this technical expert? Your VC might have one. Need to talk to a founder who’s been through this tricky situation? You get the idea.
Good VCs bring network leverage to their portcos.
If none of these reasons resonate, cool: avoid VCs and keep building. Many possible games to play, and venture just happens to be the loudest 🙏
MSCI launches index combining public and private equities
Ft • December 4, 2025
Venture
Overview of the New Index
MSCI has introduced a new benchmark that combines public and private equity into a single global index framework, responding to the rapid expansion of unlisted assets. The product, known as the MSCI All Country Public + Private Equity index, is designed to give investors an integrated view of overall equity exposure across both listed markets and private equity holdings. It reflects how institutional portfolios have increasingly blended traditional public equities with large allocations to private funds.
Structure and Methodology
The index fuses MSCI’s existing All Country World Investable Market Index (ACWI IMI) with a newly created All Country Private Equity index. (ft.com)
Private equity is set at a 15 per cent strategic weight within the combined benchmark, with the remaining 85 per cent allocated to public equities. (ft.com)
The private equity component tracks about 10,000 private equity funds globally, covering buyout, venture capital and other strategies to approximate the opportunity set. (ft.com)
The index is rebalanced quarterly and calculated daily, allowing investors to monitor performance and risk in near real time despite the illiquid nature of private assets. (ft.com)
This methodology attempts to convert inherently opaque, infrequently valued private fund positions into a systematic, benchmarkable slice of a global equity portfolio.
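A stylized sketch of how such a blend might be computed at each rebalance. The 85/15 weights come from the article; the return inputs below are made-up placeholders:

```python
# MSCI's stated strategic weights: 85% public (ACWI IMI), 15% private equity.
PUBLIC_WEIGHT = 0.85
PRIVATE_WEIGHT = 0.15

def blended_return(public_return: float, private_return: float) -> float:
    """Return of the combined benchmark at the strategic weights."""
    return PUBLIC_WEIGHT * public_return + PRIVATE_WEIGHT * private_return

# Placeholder quarterly returns (not real data): public +2%, private +3%.
print(f"{blended_return(0.02, 0.03):.2%}")   # 2.15%
```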
Market Context and Rationale
Private equity assets under management have more than doubled since 2018 to around $4.7tn, underlining the asset class’s growing importance in institutional portfolios. (ft.com)
Large investors such as pension funds, endowments and sovereign wealth funds increasingly treat public and private equity as a single “equity bucket”, creating demand for blended benchmarks.
MSCI has been building capabilities in private markets analytics, notably through its acquisition of Burgiss, while the broader data and benchmarking race in private markets includes moves like BlackRock’s purchase of Preqin. (ft.com)
The new index positions MSCI to capture a bigger role as investors seek standardized ways to measure performance and allocate capital across the full equity spectrum.
Intended Users and Use Cases
The benchmark targets institutional investors and high-net-worth clients that already hold significant private equity alongside listed equities. (ft.com)
Potential applications include:
Setting strategic asset allocation between public and private equity in a unified framework
Measuring total equity performance relative to a single reference index
Risk monitoring and reporting that reflects actual portfolio structure
By consolidating disparate exposures, MSCI aims to simplify conversations between asset owners, managers and consultants about “true” equity risk and return.
Criticisms and Methodological Challenges
Despite its ambition, the index has attracted skepticism:
Some investors question whether a blended index is really useful, given the wide dispersion of private equity returns and the bespoke nature of many programmes. Neuberger Berman’s Maya Bhandari is cited as doubtful that such a benchmark matches how investors actually set objectives. (ft.com)
Valuation practices, lagged pricing and smoothing in private equity raise concerns about how accurately any daily index can reflect real-time conditions.
Different investors target very different mixes of vintages, strategies and geographies in private markets, making a single “market” representation potentially unrepresentative.
These critiques highlight the tension between the desire for standardization and the inherently idiosyncratic character of private assets.
Implications for the Industry
If widely adopted, the index could reinforce the idea that public and private equity should be managed under one risk budget, influencing consultant frameworks and regulatory reporting norms.
It may encourage further product development, such as index-linked solutions or funds aiming to replicate the blended benchmark.
At the same time, ongoing debates about private equity performance, fees and transparency mean that some large investors may prefer bespoke benchmarks or separate public/private metrics rather than a single composite measure. (ft.com)
Overall, MSCI’s new index reflects how deeply private markets have become embedded in mainstream investing, while also exposing unresolved questions about how best to measure and govern these growing allocations.
SpaceX reportedly in talks for secondary sale at $800B valuation, which would make it America’s most valuable private company
Techcrunch • Connie Loizos • December 5, 2025
Venture
SpaceX is reportedly in talks for a secondary sale that would value the company at around $800 billion, according to Bloomberg, which would make it America’s most valuable private company by far.
The eye-popping figure reflects how routine mega-valuations have become in private markets. Just last week, for example, secondary marketplace Forge reported that employees of CoreWeave, the cloud computing company that went public in March, initially valued their shares on the platform at nearly $100 billion, up from $23 billion in a Series C last August.
It was only three months ago, meanwhile, that TechCrunch reported that SpaceX was in talks to sell insider stock via a tender offer at $255 per share, which would value the company at around $250 billion.
At the time, the valuation put SpaceX well ahead of ByteDance, the China-based parent of TikTok that’s currently valued at around $220 billion. But the new valuation — if it comes to pass — will put SpaceX far ahead of every other private tech company.
More than Elon Musk’s fame and proximity to President Donald Trump is driving up the SpaceX share price. The company is reportedly spinning out its Starlink satellite internet business, for which SpaceX sought a $15 billion loan in August.
According to The Wall Street Journal, which broke the story in October, the company is in discussions with banks about the potential IPO for Starlink, which could reportedly achieve a valuation of $100 billion or more on its own. SpaceX COO and president Gwynne Shotwell had mentioned the spinout idea in 2023 to CNBC.
Then there’s the rocket side of the business, which is also going gangbusters. This week, the company launched its Starship rocket for a seventh time; this test flight involved a satellite deployment experiment.
SpaceX has also proven its worth via an earlier flight. In October, Starship performed a 1.2 million-pound lift and executed the first-ever booster catch by its Mechazilla launch tower.
Starship’s heavy-lift capabilities are key to NASA’s Artemis program for returning astronauts to the Moon, and the rocket could potentially support future missions to Mars.
The gap between top quartile and bottom quartile venture funds was over 40%
LinkedIn • Marcelino Pantoja • December 2, 2025
LinkedIn•Venture
Source: LinkedIn | Marcelino Pantoja
In a 2011 lecture, David Swensen pointed out a striking fact. The gap between top quartile and bottom quartile venture funds was over 40 percent.
That number gets framed as proof that venture rewards skill. It is closer to a warning.
A spread that wide means most returns come from a very small group of funds. Those funds see deals that most VCs never see. Access matters more than analysis. The belief is that the best firms stay small because capacity is limited. Once they are full, new capital flows to managers outside that circle.
This is where things break down. As capital pools grow, it gets harder to stay in the top tier. More money does not buy better access. It usually pushes you toward average outcomes. In venture, size works against performance.
For allocators, the implication is uncomfortable but clear. Venture only works at small scale. If you cannot get into the true top funds, you are not picking the next great VC. You are backing someone in a much weaker part of the market. Writing a bigger check does not fix that.
For fund managers, the lesson is just as direct. Scarcity is not marketing. It is what protects access to the few deals that drive returns.
One reason this spread may not last is success itself. Lately the best VCs tend to raise more capital, write bigger checks, and move later in a company’s life. Capacity becomes the constraint. Early-stage exposure falls, ownership shrinks, and the return profile shifts. In the end, the forces that create top-quartile performance also make it hard to sustain.
Startup Funding Continued On A Tear In November As Megarounds Hit 3-Year High
Crunchbase • December 3, 2025
Venture
November was another outsized month for venture funding as investors poured $39.6 billion into startups globally. The funding total was on par with October and up 28% year over year from $31 billion, according to Crunchbase data.
Capital continued to concentrate into the largest companies. A stunning 43% of venture funding last month went to just 14 companies that raised rounds of $500 million or more each. That marked the largest number of such megarounds raised in a single month in the past three years.
The largest round of all went to Jeff Bezos’ Project Prometheus, a startup tackling physical intelligence, which raised $6.2 billion in its first funding round.
Other billion-dollar rounds last month went to:
AI coding startup Anysphere, maker of Cursor, which raised $2.3 billion in a round led by Accel and Coatue.
AI data center provider Lambda raised $1.5 billion led by TWG Global, and Kalshi, an event-futures betting platform, raised $1 billion led by Sequoia Capital and CapitalG.
US dominated again
The U.S. raised just over 70% of global venture capital in November, up from 60% in October. China was the next-largest market with $2.4 billion in total funding. The U.K. and Canada were the third- and fourth-largest, respectively, with $1 billion or more raised by startups in each country last month.
AI, hardware and fintech sectors lead
AI-related startups accounted for 53% of global venture funding last month, with over $20 billion invested in the sector.
Hardware was another leading sector with funding going to startups working on data centers, computer vision, robotics and defense technologies, among others. Financial services was the third-largest sector for venture funding in November, with large rounds in crypto, financial operations, compliance and payments.
State of European Tech report | Sarah Guemouri & Tom Wehmeier (Atomico)
Youtube • Slush • November 30, 2025
Venture
The video features Sarah Guemouri and Tom Wehmeier from the venture capital firm Atomico discussing the key findings of the annual State of European Tech report. The conversation provides a comprehensive analysis of the current health, challenges, and opportunities within the European technology ecosystem, drawing on extensive data and founder surveys.
A Resilient Ecosystem Facing Headwinds
The report highlights a European tech landscape demonstrating significant resilience despite a global downturn in venture funding. While total capital invested has decreased from peak levels, the baseline remains substantially higher than pre-2020 figures, indicating a matured and more sustainable ecosystem. A critical point of discussion is the stark contrast between the “haves” and “have-nots.” A small cohort of elite companies continues to secure large funding rounds, but a broad swath of the market, particularly early-stage startups, faces a much more challenging environment. The speakers emphasize that the era of “easy money” is over, forcing a necessary refocus on fundamentals like clear business models, path to profitability, and efficient growth.
Key Trends and Structural Shifts
Several important trends are identified. First, there is a notable geographic diversification of capital, with a significant increase in investment from non-traditional sources, including the Middle East and Asia. Second, the report details a shift in sector focus, with Climate Tech and Energy emerging as dominant themes, attracting a larger share of capital than any other vertical, including Software. This reflects both Europe’s regulatory leadership and global urgency around the energy transition. Third, the discussion covers the talent landscape, noting that while large-scale layoffs at major tech firms have occurred, there is a strong underlying demand for technical and AI-specific skills, creating a dynamic and competitive hiring environment for high-growth companies.
The Founder Perspective and Future Outlook
A core component of the report is its survey of European founders, which reveals a nuanced sentiment. Founders express increased confidence in building a globally leading company from Europe compared to previous years, citing the depth of talent and supportive regulatory frameworks. However, this optimism is tempered by significant concerns over access to growth capital, complex regulatory burdens, and the need for more robust public market options for exits. The speakers conclude that the current market correction, while painful, is ultimately healthy for the long-term development of European tech. It is weeding out weaker business models and incentivizing the kind of disciplined, ambitious company-building that can lead to enduring global category leaders.
The overarching implication is that the European tech ecosystem is undergoing a necessary maturation. Success will depend on the continued flow of risk capital, supportive policy, and the ability of founders to navigate a more selective investment climate by demonstrating robust unit economics and addressing large, meaningful problems, particularly in areas like climate and enterprise software.
Series A rounds continue to dominate the market… but Series A funds themselves are fading fast.
LinkedIn • Jackie DiMonte • December 4, 2025
LinkedIn•Venture
Source: LinkedIn | Jackie DiMonte
Series A rounds continue to dominate the market… but Series A funds themselves are fading fast.
Some data per Carta (link below):
📈25% of venture capital was invested at the Series A
📈33% of rounds were Series A
And yet, Series A deal counts continue to drop:
Q2: Series A deal count 18%⬇️, value 23%⬇️, while valuations 20%⬆️ YoY
Q3: Series A deal count 10%⬇️, value 8%⬆️, and valuations ~25%⬆️ YoY
So, what’s happening?
The Series A fund of a decade ago is disappearing. They either:
1️⃣ Raised big $$$ during ZIRP and graduated to multi-stage, or
2️⃣ Felt the pressure of competition and pricing and moved earlier (without reducing fund size)
As a result:
🔵 Many more $250–500M funds now invest with a core focus on seed
🔵 Larger rounds at seed (bigger funds / multi-stage is less price-sensitive)
🔵 Higher expectations for maturity at every stage
This has also pushed a bifurcation among the funds investing at the A. They now behave like either growth or value investors:
Growth ➡️ high growth, high burn, high valuations backed by multi-stage
Value ➡️ everyone else?
This is why we’re seeing some A rounds happen at $100K annualized revenue and others at $2M+.
There are obviously exceptions, but for a reason. If you’re not competing on brand, you’re competing on price. And multi-stage can do both.
This environment is incredibly beneficial for some founders and funds but makes fundraising difficult and opaque for many others.
Furthermore, the earlier big/multi-stage funds get involved, the earlier the potential for conflicting incentives and associated consequences. Without focused Series A funds, expectations for these companies escalate, often faster than their opportunities mature. (I wrote about it here: https://lnkd.in/gpKyABRQ).
Series A is dead, long live Series A!
I’m here to give the small group of you who actually care about decision science, power-law math
LinkedIn • December 4, 2025
LinkedIn•Venture
Source: LinkedIn | Guy Conway
This post is long, nerdy, and deliberately anti-hype.
I’m not here to scream “AI is eating VC”.
I’m here to give the small group of you who actually care about decision science, power-law math, and the future of European capital allocation a brutally transparent look at what Rule 30 is doing.
The episode with Guy Conway and Damian C. announcing Rule 30 on EUVC just dropped, and it’s the deepest conversation on “data-driven VC” / “quant VC” we’ve ever had. Long overdue.
The guys claim “the world’s first (and still only) fully systematic, algorithmically-decided pre-seed fund” -- you’ll decide what you think 🤔
Here’s what I took away:
1️⃣ Access is a myth. The real bottleneck is triage at scale
Pre-seed Europe + US = 150–200 investable companies per vintage
Rule 30 ingests 10,000+ signals per month from 15+ raw sources
Humans (even with Harmonic-style tools) collapse under that volume → fall back to pedigree heuristics
The algorithm triages 10,000 → 75 with a winner/loser ratio multiple times higher than top human funds
Result: they see everything and still only need a 3-person team
2️⃣ “Data-driven VC” ≠ quantitative VC
Most “data-driven” funds use data as a crutch for human IC decisions.
Rule 30 spent two years cleaning & contextualising raw data before training a single model.
The insight: Crunchbase/PitchBook/Dealflow data is useless without massive transformation. No academic will do that work (ruins a PhD). No traditional VC will do that work (no incentive). They did it anyway.
3️⃣ The label problem — how do you train when outcomes take 10–12 years?
Standard approach: wait forever → impossible.
Rule 30’s solution:
For every vintage since 2010, label the top decile by valuation delta from first round
Found that 5-year delta is >90% correlated with 12-year outcome at portfolio level
Trains on 5-year labels → still predictive of terminal DPI
This is the single biggest technical unlock. Everything else flows from it.
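To make the labeling trick concrete, here is a minimal sketch under stated assumptions: a company is labeled positive when its 5-year valuation multiple over its first round lands in its vintage's top decile. The file path and column names are hypothetical, not Rule 30's actual schema.

```python
# Proxy-label sketch: top decile of 5-year valuation delta per vintage.
# "deals.csv" and the column names are hypothetical placeholders.
import pandas as pd

def label_vintage(vintage: pd.DataFrame) -> pd.DataFrame:
    vintage = vintage.copy()
    vintage["delta_5y"] = vintage["valuation_5y"] / vintage["valuation_first_round"]
    cutoff = vintage["delta_5y"].quantile(0.90)        # top decile within the vintage
    vintage["label"] = (vintage["delta_5y"] >= cutoff).astype(int)
    return vintage

deals = pd.read_csv("deals.csv")                       # one row per company
labeled = deals.groupby("vintage_year", group_keys=False).apply(label_vintage)
```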
4️⃣ Personality isn’t magic — it’s a time-series slope
They map every founder’s professional + network trajectory against age-matched cohorts in the same geography.
Outlier slopes (rate of status jumps, quality-adjusted connections, etc.) are one of the strongest features.
Also: pre-investment graph velocity predicts who will lead the next round before term sheets are out.
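The "time-series slope" idea can be sketched in a few lines. Everything below (the status scores, the cohort statistics) is invented purely for illustration; it shows the shape of the computation, not Rule 30's feature pipeline.

```python
# Trajectory-slope feature: fit a line to a founder's status score over time,
# then compare the slope to an age/geography-matched cohort. All numbers are
# assumed for illustration.
import numpy as np

def trajectory_slope(years: np.ndarray, status: np.ndarray) -> float:
    slope, _intercept = np.polyfit(years, status, deg=1)
    return slope

founder_slope = trajectory_slope(
    np.array([0.0, 2.0, 4.0, 6.0]),          # years since career start
    np.array([1.0, 2.5, 5.0, 9.0]),          # status score at each point
)
cohort_mean, cohort_std = 0.6, 0.3           # assumed cohort statistics
z_score = (founder_slope - cohort_mean) / cohort_std   # outlier slopes score high
print(round(founder_slope, 2), round(z_score, 2))
```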
5️⃣ Portfolio construction — the math is brutal and unambiguous
75–85 deals, fixed €250–500k checks, no follow-ons
Why? Pure DPI maximisation under power-law
Concentrated only works if you’re actually Benchmark (real brand + real help)
30–40 deal “middle” portfolios are mathematically broken
Their model has 97.5% confidence of ≥3× net in a worst-case black-swan scenario
Target: 6–8× net with massively reduced volatility
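The portfolio-size claim is easy to pressure-test with a toy Monte Carlo. The Pareto shape parameter and thresholds below are assumptions chosen only to illustrate why many fixed-size checks tame variance under a power law; this is not Rule 30's model and does not reproduce their confidence figures.

```python
# Monte Carlo sketch: portfolio outcomes under power-law deal returns.
# Shape parameter and portfolio sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def portfolio_multiples(n_deals: int, trials: int = 10_000, shape: float = 1.3):
    # numpy's pareto() draws heavy-tailed multiples: most deals return
    # near zero, a thin tail returns the fund.
    deal_multiples = rng.pareto(shape, size=(trials, n_deals))
    return deal_multiples.mean(axis=1)        # equal checks, no follow-ons

for n in (35, 80):
    outcomes = portfolio_multiples(n)
    print(f"{n} deals: median {np.median(outcomes):.2f}x, "
          f"P(>=3x) {np.mean(outcomes >= 3):.2f}")
```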
Is this the new reality?
AI
What’s New with ChatGPT Voice
Youtube • OpenAI • December 5, 2025
AI•Tech•ChatGPTVoice•VoiceAssistant•ProductUpdate
Overview
This content invites viewers to watch a video that explains recent updates and capabilities related to ChatGPT’s voice functionality. The central theme is improving how users interact with ChatGPT in a more natural, conversational way through spoken input and audio output, turning the model into something closer to a real-time voice assistant. The emphasis is on demonstrating how voice makes ChatGPT more accessible, more fluid in everyday use cases, and more useful across contexts such as work, learning, and personal assistance.
Core Purpose of ChatGPT Voice
ChatGPT Voice is positioned as a way to talk to the model hands-free, using speech instead of typing.
The feature aims to make interactions feel more like a conversation with a person—quick back-and-forth, clarifications, and follow-up questions spoken aloud.
The voice modality supports situations where users are on the move, multitasking, or simply prefer speaking to writing.
Key Capabilities Highlighted
Users can speak questions or prompts, and ChatGPT responds with synthesized speech.
The system is designed to handle complex queries, extended conversations, and step-by-step explanations just as it does in text.
Voice can be used for a variety of tasks:
Brainstorming ideas or drafting content by dictation.
Getting explanations of difficult concepts in plain language.
Receiving guidance or walk-throughs (e.g., recipes, instructions, planning tasks) while the user’s hands are busy.
The demonstration underscores seamless transitions between topics, mirroring natural human conversation.
User Experience and Interaction Flow
The video encourages users to “watch this video on YouTube” to see the feature in action, suggesting a focus on live demonstration rather than technical documentation.
It likely shows:
How to start a voice conversation within the ChatGPT interface or app.
How the model responds in real time, including pauses, follow-up questions, and corrections.
How the voice experience preserves context across a conversation, just like text chat.
Emphasis is placed on ease of use: minimal setup, intuitive controls (tap to speak, tap to stop), and straightforward access inside the existing ChatGPT product.
Implications and Potential Impact
Voice dramatically broadens when and where people can use ChatGPT: commuting, cooking, exercising, or any situation where typing is inconvenient.
It can make AI tools more inclusive for users who have difficulty typing or reading on screens, or who are more comfortable with spoken language.
As voice becomes more central, ChatGPT begins to resemble a general-purpose digital assistant, potentially competing with or complementing existing smart speakers and mobile voice assistants.
The update supports a trend toward multimodal AI—systems that accept and produce different kinds of inputs (text, voice, possibly images) in a unified, conversational experience.
Key Takeaways
ChatGPT Voice enables natural, real-time spoken conversations with the model.
The feature is designed for convenience, accessibility, and more human-like interaction.
It supports a wide range of use cases—from learning and productivity to daily assistance—without requiring users to rely solely on typing.
By showcasing the feature via YouTube, the content focuses on visual and auditory demonstration, helping users quickly understand how to use and benefit from ChatGPT Voice.
ChatGPT will decide what Americans buy this holiday
Fastcompany • December 5, 2025
AI•ECommerce•AIShopping•RetailDiscovery•ConsumerBehavior
The way consumers search is changing faster than the industry expected. This holiday season, many shoppers are looking for gifts inside AI platforms, rather than retailer sites or traditional search. They are asking natural questions like:
“Find me a cruelty-free skincare gift for sensitive skin under $100.”
“What are good gift ideas for a three-year-old that are safe and durable?”
“What are the safest, nontoxic treats for my Golden Retriever?”
This shift is already measurable. Adobe Digital Insights reports a 4,700% year-over-year increase in retail visits driven by AI assistants between July 2024 and July 2025. At the same time, click-through rates from SEO have dropped 34% as users bypass the search results page entirely. eMarketer reports 47% of brands have no idea whether they appear in AI-driven discovery at all.
The platforms know this shift is accelerating. Google’s recent decision to add conversational shopping and AI-mode ads just weeks before the holidays shows how quickly consumer behavior is moving. Brands must adjust too.
Despite the complexity behind AI systems, three simple signals determine which products get recommended: trust, relevance, and extractability. These signals are the backbone of how AI decides what to surface, and matter as much as packaging, price, or placement.
AI systems develop a sense of which sources to believe during training. Domains with consistent verification signals gain more weight because the model has learned they usually publish accurate information.
This is why leading retailers, including Ulta, Sephora, Target, Amazon, and Bloomingdale’s, rely on independent verification partners for the claims displayed on their digital shelves. Verified domains act as trust anchors. When a model must choose, it selects the product backed by clearer and more reliable sources.
Trust often determines whether you are included in the answer at all.
AI assistants answer based on meaning, not keywords. When a shopper asks for “eczema-safe moisturizer” or “gluten-free protein bars,” the system retrieves products whose attributes clearly map to those concepts.
Relevance depends on using consistent claims across every channel you sell in—consistency is heavily prioritized. When multiple sources concur, this repeated confirmation strongly reinforces that your product is the right choice.
Missing or inconsistent attributes keep your product out of the candidate pool.
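For a sense of what concept-based retrieval looks like in practice, here is a minimal sketch assuming an off-the-shelf embedding model; the library choice and product strings are illustrative, not how any particular shopping assistant is built.

```python
# Concept retrieval sketch: embed product attributes and a shopper query,
# then rank by cosine similarity. Products and model choice are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
products = [
    "fragrance-free moisturizer, dermatologist tested, eczema safe",
    "scented body lotion with shea butter",
]
query = "eczema-safe moisturizer"

p_emb = model.encode(products, normalize_embeddings=True)
q_emb = model.encode([query], normalize_embeddings=True)[0]
scores = p_emb @ q_emb                        # cosine similarity on unit vectors
print(products[int(np.argmax(scores))])       # the attribute-matched product wins
```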
Meta buys AI pendant start-up Limitless to expand hardware push
Ft • December 5, 2025
AI•Tech•MetaAcquisition•Wearables•LimitlessPendant
Meta has acquired Limitless, an AI wearables start-up known for its pendant-style device that continuously records, transcribes and summarizes real-world conversations. The deal marks a clear signal that Meta is broadening its hardware ambitions beyond virtual reality headsets and smart glasses into a wider ecosystem of AI-powered, always-on devices. Limitless, previously called Rewind, built its product as a personal memory aid, positioning it as a way for users to “rewind” and search through their past interactions using AI-generated summaries.
Limitless’s Technology and Product
Limitless’s flagship product is an audio pendant that clips to clothing and records in-person conversations, meetings and ambient dialogue.
The device uses AI to transcribe speech in real time, then stores and organizes this data so users can search and retrieve specific information later, effectively functioning as an external memory system.
A companion app provides searchable transcripts and condensed summaries of conversations, turning raw audio into structured knowledge that users can revisit for work, personal organization or recall.
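For the curious, the capture-then-search pattern reduces to a simple data shape: timestamped transcript chunks behind a search function. The sketch below is an illustration under assumptions, not Limitless's actual system.

```python
# Searchable-transcript sketch: store timestamped chunks, retrieve by keyword.
# All data below is invented for illustration.
from dataclasses import dataclass

@dataclass
class TranscriptChunk:
    start_ts: float        # seconds into the recording
    speaker: str
    text: str

memory: list[TranscriptChunk] = [
    TranscriptChunk(12.0, "colleague", "Let's move the launch review to Thursday"),
    TranscriptChunk(340.5, "me", "Remind me to send the budget draft"),
]

def search(query: str) -> list[TranscriptChunk]:
    q = query.lower()
    return [c for c in memory if q in c.text.lower()]

print(search("launch review"))
```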
Strategic Fit with Meta’s Hardware and AI Vision
The acquisition aligns with Mark Zuckerberg’s stated push toward “personal superintelligence” delivered through consumer devices, not just via apps on phones or PCs.
Meta has already invested heavily in smart glasses and VR/AR hardware; adding an AI pendant indicates a broader bet on ambient, screenless computing and voice-first interfaces.
Limitless’s team and technology will be folded into Meta’s Reality Labs division, which is responsible for experimental hardware and has recently added senior design leadership from Apple, suggesting Meta is serious about industrial design and user experience in its next generation of devices.
The deal also reflects a rebalancing of Meta’s priorities, as it puts more emphasis on near-term AI hardware experiences rather than the longer-horizon metaverse vision that previously dominated its strategy.
Financial and Market Context
Limitless was last valued at about $367mn in 2023 and had raised more than $30mn from high-profile backers including Andreessen Horowitz and Sam Altman.
Financial terms of the Meta acquisition have not been disclosed, but the price likely reflects both the value of the product and the strategic importance of acquiring a team with deep expertise in AI-powered capture and recall.
Existing Limitless customers will continue to be supported for at least a year, but new sales of the pendant will stop, indicating Meta intends to integrate the technology into its own branded hardware rather than operate Limitless as a standalone product.
Competitive and Privacy Implications
Meta is entering a nascent but rapidly growing market for always-on AI assistants in hardware, alongside efforts such as Humane’s AI Pin, various OpenAI-linked devices, and AI features embedded into Amazon and Google ecosystems.
These products raise substantial privacy questions because they involve continuous or frequent recording of real-world environments and conversations.
Meta’s move intensifies debate over how such devices should signal recording, obtain consent from bystanders, and store or protect highly sensitive conversational data. Regulatory scrutiny and user trust will be critical factors in adoption.
At the same time, if executed well, the technology could reshape productivity, memory support and personal information management by making real-world interactions fully searchable and persistently accessible.
Broader Impact and Outlook
The deal underscores a shift in big tech’s AI strategy from purely software-based assistants to integrated hardware-plus-AI systems that are worn or carried all day.
For Meta, Limitless offers a way to diversify beyond headsets and smart glasses into subtle, lightweight wearables that may be more acceptable for everyday use.
Over time, Meta could weave Limitless-style capture and recall into a broader family of devices—glasses, earbuds, pendants—creating a unified personal data and AI layer across a user’s life.
Success will depend on balancing powerful capabilities with transparent controls, robust privacy protections and clear value to users, particularly in work and productivity contexts where such devices may initially gain traction.
Say Goodbye to the Billable Hour, Thanks to AI
Wsj • Rita Gunther McGrath • December 4, 2025
AI•Work•ProfessionalServices•BillableHours•GenerativeAI
Overview of the Core Argument
The article argues that generative AI is poised to undermine the traditional billable-hour model in law and other professional-services industries by automating large portions of “grunt work.” As AI tools rapidly perform tasks like document review, contract drafting, due diligence and research, the link between revenue and hours logged becomes increasingly tenuous. Professionals will be pushed to charge based on outcomes, value, and business impact rather than time spent, reshaping both how firms operate and how clients perceive expertise. This shift could democratize access to high-quality services but will also expose which professionals truly add strategic value versus those who mainly resell standardized knowledge and process.
How AI Disrupts the Billable Hour
AI tools can draft, summarize and analyze documents in seconds, obliterating the time that used to justify many billable hours.
Routine legal work such as NDAs, basic contracts, and standard filings can now be templated and auto-generated with AI, reducing the need for junior associates to manually produce them.
Knowledge that once lived in expensive expert hours is increasingly embedded in AI systems trained on large corpora of legal and professional texts.
Clients will quickly notice that tasks previously billed at many hours can be completed far faster, triggering pressure to renegotiate fee structures.
As a result, the premise that “more hours = more value” becomes unsustainable. Firms that cling to hourly billing for tasks obviously aided by AI risk losing credibility and clients.
From Time-Based to Outcome-Based Pricing
Professionals will need to frame their value around:
Risk reduction and compliance outcomes
Size of transactions closed or disputes resolved
Strategic advice that shapes key decisions or competitive positioning
Alternative fee arrangements—fixed fees, success fees, subscriptions, and retainers tied to performance metrics—are likely to spread beyond isolated use cases.
Clients will prefer arrangements where they pay for results and reliability, not for watching the clock.
This transition rewards firms that understand their economics well enough to price risk and outcomes, and that invest in process, data, and AI to deliver predictable results efficiently.
Redefining Roles and Career Paths in Professional Services
The traditional pyramid model, with large leverage from junior staff doing repetitive tasks, is under strain as AI replaces much of that work.
Junior professionals may spend less time learning by doing “grunt work” and more time:
Managing AI tools and validating outputs
Working directly with clients earlier
Developing specialized domain or industry expertise
Senior professionals will need to shift from supervising hours to:
Designing workflows where AI and humans complement each other
Packaging firm knowledge into reusable, AI-enabled products and playbooks
Coaching teams to deliver higher-level judgment and creativity.
Careers may increasingly emphasize skills like problem framing, ethical reasoning, communication, and business acumen over sheer throughput of analysis.
Implications for Competitive Advantage and Access
Firms that adopt AI early and redesign their business models can:
Cut costs while improving speed and consistency
Offer more transparent, outcome-based pricing that attracts sophisticated clients
Create scalable “products” (e.g., automated self-service tools, standardized advice modules) instead of only selling bespoke services.
Smaller firms and new entrants could use AI to compete with larger incumbents by matching technical quality at lower cost, potentially broadening access to legal and professional help for smaller businesses and individuals.
However, there is a risk of a new divide between:
Commodity, AI-driven services with low margins, and
High-end strategic advisory work that remains relationship- and judgment-intensive.
The article suggests that professionals and firms must consciously choose where on this spectrum they want to compete and build corresponding capabilities.
Strategic and Ethical Considerations
Trust becomes central: clients must believe that professionals are using AI responsibly, checking outputs, and safeguarding data confidentiality.
Firms must be transparent about how AI is used in their work and how that affects pricing and timelines.
Regulators, bar associations, and professional bodies may eventually weigh in on acceptable uses of AI and on representations made to clients about time and value.
In conclusion, AI doesn’t just make existing work faster; it forces a rethinking of what clients are actually buying from professionals. As time decouples from value, the winners will be those who can clearly articulate and reliably deliver outcomes—using AI as a force multiplier, not as a hidden replacement for hours on a timesheet.
OpenAI’s ‘code red’ moment
Ft • December 4, 2025
AI•Tech•OpenAI•Sam Altman•Competition
The article explores how Sam Altman’s recent “rallying call” signals a pivotal, almost “code red” phase for OpenAI as competition in artificial intelligence intensifies. It suggests that Altman’s messaging is not just about inspiring employees and partners, but also about redefining the company’s priorities amid pressure from powerful rivals, rapidly advancing models, and growing expectations from investors and the wider tech ecosystem. The central tension is whether OpenAI can maintain its stated mission of safe and broadly beneficial AI while also racing to preserve its lead in a market that is quickly filling with well-funded competitors.
OpenAI’s Competitive Inflection Point
The article frames Altman’s remarks as a response to a new, more aggressive competitive landscape in AI, with Big Tech incumbents and new entrants pushing hard on large language models and related services.
This “code red” mood reflects concern that OpenAI’s early advantage in generative AI could erode as competitors close the gap on model capabilities and product ecosystems.
Altman’s message is interpreted as both defensive and ambitious: a bid to galvanize OpenAI around bolder product moves, faster iteration, and a more assertive posture in the marketplace.
Shifting Priorities and Strategic Focus
The piece highlights a subtle but important reordering of OpenAI’s priorities: commercial deployment and platform dominance increasingly sit alongside — and sometimes appear to overshadow — the original research‑driven, safety‑first narrative.
Altman’s call suggests heightened focus on:
Building more powerful foundation models.
Expanding developer platforms and APIs as the default infrastructure for AI applications.
Deepening integration deals with major cloud providers and key enterprise customers.
These priorities raise questions about whether safety and governance can realistically keep pace with the push to scale, especially as competition rewards rapid public releases.
Internal and External Pressures
Internally, the article notes that staff and researchers may feel torn between long-term alignment work and short-term product milestones that shape OpenAI’s market position.
Externally, investors and partners expect OpenAI to translate its technological lead into durable revenue streams, enterprise contracts, and defensible moats, amplifying the urgency behind Altman’s messaging.
The “code red” framing is portrayed as a cultural signal: employees are being asked to treat the current moment as defining for the company’s future relevance and independence.
Implications for the AI Ecosystem
The article raises broader questions about what OpenAI’s shift means for the overall direction of AI development:
If the perceived need to “win” trumps caution, model releases could become more aggressive, with less time for thorough safety evaluations.
Rival companies may mirror OpenAI’s posture, intensifying an arms race across model size, capabilities, and deployment scope.
Policymakers, researchers, and civil society groups may need to adapt quickly, as the speed and scale of deployment can outstrip existing governance frameworks and norms.
Balancing Mission and Market
The piece concludes by underscoring the central dilemma: OpenAI was founded around a mission of ensuring that advanced AI benefits all of humanity, but it now operates inside a hyper-competitive, capital-intensive industry logic.
Altman’s rallying call is interpreted as an attempt to reconcile these forces — promising both rapid innovation and responsible stewardship — but the article leaves open whether such a balance is sustainable.
Ultimately, the “code red” moment is portrayed as a test of whether OpenAI can remain mission‑driven while behaving like a dominant commercial platform in one of the most consequential technology races of the century.
Firms harness AI tools in search for competitive edge
Ft • December 2, 2025
AI•Work•Professional Services•Automation•Productivity
Companies are deploying artificial intelligence tools across a wide spectrum of routine and complex tasks in an effort to gain a competitive advantage. The technology is being used both to increase efficiency and to enhance the quality and consistency of knowledge‑based work, especially in professional services. A central theme is that AI is shifting from experimental pilots to being embedded in day‑to‑day workflows, particularly in areas that rely heavily on document review, analysis and drafting.
Expanding Uses of AI in Professional Work
Firms are using AI to trawl through vast data sets during audits, allowing professionals to examine entire populations of transactions rather than small samples.
In consulting and advisory work, generative AI is being used to draft client presentations, reports and internal memos in minutes, replacing what used to take hours of manual effort.
AI tools are also helping teams synthesize regulatory material, sector research and internal knowledge bases into concise, tailored outputs for specific clients or projects.
These uses show how AI is moving into the core “thinking work” traditionally done by highly trained specialists, not just automating basic administrative tasks.
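As an illustration of full-population testing, a few lines of scoring logic can replace sampling: score every record and flag extreme outliers for human review. The data and threshold below are invented for the sketch.

```python
# Full-population anomaly sketch: z-score every transaction instead of
# sampling a subset. Stand-in data; threshold is an assumption.
import numpy as np

amounts = np.random.default_rng(1).lognormal(mean=4.0, sigma=1.0, size=100_000)
log_amounts = np.log(amounts)
z = (log_amounts - log_amounts.mean()) / log_amounts.std()
flagged = np.flatnonzero(np.abs(z) > 4)       # the most anomalous transactions
print(f"{flagged.size} of {amounts.size} transactions flagged for review")
```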
Productivity, Speed and Quality Gains
One major benefit for firms is the acceleration of document-heavy processes: audits, due diligence, risk assessments and strategy decks can be assembled much faster.
By scanning mass data sets consistently, AI can reduce human error, flag anomalies and help auditors or analysts focus on the most material issues rather than manual number‑crunching.
For consulting teams, AI-generated first drafts create a structured starting point, enabling staff to focus on refinement, insight and client-specific nuance.
The implication is that organisations can process more work with the same or fewer resources, while potentially improving accuracy and coverage in data-rich tasks.
Human Oversight and Skills Shifts
Despite the speed gains, firms still rely on human professionals to validate AI output, interpret findings and make final judgments.
Employees are being asked to develop new skills: prompt writing, critical evaluation of machine‑generated text, and an understanding of AI’s limitations and biases.
Junior roles, traditionally built around repetitive document work, are being reshaped; training now emphasizes higher‑order thinking earlier in careers, as AI takes over some entry‑level tasks.
This points to a reconfiguration of white-collar work, where human expertise is less about gathering and formatting information and more about contextual judgment, ethics and client relationships.
Strategic and Competitive Implications
Early adopters seek a structural edge by institutionalizing AI tools across business units rather than running isolated experiments.
Firms are reassessing their technology stacks, data governance and security practices to safely deploy AI at scale, particularly when client data and confidential information are involved.
There is an emerging gap between organizations that integrate AI deeply into workflows and those that remain cautious, raising the risk of competitive divergence in cost structures and service quality.
Over time, the ability to operationalize AI—rather than simply having access to the tools—could become a defining factor in market leadership.
Risks, Governance and the Road Ahead
Reliance on AI in audits and consulting raises questions about model transparency, accountability and regulatory compliance.
Firms must ensure that AI-assisted analyses meet professional and legal standards, documenting how tools are used and how conclusions are reached.
As clients become more aware of AI’s role in service delivery, expectations may rise for lower costs, faster turnaround and more data‑driven insights.
Overall, the article portrays AI not as a distant disruption but as an active force already reshaping how high-value professional work is done. The organisations that manage governance, training and integration effectively are likely to capture the bulk of the productivity and quality gains, while those that lag may find their traditional advantages quickly eroded.
Anthropic signs $200M deal to bring its LLMs to Snowflake’s customers
Techcrunch • Rebecca Szkutak • December 4, 2025
AI•Tech•Anthropic•Snowflake•EnterpriseAI
AI research lab Anthropic inked a $200 million deal with Snowflake to bring its AI models to Snowflake’s 12,600 customers.
The deal is a multi-year agreement that will see Anthropic’s Claude models, including Claude Sonnet 4.5 and Claude Opus 4.5, integrated into Snowflake’s AI and data platform. The partnership is described as a significant expansion of the companies’ existing relationship and is designed to help enterprises handle complex, multi-step analysis across sensitive data using Claude-powered agents.
Claude Sonnet 4.5 will power Snowflake Intelligence, Snowflake’s enterprise AI service. Snowflake customers will be able to use Anthropic’s models to run multimodal data analysis directly on data stored in Snowflake. They will also be able to build their own custom AI agents on top of these models.
The companies are positioning this as a joint go-to-market initiative focused on bringing AI agents to large enterprise customers. By combining Claude’s advanced reasoning capabilities with Snowflake’s governed data environment, the partnership aims to give organizations—especially those in regulated industries like financial services, healthcare and life sciences—a way to move AI projects from pilot to production with greater confidence.
Anthropic’s CEO Dario Amodei said enterprises have spent years building secure, trusted data environments and now want AI that can operate within those environments “without compromise.” He framed the deal as a way to bring Claude directly into Snowflake, where enterprise data already resides, and as an important step toward making frontier AI practically useful for businesses.
The partnership also reflects Anthropic’s broader strategy of prioritizing enterprise customers over individual users. In recent months, the company has signed several large enterprise deals, including partnerships with Deloitte to roll out its Claude chatbot across more than 500,000 employees and with IBM to embed its LLMs into IBM software products.
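For developers wondering what “Claude inside Snowflake” could look like, here is a hedged sketch using Snowflake's Cortex COMPLETE SQL function from Python. The model identifier, connection details, and table name are assumptions; what is actually exposed will depend on the rollout described above.

```python
# Hedged sketch: invoking an LLM over governed data via Snowflake Cortex.
# Credentials, model id, and table are placeholders, not real values.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",   # placeholders
)
cur = conn.cursor()
cur.execute("""
    SELECT SNOWFLAKE.CORTEX.COMPLETE(
        'claude-sonnet-4-5',   -- assumed model id for illustration
        'Summarize the three largest revenue anomalies in ANALYTICS.SALES'
    )
""")
print(cur.fetchone()[0])
```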
TPUv7: Google Takes a Swing at the King
Semianalysis • November 28, 2025
AI•Tech•GoogleTPUv7
Google’s TPUv7 push and Anthropic’s 1GW+ deal
The article argues that Google’s TPUv7 “Ironwood” marks a serious challenge to Nvidia’s AI dominance by combining technical efficiency, aggressive economics, and vertical integration. Anthropic has committed to over 1 gigawatt of TPU capacity—translating into roughly a million TPUv7 chips—via a mix of direct hardware purchase and cloud rental. Around 400,000 of the latest TPUs will be sold directly (manufactured by Broadcom) in a deal worth about $10 billion, while another ~600,000 will be provided through Google Cloud with roughly $42 billion of remaining performance obligations attached. This demonstrates that Google is no longer keeping its best accelerators for internal use and is willing to weaponize TPUs as a commercial platform for top AI labs such as Anthropic, Meta, SSI, xAI and possibly OpenAI.
Cost advantage and TCO economics versus Nvidia
A central theme is total cost of ownership rather than raw performance. SemiAnalysis modeling suggests TPUv7 servers have roughly a 44% lower TCO than Nvidia’s GB200 systems from Google’s internal vantage point. Even after Google and Broadcom take profit, Anthropic’s effective TCO using TPUs via GCP is estimated to be about 30% lower than buying GB200s outright. This advantage stems not just from chip pricing but from Google’s “financial engineering”: Google provides off–balance sheet credit support that connects datacenter operators (including ex‑crypto miners with cheap power and space) to AI demand, effectively lowering financing costs for TPU infrastructure. Because Ironwood’s TCO is so low, Google may only need about 15% model FLOPs utilization to match Nvidia’s margins; if frontier labs like Anthropic reach ~40% utilization, their effective costs could be cut in half compared with Nvidia GPU deployments.
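The TCO framing reduces to simple arithmetic. The normalized numbers below are illustrations taken from the article's claims, not vendor pricing:

```python
# Back-of-envelope sketch of the cost-per-useful-compute framing above.
def cost_per_useful_flop(normalized_tco: float, mfu: float) -> float:
    # Effective cost rises with total cost of ownership and falls with
    # model FLOPs utilization (MFU).
    return normalized_tco / mfu

gb200 = cost_per_useful_flop(1.00, mfu=0.30)      # Nvidia baseline (normalized)
tpuv7 = cost_per_useful_flop(0.56, mfu=0.30)      # ~44% lower TCO, same MFU
tpuv7_hot = cost_per_useful_flop(0.56, mfu=0.40)  # frontier-lab utilization
print(f"{tpuv7 / gb200:.2f}x Nvidia cost at equal MFU, "
      f"{tpuv7_hot / gb200:.2f}x at 40% MFU")
```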
CUDA moat under pressure: software and ecosystem
Historically, Nvidia’s real moat has been CUDA and its surrounding ecosystem. The article argues that this moat is now at risk as Google invests heavily in software to make TPUs feel GPU‑like. Google is moving toward native PyTorch support on TPUs, removing the friction of going through XLA, and integrating with popular open‑source inference stacks such as vLLM. The strategy is to let developers keep their familiar tools while simply swapping underlying hardware to TPUs. However, Google is criticized for still keeping key parts of XLA proprietary; fully open‑sourcing the stack is described as the potential “kill shot” against CUDA, enabling community‑driven optimizations and broader adoption.
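Concretely, the “keep your tools, swap the hardware” pitch looks something like the following, assuming the existing torch_xla bridge; a minimal sketch, not Google's roadmap code:

```python
# Running an ordinary PyTorch module on a TPU via torch_xla: device
# placement is the main change from GPU code.
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()                  # resolves to a TPU core when present
model = nn.Linear(1024, 1024).to(device)  # unchanged PyTorch module
x = torch.randn(8, 1024, device=device)
y = model(x)
xm.mark_step()                            # materialize the lazily traced XLA graph
print(y.shape)
```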
Full‑stack and supply‑chain strategy: TPUv8 vs Nvidia Vera Rubin
Looking beyond TPUv7, Google is splitting next‑generation TPUv8 into at least two variants: one co‑developed with Broadcom (Sunfish) and one with MediaTek (Zebrafish), as part of a broader effort to avoid over‑dependence on any single silicon partner and to manage costs. Analysts see this dual‑supplier approach as strategically prudent but technically conservative: TPUv8 may skip TSMC’s cutting‑edge 2 nm nodes and HBM4, while Nvidia’s next architecture, Vera Rubin, is expected around 2026–2027 with HBM4 and more aggressive interconnects. If Nvidia executes well on Rubin, TPUv8 risks losing the current cost‑performance edge. The report frames today’s TPUv7 advantage as a window of opportunity rather than a permanent structural win.
Market impact and strategic implications
The article situates TPUv7 in a broader competitive shift. Nvidia’s share price has already reacted negatively to reports that Meta may shift billions of dollars of AI capex toward Google TPUs in its own datacenters, and Google executives say TPU demand now exceeds supply, with even 7–8‑year‑old generations at 100% utilization. If more frontier labs and hyperscalers adopt TPUs at scale, Nvidia’s “circular economy” strategy—investing in startups that then buy Nvidia GPUs—faces pressure, because every TPU cluster deployed reduces future GPU capex. At the same time, Google’s vertical integration—from chips and datacenters up through models like Gemini and services in Google Cloud—gives it powerful levers to price below standalone chip vendors while still earning attractive returns.
Key takeaways
TPUv7 Ironwood gives Google a substantial TCO advantage over Nvidia GB200‑class systems, especially when combined with creative financing and cloud commitments.
Anthropic’s 1GW+ TPU deal signals that leading labs are willing to bet their frontier workloads on TPUs rather than Nvidia GPUs.
Google is attacking Nvidia’s CUDA moat by deeply supporting PyTorch and vLLM on TPUs, though partial closed‑source XLA remains a bottleneck to full community buy‑in.
Next‑gen competition will pit TPUv8AX/v8X (Sunfish/Zebrafish) against Nvidia’s Vera Rubin; Google’s more conservative design choices could erode its current edge if Rubin ships as advertised.
The outcome of this TPU–GPU battle will strongly influence AI infrastructure costs, cloud market shares, and which companies can afford to train and serve the largest frontier models.
How OpenAI Builds for 800 Million Weekly Users: Model Specialization and Fine-Tuning
Youtube • a16z • November 28, 2025
AI•Tech•ModelSpecialization•FineTuning•Scalability
Overview
The content presents a video discussion focused on how a large AI company approaches building products and infrastructure to serve hundreds of millions of weekly users.
The central theme is scaling large language models while maintaining quality, speed, and reliability, and using model specialization and fine-tuning to meet diverse user and enterprise needs.
It emphasizes the tension between a single powerful general-purpose model and an ecosystem of more specialized or fine-tuned variants optimized for particular tasks, domains, or cost/latency constraints.
Model Specialization vs. General-Purpose Systems
A core idea is that a foundational, general-purpose model is extremely capable, but not always the most efficient or cost-effective solution for every use case.
Specialization and fine-tuning are described as ways to:
Improve performance on specific domains or workflows.
Reduce latency and infrastructure load for high-volume, repetitive tasks.
Meet enterprise requirements for reliability, predictability, and governance.
The discussion contrasts “one big model for everything” with a layered architecture where:
A frontier model handles complex reasoning and open-ended queries.
Smaller or specialized models are used for constrained tasks such as classification, routing, or structured extraction.
Fine-Tuning and Customization
Fine-tuning is highlighted as a key method for aligning the AI system with:
Company-specific data, tone, and policies.
Industry-specific knowledge (e.g., finance, healthcare, legal, support).
Repetitive workflows that benefit from strong task priors.
The video underscores that fine-tuning:
Can significantly reduce prompt length and thus latency and cost.
Enables better consistency and adherence to brand or compliance constraints.
Makes it possible to capture tacit knowledge from high-performing teams and scale it across an organization.
There is an emphasis on tooling and APIs that let developers and enterprises manage, deploy, and evaluate multiple fine-tuned variants.
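As one concrete reference point, the public OpenAI Python SDK already exposes this kind of workflow. The training file and base-model id below are illustrative placeholders, not details from the video:

```python
# Fine-tuning workflow sketch with the OpenAI Python SDK.
# File name and base model are assumptions for illustration.
from openai import OpenAI

client = OpenAI()

# Upload JSONL training examples (prompt/response pairs).
training = client.files.create(file=open("support_chats.jsonl", "rb"),
                               purpose="fine-tune")

# Kick off a fine-tuning job against a smaller base model.
job = client.fine_tuning.jobs.create(
    training_file=training.id,
    model="gpt-4o-mini-2024-07-18",   # assumed base model
)
print(job.id, job.status)
```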
Operating at Massive Scale
Serving on the order of hundreds of millions of weekly users is framed as an infrastructure and product challenge as much as a research one.
Key scale considerations include:
Latency: ensuring fast responses despite heavy global load.
Reliability: graceful degradation, routing to alternative models, and robust monitoring.
Cost-efficiency: dynamically choosing between larger and smaller models, or between general and specialized ones, based on request type.
The video suggests a layered routing strategy (sketched in code after this list) where incoming requests may:
Be analyzed by lightweight models to determine intent.
Be dispatched to different specialized/fine-tuned models or to the most capable general model when necessary.
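Here is that sketch: a toy router in which a cheap heuristic stands in for the lightweight intent model. The tier names are hypothetical, not real model ids.

```python
# Layered routing sketch: classify intent cheaply, send only hard queries
# to the frontier model. Route names and the heuristic are placeholders.
ROUTES = {
    "structured": "small-extraction-model",
    "chat": "mid-tier-model",
    "reasoning": "frontier-model",
}

def classify_intent(prompt: str) -> str:
    p = prompt.lower()
    if any(k in p for k in ("extract", "parse", "classify", "json")):
        return "structured"
    if len(p) > 500 or "step by step" in p:
        return "reasoning"
    return "chat"

def route(prompt: str) -> str:
    return ROUTES[classify_intent(prompt)]

print(route("Extract the invoice fields as JSON"))        # small-extraction-model
print(route("Walk me through this proof step by step"))   # frontier-model
```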
Implications for Developers and Enterprises
For developers, the message is that building high-quality AI products means:
Starting with a strong base model.
Identifying high-value, repetitive use cases where fine-tuning or specialization yields outsized gains.
Continuously measuring quality, latency, and cost, and iterating across multiple model variants.
For enterprises, the approach enables:
Domain-specific copilots, agents, and automation systems.
Better control over outputs, aligned with legal, security, and brand requirements.
A path from experimentation with a general model to production systems built on robust, specialized components.
The broader implication is that the future AI stack will likely blend powerful foundation models with a growing constellation of specialized, fine-tuned models, orchestrated intelligently to deliver both scale and quality.
Key Takeaways
A single frontier model is necessary but not sufficient to serve extremely diverse, large-scale workloads.
Model specialization and fine-tuning are central tools for achieving better quality, lower latency, and improved cost profiles in real products.
Intelligent routing and orchestration across a family of models become increasingly important as usage grows.
This architecture positions AI platforms to support everything from casual consumer use to highly regulated, mission-critical enterprise applications.
OpenAI partners amass $100bn debt pile to fund its ambitions
Ft • November 27, 2025
AI•Funding•OpenAI•CloudComputing•TechFinance
Overview
The piece examines how a large, fast-growing but loss-making AI start-up has become the financial linchpin for many of the world’s biggest cloud providers and technology investors. These partners have collectively built up a debt and funding exposure of around $100bn that ultimately depends on the start-up’s long‑term commercial success. The article highlights a striking imbalance: while the company provides cutting-edge AI models that drive demand for cloud infrastructure, its own profitability is uncertain, yet it is expected to repay or justify an enormous pile of loans, prepayments, and capital commitments.
Structure of the $100bn Exposure
The debt pile is composed of:
Direct loans and credit facilities extended to the start-up.
Massive prepayments and long-term purchase commitments for AI compute and services.
Convertible instruments and structured deals that blur the line between debt and equity.
Major cloud providers are central:
They invest in and lend to the start-up.
They also commit to buying its AI services, effectively guaranteeing it revenue while tying their own infrastructure demand to its success.
Developers and smaller partners are indirectly exposed:
They build products and services on top of the start-up’s models and APIs.
Their own revenue forecasts presume the start-up’s continued ability to innovate and cut inference costs over time.
Risk Concentration and Business Model Tension
The article underscores a fundamental mismatch:
Training and serving frontier AI models is intensely capital-intensive.
Margins are still uncertain, especially as competition from rival labs and open-source models grows.
Key tensions include:
Cloud partners want sustained, predictable compute demand to justify their data-centre build‑out.
The AI start-up must ultimately move from subsidised growth to sustainable unit economics, even as it spends heavily on GPUs, data centres, and research talent.
If model performance gains slow, or customers push back on high pricing, the path to repaying or justifying $100bn of obligations becomes much riskier.
Strategic Motives of Cloud and Tech Partners
Cloud giants are not just passive lenders:
They are using capital and credit to secure exclusivity or preferential access to the start-up’s most advanced models.
Their aim is to lock in AI workloads to their infrastructure clouds for years to come.
For large tech investors, the bet is asymmetric:
A dominant AI platform could deliver extraordinary long‑term returns and strategic data advantages.
But concentration in one loss‑making company raises questions about systemic tech-sector risk if this flagship bet falters.
Developers accept dependence because:
The start-up’s models offer rapid time-to-market and strong performance.
Building their own models or switching providers would be expensive and technically challenging.
Broader Market and Systemic Implications
The article suggests this structure resembles earlier periods of tech exuberance:
Heavy leverage and forward commitments are being justified on the assumption of sustained exponential AI demand.
Any slowdown in adoption, regulatory intervention, or major safety incident could sharply reassess valuations and creditworthiness.
Potential impacts:
Cloud and chip capital expenditure plans might have to be recalibrated if the AI start-up’s growth underperforms expectations.
A disorderly adjustment could reverberate through credit markets, affecting other tech start-ups that rely on similar funding structures.
At the same time, if the bet pays off:
The AI start-up could mature into a central platform layer of the digital economy.
Its partners would have locked in the key infrastructure and distribution rails of the AI era, justifying the massive upfront risk.
Key Takeaways
A single AI start-up has become the focal point of roughly $100bn in loans, commitments, and structured financing.
Major cloud providers and developers are deeply reliant on its ability to turn powerful but expensive AI models into a sustainable, profitable business.
The arrangement concentrates both strategic upside and systemic financial risk in one high‑growth, loss‑making entity, making its execution and governance critically important for the broader tech ecosystem.
Why Sam Altman Declared ‘Code Red’ at OpenAI
Nymag • John Herrman • December 2, 2025
AI•Tech•OpenAI•Competition•Google
In the past few weeks, the most talked about start-up in the world has been emitting some alarming signals. OpenAI should expect some “rough vibes” in the coming months, CEO Sam Altman said in an internal memo. More recently, in another memo, he told employees he was declaring a “code red” effort within the company to improve its most popular product, ChatGPT, and focus work on underlying models. Once the comfortable leader not just in market share but in general AI capability, the company is now considering rushing out an incremental update to stay competitive with Google, Anthropic, and even xAI. Not even six months ago, Altman was blogging about the “gentle singularity,” claiming that we are “past the event horizon,” that “the takeoff has started,” and that humanity “is close to building digital superintelligence.” What happened since then?
We’ll start with the big one: Google. Altman’s “vibes” memo was a direct response to the release of Google’s Gemini 3 model, which bested OpenAI’s newest models in a range of important benchmarks. At the same time, Google’s new image generator, which is likewise more capable than anything else on the market, has driven actual user growth for the company, which now claims more than 650 million monthly users (though Google’s various attempts to build Gemini into its existing products mean that number should be taken with a grain of salt).
With the additional news that Google’s in-house chip-building efforts seem to be going well, a late-2025 snapshot of the AI race probably doesn’t have OpenAI in the lead. Since the release of ChatGPT, two facts provided OpenAI with momentum and mystique: Its core product actually had a bunch of users, unlike any of its competitors, and its models seemed to be a generation in front of everyone else’s. Today, neither is quite true. In addition to Google’s gains, multiple third-party analytics companies are seeing a slump in ChatGPT usage, so OpenAI’s narrative of inevitability — a load-bearing corporate story if ever there was one — is falling apart.
But there are other factors, too, all of them at least temporarily punishing for the avatar of the AI boom. One is that, while Google’s newest models represent the state of the art, the leading labs — including Anthropic and xAI — seem to be herding fairly closely to one another, trading leads in benchmarks that tend to evaporate within a few months. Whether you take this as a sign of continued scaling and progress or as evidence of a plateau, it leaves OpenAI with more competition than it had two years ago, and that’s before you even mention the rise of open-source models, many from China, which are cheaper to use, highly customizable, and, according to NBC, are getting powerful enough for deployment by plenty of would-be OpenAI clients.
Open-source models have been catching up with frontier models for years, but only recently have they started benchmarking competitively. This week, Chinese AI start-up DeepSeek, whose unusually efficient model briefly sent American markets into chaos early this year, released an update that it says is competitive with the latest from Google and OpenAI despite training on far less capable hardware. A month ago, Moonshot AI, another Chinese start-up, made similar claims about its own models, which have since been validated independently.
Setting aside broader questions of an AI bubble or whether, as departed OpenAI co-founder Ilya Sutskever said a week ago, the era of dramatic LLM “scaling” is over, this adds up to a simple problem for OpenAI: It’s at risk of becoming just another company. Like AI tools themselves, some AI firms are enchanting, inspiring a sense of faith and wonder among investors and the general public that allows them to, say, burn $12 billion a quarter while punting the question of profitability into the 2030s. If you’re Google, a company with a wildly profitable core business and a number of clear options for monetizing LLMs as they exist today, a normalized AI narrative may represent a temporary setback or a ding to your stock price. If you’re OpenAI, which is tied up in hundreds of billions of speculative, contingent, and increasingly circular deals, “rough vibes” could compound into something much worse.
Media
Netflix’s WBD deal swaps history for fantasy, with a dose of high drama
Ft • December 5, 2025
Media•Film•Streaming Wars•Netflix•Mergers And Acquisitions
The piece focuses on a newly announced acquisition that assigns an $83bn valuation, including debt, to a major studio and its associated streaming businesses. The central theme is how this deal marks a decisive shift in the media landscape: a legacy of historical, theatrical, and cable-driven entertainment is being folded into a streaming-first, algorithm-driven future dominated by Netflix. The article frames the transaction as a swap of “history for fantasy,” suggesting that deep studio archives and traditional Hollywood structures are being monetised and reinterpreted through Netflix’s global distribution and data-centric business model, with high financial and strategic drama surrounding the price and timing of the move.
Deal Structure and Valuation
The acquisition, announced on a Friday, puts an $83bn price tag on the company’s studio and streaming assets, explicitly noted as including debt.
This valuation underscores how capital markets are now willing to ascribe significant value to libraries of intellectual property (IP) and streaming platforms, even amid industry scepticism about profitability.
The focus on “studio and streaming businesses” highlights that linear TV and other legacy elements are either de-emphasised or carved out, reinforcing the industry perception that streaming and IP libraries are where future value lies.
Strategic Rationale for Netflix
For Netflix, the deal represents an acceleration of its strategy to become not just a streamer but one of the world’s dominant content owners, with a deep catalogue of films and series that can be endlessly repackaged for global audiences.
Access to a vast archive allows Netflix to reinforce subscriber retention, reduce dependence on third‑party licensing, and build new franchises from existing brands, characters, and story universes.
The acquisition also shores up Netflix’s competitive position against other media conglomerates that have been leveraging their own studios and libraries to support rival platforms.
Implications for the Legacy Studio and Its Streaming Operations
For the studio and its in‑house streaming businesses, being acquired at such a valuation represents both validation and capitulation: validation that its IP and brands are extremely valuable, but capitulation that it could not, on its own, fully compete at global scale in the streaming wars.
The deal shifts strategic control over the studio’s creative output and distribution windows to Netflix, raising questions about what happens to theatrical release patterns, cable partnerships, and traditional syndication models.
The inclusion of debt in the $83bn figure reflects how heavily leveraged many traditional media groups have become after years of mergers, spin‑offs, and streaming bets; this acquisition effectively refinances that history into Netflix’s balance sheet and long‑term growth story.
Broader Industry and Competitive Impact
The acquisition intensifies consolidation in digital media, where a small group of global platforms now control not just distribution but also huge portions of the world’s filmed entertainment libraries.
Rivals must reassess their own strategies: some may consider selling or merging their studios and services rather than continuing to burn cash trying to match Netflix’s scale; others may seek tighter licensing alliances to maintain relevance.
The deal may also pressure regulators to scrutinise ownership of content libraries and streaming platforms, especially where market power over key franchises or genres becomes concentrated.
Cultural and Creative Consequences
By placing a historic studio’s output under the umbrella of a data-driven streamer, the acquisition embodies a cultural shift from cinema- and cable‑first storytelling to content engineered for binge‑watching, global reach, and algorithmic discovery.
Classic films and series could find new life as reboots or spin‑offs optimised for Netflix’s recommendation systems, but there is also concern that niche or less “performant” works might be deprioritised.
The narrative of “swapping history for fantasy” captures both the potential for creative reinvention and the fear that financial metrics will override curatorial and artistic considerations long associated with traditional studios.
Key Takeaways
A landmark $83bn, debt‑inclusive deal crystallises the market’s belief that streaming platforms and IP libraries are the core assets of modern media.
Netflix gains substantial leverage through ownership of a large studio catalogue and streaming infrastructure, reinforcing its dominance.
The legacy studio trades independence for scale and financial relief, reflecting the pressure of the streaming wars and high debt burdens.
The acquisition accelerates consolidation, raises regulatory questions, and deepens the shift from traditional Hollywood models to platform‑centric, algorithmically shaped entertainment.
Apple’s Succession Intrigue Isn’t Strange at All
The Information • John Gruber • December 5, 2025
Media•Publishing•Apple•CEOSuccession•Leadership
Aaron Tilley and Wayne Ma, in a piece headlined “Why Silicon Valley is Buzzing About Apple CEO Succession” at the paywalled-up-the-wazoo The Information:
Prediction site Polymarket places Ternus’ odds of getting the job at nearly 55%, ahead of other current Apple executives such as software head Craig Federighi, Chief Operating Officer Sabih Khan and marketing head Greg Joswiak. But some people close to Apple don’t believe Ternus is ready to take on such a high-profile role, and that could make a succession announcement unlikely anytime soon, said people familiar with the company.
Nothing in the rest of the article backs up that “some people close to Apple don’t believe Ternus is ready” claim, other than this, several paragraphs later:
And while his fans believe Ternus has the temperament to be CEO, many of them say he isn’t a charismatic leader in the mold of a Jobs. He has also had little involvement in the geopolitical and government affairs issues that dominate most of Cook’s time these days. On a recent trip to China, for example, Apple’s new COO, Sabih Khan, accompanied Cook to some of his meetings.
No one else in the history of the industry, let alone the company, has the charisma of Steve Jobs. And I think Polymarket not only has the shortlist of candidates right but also has them listed in the right order. Sabih Khan probably should be considered an outside-chance maybe, but the fact that he accompanied Cook to China doesn’t make me think, for a second, that it’s in preparation to name him CEO. If Khan were being groomed to become CEO, he’d have started appearing in keynotes already. It’s silly to slag Ternus for not having the charisma of Steve Jobs, when Ternus has been a strong presence in keynotes since 2018, and in the same paragraph to suggest Khan as a better option, when Khan has never once appeared in a keynote or public appearance representing Apple.
Some former Apple executives hope a dark-horse candidate emerges. For example, Tony Fadell, a former Apple hardware executive who coinvented [sic] the iPod, has told associates recently that he would be open to replacing Cook as CEO, according to people who have heard his remarks. (Other people close to Apple consider Fadell an unlikely candidate, in part because he was a polarizing figure when he worked at the company. Fadell left Apple in 2010.)
The parenthetical undersells the unlikelihood of Fadell returning to Apple, ever, in any role, let alone the borderline insanity of suggesting he’d come back as Cook’s successor.
It has become one of the strangest succession spectacles in tech. Typically, the kind of buzz that is swirling around Cook occurs when companies are performing badly or a CEO has dropped hints that they’re getting ready to hang up their spurs. Neither applies in Cook’s case, though.
There’s nothing strange about it. Apple has a unique company culture, but so too do its peers, like Microsoft, Amazon, and Google. And just like at those companies, it’s a certainty that Cook’s replacement will come from within the company. Polymarket doesn’t even list anyone other than Ternus, Federighi, Joswiak, and Khan.
As for hints, there’s no need for one beyond the fact that Cook is now 65 years old and has been in the job since 2011.
Regulation
Google Must Limit Default Contracts to One Year, Judge Rules
Bloomberg • Leah Nylen, Josh Sisco • December 5, 2025
Regulation•USA•Google•SearchEngines•AIApps
Overview of the Ruling
A federal judge has ordered Alphabet Inc.’s Google to significantly change how it structures deals that make its search engine or artificial intelligence applications the default on smartphones and other consumer devices. Under the ruling, any such “default” contracts must be renegotiated at least once every year rather than remaining in place for long, multi‑year periods. This decision directly targets Google’s ability to lock in users and maintain its dominance in search and AI-powered services through long‑term default placement agreements with device manufacturers and other partners.
Scope of Contracts Affected
The ruling applies to contracts that set Google’s search engine as the default option on smartphones and potentially other connected devices.
It also extends to agreements that designate a Google AI app—such as an assistant or AI chatbot interface—as the default application users encounter.
By forcing annual renegotiation, the judge is effectively shortening the period during which Google can rely on a single contract to guarantee prime placement of its services.
Impact on Market Dynamics and Competitors
Annual renegotiation could open recurring windows of opportunity for rival search engines and AI providers to bid for default status on devices.
Device makers and operating system partners may gain greater leverage to negotiate better financial terms or more flexible arrangements, because they are no longer locked into long, stable Google deals.
This may reduce the “stickiness” of Google’s default position, potentially lowering barriers to entry for competitors and making it easier for alternative search or AI apps to gain distribution.
Implications for Google’s Business Model
Google’s long‑standing strategy has relied heavily on securing its services as the default option, driving vast user traffic and advertising revenue.
The requirement to renegotiate yearly introduces operational and financial uncertainty, as Google must repeatedly justify its default status to partners.
It may also increase Google’s costs, as more frequent negotiations could lead to higher revenue-sharing demands from device makers and carriers.
Regulatory and Legal Significance
The judge’s order reflects ongoing regulatory scrutiny of how large technology companies use default placement and preinstallation agreements to entrench market power.
While the article does not detail broader remedies, the one‑year limit on contracts suggests the court is seeking structural changes that encourage ongoing competition rather than imposing only one‑time penalties.
The decision could serve as a model for future regulatory or judicial actions targeting similar practices in digital markets, including app stores, browsers, and AI tools.
Broader Consumer and Industry Effects
For consumers, more frequent renegotiations may eventually translate into a more varied choice of default search and AI providers across different devices and brands.
However, the practical impact will depend on how assertively device makers use their newfound leverage and whether competing providers can offer compelling alternatives to Google.
In the evolving AI landscape, recurring contract renewals may accelerate experimentation with new AI apps and interfaces on devices, potentially leading to faster innovation but also more fragmentation in user experience.
Key Takeaways
Google is now required to renegotiate any contract that sets its search engine or AI app as the default at least once every year.
The ruling aims to curb the long‑term locking in of defaults that can reinforce Google’s market dominance.
Competitors may gain more frequent opportunities to vie for default positions, while device makers gain bargaining power.
The decision underscores a regulatory shift toward scrutinizing defaults and contractual structures as central to competition in digital and AI markets.
Silicon Valley’s Man in the White House Is Benefiting Himself and His Friends
NYTimes • November 30, 2025
Regulation•USA•AI•Crypto•ConflictOfInterest
Overview of the Central Argument
The article examines how David Sacks, serving as the Trump administration’s “A.I. and crypto czar,” has used his government role to shape federal policy in ways that benefit both his Silicon Valley allies and his own portfolio of technology investments. It portrays a convergence of public power and private gain, arguing that Sacks’s influence over artificial intelligence and cryptocurrency regulation is closely aligned with the interests of a tight network of tech investors, founders, and venture capitalists with whom he has longstanding financial and personal ties. The piece raises questions about conflicts of interest, ethics, and regulatory capture in two of the most consequential emerging technology sectors.
Position, Powers, and Policy Reach
Sacks is described as a central architect of the administration’s positions on regulation, funding, and national strategy around both A.I. and crypto.
His remit includes shaping federal research priorities, standards for safety and transparency, and enforcement direction for agencies overseeing digital assets.
The article emphasizes that decisions made in these domains can directly influence the valuation of companies in which Sacks and his close partners have significant stakes, including A.I. startups, infrastructure providers, and crypto platforms.
Overlap with Personal Investments and Silicon Valley Allies
The article highlights multiple instances where Sacks’s policy stances track closely with the lobbying goals of particular Silicon Valley firms and funds he is connected to.
Policies pushing for lighter-touch oversight on certain A.I. applications and more permissive treatment of crypto products are shown to align with the strategic interests of companies backed by his network.
The narrative suggests a pattern: when regulatory design choices could swing value between incumbents and upstarts, or between centralized and decentralized models, Sacks tends to support the options that would advantage his circle’s positions.
His role in convening closed-door meetings and advisory councils is presented as a way to give his allies privileged access to federal decision-making, compared with smaller competitors or consumer advocates.
Mechanisms of Influence and Potential Conflicts of Interest
The article points to Sacks’s ability to frame the debate around “innovation vs. regulation,” often warning that stringent rules would drive A.I. and crypto innovation offshore.
This framing is said to marginalize voices focused on consumer protection, labor impacts, systemic risk, and civil rights, while elevating the priorities of venture-backed firms seeking rapid scaling.
The piece raises concerns about the adequacy and transparency of ethics reviews: disclosure forms may technically list holdings, but they do not fully resolve questions about whether recusal or divestment is warranted.
It also explores the blurred line between informal advice and formal policy, noting that Sacks’s conversations with former co-investors and founders can function as de facto lobbying without typical registration or public record.
Broader Implications for Governance, Markets, and Public Trust
The article argues that when a powerful policymaker in fast-moving sectors like A.I. and crypto has deep, ongoing ties to industry players, the risk of regulatory capture intensifies.
This dynamic could tilt federal frameworks toward short-term profits and speculative growth rather than long-term resilience, safety, and fair competition.
It warns that if the public perceives A.I. and crypto rules as written “by and for” a small, wealthy elite, trust in both technologies and institutions may erode, fueling populist backlash and political volatility.
At the same time, the piece suggests that the outcome of this arrangement could shape global standards: U.S. policy choices around openness, interoperability, and enforcement will influence how other countries regulate and which firms ultimately dominate worldwide.
Key Takeaways and Conclusion
The core thesis is that Sacks’s dual role as a high-level policymaker and a deeply embedded Silicon Valley investor creates structural incentives to prioritize the fortunes of his friends and himself.
The article portrays his influence as emblematic of a broader pattern in which tech-aligned political figures translate their networks into policymaking power, often with limited constraints.
It concludes that the stakes are especially high in A.I. and crypto because early regulatory architectures tend to be sticky; choices made now could entrench winners, set norms for safety and accountability, and determine who bears the downside risks of technological disruption.
Crypto
WIRTW: AI Manhattan Project
Chamath • Chamath Palihapitiya • November 30, 2025
Crypto•Bitcoin•Stablecoins•Tether•GoldReserves
What I Read This Week: a summary of the content that I consumed in the previous week
Caught My Eye
1) Genesis Mission: The AI Manhattan Project
On November 24th, the Genesis Mission was established by Executive Order.
The Genesis Mission is framed as a modern-day counterpart to the Manhattan Project, harnessing the Department of Energy’s 17 National Laboratories, industry, and academia to build an integrated discovery platform. The goal is to leverage this platform and advanced artificial intelligence to double U.S. research and development productivity within a decade.
They plan to reach this ambitious goal by opening the world’s largest collection of federal scientific datasets to train scientific foundation models, and by creating AI agents that test new hypotheses, automate research workflows, and accelerate scientific breakthroughs. The initiative also emphasizes strong public-private partnerships to integrate cutting-edge commercial AI infrastructure from leading tech companies.
The Genesis Mission targets three critical national challenges:
American energy dominance: Accelerate advanced nuclear, fusion, and grid modernization.
Advancing discovery science: Build the quantum ecosystem that will power discoveries and industries.
Ensuring national security: Advance AI technologies for national security missions, deploying systems to ensure the safety and reliability of the U.S. nuclear stockpile and accelerating the development of defense-ready materials.
2) Tether: Largest Independent Holder of Gold Reserves
In the past several months, Tether, long known as the issuer of the leading stablecoin USDT, has quietly become one of the largest private gold holders in the world. Fahad Tariq and Andrew Moss, equity analysts at Jefferies, recently published a report on Tether’s growing influence in the gold market. Jefferies quantifies Tether’s holdings at 116 tonnes (valued at roughly US$14 billion) and states its purchases comprised ~2% of global gold demand and ~12% of central bank buying.
That puts Tether’s gold position on the same order as the holdings of smaller central banks such as South Korea, Hungary, or Greece (each around 100–115 tonnes).
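As a quick sanity check on the Jefferies figures, a back-of-the-envelope conversion lines up — note that the ~US$3,750-per-troy-ounce spot price below is my assumption, not a number from the report:

$$
116\ \text{tonnes} \times 32{,}151\ \tfrac{\text{ozt}}{\text{tonne}} \approx 3.73\ \text{million ozt}; \qquad 3.73\ \text{M ozt} \times \$3{,}750/\text{ozt} \approx \$14.0\ \text{bn}
$$

That is consistent with the roughly US$14 billion valuation Jefferies cites for the 116-tonne position.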
A small portion of these holdings, about 12 tonnes, directly underpins the gold-backed token Tether Gold (XAUt). Jefferies suggests that Tether sees bullion as a structural asset: using profits and stablecoin revenue to build a long-term store of value, and even exploring investments in gold royalty and streaming companies, thus gaining exposure across the gold supply chain, not just to bars in vaults.
Interview of the Week
Two VCs, No Filter: The Naked Truth about Elon Musk and Sam Altman
Keen On • Andrew Keen • December 5, 2025
Venture•Interview of the Week
Silicon Valley veterans Dave McClure and Aman Verjee have been friends and business partners for 25 years — first at PayPal, then at 500 Startups, and now at Practical Venture Capital. Yet they have quite different styles, personalities and, above all, politics. What they share, however, is an unvarnished take on the world — especially on the much mythologized Silicon Valley.
In this refreshingly unfiltered conversation, they assess tech’s two most dominant titans: Sam Altman and Elon Musk. McClure describes Altman as someone he’d never want to face across a poker table — “there’s probably three layers of chess going on in his head.” Verjee breaks down the competitive psychology driving Musk as OpenAI’s valuation leapfrogs SpaceX.
Plus: Verjee makes sense of Google’s Gemini challenge to ChatGPT’s dominance, and McClure leaves us with one of his trademark blunt takes on Trump’s crypto conflicts.
Startup of the Week
Netflix to Buy Warner Bros. in $72 Billion Cash, Stock Deal | Bloomberg Tech 12/5/2025
Bloomberg • December 5, 2025
Media•Broadcasting•Streaming Consolidation•Platform Regulation•AI Infrastructure•Startup of the Week
Overview of Key Business and Tech Developments
The content covers three major business and technology stories: a landmark deal in media and streaming, a regulatory clash between the European Union and a major social media platform, and shifting expectations in the enterprise AI infrastructure market. Together, these stories highlight accelerating consolidation in entertainment, intensifying regulatory scrutiny of digital platforms, and a more volatile, demand-driven cycle for AI-related hardware investments.
Netflix–Warner Bros. Discovery Deal
Netflix has agreed to buy Warner Bros. Discovery following the latter’s planned spinoff of its traditional cable channels. The transaction structure suggests Warner Bros. will first separate its legacy linear TV assets, leaving a more streamlined collection of premium IP and direct-to-consumer businesses for Netflix to acquire.
The deal is valued at approximately $72 billion in a mix of cash and stock, indicating both the scale of Netflix’s ambition and the perceived long-term value of Warner Bros.’ film, TV, and streaming portfolio.
Strategically, this acquisition would give Netflix control over some of the most valuable entertainment franchises and libraries, deepening its catalog in films, premium series, and potentially sports and news content depending on what remains with the spun-off cable operation.
The move underscores a new phase of consolidation in streaming: rather than purely organic subscriber growth, large platforms are turning to mega-deals to secure must-have content, defend market share, and improve profitability through scale and cost synergies.
This transaction raises major regulatory and antitrust questions, as combining a leading global streaming platform with one of the largest legacy studios and content owners could reshape bargaining power with talent, distributors, and competitors.
EU Fine on X and Transatlantic Regulatory Tensions
The European Union has levied a $140 million fine on X, the social media platform owned by Elon Musk, triggering criticism from the company and from some US commentators who frame the move as overreach or politically motivated.
In response, the EU’s Ambassador to the US publicly defends the fine, positioning it as a straightforward enforcement of the bloc’s digital rules rather than a targeted action against a specific owner or viewpoint.
The ambassador’s remarks emphasize that large platforms operating in the EU must comply with its regulatory framework on issues such as content moderation, transparency, user protection, and competition, regardless of where the company is headquartered.
This exchange illustrates the growing rift—but also ongoing dialogue—between US-based tech firms and European regulators, highlighting how divergent legal regimes and political expectations are reshaping the global governance of social media.
The fine on X could serve as both a warning and a precedent for other platforms that fail to meet EU standards, potentially accelerating compliance investments and legal challenges across the industry.
HPE’s AI Server Outlook and Market Reality
Hewlett Packard Enterprise (HPE) CEO Antonio Neri discusses the company’s outlook following weaker-than-expected fourth-quarter results in AI server sales.
Despite intense hype around generative AI and the data-center buildout, HPE’s disappointing numbers suggest that enterprise AI infrastructure spending is uneven and heavily influenced by timing of large deals, customer readiness, and broader macroeconomic conditions.
Neri’s commentary likely focuses on how HPE is positioning itself for medium- to long-term growth: investing in AI-optimized hardware, partnerships with chipmakers and cloud providers, and integrated solutions that combine compute, storage, and networking for AI workloads.
The results underscore that capital-intensive AI infrastructure markets can be volatile quarter to quarter, even if the long-term trend remains upward. Investors and customers are being reminded that scaling AI in enterprises involves complex deployment cycles, regulatory considerations, and ROI scrutiny, not just enthusiasm.
Broader Implications
The proposed Netflix–Warner Bros. combination signals that the streaming wars are entering a consolidation phase where scale, IP ownership, and global distribution will decide winners and losers. Smaller or weaker players may face pressure to merge, license aggressively, or exit.
The EU’s fine on X confirms Europe’s willingness to use financial penalties to enforce digital policy, reinforcing the message that tech governance is increasingly fragmented across regions. This could result in more compliance complexity, localized product features, and potential conflicts over free speech, competition, and data rules.
HPE’s AI server challenges reveal that, beneath the surface of AI optimism, the buildout of infrastructure is lumpy and competitive, with potential shakeouts among hardware and systems vendors as customers consolidate around a few trusted partners.
Together, these developments paint a picture of a technology and media landscape defined by consolidation, regulation, and longer, bumpier investment cycles in AI-related infrastructure.
Post of the Week
Elon Musk: A Different Conversation w/ Nikhil Kamath | Full Episode | People by WTF Ep. 16
YouTube • Nikhil Kamath • November 30, 2025
Media•Social•Post of the Week
Overview
A long conversation with #ElonMusk about work, consciousness, family, money, AI and how the future might unfold.
No script, no performance, just two people thinking out loud.
A big thank you to Manoj Ladwa - a close friend of many years and a remarkable connector of India to the world and the world to India. Through India Global Forum, he has built one of the most influential platforms showcasing India’s rise. As I’ve said before, this is India’s decade, and leaders like Manoj Ladwa and @IndiaGlobalForum will be the flag bearers in making that a reality.
Timestamps :
00:00 – Settling in
02:08 – On X, text vs video, how people communicate
06:45 – Collective consciousness
09:54 – Meaning of life, Hitchhiker’s Guide to the Galaxy
14:16 – Individuals vs collectives
17:35 – What makes a company worth investing in
20:00 – Work Elon is most excited about across Tesla, SpaceX and xAI
23:35 – Starlink explained simply
29:45 – UHI, and “Working will be optional”: what that means
34:35 – Marshmallow test & delayed gratification
36:13 – The letter X
42:15 – Money, energy and the far future
46:13 – AI, US debt & what productivity unlocks
51:07 – Matrix, Simulation theory & probabilities
56:30 – Morality, religion & GTA
1:01:25 – Elon’s version of the simulation
1:03:17 – Elon’s Kids, Family structure & Nature vs. Nurture
1:12:33 – Should kids still go to college?
1:14:52 – How to regulate AI
1:20:08 – Language, history, and what remains timeless
1:23:22 – Movies vs podcasts
1:24:33 – Can AI understand human nuance?
1:26:01 – Where would Elon invest?
1:27:52 – David vs Goliath
1:30:03 – Humour, Friendship & Politics
1:37:00 – Politics, influence, and business
1:38:53 – Global trade, tariffs, Free markets
1:41:11 – The relationship between business and government
1:43:21 – DOGE
1:46:47 – Philanthropy
1:47:39 – H1B & Immigration Laws
1:51:17 – Advice for people building
1:52:00 – Value creation and hard work
1:53:26 – Closing thoughts & gratitude
#NikhilKamath Entrepreneur & Investor
Host of ‘WTF is’ & ‘People By WTF’ Podcast
X: https://x.com/nikhilkamathcio/
Instagram: / nikhilkamathcio
LinkedIn: https://www.linkedin.com/in/nikhilkam...
Facebook: / nikhilkamathcio
#elonmusk
X - https://x.com/elonmusk
A reminder for new readers. Each week, That Was The Week includes a collection of selected essays on critical issues in tech, startups, and venture capital.
I choose the articles based on their interest to me. The selections often include viewpoints I can't entirely agree with. I include them if they make me think or add to my knowledge. Click on the headline, the contents section link, or the ‘Read More’ link at the bottom of each piece to go to the original.
I express my point of view in the editorial and the weekly video.