Contents
Editorial: Human¹⁰⁰ - Why This Week Proves AI Is Our Greatest Invention, Not Our Replacement
Essay
2026
Media
Venture
AI boom transforming the venture capital, megacap investing landscape
Pat Grady & Alfred Lin on the Tactics of Great Venture Investing | Ep. 36
Are private valuations set for a correction? Henry Ward, CEO of Carta, on #capitalmarket trends
Jeff Bezos’s Project Prometheus Joins The Unicorn Board Alongside 18 Other Startups In November
Education
Regulation
AI
Disney CEO on $1 billion investment in OpenAI: ‘This is a good investment for the company’
Nvidia Wins US Approval to Sell H200 Chips to China | Bloomberg Tech 12/9/2025
SoftBank and Nvidia reportedly in talks to fund Skild AI at $14B, nearly tripling its value
OpenAI Unveils More Advanced Model as Race With Google Heats Up
A new product, a new customer, a new financing! Introducing Superpower
Interactions API: A unified foundation for models and agents
LeCun’s Alternative Future: A Gentle Guide to World-Model AI [Guest]
Foundation Model Consolidation Is No Longer a Forecast — It’s a Mechanical Outcome
The Rise of Neolabs: Where the Next AI Breakthroughs Will Come From & 11 AI Labs to follow
OpenAI says it’s turned off app suggestions that look like ads
China
Interview of the Week
Editorial:
Human¹⁰⁰ - Why This Week Proves AI Is Our Greatest Invention, Not Our Replacement
This week’s stories feel like a chaotic pile-up: Netflix swallowing Hollywood, Disney investing in OpenAI and licensing 200 characters for use by consumers, SpaceX eyeing a $1.5 trillion IPO, and Sam Altman declaring an enterprise ‘code red’ over lunch and then announcing ChatGPT 5.2.
It’s easy to see this as a handful of tech giants and venture capitalists vacuuming up the last scraps of independence in media, capital, and intelligence. The prevailing narratives offer two bleak choices: the doomer view, which sees an alien, runaway technology that must be caged, and the diminishment view, which insists only ‘real’ humans can be creative, framing AI as a cheap, threatening imitation incapable of going beyond its training set of human-produced material.
Both are wrong, and this week’s material shows why. We are not witnessing the triumph of machines over humanity. We are witnessing Human¹⁰⁰—the amplification of human ambition, creativity, and capability to the power of 100. And all of it comes from a tool of our own making.
The connection between Ben Thompson’s ‘Hollywood End Game’ and a16z’s ‘Big Ideas 2026’ is not about tech eating culture; it’s about human systems being rebuilt for 100x scale and precision.
Netflix isn’t killing Hollywood; it’s applying a human-engineered model—global data, direct relationships, algorithmic curation—to a century of human storytelling. The result? As Thompson notes, IP is being revalued, not destroyed. The ‘end game’ is a more efficient, global pipeline for the stories we create. The fantastic Scandinavian dramas I watch are only possible because of this.
Similarly, the 1.21-gigawatt order for the ‘Superpower’ turbine isn’t an AI monster demanding sacrifice; it’s humans building unprecedented energy infrastructure to power the next phase of human computation and innovation. AI is ours, not a thing in itself.
This brings us to the week’s most revealing tension: the AI Value Gap.
OpenAI’s own data shows AI saves the average knowledge worker 54 minutes a day—worth about $7,282 per seat annually in recovered productivity. Yet tools like ChatGPT Plus capture only 3% of that value.
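The quoted figures hang together arithmetically. A rough sketch (the 250 working days per year and the $20/month ChatGPT Plus price are my assumptions, not from the article) shows the implied hourly value of the recovered time and how the ~3% capture rate falls out:

```python
# Back-of-the-envelope check of the "AI Value Gap" figures.
MINUTES_SAVED_PER_DAY = 54      # from OpenAI's data, per the article
ANNUAL_VALUE_PER_SEAT = 7_282   # USD, from the article
WORKING_DAYS_PER_YEAR = 250     # assumption
PLUS_MONTHLY_PRICE = 20         # assumption: ChatGPT Plus list price, USD

hours_saved_per_year = MINUTES_SAVED_PER_DAY / 60 * WORKING_DAYS_PER_YEAR
implied_hourly_value = ANNUAL_VALUE_PER_SEAT / hours_saved_per_year
capture_rate = PLUS_MONTHLY_PRICE * 12 / ANNUAL_VALUE_PER_SEAT

print(f"hours saved per year:  {hours_saved_per_year:.0f}")
print(f"implied hourly value:  ${implied_hourly_value:.2f}")
print(f"value captured by Plus: {capture_rate:.1%}")
```

Under these assumptions the tool captures roughly $240 of a ~$7,300 annual benefit, i.e. the ~3% the article cites.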
This is early innings. The value gap exists because we’re still learning to price augmentation.
As the Carta seed data shows, founders are building vertical applications—in classrooms from Iceland to Alabama, in enterprise workflows, in robotic ‘brains’ like Skild AI—that embed AI as a force multiplier.
The value will accrue not to some silicon overlord, but to the humans and companies that learn to wield it. Disney’s $1 billion bet on OpenAI isn’t capitulation; it’s a human institution betting its legendary creativity can be amplified, not replaced.
The doomer and diminisher views miss the story entirely.
They see the foundation-model consolidation—where capital intensity mechanically favors a few winners—as a loss of control. But look closer: the ‘Neolabs’ emerging, the open-weight models from China, the Nordic education partnerships. The frontier is expanding, not contracting. The ‘mechanical outcome’ is more platforms, not fewer minds, and better humans. Even more dramatic is the collective uplift of human potential our new toolset makes possible, and the gross wealth it will create.
So where does this leave us? The unresolved question isn’t ‘Can we control AI?’ but ‘Can we govern the abundance it creates?’
The concentration of VC capital into the top 10 deals, the potential valuation correction Henry Ward warns of, Trump’s ‘ONE RULE’ push for federal AI preemption—these are the real battles. They are fights over how to distribute the 100x gains of Human¹⁰⁰, not whether to prevent them. And the wealth-creation leaders, Elon, Sam, and all the others, need to understand that they are building not personal wealth but human uplift.
Looking ahead, we should watch three things: whether the productivity value gap closes through new business models, whether open ecosystems can keep pace with consolidated capital, and whether our policy frameworks can be designed for acceleration and universal benefit rather than fear.
The progress this week points to is not alien or dehumanizing. It is profoundly, exponentially human. Our task is not to slow it down or talk it small, but to ensure we’re all holding a piece of the amplifier. Human¹⁰⁰ needs to be the framing for an optimistic and determined view of the future.
Essay
America Must Prepare for the Future of War
Nytimes • December 8, 2025
GeoPolitics•Defence•US Military Reform•Future Of War•Cyber Warfare•Essay
Evolving Nature of Warfare
The central argument is that warfare has fundamentally shifted in form, speed and technological underpinnings, and that the existing U.S. military structure is not adequately designed for this new reality. Instead of traditional, large-scale, manpower-heavy conflicts, modern war is increasingly characterized by cyberoperations, autonomous and remotely piloted systems, space-based assets, information warfare and economic and infrastructure disruption. The piece contends that America risks strategic surprise and potential defeat if it continues to rely on legacy assumptions about how wars begin, unfold and are won. Reform is framed not as a marginal optimization but as an urgent redesign of how the United States organizes, equips, trains and commands its forces.
Key Features of the “Future of War”
War is becoming more networked and data-driven, with sensors, drones and satellites feeding real-time information into algorithmic decision-making systems.
Non-kinetic domains such as cyberspace, space, and the information environment play as large a role as land, sea, and air in shaping battlefield outcomes.
Adversaries can inflict major damage—on power grids, communications, financial systems or political stability—without crossing traditional thresholds of open armed attack.
Technology is lowering barriers to entry, enabling smaller states and even nonstate actors to deploy tools like drones, cyberweapons and precision-guided munitions that once required superpower-level resources.
These dynamics weaken the relevance of sheer troop numbers or traditional platform dominance (e.g., tanks, large surface ships) and elevate agility, resilience and technological integration as decisive factors.
Why U.S. Military Reform Is Necessary
The U.S. defense establishment still largely reflects Cold War and post–9/11 counterinsurgency paradigms, with budget priorities favoring big, expensive platforms and long procurement cycles.
Hierarchical command structures and bureaucratic acquisition processes slow down innovation, leaving the U.S. lagging behind the speed at which commercial technology evolves and adversaries adapt.
Training and doctrine remain oriented around conventional battles rather than distributed operations, contested information environments, and persistent cyber and space threats.
The editorial board argues that this mismatch between structure and threat environment creates vulnerabilities that adversaries like China, Russia, Iran or technologically capable nonstate actors can exploit.
Core Elements of Recommended Reform
Modernizing Capabilities
Shift resources from legacy systems to emerging technologies such as autonomous platforms, advanced cyberdefense and offense, AI-enabled analytics, resilient satellite constellations and counter-drone systems.
Invest in rapid, modular procurement that can integrate commercial innovations quickly rather than waiting for decade-long acquisition programs.
Reorganizing and Training for New Domains
Treat cyber, space and information warfare as core theaters of conflict, not supporting functions, with dedicated forces, doctrine and clear lines of authority.
Train service members to operate in highly contested, electronically degraded environments where GPS, communications and centralized command cannot be assumed.
Strengthening Civil-Military and Allied Integration
Coordinate more closely with private-sector technology firms that increasingly drive innovation in AI, cloud computing, communications and space systems.
Deepen cooperation with allies to share intelligence, integrate systems, and present a more coherent deterrent posture in critical regions.
Strategic and Political Implications
The argument carries several broader implications:
Deterrence now depends less on visible mass and more on credible, adaptive capabilities in unseen domains; adversaries must believe the U.S. can respond rapidly and asymmetrically to a wide range of provocations.
Democratic oversight and public understanding of war’s changing nature become more challenging as operations shift into opaque cyber and space arenas, raising questions about transparency, escalation risks and legal frameworks.
Budget debates will become sharper as policymakers confront trade-offs between maintaining existing forces and investing in new technologies and organizational changes that may be politically controversial but strategically necessary.
Conclusion and Call to Action
The piece concludes that the United States faces a choice between proactively reshaping its military for the emerging character of conflict or clinging to outdated structures that deliver a false sense of security. Reform is portrayed as urgent rather than optional: the future of war is already visible in ongoing conflicts and cyber incidents worldwide. By modernizing capabilities, reorganizing around new domains, and integrating technology and alliances more effectively, America can better deter adversaries, protect its infrastructure and values, and reduce the risk that the next major conflict catches it unprepared.
What’s Next After You Lose Someone’s Money
This is going to be big • Charlie O’Donnell • December 11, 2025
Essay•Venture
I recently got hit up for a backchannel reference on a founder I had backed. His company didn’t return anything to investors when it got sold, and I hadn’t heard from him after the sale—so I didn’t know about the new company.
It’s perfectly reasonable to feel a bit awkward after you’ve lost someone’s money, regardless of whether they’re an individual angel or a venture capital investor. Just because it isn’t technically a VC’s own money doesn’t make it any less of a black eye within their firm, right?
The follow-up after a loss might not be a conversation you’re excited to have—but it’s the best thing you can do for your reputation and your growth. Here’s how to have that conversation so these loose ends don’t come back to bite you.
What do I mean by that? Well, it’s a bit awkward to have to respond to a reference check with, “I haven’t heard from them, so I don’t know anything about this new company.” That’s going to make the new potential investor wonder whether you left on bad terms, or whether you had some reason to think I wouldn’t want to speak with them about you.
That’s the funny thing—most founders wouldn’t imagine I’d want to chat with them after they lost my fund’s money, but as long as they worked hard and did their best, why wouldn’t I? Every startup investor knows going in that the chances of success are going to be low. Do founders really think that VCs just have a broken relationship with the founders that don’t make a big return—which is most of them?
When an investor has been in the trenches with you, watching you fight tooth and nail to make something of their investment, you’ve gained a ton of respect in their eyes—more than you could ever lose through a negative financial outcome. The idea that they’d rather back a complete stranger than work with you again doesn’t square with how they invest. They asked their own investors to give them 30 or 40 shots on goal because they know the first one, two, three, or twenty might not work out.
The Global Distribution of Wealth, Shown in One Pyramid
Visualcapitalist • December 9, 2025
Essay•GeoPolitics•Wealth Inequality•Global Wealth Distribution•UBS Global Wealth Report
Key Takeaways
Just 1.6% of adults worldwide hold nearly 48% of global wealth.
Almost 3.1 billion adults, or 82% of the world’s adult population, control just 12.7% of total wealth.
The bottom wealth tier—adults in the $0–$10k wealth bracket—represents 1.55 billion people but only 0.6% of global wealth.
The world got richer in 2024, with global personal wealth growing by 4.6%. However, the distribution of that wealth remains uneven.
At the top of the global wealth pyramid sits a small elite holding nearly half of the world’s assets, while billions of people in lower tiers own only a sliver of global wealth.
This infographic uses data from UBS’ latest Global Wealth Report to break down the global wealth pyramid by number of people and the share and amount of wealth they hold.
The Data on Wealth Distribution
UBS segments the world’s 3.8 billion adults into four wealth tiers, ranging from those with less than $10,000 to those with more than $1 million, who lie at the top of the global wealth pyramid.
The table below shows how wealth is distributed globally between these four tiers of adults:

Wealth tier  | Adults       | Share of adults | Wealth  | Share of wealth
Over $1M     | 60 million   | 1.6%            | $226T   | ~48%
$100k–$1M    | 628 million  | 16.5%           | $184T   | 39.2%
$10k–$100k   | 1.57 billion | 41%             | $56.8T  | 12%
Under $10k   | 1.55 billion | 40.7%           | $2.7T   | 0.6%
At the apex of the pyramid, 60 million adults, who make up just 1.6% of the global population, own $226 trillion, or nearly half of all household wealth worldwide.
Beneath the apex, the world’s upper-middle tier (those with $100k–$1M in net worth) includes 628 million adults who collectively hold $184 trillion, representing 39.2% of global wealth.
The largest cohort of adults sits in the middle-lower band: 1.57 billion adults with $10k–$100k, holding a combined $56.8 trillion. Despite accounting for 41% of the world’s population, this cohort owns only 12% of global wealth.
At the base of the pyramid are 1.55 billion adults—40.7% of the population. Together, they hold $2.7 trillion, or 0.6% of global wealth.
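The tier figures above are internally consistent. A short sketch (tier populations taken from the article, expressed in millions of adults and trillions of dollars) reproduces the quoted population and wealth shares:

```python
# Sanity check of the UBS wealth-pyramid figures quoted above.
# Values from the article: (adults in millions, wealth in $ trillions).
tiers = {
    "over $1M":   (60,    226.0),
    "$100k-$1M":  (628,   184.0),
    "$10k-$100k": (1_570, 56.8),
    "under $10k": (1_550, 2.7),
}

total_adults = sum(a for a, _ in tiers.values())  # ~3,808M, i.e. ~3.8B
total_wealth = sum(w for _, w in tiers.values())  # ~$469.5T

for name, (adults, wealth) in tiers.items():
    print(f"{name:>11}: {adults / total_adults:5.1%} of adults, "
          f"{wealth / total_wealth:5.1%} of wealth")
```

The shares that fall out (1.6% of adults holding ~48% of wealth at the top; 40.7% holding 0.6% at the base) match the report's headline numbers.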
Breaking Down the Top of the Wealth Pyramid
Of the 60 million adults at the top of the global wealth pyramid, 2,891 individuals are billionaires, collectively holding over $15.6 trillion in wealth.
Of these, just 15 individuals own more than $100 billion in wealth, while another 16 individuals fall in the $50 billion to $100 billion wealth bracket. The remaining 2,860 billionaires have less than $50 billion in wealth.
Can We Stop Our Digital Selves From Becoming Who We Are?
Nytimes • December 7, 2025
Essay•Media•Attention Economy•Social Media•Digital Identity
How Attention Shapes the Self
The core argument is that what we choose to notice—online and offline—gradually builds who we are. The piece suggests that our “digital selves” are not separate masks but active forces that train our minds, emotions, and relationships. As we repeatedly attend to certain types of content, interactions, and platforms, we reinforce particular habits of thought and feeling, which then influence our offline identity. Rather than asking how to wall off a “real” self from a “digital” self, the article urges us to see attention as a limited, formative resource that must be directed with care.
The Mechanics of Digital Capture
Algorithms are designed to maximize engagement by learning what keeps us looking, not what makes us wise or fulfilled.
Over time, feeds learn our emotional triggers—anger, outrage, envy, or fear—and preferentially show us material that elicits them.
The more we respond to a certain kind of post (for example, political outrage or status comparison), the more the system shows us similar content, gradually narrowing our sense of what is normal or important.
This creates a feedback loop: our attention trains the algorithm, and the algorithm, in turn, trains our attention. The article stresses that this is not simply about “wasting time,” but about shaping our dispositions: whether we are patient or impulsive, generous or suspicious, curious or closed.
Digital Selves as Habit Machines
Each platform encourages a specific “micro‑self”: the witty poster, the hot‑take commentator, the aesthetic curator, the relentless networker.
Performing these roles repeatedly builds habits: we learn to think in tweet‑length sound bites, to scan experiences for their photo potential, or to judge news by how well it will perform socially.
These habits do not stay confined to the screen; they leak into everyday conversation, perception, and even memory, influencing how we interpret events and how we see other people.
The article argues that our digital personas are less like costumes and more like training regimens: what we rehearse, we become. Even if we feel detached or ironic about our online self, the repetition of behaviors still engrains patterns in us.
Responsibility and Structural Power
On one hand, individuals are urged to take responsibility for their attention: to notice what kinds of content leave them drained, anxious, or brittle, and to deliberately shift away from those patterns.
On the other hand, the piece emphasizes that this is not purely a matter of willpower: platform design, default settings, and opaque recommendation systems exert enormous influence.
Frictionless design—endless scroll, autoplay, persistent notifications—lowers the cost of surrendering attention and raises the cost of resisting.
Thus, the question “Can we stop our digital selves from becoming who we are?” becomes partly a political and regulatory question about what kinds of attention economies we permit, and partly an ethical question about what we practice daily.
Strategies for Reclaiming Attention
Deliberate constraints: time‑boxed use, device‑free spaces, or following fewer accounts to widen and slow the feed.
Reorientation: seeking content that deepens perspective—long‑form writing, diverse viewpoints, or creators who reward patience rather than outrage.
Reflective practices: noticing after a session how one feels—stressed, resentful, energized, inspired—and treating that as data about what to change.
The article frames these not as purity rituals but as ways of aligning our digital behaviors with the kinds of people we hope to become.
Implications for Identity and Society
Individually, the piece suggests that guarding attention is an act of self‑authorship: by deciding what deserves our sustained focus, we decide which parts of us grow. Socially, pockets of collective attention—subcultures, fandoms, political tribes—coalesce into shared realities that may be hard to bridge. If large groups spend most of their digital time in outrage‑optimized environments, institutions, public discourse, and trust itself are reshaped. The concluding message is not that we must abandon digital life, but that we must treat attention as the medium from which both our inner lives and our common world are made—and act accordingly.
Opinion | Is AI Making Us Dumb?
Wsj • Andy Kessler • December 7, 2025
Essay•Education•AI and Learning•Critical Thinking•Moral Panic
Central Argument
The piece argues that fears about artificial intelligence “making us dumb” are misplaced and function largely as a distraction from deeper, longstanding failures in the education system. Rather than seeing AI as a corrosive force on human intelligence, the article frames it as a tool—powerful, fallible, and value-neutral—that exposes gaps in how we teach people to think, write, and reason. Moral panic over new technology, the author suggests, has accompanied everything from calculators to the internet, and in each case the real issue has been whether schools adapt to teach higher-order skills instead of rote tasks that machines can now do better.
Technology Panic vs. Educational Reality
The article situates current anxiety about AI alongside historical reactions to earlier technologies: calculators supposedly eroding math skills, spellcheck weakening spelling, search engines undermining memory.
In each episode, predictions that the technology would “dumb down” society failed to materialize; instead, people shifted to using tools to offload routine work and focus on more complex problems.
The author contends that blaming AI for intellectual decline is easier than confronting uncomfortable evidence of poor educational outcomes, such as weak reading comprehension, limited numeracy, and superficial writing skills.
These failings predate AI and reflect systemic problems in curricula, teacher training, incentives, and cultural expectations around effort and rigor.
AI as a Mirror of Human Weaknesses
The article treats generative AI as a mirror that reflects both our strengths and our deficits.
When students lean on AI to write essays, it reveals that many were never taught to structure arguments, think critically, or revise thoughtfully in the first place.
Concerns that AI-generated text will flood the world with mediocrity highlight a prior reality: much human-produced writing is already formulaic and shallow.
Rather than calling AI inherently “dumbing,” the author suggests it reveals the low bar our institutions have long tolerated in reading, writing, and analytical skills.
What Education Should Do Differently
The article calls for education systems to shift decisively toward:
Teaching critical thinking, logic, and argumentation rather than memorization.
Emphasizing original thought, skepticism, and the ability to interrogate sources, including AI outputs.
Training students to treat AI as a starting point or assistant, not an authority or substitute for thought.
Assignments and assessments need to evolve: open-ended projects, oral defenses, in-class problem solving, and tasks that require personal insight or real-world application become more important if AI can handle boilerplate responses.
Teachers should explicitly integrate AI into pedagogy: show its errors, biases, and limitations, and have students critique and improve AI-generated content.
Autonomy, Incentives, and Personal Responsibility
The argument stresses that human agency remains central: people choose whether to outsource thinking or to use tools to think better.
AI can tempt users into intellectual laziness, but so can television, social media, or any convenience technology; the key is discipline, expectations, and incentives.
When grades, jobs, or reputations reward depth, originality, and correctness, individuals will be motivated to go beyond generic AI answers.
The author implies that cultivating self-discipline and intrinsic curiosity matters more than banning or fearing AI tools.
Implications and Broader Impact
The article concludes that AI is unlikely to “make us dumb” on its own; instead, it will widen the gap between:
Those who use it as a cognitive amplifier—fact-checking, brainstorming, modeling, and accelerating learning.
Those who treat it as a crutch and accept unexamined outputs.
This divergence amplifies the urgency of reforming education so that more people learn how to question, verify, and build on AI rather than substitute it for real understanding.
Ultimately, blaming AI serves as an excuse to avoid the harder work of fixing a failing school system and raising expectations for intellectual effort. The real danger is not the technology, but our willingness to settle for shallow thinking when far better is possible.
2026
Big Ideas 2026: Part 2
A16z • a16z New Media • December 10, 2025
Venture•2026
This article presents a collection of forward-looking predictions from venture capital investors, focusing on the transformative impact of artificial intelligence across the industrial and consumer application landscapes in the coming year. The central thesis is that AI is moving beyond digital automation to fundamentally reshape physical industries, redefine enterprise workflows, and create new consumer paradigms, marking a shift from software that “ate the world” to software that will “move it.”
American Dynamism: AI Rebuilds the Physical World
The contributors from the American Dynamism team envision a renaissance of the American industrial base, powered by AI-native and software-first approaches. David Ulevitch argues that the most important shift is the rise of companies that start with simulation, automated design, and AI-driven operations to build next-generation energy, manufacturing, logistics, and infrastructure. This is not about modernizing the past but constructing what comes next.
Key sub-themes within this industrial transformation include:
The Factory Mindset: Erin Price-Wright predicts applying a factory mindset—emphasizing scale, repeatability, and modular deployment of AI and autonomy—to complex sectors like nuclear energy, housing construction, and data center build-out.
Physical Observability: Zabie Elmgren foresees a revolution in “physical observability,” where billions of networked cameras and sensors create a real-time, AI-native fabric to monitor and manage cities and critical infrastructure, enabling advanced robotics and autonomy.
The Electro-Industrial Stack: Ryan McEntush introduces the concept of the “electro-industrial stack”—the integrated technologies from minerals to software that power electric machines. He warns that national leadership in the next industrial era depends on mastering this hardware foundation.
The Data Crusade: Will Bitsky identifies a coming “crusade for data” within critical industries. He posits that industrial companies possess a comparative advantage in generating valuable, process-oriented training data from their physical operations, which will become a new strategic asset.
Apps: AI Becomes Invisible and Integral
The Apps team shifts focus to how AI integration will evolve within software and services, moving from visible tools to embedded, proactive systems.
Business Model Reinforcement: David Haber emphasizes that the best AI startups will amplify their customers’ core economics, driving revenue rather than just cutting costs, by aligning deeply with customer incentives.
New Distribution and Interfaces: Anish Acharya highlights ChatGPT’s evolution into a major distribution channel and “AI app store” for consumer products, while Marc Andrusko predicts the “death of the prompt box,” with AI becoming proactive, invisible scaffolding within workflows.
Agentic Workflows and Enterprise Orchestration: Olivia Moore and Seema Amble detail the expansion of AI agents from single-task solutions to managers of entire multi-modal workflows and customer relationship cycles. Amble notes this will force large enterprises to create new roles like “AI workflow designers” and invest in “systems of coordination” to manage fleets of digital workers.
Industry-Specific Rebuilding: Angela Strange argues that AI will only transform sectors like banking and insurance when the underlying infrastructure is rebuilt to be AI-native, leading to merged categories and dramatically larger market winners.
Consumer Shift: Bryan Kim predicts a major pivot in consumer AI from “help me” (productivity) to “see me” (connectivity), using multimodal data to build products that foster stronger personal relationships and self-understanding.
The overarching conclusion is that 2026 will be a pivotal year where AI transitions from a promising technology to the foundational layer of both the physical economy and digital enterprise. Success will belong to those who build trust into physical observability, exploit new data and distribution channels, and have the courage to rebuild legacy systems from the ground up with AI as the core operating principle.
SpaceX Could Lead $2.9 Trillion in Private Valuation to Market
Bloomberg • Bailey Lipschultz • December 10, 2025
Venture•2026
A potential initial public offering (IPO) for SpaceX is viewed by market analysts as a watershed event that could unlock a wave of other highly valued private companies to follow suit. The article argues that SpaceX, as a “centicorn” valued at over $100 billion, could act as a catalyst, freeing up an estimated $2.9 trillion in private company valuation to eventually enter the public markets. This figure is based on an analysis of the 30 largest private, venture-backed companies in the United States.
The Centicorn Bottleneck and Market Impact
The current market has seen a significant backlog of large, mature private companies, often called “centicorns” or “decacorns,” that have delayed public listings. These delays have been driven by a combination of ample private capital, regulatory complexity, and volatile public market conditions. A successful SpaceX IPO would demonstrate that public investors are willing to support and value companies with complex, capital-intensive, and long-term business models akin to space exploration and satellite internet.
The sheer size and prominence of a SpaceX listing would likely improve overall market sentiment toward new issuances.
It would provide a recent, high-profile comparable valuation for other companies in adjacent sectors like aerospace, advanced manufacturing, and deep-tech.
The event could create a “proof of concept” for taking visionary, founder-led companies with transformative goals public.
A Shift in the IPO Landscape
The analysis suggests that the IPO market has been missing these flagship, growth-oriented technology companies, which has had a dampening effect on the entire sector. The successful debut of a company like SpaceX could reignite investor appetite for growth and innovation, shifting focus back from purely profitability-driven narratives. This would be particularly impactful for companies in sectors that require significant upfront investment before reaching sustained profitability.
Furthermore, the article notes that many late-stage private companies and their investors are awaiting a clear signal from the market. A strong performance by SpaceX could provide the confidence needed for other centicorns in fields such as artificial intelligence, financial technology, and biotechnology to accelerate their own IPO timelines. The influx of such companies would significantly broaden the investment opportunities available to public market participants.
Implications for Investors and the Economy
The unlocking of $2.9 trillion in private value would represent a major liquidity event for early investors, employees with equity, and the companies themselves. This capital could be recycled into new ventures, fueling further innovation. For public markets, it would mean access to a new generation of industry-defining companies that have matured outside of the traditional IPO cycle.
However, the article also implies that this potential wave is contingent on the specific success of the SpaceX offering. Any stumble could prolong the private market logjam. The central thesis is that the SpaceX IPO is not just another market listing; it is positioned as a potential key that unlocks the next phase of growth companies transitioning to public ownership, reshaping the landscape for investors and the economy at large.
Enterprise Will Be a Top OpenAI Priority In 2026, Sam Altman Tells Editors at NYC Lunch
Bigtechnology • Alex Kantrowitz • December 11, 2025
AI•Business•OpenAI•Enterprise•Competition•2026
OpenAI CEO Sam Altman told a room full of editors and news CEOs this week that OpenAI will prioritize selling AI to businesses in 2026.
Altman lunched at Rosemary’s Midtown with leaders from The Atlantic, The New Yorker, The New York Times, and other top national publications on Monday. The conversation, wide-ranging and at times unwieldy, featured a charming and disarming Altman speaking candidly about himself, his business, and plans for the coming year.
Altman’s plan for OpenAI’s enterprise push was the biggest revelation from the lunch. Under Altman, OpenAI has excelled at building consumer products, with ChatGPT approaching 900 million weekly users. But the company has faced fierce competition when selling its AI models to businesses, primarily from Anthropic, which is leading the enterprise AI market.
At the lunch, Altman made clear that selling to enterprises was a massive OpenAI priority, and mentioned that it was an application problem, not a training problem, that the company needed to solve. Altman was straightforward about OpenAI’s need to build better products for enterprises and his intent to fast-track them.
For OpenAI, growing its enterprise business could be the surest way to scale revenue as it pursues one of history’s great infrastructure buildouts. Enterprise AI is the fastest-growing software category in history, expected to bring in $37.5 billion next year, according to Gartner, up from almost zero in 2022.
Should OpenAI make inroads in the category, it could more easily justify new funding rounds and better support its push to build $1.4 trillion of computing infrastructure in the coming years. Notably, OpenAI’s October agreement with Microsoft, which added some distance between the companies, gives it greater leeway to build for enterprises.
Altman also addressed the ‘Code Red’ within OpenAI following the emergence of Google’s Gemini as a competitor. Altman has said Google’s AI model surpassed OpenAI’s GPT models in some areas, but at the lunch he pushed back on the notion that the latest Gemini model was an existential threat to OpenAI. The company has been through multiple code reds in its history, Altman said, and this one would end soon.
Big Ideas 2026: Part 1
A16z • December 9, 2025
AI•Tech•Agentic Infrastructure•Personalization•World Models•2026
Overview: Big Problem Spaces for 2026 Builders
The piece outlines major problem spaces a16z partners expect founders to tackle in 2026 across infrastructure, growth, bio/health, and games/consumer. The unifying theme is AI moving from point tools to deeply embedded systems that reshape data infrastructure, security, creative work, enterprise software, healthcare, education, and interactive worlds. Many predictions hinge on agents—autonomous AI systems—shifting load patterns in infrastructure, redefining how we build products, and changing what counts as value or engagement.
Infrastructure: From Unstructured Chaos to Agent-Native Systems
Multimodal data entropy as core bottleneck: Enterprises are drowning in PDFs, screenshots, logs, emails, videos, and other “semi-structured sludge.” This unstructured universe holds ~80% of corporate knowledge, but its decay in freshness, structure, and truth throttles AI performance, causing RAG hallucinations, fragile agent workflows, and heavy human QA. Startups that continuously clean, structure, reconcile, validate, and govern multimodal data become “keys to the kingdom” for use cases like contract analysis, onboarding, claims, compliance, engineering search, and sales enablement.
AI-automated cybersecurity work: Cybersecurity has long faced millions of unfilled roles because skilled technicians are stuck in “soul-crushing” level 1 work reviewing floods of alerts and logs. AI-native tools that automate repetitive triage, correlation, and response break the vicious cycle where tools detect everything and humans review everything. This frees security teams to “chase down bad guys,” design systems, and address deep vulnerabilities rather than drown in low-level noise.
Agent-native infrastructure as table stakes: Legacy backends were built for predictable, human-speed, 1:1 action-response patterns. Agents trigger recursive, bursty “thundering herds” of thousands of sub-tasks and API calls that resemble DDoS attacks to old systems. 2026 infrastructure must shrink cold starts, collapse latency variance, and massively increase concurrency. The true bottleneck becomes coordination—routing, locking, state management, and policy enforcement across huge parallel execution—defining winners in the agent era.
Multimodal creative tools: Early products like Kling O1 and Runway Aleph hint at a future where creators can feed models videos, reference images, voices, and motion clips and ask for precise extensions, reshoots from new angles, or consistent characters across scenes. The big opportunity is marrying powerful multimodal models with interfaces that give director-level control, enabling everything from meme creators to Hollywood studios to rely on AI as a core creative medium.
AI-native data stack evolution: The “modern data stack” is consolidating into unified platforms (e.g., Fivetran/dbt, Databricks), but a truly AI-native architecture is still emerging. Key fronts include: integrating performant vector stores with traditional structured data; enabling agents to solve the “context problem” by finding the right semantic layers and business definitions; and transforming BI tools and spreadsheets as workflows become more automated and agentic.
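The coordination bottleneck in the agent-native infrastructure item above can be sketched in a few lines: an agent fans out a burst of sub-tasks, and a semaphore is the minimal primitive that keeps the "thundering herd" from overwhelming a backend built for human-speed, 1:1 traffic. This is an illustrative toy under stated assumptions, not any vendor's implementation; the names and the concurrency limit are invented.

```python
import asyncio

# Minimal sketch: an agent recursively spawns many sub-tasks at once.
# Without a concurrency cap, a legacy backend sees what looks like a DDoS.
# MAX_CONCURRENT is an assumed policy value, not a recommendation.
MAX_CONCURRENT = 8

async def call_backend(task_id: int, sem: asyncio.Semaphore) -> str:
    async with sem:                 # coordination point: at most 8 in flight
        await asyncio.sleep(0.01)   # stand-in for a real API call
        return f"result-{task_id}"

async def run_agent_burst(n_subtasks: int) -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    # gather() runs the whole burst; the semaphore bounds actual concurrency.
    return await asyncio.gather(*(call_backend(i, sem) for i in range(n_subtasks)))

results = asyncio.run(run_agent_burst(100))
```

In a real agent platform the cap would be replaced by routing, queuing, locking, and policy layers, but the shape of the problem (bounded coordination over massive parallel fan-out) is the same one the prediction describes.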
Growth: Enterprise Software, Agents, and New KPIs
Systems of record lose primacy: AI that can read, write, and reason across operational data turns ITSM/CRM from passive databases into active, autonomous workflow engines. The strategic locus shifts from the database to the intelligent agent layer that anticipates, coordinates, and executes end-to-end processes; systems of record become commoditized persistence tiers.
Vertical AI enters “multiplayer mode”: Vertical AI has already driven $100M+ ARR in domains like healthcare, legal, and housing. After information retrieval and reasoning, 2026 brings multi-agent, multi-party workflows. Vertical software, steeped in domain-specific interfaces and regulations, orchestrates agents for buyers, sellers, tenants, advisors, vendors, and partners. This coordination—negotiating within constraints, routing to specialists, syncing changes, and learning from expert markups—unlocks higher task success rates and strong network effects.
Designing for agents, not humans: As agents become primary consumers of web and app content, traditional optimization (SEO, UX, visual hierarchy, hooks) gives way to machine legibility. Agents won’t miss the critical insight buried on page five; they’ll extract and interpret telemetry, CRM data, and logs, then post concise insights where humans operate (e.g., Slack). Content and software must be structured for retrieval, reasoning, and interoperability rather than just human reading.
The end of screen-time as a KPI: AI applications increasingly deliver value with minimal attention or interaction—e.g., DeepResearch queries, AI clinical note tools like Abridge, AI coding tools like Cursor, or AI-driven financial analysis. This breaks the 15-year paradigm where screen time or click volume signaled value. New pricing and ROI measurement must account for outcomes like doctor satisfaction, developer productivity, analyst wellbeing, and user happiness, rewarding vendors that explain impact simply and credibly.
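The "designing for agents, not humans" point above lends itself to a minimal illustration: the same finding can be published as prose buried deep in a report, or as explicit structured fields an agent can retrieve and reason over. The schema below is invented for illustration, not a proposed standard.

```python
import json

# The same insight, human-facing vs. agent-legible.
# An agent parses fields, not visual hierarchy or page position.
human_page = "See page five, paragraph three: churn rose 4% after the pricing change."

# Hypothetical record: every field name here is an assumption.
agent_payload = {
    "metric": "churn_rate",
    "change_pct": 4.0,
    "direction": "up",
    "cause_hypothesis": "pricing_change",
    "source": "q3_report_p5",
}

# Serialized deterministically so downstream agents can cache and diff it.
serialized = json.dumps(agent_payload, sort_keys=True)
```

Under this framing, the Slack message a human reads becomes just one rendering of the structured record, rather than the primary artifact.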
Bio + Health: The Rise of “Healthy MAUs”
A new healthcare segment, “healthy MAUs,” emerges: people who are not acutely sick but want ongoing monitoring, insights, and preventive care. Traditional reimbursement models and insurance have favored treatment over prevention, leaving “healthy MAUs” underserved until they become high-cost patients.
With AI dramatically lowering care delivery costs, novel prevention-focused insurance products, and consumer willingness to pay subscriptions, startups and incumbents can serve this large, recurring, data-rich segment. Continuous engagement, proactive insights, and personalized plans become core to healthtech growth.
Speedrun: World Models, Personalization, and AI-Native Education
World models as a new storytelling medium: Tools like Marble and Genie 3 generate interactive 3D worlds from text prompts, foreshadowing a “generative Minecraft” where players co-create evolving universes via language (e.g., “create a paintbrush that turns anything pink”). These shared, programmable spaces enable new genres, digital economies, and training grounds for AI agents and robots, blurring lines between creator and player.
“The year of me” and hyper-personalization: In 2026, products across education, health, and media pivot from optimizing for the average consumer to optimizing for each individual. AI tutors adapt to each student’s pace and curiosity; personalized health stacks tailor routines to one’s biology; media remixes news and stories into feeds tuned to unique interests and tone. The next generation of giants will win by “finding the individual inside the average.”
The first AI-native university: Beyond incremental tools, an AI-native university is envisioned as an adaptive organism: courses, advising, research collaboration, and operations continuously reconfigure via data and AI. Schedules self-optimize; reading lists update nightly with new research; learning paths adapt in real time. Precedents like ASU’s OpenAI partnership and SUNY’s AI literacy requirements hint at this future. Faculty become architects of learning systems, and assessment shifts to grading how students use AI. Graduates emerge fluent in orchestrating and governing AI, fueling the broader AI-driven economy.
Critical Takeaways and Implications
AI and agents are forcing a re-architecture of core infrastructure (data, backends, security) for scale, context, and coordination.
Enterprise value is migrating from static systems of record and human attention metrics to autonomous execution layers and outcome-based ROI.
Healthcare and education are poised for structural change, with continuous, preventive, and personalized models at the center.
New creative and interactive mediums—multimodal tools, world models, generative worlds—will create both cultural shifts and new economic frontiers.
Founders who build for agents, personalization, and multiplayer coordination are best positioned to define the 2026–2030 landscape.
Big Ideas 2026: Part 3
A16z • a16z New Media • December 11, 2025
Crypto•Blockchain•Stablecoins•Tokenization•AI•2026
This article presents a collection of 17 forward-looking predictions from a16z crypto partners and guest contributors on the key trends and innovations expected to shape the cryptocurrency and blockchain space in 2026. The forecasts span a wide range of topics, including privacy, AI integration, stablecoins, tokenization, security, and the evolving regulatory landscape, painting a picture of a maturing industry moving beyond speculation toward foundational infrastructure for the internet.
Privacy as a Critical Moat and Network Effect
Ali Yahya argues that privacy will become the most important differentiator for blockchains, creating a powerful “privacy network effect.” He posits that while bridging public assets is trivial, bridging secrets between chains is difficult and leaks metadata. This creates significant lock-in, as users on a private chain are less likely to move and risk exposure. Consequently, a handful of privacy-focused chains could capture most of the crypto market value, as privacy is deemed essential for real-world financial applications.
The Intersection of AI, Crypto, and New Economic Models
Several predictions focus on the convergence of AI and crypto. Andy Hall foresees prediction markets becoming “bigger, broader, and smarter,” leveraging AI agents for trading and analysis and using crypto for decentralized governance and proof-of-human verification. Scott Kominers anticipates AI being used for substantive research, enabling a new “polymath” style that harnesses AI “hallucinations” within layered agent workflows, a process that will require crypto for model interoperability and compensation.
Furthermore, Liz Harkavy warns of an “invisible tax on the open web,” where AI agents extract value from content without supporting the ad-based revenue models that fund it. The solution proposed is a shift to real-time, usage-based compensation systems, potentially powered by blockchain micropayments. Sean Neville identifies a related bottleneck: the shift from “Know Your Customer” (KYC) to “Know Your Agent” (KYA). He notes that with non-human identities vastly outnumbering human ones in finance, cryptographically signed credentials will be essential for agents to transact reliably.
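The “Know Your Agent” bottleneck described above can be sketched simply: an issuer signs a credential binding an agent to its operator, and a counterparty verifies the signature before transacting. Real KYA systems would use public-key signatures (e.g. Ed25519), expiry, and revocation; the HMAC below is only the simplest standard-library stand-in, and every field name is an assumption.

```python
import hashlib
import hmac
import json

# Toy shared secret standing in for an issuer's signing key.
ISSUER_KEY = b"demo-issuer-secret"

def issue_credential(agent_id: str, operator: str) -> dict:
    """Sign a claims object naming the agent and the party it acts for."""
    claims = {"agent_id": agent_id, "operator": operator}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Recompute the signature over the claims; reject any tampering."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cred["sig"], expected)

cred = issue_credential("agent-7", "acme-corp")
assert verify_credential(cred)
```

The point of the sketch is the verification step: with non-human identities outnumbering human ones, counterparties check the credential, not the agent, before allowing a transaction.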
The Evolution of Finance: Stablecoins, Tokenization, and Wealth Management
The financial infrastructure of crypto is predicted to deepen. Guy Wuollet calls for more “crypto-native” thinking in tokenizing real-world assets (RWAs), favoring synthetic representations like perpetual futures over skeuomorphic tokenization. He also advocates for the onchain origination of debt assets rather than the tokenization of offchain loans. Jeremy Zhang and Sam Broner highlight the critical need for better stablecoin on/off ramps and their role in modernizing legacy banking systems. Stablecoins, which processed an estimated $46 trillion in volume (nearly 3x Visa’s), offer a way to innovate without replacing decades-old core banking software.
Maggie Hsu envisions “wealth management for all,” where tokenization and crypto rails enable personalized, actively managed portfolios for everyone, not just high-net-worth individuals. This includes easier access to tokenized private market assets and automated rebalancing across a tokenized portfolio spectrum.
Security, Decentralization, and Regulatory Clarity
On security, Daejun Park predicts a shift from “code is law” to “spec is law,” advocating for the use of AI-assisted tools to prove and enforce global invariants in DeFi protocols as a guardrail against exploits. Shane Mac argues for decentralized, quantum-resistant messaging protocols, stating that without decentralization, even unbreakable encryption can be switched off by a central authority.
Finally, Miles Jennings points to potential U.S. crypto market structure legislation as a pivotal moment for 2026. He argues that clear regulation would eliminate the “legal contortions” founders have faced, allowing blockchain networks to finally operate as designed—open, autonomous, and decentralized—unleashing their full technical potential.
The State of AI: life in 2030
Ft • December 8, 2025
AI•Tech•Automation•Inequality•FutureOfWork•2026
Overview: A 2030 Shaped by AI Everywhere, but Not for Everyone
The article envisions daily life in 2030 as deeply infused with artificial intelligence, from transport and healthcare to education and entertainment. It argues that AI will be less a visible “wow” technology and more a pervasive infrastructure, comparable to electricity or the internet. At the same time, it stresses that this AI-rich world will be sharply unequal, creating clear divides between those who can access, understand and shape AI systems and those who are largely subject to them. The central tension is between unprecedented convenience and productivity on one side, and new forms of dependency, surveillance and economic stratification on the other.
Robots, Robotaxis and the Automation of Everyday Mobility
Autonomous vehicles and fleets of robotaxis are portrayed as routine in many major cities by 2030, reducing the need for private car ownership and changing urban design around pick-up hubs and logistics.
Small delivery robots, warehouse bots and domestic helpers manage a spectrum of physical tasks, from last‑mile deliveries to basic home chores, particularly for affluent households and aging populations.
The piece notes that while accidents and regulatory disputes continue, overall safety records of autonomous systems surpass those of human drivers, reinforcing political and commercial momentum.
Public transport is increasingly orchestrated by AI for dynamic routing and predictive maintenance, making mobility more efficient but also more data‑intensive and dependent on a handful of large technology providers.
Work, Productivity and the New Division of Labour
Generative AI tools capable of coding, drafting documents, and producing media are embedded in most professional software by 2030, acting as “first‑draft workers” across white‑collar industries.
Routine cognitive tasks in law, accounting, marketing and customer service are heavily automated, compressing traditional career ladders: junior roles shrink, while demand grows for a smaller number of high‑skill “AI supervisors” and domain experts.
Manual and care work are less transformed: robots assist but do not fully replace cleaners, construction workers or caregivers, leaving many of the lowest‑paid jobs intact rather than eliminated.
Productivity statistics rise, but wage gains and job security disproportionately accrue to people who can design, deploy or manage AI systems, widening professional inequality.
AI Haves and Have-Nots: Economic and Social Inequality
The article emphasizes that powerful foundation models and robotic platforms are controlled by a small set of corporations and governments, creating an “AI elite” with privileged access to compute, data and talent.
Wealthy individuals and advanced economies use AI as a force multiplier—optimizing investments, education, healthcare and security—while poorer communities and countries rely on cheaper, more constrained AI services.
This disparity manifests in everyday experiences: premium AI tutors, health triage systems and personalized financial advisors for the rich, versus generic, ad‑driven or surveillance‑heavy tools for the rest.
The author suggests that AI will amplify existing structural divides (income, education, infrastructure) rather than automatically bridging them, unless deliberate redistribution and regulation are implemented.
Governance, Surveillance and the Struggle for Control
Governments increasingly depend on AI for welfare administration, policing, border control and national security, making algorithmic decision-making central to state power.
The piece warns of “soft coercion” through scoring systems, predictive policing and automated eligibility checks that can be opaque and hard to challenge, especially for marginalized populations.
Corporate surveillance also intensifies: workplaces use AI to track productivity and behavior, while consumer platforms profile users in real time for dynamic pricing and targeted content.
Regulatory efforts exist—requirements for transparency, auditability and limits on high‑risk applications—but enforcement is uneven across jurisdictions, leading to a fragmented global landscape of AI rights and protections.
Culture, Human Agency and Everyday Life
AI‑generated media—music, video, games, news summaries—becomes the default for much casual consumption, raising questions about authenticity and the dilution of human-made culture.
Personalized “AI companions” and chatbots offer emotional support, entertainment and practical assistance, particularly to the elderly and socially isolated, blurring lines between tool and relationship.
Education leans heavily on adaptive tutoring systems that tailor content to each student, improving outcomes for some but risking over‑standardization and data‑driven labeling of children from an early age.
The article concludes that by 2030 the key issue will not be whether AI is powerful or pervasive (it will be) but who sets the terms of its deployment, who captures the gains, and how much meaningful agency ordinary people retain in AI‑mediated systems.
Bloomberg News Now: SpaceX Seeks $1.5T IPO Valuation
Youtube • Bloomberg Podcasts • December 9, 2025
Venture•2026
The central focus of the content is a report that SpaceX, the aerospace manufacturer and space transportation company founded by Elon Musk, is seeking a staggering $1.5 trillion valuation for a potential initial public offering (IPO). This figure represents an unprecedented ambition for a private company and would instantly position SpaceX as one of the most valuable public companies in the world, rivaling or surpassing the market capitalizations of tech giants like Apple and Microsoft at various points in their history. The discussion frames this not just as a financial milestone but as a pivotal moment for the space industry and public markets.
The Scale of the Ambition
The proposed $1.5 trillion valuation is the key data point driving the analysis. To provide context:
It dwarfs the valuations of other major recent IPOs and would be among the largest public market debuts in history.
This valuation reflects immense investor confidence in SpaceX’s dual-track business model: its established, revenue-generating operations (Falcon rocket launch services and the Starlink satellite internet constellation) and its long-term, high-risk/high-reward project to colonize Mars through the Starship program.
The discussion likely explores how this valuation is justified by SpaceX’s dominant market position in commercial launches and the potential future revenue from its Starlink satellite internet constellation, which aims to provide global broadband coverage.
Implications for Markets and the Space Sector
A SpaceX IPO at this valuation would have profound ripple effects. It would provide a massive liquidity event for early investors and employees, potentially creating a new wave of wealth. For the public markets, it would offer retail and institutional investors their first direct opportunity to invest in the forefront of the commercial space race, a sector previously accessible only to venture capital and private equity. Furthermore, such a successful public listing could unlock significant capital for SpaceX to fund its capital-intensive Mars ambitions, accelerating development timelines. It would also set a new benchmark for valuations across the entire aerospace and “New Space” sector, potentially driving up investment in competitors and ancillary service providers.
Challenges and Considerations
Despite the headline figure, the analysis would also consider significant hurdles. Regulatory scrutiny from bodies like the SEC would be intense for a deal of this magnitude and complexity. Market conditions at the time of the offering would be critical; achieving a $1.5 trillion valuation requires sustained investor appetite and a compelling narrative that balances near-term profitability with visionary long-term goals. There are also inherent risks in SpaceX’s operations, from the technical challenges of Starship development to the competitive and regulatory landscape of global satellite internet.
In conclusion, the report of SpaceX targeting a $1.5 trillion IPO valuation signifies a watershed moment where the space economy transitions from a niche, government-dominated field to a central pillar of the global financial and technological landscape. The success or failure of such an offering would not only determine SpaceX’s financial future but also signal the market’s belief in the long-term commercial viability of interplanetary ambition.
Private equity may regret inviting in mom and dad
Ft • December 9, 2025
Venture•2026
The private equity industry’s recent push to attract retail investors, or “mom and dad” capital, is creating a new set of risks that the sector may not be fully prepared to handle. While opening funds to a broader investor base provides a fresh source of capital, it also invites greater regulatory scrutiny, increased litigation risk, and a fundamental shift in the relationship between fund managers and their investors. The courts, rather than the industry itself, may ultimately define the terms of this democratization, potentially imposing stricter standards of transparency and fiduciary duty.
The Drive for Retail Capital
For years, private equity has been the domain of large institutional investors like pension funds and endowments. However, as competition for capital intensifies, major firms are increasingly marketing funds and products to accredited retail investors. This strategic shift is driven by several factors:
The vast, untapped pool of wealth held by high-net-worth individuals.
A desire to diversify their investor base beyond traditional institutions.
The perception that retail capital may be more “sticky” and less sensitive to short-term performance fluctuations.
This move is often framed as a democratization of an asset class that has historically delivered superior returns, albeit with higher risk and illiquidity.
The Inevitable Rise of Litigation
The article argues that inviting less sophisticated investors into complex, opaque private equity structures is a recipe for legal disputes. Retail investors, unlike seasoned institutional limited partners (LPs), are more likely to sue when investments underperform or when fee structures and conflicts of interest are not clearly communicated. The courts are poised to become a central arena where the obligations of private equity general partners (GPs) to these new investors are tested and defined. A series of high-profile lawsuits could establish new precedents around disclosure requirements and the standard of care owed to retail participants, effectively regulating the industry through case law.
Implications for the Private Equity Model
This judicial oversight could force significant changes to the traditional private equity operating model. The industry’s characteristic secrecy and complex fee arrangements—often negotiated in detail by sophisticated institutional LPs—may not withstand scrutiny from judges and juries sympathetic to individual investors. Firms may be compelled to adopt greater transparency, simplify fee structures, and provide more frequent and detailed reporting. Furthermore, the threat of litigation could alter the risk calculus for fund managers, potentially making them more cautious in their strategies and operations.
Ultimately, the article suggests that private equity’s pursuit of retail money is a double-edged sword. While it solves a capital-gathering problem, it introduces a powerful new counterweight: the legal system acting on behalf of the individual investor. The industry may find that in seeking democratization, it has inadvertently empowered a force that will demand accountability and reshape its practices in ways it did not anticipate.
Journalism will become the center of gravity for YouTube’s next era
Niemanlab • Joon Lee • December 11, 2025
Media•Journalism•YouTube•DigitalMedia•CreatorEconomy•2026
For the past decade, the dominant ethos on YouTube has been entertainment, with creators actively distancing themselves from the trappings of traditional journalism. However, a significant cultural and strategic shift is underway, positioning journalism as the central pillar for the platform’s future growth, prestige, and cultural relevance. This evolution is being driven by a convergence of external pressure, creator maturation, and YouTube’s own ambitions to compete on the largest screens in the home.
The Catalyst: A Crisis of Civic Responsibility
The 2024 U.S. election served as a stark turning point, exposing the platform’s vulnerabilities. YouTube faced intense criticism as creator-driven podcasts and conversations, operating without editorial oversight or fact-checking, heavily shaped political narratives and public understanding. This moment highlighted that YouTube’s immense scale had outpaced its infrastructure for civic responsibility, forcing a reckoning with the need for more trustworthy content.
The Creator Evolution: From Entertainers to Institutions
A new class of top creators is already evolving into roles that resemble legacy media, outgrowing pure entertainment.
Marques Brownlee has become a definitive voice in consumer technology, filling a role once held by traditional critics.
Philip DeFranco’s show has matured from creator drama into a format closer to a nightly news broadcast.
Even MrBeast is now treated as a public institution with civic weight, sparking speculation about building a company rivaling Disney.
Creators like Jon Youshaei and Colin and Samir effectively run trade publications for the creator economy itself.
As these creators are covered like celebrities and CEOs, they encounter the same need for legitimacy that traditional institutions have: they require journalism. Scaling to become a cultural force necessitates more care, structure, transparency, and ultimately, editorial standards and reporting.
YouTube’s Strategic Imperative: Trust Over Watch Time
YouTube’s competition with Netflix for dominance on the living room TV is a key driver of this shift. While Netflix relies on prestige programming for cultural authority, YouTube possesses scale and watch time but struggles with credibility on the big screen. The platform’s next era will be defined by building trust. Journalism is uniquely positioned to fill this gap—not as a primary revenue driver, but as a source of legitimacy. It signals that the platform helps users “make sense of the world,” transforming YouTube from an entertainment hub into a civic institution.
The Hybrid Future: A Two-Way Street
This transformation is a two-way street, demanding adaptation from both sides.
Creators moving toward journalism: Successful creators hitting a “ceiling” will need to adopt journalistic rigor, fact-checking, and editorial processes to maintain trust at scale.
Journalists moving toward creators: Journalists seeking relevance must master the intimate, human voice and relationship-building that YouTube demands. Credibility will be built through presence and emotional clarity as much as through traditional bylines. Reporters who thrive will be those who can translate complex ideas with both accuracy and a connective, accessible style.
The most successful early examples of this hybrid model come from journalists like Cleo Abram, Johnny Harris, Adam Cole, and Joss Fong, who have built independent, niche-focused enterprises on YouTube that often outperform their legacy media counterparts in reach and engagement.
The defining content of the late 2020s will be created by those who successfully fuse journalistic rigor with YouTube’s native language of intimacy and immediacy. This fusion will determine whether YouTube can sustainably compete with Netflix not just for entertainment minutes, but as a trusted institution that helps society understand itself.
Media
Netflix and the Hollywood End Game
Stratechery • Ben Thompson • December 8, 2025
Media•Film•Streaming•Netflix•Hollywood
The article presents a detailed analysis of the current state of the Hollywood entertainment industry, framing it as an “end game” driven by the strategic dominance of Netflix and the disruptive force of YouTube. It argues that the traditional studio model, built on controlling intellectual property (IP) and its theatrical release window, is being fundamentally dismantled. Netflix’s strategy is central to this shift, as it has successfully moved the industry’s center of gravity from theaters to the home, thereby devaluing the exclusive theatrical window that was once the studios’ primary leverage.
The Netflix Strategy: Owning the Customer Relationship
Netflix’s core advantage is its direct relationship with over 300 million global subscribers. This allows it to:
Amortize content costs globally: A show like Squid Game can be a massive financial success based on its ability to attract and retain subscribers worldwide, not on its domestic box office or syndication revenue.
Operate without the constraints of theatrical release schedules: Netflix can release content on its own timeline, optimizing for subscriber engagement rather than maximizing opening weekend box office.
Utilize data to inform content decisions: The company’s vast trove of viewing data provides insights into what resonates with audiences, reducing the reliance on the high-risk, high-reward “blockbuster” model.
The article posits that Netflix is now leveraging this position to reshape the value of intellectual property itself. By offering massive upfront payments to secure global rights in perpetuity, Netflix is making a calculated bet that it can increase the long-term value of IP through its platform better than the traditional studios can through cyclical theatrical releases, home video, and licensing windows.
The YouTube Disruption and the “Aggregator” Theory
Simultaneously, the piece highlights YouTube as the other dominant force, representing a different kind of threat. While Netflix competes for premium, scripted content, YouTube dominates attention for everything else—user-generated content, vlogs, and unscripted entertainment. The article applies the “Aggregator Theory,” where platforms that control demand (users/attention) have power over suppliers (content creators). Netflix is an aggregator for high-budget content, while YouTube is the ultimate aggregator for everything else. This dual pressure squeezes traditional media companies from both sides.
The End Game for Traditional Studios
For legacy Hollywood studios, the options are narrowing. The article outlines several strategic paths, each with significant challenges:
Building their own direct-to-consumer platforms: This is the path chosen by Disney, Warner Bros. Discovery, and others, but it requires massive investment with no guarantee of reaching Netflix’s scale. The author is skeptical, noting that “it is not clear that any of them have a sustainable business model.”
Becoming arms dealers to the aggregators: This involves licensing content to Netflix, Amazon, and Apple. While providing short-term revenue, this strategy cedes the customer relationship and may accelerate the decline of their own platforms.
Doubling down on theatrical exclusives: A focus on must-see theatrical events (e.g., superhero sequels, franchise films) is one remaining area of leverage. However, this is a high-risk, hit-driven business that is becoming increasingly narrow.
The overarching conclusion is that the entertainment landscape is consolidating around a few giant aggregators. Netflix is positioned to be the primary aggregator for premium, narrative content, confident that its global scale and data capabilities allow it to extract more value from IP than the system it helped destroy. The “end game” is a market where a handful of platforms control audience access, and traditional studios are forced into a subordinate role as suppliers or niche players.
Disney to Invest $1 Billion in OpenAI and License Characters for Use in ChatGPT, Sora
Wsj • Ben Fritz • December 11, 2025
Media•AI•Entertainment•Intellectual Property•Partnership
Disney has agreed to invest $1 billion in OpenAI and license its characters for use in the startup’s products, according to people familiar with the matter, a major bet by the entertainment giant that the technology will be a boon to its business rather than a threat.
The three-year deal will let users of OpenAI’s ChatGPT and its Sora video generator create content featuring characters from Disney’s vast library, including those from Marvel, Pixar and Star Wars, the people said. Disney will also get a seat on OpenAI’s board.
The agreement, which is expected to be announced soon, is a landmark moment for both companies and the entertainment industry. It represents a significant commitment by a traditional media company to generative AI, a technology that has been seen by many in Hollywood as an existential threat to jobs and creative control.
For OpenAI, the deal provides a massive injection of capital and a powerful partner with one of the world’s most valuable portfolios of intellectual property. It also gives the startup a high-profile endorsement as it faces increasing regulatory scrutiny and competition.
The partnership comes as Disney is locked in a separate, high-stakes legal battle with Google over alleged copyright infringement related to AI. Disney and other major media companies have accused Google of using their content to train AI models without permission. The Disney-OpenAI deal, by contrast, is a consensual licensing agreement that could set a precedent for how media companies monetize their content in the AI era.
Netflix’s WBD bid is an antitrust drama without a villain
Ft • December 9, 2025
Media•Broadcasting•Antitrust•Streaming•Mergers
The article examines the complex antitrust considerations surrounding Netflix’s potential bid for Warner Bros. Discovery (WBD), framing it as a regulatory drama without a clear-cut antagonist. It argues that while the deal would create a media behemoth, traditional antitrust frameworks struggle to define the competitive harm in the rapidly evolving streaming landscape. The central conflict is not between a monopolistic predator and the market, but between old regulatory definitions and new market realities.
The Core Antitrust Challenge: Defining the Market
A primary hurdle for regulators would be defining the “relevant market” in which the combined entity would operate. This legal concept is crucial for assessing market power and potential harm to competition.
Lawyers for the companies would likely argue for a broad market definition encompassing all forms of video entertainment, including traditional linear TV, other streaming services, social media video, and even gaming. This would minimize the combined entity’s perceived market share.
Opponents, such as rival studios or consumer groups, would push for a narrow definition focused solely on premium subscription video-on-demand (SVOD) services. This would make Netflix-WBD’s market share appear dominant and raise significant red flags.
The article suggests regulators are caught between these two poles. The old world of cable bundles and broadcast TV is fading, but the new digital ecosystem is fragmented and includes competitors like YouTube, TikTok, and Amazon Prime, which operate on different economic models (ad-supported, part of a broader retail subscription).
Shifting Power Dynamics in Entertainment
The analysis highlights that the power in media has decisively shifted from distribution to content ownership and IP. Netflix’s interest in WBD is driven by the latter’s vast libraries (HBO, DC, Warner Bros. film catalog) and production capabilities. A merger would be a defensive move to secure must-have content in an era of escalating costs and competition, rather than an offensive play to corner a distribution market.
Furthermore, the article points out that consumer choice in streaming is paradoxically both vast and constrained. While there are many services, the cost of subscribing to all major ones is becoming prohibitive, leading to “subscription fatigue.” This could allow a truly scaled player with the deepest content library to exert significant pricing power, which is a classic antitrust concern, even if the market is hard to define.
Regulatory Implications and the Lack of a “Villain”
The piece concludes that this potential deal exposes the inadequacy of current antitrust tools. The usual narrative of a “villain” stifling competition doesn’t neatly fit. Netflix is competing with tech giants with immense balance sheets (Apple, Amazon) and legacy media companies desperate to transition (Disney, Paramount). Blocking the deal could weaken players against these larger rivals, while allowing it could reduce the number of major Hollywood studios and creative competitors.
The ultimate regulatory decision would hinge on whether authorities view the market through a traditional lens—where consolidation reduces competitor count—or a modern one, where competition comes from unpredictable quarters and scale is necessary for survival. The drama, therefore, lies in this philosophical and legal clash, with significant implications for the future structure of the global entertainment industry.
Netflix’s Swallowing of Warner Bros. Will Be the End of Hollywood
Nytimes • December 6, 2025
Media•Film•Netflix•WarnerBros•HollywoodConsolidation
Overview and Central Argument
The piece argues that a hypothetical acquisition of Warner Bros. by Netflix would mark a decisive, perhaps irreversible, break with the traditional Hollywood studio system. Past fears about “the end of Hollywood” have repeatedly surfaced with the advent of television, VHS, cable, DVDs, and streaming, but the article suggests this deal would be categorically different. Rather than merely disrupting how films and shows are distributed, it would erase the remaining institutional and cultural boundaries that separate Silicon Valley–style tech platforms from legacy studios, effectively turning Hollywood into a content division of a global tech company.
Why This Merger Is Uniquely Dangerous
The article emphasizes that Hollywood’s past crises involved new technologies but preserved a competitive ecosystem of distinct studios, talent agencies, and theater chains.
By contrast, letting Netflix absorb Warner Bros. would consolidate an enormous library (from classic films and DC superheroes to prestige TV) under a single, data-driven subscription platform.
This combination, the author suggests, would:
Greatly reduce bargaining power for writers, directors, actors, and independent producers.
Allow Netflix to dictate terms not just in streaming but across theatrical, TV, and licensing windows.
Set a precedent for further tech–studio mega-mergers, accelerating consolidation.
Impact on Creativity, Risk-Taking, and Culture
The article contends that Hollywood’s greatest achievements came from tension between commerce and artistry: studios needed hits but also relied on creative gambles that occasionally produced transformative cinema.
Netflix’s algorithm-centric model, when applied to Warner Bros.’ vast IP, would likely:
Prioritize predictable franchises, sequels, and “content” calibrated to churn and retention metrics over singular artistic visions.
Shorten the lifespan of films and series, as projects are judged quickly on engagement data rather than allowed to build word-of-mouth or cult status.
Reduce mid-budget, adult-oriented dramas and offbeat originals that don’t fit clear data patterns but historically defined much of Hollywood’s cultural influence.
The author warns that the result would be an entertainment landscape where cultural memory is shaped by what fits one company’s recommendation system, not by diverse creative experimentation.
Market Power, Labor, and Competition
The merger is framed as a power shift from a historically fragmented industry to a near-vertical platform:
Control over production, distribution, and discovery (via Netflix’s interface) would give the combined entity outsized leverage over labor, including guilds that only recently fought for protections in the streaming era.
Competitors—other studios, streamers, and theatrical exhibitors—would be pressured to respond with their own mega-mergers or risk marginalization.
The article suggests that regulatory scrutiny would be essential, not just in traditional antitrust terms (prices, consumer harm) but in broader cultural terms:
When one company commands global attention, it shapes which stories get told, which regions’ voices are amplified, and how democratic societies understand themselves.
Broader Cultural and Democratic Implications
Beyond business implications, the author argues that Hollywood has served as a global storytelling engine, exporting American narratives, ideals, and critiques of power.
Consolidating that function into a single corporate logic risks:
Narrowing the range of political, social, and historical perspectives that reach mass audiences.
Making controversial or challenging works more likely to be suppressed, quiet-released, or buried by an algorithm rather than overtly censored.
The fear is not just fewer movies, but a global culture increasingly mediated through the design choices and growth imperatives of one dominant platform, eroding the pluralism that historically characterized the film industry.
Conclusion and Call for Resistance
The article concludes that while Hollywood has repeatedly survived technological shocks, this merger would transform its underlying structure in a way that may be irreversible.
It calls for:
Strong regulatory intervention to block or heavily condition such a deal.
Collective resistance from creators, unions, and audiences who value a diverse, competitive cultural ecosystem.
If this acquisition proceeds, the author suggests, the phrase “the end of Hollywood” may no longer be hyperbole but a description of a new era in which Hollywood as an independent, multi-studio system effectively ceases to exist.
★ Meta Says Fuck That Metaverse Shit
Daring fireball • John Gruber • December 7, 2025
Media•Publishing•Meta•Metaverse•Branding
Mike Isaac, reporting for The New York Times, “Meta Weighs Cuts to Its Metaverse Unit” (gift link):
Meta is considering making cuts to a division in its Reality Labs unit that works on the so-called metaverse, said three employees with knowledge of the matter.
The cuts could come as soon as next month and amount to 10 to 30 percent of employees in the Metaverse unit, which works on virtual reality headsets and a V.R.-based social network, the people said. The numbers of potential layoffs are still in flux, they said. Other parts of the Reality Labs division develop smart glasses, wristbands and other wearable devices. The total number of employees in Reality Labs could not be learned.
Meta does not plan to abandon building the metaverse, the people said. Instead, executives expect to shift the savings from the cuts into investments in its augmented reality glasses, the people said.
Meta confirmed the cuts to the Wall Street Journal, and Bloomberg’s Kurt Wagner broke the news Thursday.
I’m so old that I remember ... checks notes ... four years ago, when Facebook renamed itself Meta in late 2021 with this statement: “Meta’s focus will be to bring the metaverse to life and help people connect, find communities and grow businesses.” And Mark Zuckerberg, announcing the change, wrote:
But all of our products, including our apps, now share a new vision: to help bring the metaverse to life. And now we have a name that reflects the breadth of what we do.
From now on, we will be metaverse-first, not Facebook-first. That means that over time you won’t need a Facebook account to use our other services. As our new brand starts showing up in our products, I hope people around the world come to know the Meta brand and the future we stand for.
Many of us never fell for this metaverse nonsense. For example, I’m also old enough to remember just one year later, near the end of Joanna Stern’s on-stage interview with Craig Federighi and Greg Joswiak at a 2022 WSJ event, seven months before Vision Pro was announced (at the 29:30 mark):
Stern: You have to finish this sentence, both of you. The metaverse is...
Joz: A word I’ll never use.
He might want to use the word now, just to make jokes.
Om Malik, writing in April this year:
Some of us are old enough to remember that the reason Mark renamed the company is because the Facebook brand was becoming toxic, and associated with misinformation and global-scale crap. It was viewed as a tired, last-generation company. Meta allowed the company to rebrand itself as something amazing and fresh.
Lastly, yours truly, linking to Malik’s post:
And so while “Meta” will never be remembered as the company that spearheaded the metaverse — because the metaverse never was or will be an actual thing — it’s in truth the perfect name for a company that believes in nothing other than its own success.
Venture
Seed Round Sizes
LinkedIn • Peter Walker • December 8, 2025
LinkedIn•Venture
Founders - deep breaths. Don’t freak out when you read another post about a new company raising a $475 million seed round.
Here’s the deal:
• Yes, some tiny fraction of the “seed” market is playing at this scale. But it’s almost entirely chip companies or foundation AI research labs. If you aren’t building one of those businesses, feel free to ignore the headlines.
• Real benchmarks for what a seed round looks like are in the chart below. Median is just under $4 million raised; top 5% is $15 million. Those are historically very large seed rounds!
• The stage-name abuse continues unabated. We have never lived through a time when capital raised could vary by 100x or more within the same round name.
• Run your own race, it’s more survivable that way 😅
AI boom transforming the venture capital, megacap investing landscape
Youtube • CNBC Television • December 8, 2025
Venture
Overview
The content centers on the ongoing boom in artificial intelligence and how it is reshaping both venture capital investing in startups and allocation decisions in large-cap public markets. The core theme is that AI is no longer a niche technology story but a structural force driving capital flows, corporate strategy, and market concentration. Investors are re-evaluating which companies will capture AI value—chip makers, cloud platforms, model providers, or application-layer startups—and adjusting portfolios accordingly. The discussion emphasizes both the significant opportunities created by AI and the growing dispersion between perceived winners and everyone else in the public markets.
AI as a Capital Allocation Magnet
AI is attracting disproportionate amounts of new capital, from seed-stage venture rounds to mega-cap public companies seeing surging valuations.
Venture investors are prioritizing AI-native or AI-first startups, often giving them funding and attention even in an otherwise selective or cautious funding environment.
In public markets, a handful of mega-cap technology companies associated with AI infrastructure, chips, and cloud services are becoming increasingly dominant in major indices.
This creates a feedback loop where strong AI narratives draw more capital, which then supports further investment in compute, talent, and acquisitions.
Changing Venture Capital Playbook
Traditional venture filters—such as market size, founding team, and traction—are being overlaid with a more specific question: “What is the durable AI advantage here?”
Investors are differentiating between:
Infrastructure and tooling (e.g., chips, data platforms, MLOps),
Core model players (e.g., foundation model providers),
Vertical and horizontal applications built on top of those models.
There is rising skepticism toward AI “wrappers” that add thin UX layers on top of commoditizing models, with more emphasis on proprietary data, distribution advantages, or deeply integrated workflows.
Timelines to scale can compress for the best AI startups, but capital requirements—especially around compute and data—can also be materially higher than in prior software waves.
Impact on Megacap and Index Investing
Large-cap indices are becoming ever more concentrated in a small set of AI-linked firms, amplifying the AI cycle’s impact on broad market returns.
Mega-cap technology companies are deploying large budgets toward AI infrastructure, model development, and AI-enhanced products, further widening the moat between them and smaller competitors.
This concentration creates both opportunity and risk for passive investors:
Outperformance if AI leaders continue to execute.
Vulnerability if expectations embedded in these valuations prove too optimistic.
Some investors are rebalancing toward or away from these names based on their conviction in AI’s durability and each company’s edge—such as proprietary data, cloud scale, or chip design leadership.
Valuation, Risk, and Cyclicality
The discussion underlines that while AI is a long-term secular trend, the stocks and private valuations linked to AI can still be cyclical and volatile.
There is concern about overpaying for growth if too much AI-driven optimism is already priced in, particularly for second-tier beneficiaries.
Venture investors face a tension between moving quickly to secure stakes in promising AI companies and maintaining discipline on entry valuations and business fundamentals.
For public investors, the challenge is distinguishing between sustainable earnings power from AI and more speculative multiple expansion.
Strategic and Market Implications
Corporates and investors alike must update their mental models for competitive advantage, as AI changes cost structures, product roadmaps, and market entry barriers.
The likely outcome is increased dispersion:
Among startups, where a minority achieve scale and defensibility.
Among public companies, where true AI leaders may compound while weaker peers lag despite using similar language around AI.
Over the medium term, AI is expected to influence sector leadership, job composition within companies, and cross-border competition, making it a central lens for both venture and megacap portfolio construction.
Pat Grady & Alfred Lin on the Tactics of Great Venture Investing | Ep. 36
Youtube • Uncapped with Jack Altman • December 9, 2025
Venture
Overview
The content centers on a long-form conversation with two prominent venture capitalists, focusing on how great venture investors think, decide, and act. The discussion explores the mindset required to consistently back category-defining companies, emphasizing disciplined judgment under uncertainty, deep partnership with founders, and long time horizons. It also touches on how top firms build investment processes and internal cultures that support repeated, high-quality decisions rather than relying on one-off “lucky” bets. Throughout, the conversation highlights the craft of venture investing as a blend of pattern recognition, rigorous analysis, and willingness to be contrarian when conviction is high. The tone is practical and tactical, aiming to translate high-level investing philosophy into concrete behaviors for both investors and founders.
Tactics of Great Venture Investing
Great venture investors balance “story and spreadsheet”: they respect compelling narratives but insist on understanding unit economics, market size, and the path to defensibility.
They focus on “power law” thinking—recognizing that a small number of investments will drive most returns, so the primary job is to identify and support potential outliers rather than optimize for average outcomes.
Process is designed to reduce unforced errors: pre-defined questions, devil’s advocates, and checklists are used to challenge enthusiasm and expose blind spots.
Strong investors maintain a clear separation between price and quality; they first ask “Is this a company we want to own for a decade?” and only then consider valuation and deal structure.
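The power-law point above can be made concrete with a toy simulation. This sketch is purely illustrative — the distribution and its parameter are made up, not drawn from any fund's actual data — but it shows why identifying outliers matters more than optimizing average outcomes:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Toy model: return multiples drawn from a heavy-tailed (Pareto-like)
# distribution. Most investments return little; a few return enormously.
def simulate_portfolio(n_investments: int = 100) -> list[float]:
    """Hypothetical return multiples for a portfolio of investments."""
    return [random.paretovariate(1.2) for _ in range(n_investments)]

multiples = sorted(simulate_portfolio(), reverse=True)

total = sum(multiples)
top_decile = sum(multiples[:10])  # the 10 best outcomes out of 100

# Under a heavy-tailed distribution, the best handful of investments
# accounts for the majority of the portfolio's total return.
print(f"Top 10% of investments drive {top_decile / total:.0%} of total return")
```

Under a thin-tailed (e.g., normal) distribution the top decile would contribute close to 10%; the heavy tail is what makes outlier-hunting the primary job.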
Working with Founders
The best investors see themselves as long-term partners, not just capital providers, and optimize for trust and candor.
They look for founders with a combination of insight (a non-obvious view of the world), intensity (willingness to endure hardship), and integrity (alignment with employees, customers, and shareholders).
Tactically, they try to be the founder’s first call in moments of crisis, helping with hiring, go-to-market adjustments, fundraising strategy, and board communication.
They prefer boards where difficult topics—churn, burn, morale, product-market fit—are surfaced early, not buried under growth metrics.
Firm Culture and Decision-Making
Top-tier venture firms cultivate cultures of debate and high standards: partners are expected to challenge each other’s assumptions while staying aligned on values.
Investment memos and partner meetings are used to build institutional memory, so lessons from past wins and misses inform future deals.
The partnership avoids short-term signaling games; reputational capital is treated as a core asset, influencing how they negotiate terms, handle down rounds, and support struggling portfolio companies.
They emphasize consistent behavior across cycles—resisting hype in bull markets and remaining active and supportive in downturns.
Implications for Founders and Emerging Investors
Founders can use these tactics as a filter when choosing investors: seek those who ask hard, thoughtful questions, engage deeply on the business, and think beyond the next round.
Early-career investors are encouraged to build a personal process: systematic sourcing, clear theses on sectors, and post-mortems on decisions, rather than chasing consensus “hot” deals.
The conversation ultimately frames great venture investing as a long-term craft: success is less about a single iconic investment and more about building a repeatable way to evaluate people, markets, and inflection points over many decades.
Are private valuations set for a correction? Henry Ward, CEO of Carta, on #capitalmarket trends
Youtube • Carta • December 8, 2025
Venture
Overview
The content centers on the question of whether private company valuations are likely to face a correction amid shifting capital market conditions.
It highlights the tension between previously inflated startup valuations—driven by abundant capital and aggressive growth expectations—and a newer environment of tighter funding, higher interest rates, and more demanding investors.
The central theme is that the private markets, which often lag public markets, may be due for a reset in pricing, with implications for founders, employees, and investors holding equity in private companies.
Drivers of Valuation Inflation
In recent years, low interest rates and plentiful venture capital allowed many startups to raise funds at increasingly higher valuations with limited scrutiny on profitability.
Investors prioritized growth and market share, frequently paying up front for revenue that might not be realized for years.
Competitive deal-making among venture funds pushed valuations up further, as firms raced to win allocations in “hot” companies.
This environment created layers of private “unicorns” whose paper valuations were rarely tested by down rounds or secondary market price discovery.
Signals Pointing Toward a Correction
As broader macroeconomic conditions tighten—through higher rates, slower growth, or reduced liquidity—investors become more focused on fundamentals such as revenue quality, margins, and clear paths to profitability.
Public market repricings, especially in tech and growth sectors, act as a benchmark, revealing gaps between public comps and private valuations.
When late‑stage private companies are valued at revenue or earnings multiples far above comparable public peers, a correction becomes more likely in subsequent funding rounds or liquidity events.
Secondary transactions in private shares may start clearing at discounts to last primary round prices, signaling that headline valuations are no longer fully supported.
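The gap between private and public multiples translates directly into an implied markdown. As a back-of-the-envelope sketch (all figures are hypothetical, chosen only to illustrate the arithmetic):

```python
def implied_markdown(private_valuation: float, revenue: float,
                     public_comp_multiple: float) -> float:
    """Fractional markdown if the company were repriced at the
    public-comparable revenue multiple."""
    repriced = revenue * public_comp_multiple
    return 1 - repriced / private_valuation

# Hypothetical: a startup last valued at $2B on $50M revenue (40x),
# while comparable public peers trade at 8x revenue.
print(f"{implied_markdown(2_000, 50, 8):.0%}")  # 80% implied markdown
```

The point is not the specific numbers but the mechanism: when the private multiple is several times the public comp, the headline valuation can only be defended by outgrowing the gap before the next priced round or liquidity event.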
Implications for Founders and Employees
Founders may find that new funding rounds require accepting flat or down valuations, tighter terms, or more investor protections such as liquidation preferences and anti‑dilution provisions.
Companies that raised at very high valuations without corresponding business performance may need to prioritize efficiency, cost discipline, and sustainable unit economics to justify their cap tables.
Employees with stock options or RSUs tied to lofty valuations could face longer timelines to liquidity or lower eventual exit values than they had expected, affecting talent retention and morale.
Boards and leadership teams may be pushed to revisit growth-at-all-costs strategies, aligning compensation and planning with more realistic valuation assumptions.
Impacts on Investors and the Venture Ecosystem
Venture funds holding large positions in overvalued companies may need to mark portfolios down, affecting fund performance metrics and LP reporting.
New capital could become more selective, favoring companies with strong fundamentals and reasonable valuations, while weaker or overvalued companies struggle to raise.
A correction can also create opportunities: investors with dry powder may back promising companies at more rational prices, potentially improving long‑term returns.
Over time, a reset in valuations might lead to healthier market discipline, where pricing more accurately reflects risk, business quality, and realistic exit scenarios.
Long‑Term Outlook
While painful in the short term, a correction in private valuations can re-anchor expectations for all participants—founders, employees, and investors—around sustainable growth rather than speculative exuberance.
Companies that adjust quickly, focus on fundamentals, and embrace transparent pricing are better positioned to navigate changing market cycles and ultimately reach durable, value‑aligned exits.
#298: Q3 2025 Fund Performance Highlights
The fund cfo • Doug Dyer • December 11, 2025
Venture
Aduro Advisors has provided a preview of its forthcoming Q3 2025 Fund Performance Benchmark Report, offering a structured, data-driven analysis of private fund performance. The report is distinguished by its use of fund-level source data rather than self-reported inputs, aiming to give fund managers and limited partners (LPs) a clearer and more reliable baseline for comparison as they approach year-end evaluations.
Q3 Performance Trends Across Fund Sizes
The data reveals a spectrum of outcomes rather than a single narrative, with performance heavily influenced by fund size, vintage year, and market timing.
$0–$50mm Funds: Smaller funds exhibit wider dispersion in early returns. Mature vintages from 2014–2017 show median Net IRRs of 9–14%, with top quartile performance trending higher. More recent vintages are in the early-stage development phase, where Total Value to Paid-In Capital (TVPI) generally leads Distributions to Paid-In Capital (DPI) as realizations are still limited. The data illustrates how these funds typically normalize over time as their portfolios mature.
$50mm–$100mm Funds: Performance in this mid-size range appears more stable across vintages. Earlier funds show consistent distributions, with TVPI and Multiple on Invested Capital (MOIC) increasing predictably with maturity. Newer vintages (2021–2024) display patterns consistent with the standard private fund “J-curve,” characterized by low DPI but building unrealized value.
$100mm–$250mm Funds: This category shows notable value creation, particularly for older vintages. For funds from 2014–2016, top quartile TVPIs range from 4.83x to 6.07x, reflecting significant outcomes for long-duration portfolios. Newer vintages in this bracket track closer to broader industry trends, with unrealized value remaining the primary driver of performance to date.
$250mm+ Funds: Larger funds generally demonstrate more moderate performance dispersion across vintages. Median IRRs for vintages between 2019 and 2023 range from 0% to 13%, with top quartile outcomes illustrating the incremental upside available at scale. Early metrics for the 2023–2024 vintages reflect standard characteristics of the deployment phase, where performance bands are widest as outcomes are still forming.
Key Takeaways and Implications
The core takeaway from the benchmark data is the critical role of dispersion in driving outcomes—not just across different vintage years, but within fund size categories themselves. This environment makes comparative, source-data-driven analysis invaluable for GPs and LPs. The report underscores that in a more selective, quartile-driven market, practical tools for evaluating fund pacing, ownership concentration, reserve management, and DPI timing become essential for strategic decision-making. The preview positions Aduro’s report as a key resource for pressure-testing these decisions with greater clarity, moving beyond aggregate trends to understand the specific dynamics shaping fund performance.
8 Takeaways from Carta’s State of Seed Report
A16Z Speedrun • December 11, 2025
Venture
When speaking with founders, we often explain speedrun as a16z’s “first check in” program for brand new startups. But what happens after teams raise that initial capital, get some early traction with customers, and find their first strong signals of product-market fit?
For many, the answer is: you want to raise a seed round. Capital unlocks hiring, accelerates growth, and gives startups the runway needed to build something for the long haul. So understanding the state of the seed market is critical for founders building at the early stage.
This year, we asked our friends at Carta what trends they’re seeing in the markets for seed stage startups. They kindly produced a comprehensive 40-slide report drawn from an aggregated and anonymized sample of Carta customer data. Below, we’ve highlighted a few points from Carta’s report that caught our attention.
There’s more cash available at seed, but it’s going into fewer rounds. 2025 is set to be the biggest year since 2022 in terms of cash raised, but that figure is split among slightly fewer teams—perhaps just 2,000 from Carta’s sample by the time the year is up.
Raises and valuations vary widely by sector. When splitting seed round data by sector, we see clear breakouts in round size and valuation among startups focused on semiconductors, hardware, and analytics tooling. But what about AI startups? The chart shows how things might look if you treated AI-focused software companies as their own industry. Unsurprisingly, it’d be among the biggest targets for capital.
Seed dilution has been remarkably stable. Across all sectors and throughout the last 7 quarters, startups have consistently been selling around 20% of their companies when raising a seed round. Though some extreme outliers in AI sell as little as 10.5%, the 20% median figure has been consistent.
After raising a seed round, the road ahead is long. What are your odds of then raising a Series A? This figure has been declining since 2020, though there are early signs of improvement for the most recent cohorts. Still, the median time between seed and Series A rounds has been rising across the board in recent years. The pattern holds for the gap between Series A and B rounds as well.
VCs are taking 17.4 months to close funds in 2025.
LinkedIn • Pavel Prata • December 10, 2025
LinkedIn•Venture
VCs are taking 17.4 months to close funds in 2025.
Here’s my analysis of PitchBook’s latest VC data 👇
◾️ Let’s start with the numbers that matter:
Through Q3 2025: $82.6B raised across 849 funds.
That’s tracking toward a third straight year of declines.
For context, this is the longest sustained downturn in VC fundraising in over a decade.
◾️ The data shows something fascinating:
The fastest quartile is closing funds in 9.2 months.
The slowest? 25.8 months – more than 2 years.
That gap is widening. The market is bifurcating into winners and everyone else.
◾️ Why is this happening?
Simple: The capital flywheel is broken.
VC fundraising runs on:
- Exits generate distributions
- Distributions drive re-ups
- Re-ups fund new managers
No exits = No distributions = No new funds.
◾️ Here’s what 17.4 months to close actually means: You’re pitching for 1.5 years.
That’s 1.5 years where you should be sourcing deals, supporting portfolio companies, and building relationships with founders.
Instead, you’re on the road selling your fund.
◾️ And the capital concentration is extreme.
PitchBook’s Q1–Q2 2025 data showed that the top 30 firms raised $18.2B.
Founders Fund alone raised $4.6B. Meanwhile, thousands of other managers are fighting over scraps.
◾️ The geographic shift is telling:
57.3% of capital is now flowing to US-based funds. That’s a decade high.
Asia dominated the late 2010s, but geopolitics and tariffs have reshaped where LPs are comfortable deploying capital.
◾️ The experience divide matters but it’s subtle: 72% of capital went to experienced GPs in 2024.
In 2025 YTD, that’s dropped to 63%.
But don’t misread this – the data just isn’t complete yet due to disclosure lag.
◾️ I think the real story is this:
Only two types of funds are getting raised right now:
- Mega funds with proven track records
- Tiny, hyper-focused emerging funds with clear edges
The middle is getting crushed.
◾️ Based on my experience, here’s what’s happening behind these numbers:
LPs are exhausted. They’re overcommitted to vintage years that won’t return capital for years.
They’re being more selective because they have to be.
◾️ The capital crunch creates a vicious cycle:
Longer fundraises → Less time for portfolio support → Weaker returns → Even longer next fundraise
This is why fund construction matters more than ever right now.
◾️ What does this mean for GPs raising in 2026?
Your fund isn’t entitled to exist. You need to prove why LPs should allocate to you over:
- Public markets
- Private equity
- Their existing relationships
If you’re raising a multi-stage fund without a clear edge, you’re going to be part of that 25.8-month slowest quartile.
◾️ Looking ahead: Until we see meaningful exit activity, this won’t improve.
The median time to close will likely stay elevated through 2026.
Plan accordingly and set realistic deadlines based on your firm’s profile.
What is your take on these insights?
Concentration in VC
LinkedIn • Yakubu Agbese • December 10, 2025
LinkedIn•Venture
The rich are getting richer.
For the past two years, investors have warned about excessive concentration in the Magnificent 7, the mega-cap, mega-profitable tech giants that now make up 37% of the entire S&P 500. The logic is simple: when too much value sits in too few companies, the entire market becomes more fragile.
But the private tech market is now giving the public markets a run for their money.
According to PitchBook, the top 10 venture deals in 2025 absorbed over 40% of all VC dollars. What’s remarkable is how abruptly this happened: most of the jump occurred after 2023, when AI became THE investing theme.
Some key insights from what is likely an unprecedented level of concentration:
🤖 AI is scaling faster than any prior tech paradigm and it’s massively capital intensive.
Classical software has near-zero marginal cost and 80%+ gross margins. AI is the opposite: training and inference require huge compute budgets, pushing investors toward only the best-funded players.
😨 LPs and VCs have grown more risk-averse, and the safest bet is “brand-name AI.”
Why back an unknown startup when you can invest in Cursor, Perplexity, or Lovable and get a markup in 6–9 months? The flight to quality is now a flight to scale.
🎼 VC has always been hits-driven; but the list of hits is shrinking.
Traditionally, a handful of companies produced most of a fund’s returns. This graph suggests the future could be even more extreme: the top 5–10 companies in any year may drive the majority of total industry performance.
🗝️ Access is becoming the biggest differentiator in VC.
The more concentrated the market becomes, the more the game shifts from sourcing to allocation. If you can’t get into the top AI deals, it becomes dramatically harder to justify your existence as a fund.
📅 2026 may be even more extreme.
The top 10 deals went from absorbing 22.5% of all VC dollars at the start of 2025 to 40%+ by the end. At this pace, they could represent 60%+ of all venture dollars next year.
💵 AI is pulling in bigger balance sheets.
As deal sizes explode, VC no longer has enough capital to fund this innovation on its own. Sovereign wealth funds, governments, and mega-cap tech companies will increasingly dominate late-stage rounds (which explains why circular financing persists; few others can afford the ante).
As AI comes to dominate portfolios and attention, tech will shift from a high-margin, capital-light business model into a capital-intensive (and likely heavily regulated) industrial sector. The same forces concentrating power in the Magnificent 7 are reshaping private markets, only faster and more sharply. And because the dynamics and momentum in the private markets are more extreme...
if AI is a bubble, we’re likely to see it burst there first. 😉
Are investment trusts the best route into private assets?
Ft • December 5, 2025
Venture
Overview
The article highlights how listed investment trusts that hold private assets are being relatively overlooked at a time when long-term asset funds and other vehicles for accessing illiquid private markets are attracting growing attention. While policymakers and asset managers focus on new fund structures to open up private equity, infrastructure and other alternative investments to a broader pool of investors, an established sector of the market — closed-ended investment trusts — already offers many of the same benefits and is trading at valuations that may be attractive for long-term buyers. The core argument is that investment trusts can be an effective, and sometimes superior, route into private assets, yet they remain under-appreciated in the current debate.
Investment Trusts vs New Long‑Term Asset Vehicles
Investment trusts are closed‑ended funds listed on stock exchanges, able to hold illiquid assets without the daily redemption pressure that open‑ended funds face.
The article contrasts them with newer long‑term asset funds designed to give pension schemes and, increasingly, retail investors exposure to private equity, private credit and infrastructure.
It notes that regulatory and industry attention has centred on building these new vehicles, rather than leveraging the existing trust structure that has already demonstrated an ability to manage illiquidity and long‑term horizons.
The piece suggests that investment trusts’ corporate structure, permanent capital and boards of directors give them governance and flexibility advantages in navigating private markets.
Why the Sector Is Overlooked
One reason cited is distribution: long‑term asset funds are often created and marketed directly by large asset managers through advisory channels, while investment trusts may be less heavily promoted and sometimes fall outside mainstream model portfolios.
Another factor is perception. Some investors associate trusts with legacy structures or with listed equity exposure, not recognizing that many now hold significant private equity, property or infrastructure stakes.
The article alludes to the current environment in which pensions and wealth managers seek “patient capital” strategies, yet do not consistently include investment trusts in that conversation.
This oversight may also stem from episodic concerns over discounts to net asset value (NAV), which can make listed trusts appear volatile or out of favour, even when underlying private holdings remain robust.
Valuations, Discounts and Opportunities
A key theme is that many private‑asset investment trusts trade at material discounts to their NAV, effectively offering investors access to private holdings at a markdown.
While discounts can reflect concerns about valuation lag, rising rates, or liquidity, they may also present an opportunity for long‑term investors who can tolerate volatility in the share price while focusing on underlying asset performance and distributions.
The article suggests that if enthusiasm for private assets continues to grow, demand could eventually narrow these discounts, adding a potential source of upside beyond portfolio cash flows and growth.
It emphasizes that discounts should be evaluated in context — including the quality of the underlying managers, fee structures, and the transparency of valuation processes.
Risks, Liquidity and Suitability
Investment trusts provide daily liquidity on the stock market, but this liquidity comes through secondary trading in shares, not via redemption at NAV. As a result, investors must accept market price volatility and the possibility of persistent discounts.
For investors needing guaranteed short‑term access to cash, the trusts’ share price risk might be unsuitable; however, for those with genuinely long‑term horizons, the structure can more closely match the illiquid nature of private assets.
The article underscores that leverage, fee levels, and concentration in particular sectors (for example, technology‑focused private equity or niche infrastructure) should be carefully assessed.
Governance — independent boards, clear mandates and regular reporting — is framed as an important safeguard when investing in listed vehicles that hold largely unlisted assets.
Implications for Investors and Policymakers
The piece implies that regulators and industry bodies, in their push to broaden access to private markets, may be “reinventing the wheel” by promoting new long‑term funds while not fully utilising the investment trust model.
For individual investors and financial advisers, the message is that investment trusts can play a central role in building diversified long‑term portfolios with private asset exposure, provided due diligence is done on structure, strategy and valuation.
The article concludes that, given current discounts and the proven capacity of trusts to hold illiquid assets, this overlooked sector deserves a more prominent place in debates about how to democratise private markets and provide stable long‑term capital.
Jeff Bezos’s Project Prometheus Joins The Unicorn Board Alongside 18 Other Startups In November
Crunchbase • December 10, 2025
Venture
November brought another strong showing for The Crunchbase Unicorn Board with 19 companies joining the ranks of billion-dollar startups.
The largest round went to Jeff Bezos’ Project Prometheus, which has reportedly raised billions of dollars out of the gate with the intent to develop AI for manufacturing in aerospace, automobiles and computers.
Among the sectors for new unicorn creation last month, AI led once again. The largest number of companies hailed from the data and model side as well as workflow applications. At least 13 of the 19 new unicorns have AI at the center of what they do.
Other sectors with two or more new unicorns were healthcare and defense.
Fourteen of November’s new unicorns are U.S.-based. The remaining five companies were each from China, India, Hong Kong, Canada and Denmark.
In the U.S., Palo Alto edged out San Francisco and Austin, with three new unicorns from the Silicon Valley city compared to two each from the other cities.
New unicorns
Here are November’s 19 newly minted unicorns.
AI data and models
Bezos led the initial funding of $6.2 billion for his Project Prometheus, an AI builder for physical systems. The less than 1-year-old company’s headquarters and valuation were not disclosed.
AI video and image generator Luma AI raised a $900 million Series C led by Saudi Arabia-based AI compute provider Humain. Luma is also partnering with Humain to build a 2GW data center in Saudi Arabia. The 4-year-old Palo Alto, California-based company was valued at $4 billion.
d-Matrix, a chip developer for AI inference, raised a $275 million Series C led by Bullhound Capital, Temasek Holdings and Triatomic Capital. The 6-year-old Santa Clara, California-based company was valued at $2 billion.
Harmonic, an AI lab for mathematical intelligence, raised a $120 million Series C led by earlier investor Ribbit Capital. The 2-year-old Palo Alto, California-based company was valued at $1.5 billion.
Graduating from Seed to Series A
LinkedIn • Keith Teare • December 6, 2025
LinkedIn•Venture
Shhhh don’t tell anyone - but there’s some optimism seeping back into seed startups.
Startups that raised their seed rounds in 2024 are graduating to Series A a bit faster than their counterparts did in 2023. The trend is positive.
But time has almost run out for those that raised their seeds in 2021 and 2022.
𝗖𝗵𝗮𝗿𝘁 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗲𝗱
• Each row is a group of companies that raised their seed round in that quarter.
• Each column shows the time elapsed since that seed round.
• Percentages reflect the share of seed companies from that cohort that raised a Series A in that timeframe.
For example: of the startups that raised their seed round in Q1 2021, 32.7% had gone on to raise a Series A within 2 years (Q8 in the columns).
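The cohort table described above can be sketched in a few lines of Python. The data here is invented and the helper name is ours, but the computation — the share of a seed cohort that raised a Series A within N quarters — is the one each cell of the chart reports.

```python
# Sample records (made up for illustration): each entry is
# (seed_quarter, quarters_until_series_a), with None meaning no A yet.
rounds = [
    ("2021Q1", 5), ("2021Q1", 8), ("2021Q1", None),
    ("2024Q3", 3), ("2024Q3", None), ("2024Q3", 4),
]

def graduation_rate(cohort, within_quarters):
    """Share of a seed cohort that raised a Series A within N quarters."""
    members = [q for c, q in rounds if c == cohort]
    graduated = [q for q in members if q is not None and q <= within_quarters]
    return len(graduated) / len(members)

# Of the sample 2021Q1 cohort, 2 of 3 raised an A within 8 quarters (~0.67).
print(graduation_rate("2021Q1", 8))
```

Reading the real chart is just this function evaluated over every (cohort, elapsed-quarters) pair, which is why later columns can only ever be flat or rising for a given row.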
The reduction of green and gold cells, along with the steady march of red and pink into Year 2 and Year 3, makes it clear that the boom times of venture fundraising are behind us.
But for the first time in a long time, graduation rates are picking up.
12.9% of seed rounds from Q3 2024 have made it to Series A after 1 year, the best graduation rate for a quarterly cohort since Q3 2021.
Optimism!
On the gloomy side, I’m not hopeful for the seed round companies of 2022. Only about 25% have made it to Series A after 3 years. These companies raised just before the AI wave, into a downward-trending market...really tough to beat the odds.
Last point - what about “seed-strapping”? Those startups that raised a seed round and planned to never raise an A.
1) I don’t think that was nearly as common as people think.
2) Many who thought they would seed-strap end up trying to raise more later on.
Startups go through waves. We’re at the launch point in another (AI-driven) swell.
Education
Transforming Nordic classrooms through responsible AI partnerships
Blog • Alexandra Ahtiainen • December 8, 2025
Education•Schools•AIinClassrooms•GeminiForEducation•NordicCountries
Schools across Northern Europe are integrating Google’s AI tools, particularly Gemini for Education and NotebookLM, to enhance learning while emphasizing safety, privacy, and responsible use. The article describes how Iceland, Norway and Sweden are collaborating with Google and local authorities to combine pedagogical innovation with robust data protection and governance, positioning the Nordic region as a global model for AI in education. A core theme is that AI literacy and responsible adoption are shared responsibilities among governments, school leaders, teachers and technology providers, not purely technical decisions delegated to IT departments.
Personalized Learning and AI Literacy in Iceland
Building on widespread use of Google Classroom, Iceland’s Ministry of Education launched a pilot involving around 300 teachers.
Gemini for Education and NotebookLM are used to create more personalized learning experiences, such as adapting materials to student needs and supporting differentiated instruction.
The initiative explicitly aims to build AI literacy among both teachers and students, with the ministry and Google treating this as a collaborative, long‑term effort rather than a quick rollout.
The program frames AI as a tool to augment teacher capacity—speeding up lesson planning, content creation and feedback—while keeping educators in control of pedagogy and assessment.
Norway’s National Approach to Privacy and Compliance
Norway has completed a national Data Protection Impact Assessment (DPIA) for Google Workspace for Education and ChromeOS, a “landmark” step for digital privacy and governance in schools.
Conducted jointly by Google Cloud and the Norwegian Association of Local and Regional Authorities (KS), the DPIA verifies that the tools meet stringent GDPR requirements.
A central DPIA removes the need for each municipality to run its own separate, complex assessment, freeing local IT teams from repetitive compliance work.
This centralized model allows administrators to redirect time and resources from paperwork to innovation and support for teachers and students, while providing a uniform, trusted baseline for student data protection nationwide.
Sweden’s Focus on Scale, Efficiency and Teacher Support
In Sweden, school districts are rolling out Gemini for Education at scale, reaching tens of thousands of students and staff.
Teachers use Gemini to rapidly generate and adapt high‑quality teaching materials, which previously required significant time and effort.
Educators and ICT coordinators highlight that AI can reduce routine workload, enabling more time for direct student interaction, feedback and individualized support.
These implementations are framed as collaborative experiments, with feedback loops among teachers, municipal leaders and Google to refine use cases and guardrails.
A Partnership Model for Responsible Classroom AI
Across Iceland, Norway and Sweden, AI adoption is portrayed as rooted in partnership: ministries, municipal associations, schools and Google co‑design pilots, privacy frameworks and training.
Responsible AI principles—transparency, consent, data minimization and adherence to European privacy law—are treated as prerequisites, not afterthoughts.
The initiatives underline that trust is essential: clear governance and shared standards make it easier for teachers and parents to accept AI tools in core learning environments.
Nordic experiences are presented as a blueprint for other regions: combine strong public‑sector governance (like national DPIAs) with teacher‑centered experimentation and continuous professional development.
Implications and Future Impact
These efforts suggest that large‑scale classroom AI can be both ambitious and cautious: boosting personalization and efficiency without compromising student rights.
By embedding AI literacy into teacher training and everyday classroom practice, Nordic systems aim to prepare students for an AI‑rich future, not merely automate current tasks.
The model also illustrates how central, collaborative approaches to regulation and procurement can lower barriers for smaller municipalities or schools that lack deep technical or legal resources.
If replicated elsewhere, this approach could accelerate equitable access to advanced educational technology while maintaining high standards of privacy, security and pedagogical integrity.
Regulation
Trump Says He’ll Sign Executive Order Curbing State AI Rules
Bloomberg • December 8, 2025
Regulation•USA•AIRegulation•Executive Order•Federal Preemption
Overview
President Donald Trump announced that he plans to sign an executive order that would create what he called a single, unified federal “ONE RULE” for artificial intelligence, specifically to curtail or override state-level regulations on the technology. The core idea is to preempt a growing patchwork of AI rules emerging from individual US states, consolidating authority over AI governance at the federal level. This move signals a clear attempt to centralize control over how AI is regulated, with major implications for tech companies, state governments, and consumer protections.
Federal Preemption and “ONE RULE” Concept
Trump’s executive order is described as establishing “ONE RULE” on AI, meaning a single federal framework that would supersede or sharply limit state laws on AI.
The proposal reflects concern from the federal executive branch and industry stakeholders that divergent state rules could create compliance complexity and hinder innovation.
By invoking an executive order, Trump is using presidential authority to immediately shape the regulatory landscape without waiting for Congress to pass comprehensive AI legislation.
Impact on State-Level AI Regulation
The stated goal is to “limit” state-level AI policies, suggesting that existing or proposed state rules on data use, algorithmic transparency, bias controls, or safety standards could be weakened or invalidated.
States such as California and others that have historically taken more aggressive positions on tech regulation could see their ability to implement stricter AI oversight constrained.
This would shift the balance of regulatory power away from governors, state legislatures, and local agencies and toward federal agencies tasked with interpreting and enforcing the new “ONE RULE.”
Implications for Industry and Innovation
A single national AI standard is likely to be welcomed by many large technology and AI companies that operate across multiple states and prefer uniform rules.
A unified framework could reduce compliance costs and administrative burden by eliminating the need to tailor products or practices to varying state-level standards.
However, the substance of the federal “ONE RULE” is not detailed in the provided content, leaving open questions about how strict or permissive the federal approach would be on key issues such as safety testing, bias mitigation, data privacy, and transparency.
Concerns and Potential Criticism
Centralizing AI regulation through executive action may draw criticism from those arguing that:
States serve as “laboratories of democracy,” experimenting with stronger safeguards that can later influence federal norms.
A single, relatively weak federal rule could undercut more protective state efforts on civil rights, consumer protection, and workplace fairness in AI deployment.
Civil liberties and consumer advocacy groups may worry that limiting state-level experimentation will slow the development of robust guardrails around high-risk AI applications in policing, hiring, healthcare, and finance.
The move could also spark legal and political battles over the scope of federal preemption and the president’s authority to constrain states in a domain where Congress has not passed comprehensive legislation.
Broader Policy and Political Context
The executive order reflects a broader national and global debate over how tightly to regulate AI at this stage of technological development.
It underlines the tension between promoting US leadership and innovation in AI and ensuring adequate protections against harms such as bias, misinformation, surveillance, and safety failures.
Politically, the initiative positions the administration as pro-business and pro-innovation, prioritizing a streamlined regulatory environment over a more decentralized, state-driven approach.
The long-term impact will depend on the text of the executive order, subsequent federal agency rulemaking, and whether courts uphold efforts to preempt more stringent state rules.
Key Takeaways
A forthcoming executive order will establish a federal “ONE RULE” for AI, aimed explicitly at limiting state-level AI regulations.
The policy would centralize regulatory authority at the federal level, likely easing compliance burdens for national tech firms while curtailing state experimentation.
The move could trigger legal, political, and policy debates about the balance between innovation, federal power, and robust safeguards against AI-related risks.
Opinion | Europe’s Foolish War on X.com
Wsj • The Editorial Board • December 7, 2025
Regulation•Europe•Xcom•Digital Services Act•Free Speech
Thesis and Overall Argument
The article argues that the European Commission’s decision to fine X.com, Elon Musk’s social-media platform, under its new regulatory regime is a politically motivated overreach that validates critics who view the EU as hostile to free expression and U.S. tech firms. The editorial contends that the case against X.com is weak on the law, selective in its enforcement, and damaging to Europe’s reputation as a defender of liberal values and open markets. It frames the action as part of a broader “war” on X that conflates policing disinformation with suppressing dissenting or inconvenient speech.
EU’s Legal Case and Its Weaknesses
The Commission is portrayed as stretching its own digital-oversight rules, especially the Digital Services Act (DSA), to target X more aggressively than rival platforms.
Regulators accuse X of failing to adequately remove or label so‑called “illegal” or “harmful” content, particularly around elections, public-health debates, and geopolitical conflicts.
The editorial suggests that the standards for what counts as “harmful” or “disinformation” are vague and politically loaded, making them ripe for abuse.
It argues that X has made good‑faith efforts—such as community‑notes style fact‑checking and user tools—yet is still singled out, implying that the real issue is its more open and less curated speech environment compared with competitors.
Selective Enforcement and Political Motives
The article claims the EU has not applied comparable scrutiny or penalties to larger platforms that are closer to the European political mainstream, despite similar or greater volumes of controversial content.
Musk’s public criticism of EU elites, support for a more absolutist conception of free speech, and willingness to host dissenting voices are presented as key reasons X has become a target.
The Commission’s actions are framed as confirming suspicions that Brussels wants to export a “managed speech” model, where officials and approved NGOs arbitrate what can be said on major platforms.
The editorial underscores that EU leaders routinely criticize X in public while relying on expansive regulatory tools behind the scenes, reinforcing the perception of politicized enforcement.
Implications for Free Speech and Innovation
By imposing large fines and threatening further sanctions, the EU is depicted as chilling online expression across platforms, since firms will likely over‑remove content to avoid regulatory risk.
The article warns that, under the DSA logic, any post that diverges from an official narrative on contentious topics—immigration, security, pandemics, climate—could be downgraded, hidden, or removed.
This environment is said to weaken Europe’s long‑standing claim to champion liberal democracy and open debate, and instead aligns the bloc more with a bureaucratic, technocratic control of discourse.
For innovation, the editorial argues that aggressive regulatory attacks on high‑profile U.S. platforms send a signal that Europe is a hostile environment for disruptive tech and social‑media business models, potentially pushing investment and experimentation elsewhere.
Transatlantic Tensions and Strategic Costs
The move against X.com is placed in a pattern of Brussels taking harsh actions against major American tech firms—through antitrust fines, privacy cases, and digital‑market rules—under the banner of “digital sovereignty.”
The article argues this undermines transatlantic cooperation at a time when the EU and U.S. ostensibly need closer alignment on issues like security, China, and AI governance.
It suggests that by focusing energy on punitive measures against U.S. companies instead of building homegrown competitors, Europe is “fooling itself” about what drives technological strength and resilience.
Conclusion and Editorial Judgment
The editorial concludes that the Commission’s fine against X.com is less about protecting Europeans from genuine harm and more about disciplining a politically inconvenient platform.
It asserts that this confirms the worst fears of critics: that EU digital regulation is an instrument for centralizing control over online speech rather than a neutral attempt to safeguard users.
The piece urges European leaders to rethink their confrontational approach, warning that continuing down this path will erode civil‑liberties credibility, stifle innovation, and deepen rifts with democratic partners, all while doing little to genuinely improve the quality of online information.
X deactivates European Commission’s ad account after the company was fined €120M
Techcrunch • Anthony Ha • December 7, 2025
Regulation•Europe•DigitalServicesAct•ElonMusk•ContentModeration
X’s head of product Nikita Bier fired back at the European Commission after the EC fined the social media company €120 million (around $140 million).
In its first fine under the European Union’s Digital Services Act, the commission called X’s blue checkmark system “deceptive” and said the paid verification system makes users vulnerable to impersonation and scams. It also said X’s advertising repository failed to meet the DSA’s requirements for transparency and accessibility.
The commission said that X must respond within 60 days to its concerns about blue checkmarks, and within 90 days to the ad transparency violations, or it could face additional penalties.
After the fine was announced, X owner Elon Musk described it as “bullshit” and also posted, “How long before the EU is gone? AbolishTheEU”.
Now X has penalized the commission’s account on the platform: not, the company says, because of the fine, but because of how the commission used X’s advertising system.
Quoting the commission’s post announcing the fine, Bier accused the EC of logging into a “dormant ad account to take advantage of an exploit in our Ad Composer — to post a link that deceives users into thinking it’s a video and to artificially increase its reach.”
“As you may be aware, X believes everyone should have an equal voice on our platform,” Bier wrote. “However, it seems you believe that the rules should not apply to your account.”
As a result, he said the commission’s ad account had been “terminated.” Bier subsequently said the exploit “has never been abused like this” and has since been patched.
While the commission may have lost the ability to buy ads on X, its post announcing the fine remains up, and its account still has a grey checkmark indicating that it belongs to a government organization.
AI
OpenAI wants you to know it’s a B2B company, too
Cautiousoptimism • December 8, 2025
AI•Tech•EnterpriseAI•Productivity•Europe
OpenAI’s Dual Identity: Consumer Hit and Growing Enterprise Power
The piece argues that while OpenAI is popularly perceived as a consumer-facing company thanks to ChatGPT’s viral success, it is increasingly positioning itself as a serious B2B provider of AI tools and infrastructure. The central theme is that OpenAI’s enterprise business—encompassing workplace ChatGPT seats, APIs, coding tools, and agents—is scaling rapidly and delivering measurable productivity gains, suggesting durable demand beyond any speculative AI hype cycle. At the same time, the article situates OpenAI’s growth in a broader landscape of tech, regulation, and geopolitics, where European regulation, corporate AI adoption, and global competition all intersect.
Consumer Scale vs. Paid Conversion
ChatGPT is framed as a massive consumer success story, with more than 800 million weekly active users.
Yet only about 5% of those users currently pay for the service, highlighting a significant gap between reach and monetization.
OpenAI projects that this upgrade rate could rise to 8.5% by 2030, likely on a much larger user base, which would materially enlarge its subscription revenue.
This dynamic underscores why OpenAI cannot rely purely on consumer subscriptions and needs robust enterprise revenue streams.
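The scale of that conversion gap is easy to quantify. A minimal sketch, holding the user base flat for illustration (OpenAI actually projects a larger base by 2030):

```python
# Back-of-envelope scale of the conversion gap described above. The user
# count is held flat for illustration; OpenAI projects growth by 2030.
WEEKLY_ACTIVE_USERS = 800_000_000

paying_today = WEEKLY_ACTIVE_USERS * 0.05       # ~5% currently pay
paying_projected = WEEKLY_ACTIVE_USERS * 0.085  # projected 8.5% rate

print(f"Paying users today: ~{paying_today / 1e6:.0f}M")
print(f"At 8.5% on the same base: ~{paying_projected / 1e6:.0f}M")
```

Even without user growth, the projected rate implies roughly 28 million additional paid subscriptions.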
OpenAI’s Enterprise Product Suite and Traction
Beyond the public ChatGPT interface, OpenAI offers:
A “business-friendly” version of ChatGPT (ChatGPT Enterprise and workplace seats)
API access to its foundation models
Codex-style coding services
AI agent capabilities designed to automate workflows
OpenAI reports “more than 7 million ChatGPT workplace seats,” suggesting widespread organizational deployment rather than isolated pilots.
ChatGPT Enterprise seats have grown approximately 9x year-over-year, an extremely high growth rate even by AI-industry standards.
This reinforces the article’s thesis that OpenAI is rapidly becoming a B2B infrastructure and productivity provider, not just a consumer app.
Productivity Impact and ROI for Corporate Customers
The article confronts skepticism stemming from a high-profile MIT study that questioned whether AI in the enterprise produces real gains, asking whether OpenAI’s business customers are genuinely getting value.
OpenAI’s own survey data is used to answer “yes”:
75% of surveyed workers say AI has improved either the speed or quality of their output.
On average, ChatGPT Enterprise users report saving 40–60 minutes per active day.
Data science, engineering, and communications workers report even higher time savings—around 60–80 minutes daily.
The implied economics are compelling: if an employee’s hourly time is worth more than the monthly cost of OpenAI’s enterprise offerings, then the return on investment is strong, providing a powerful justification for continued or expanded deployment.
These numbers underpin the argument that AI is already creating tangible, quantifiable productivity improvements in many white-collar workflows.
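The implied ROI can be sanity-checked with a quick calculation. This is a sketch under stated assumptions: the seat price is illustrative (enterprise pricing is negotiated, not public), and the wage figure is the BLS-derived median used elsewhere in this issue.

```python
# Back-of-envelope ROI check for the survey numbers above. The seat price
# is an assumption for illustration; enterprise pricing is not public.
SAVED_MIN_PER_DAY = 50        # midpoint of the reported 40-60 minutes
HOURLY_WAGE = 29.13           # BLS-derived median hourly compensation
WORK_DAYS_PER_MONTH = 21
ASSUMED_SEAT_PRICE = 60       # $/seat/month, illustrative only

monthly_value = SAVED_MIN_PER_DAY / 60 * HOURLY_WAGE * WORK_DAYS_PER_MONTH
print(f"Recovered time worth ~${monthly_value:.0f}/month per seat")
print(f"ROI multiple at ${ASSUMED_SEAT_PRICE}/seat: "
      f"{monthly_value / ASSUMED_SEAT_PRICE:.1f}x")
```

At these assumptions, recovered time is worth several times the seat cost, which is the article's core argument for continued deployment.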
Industry Penetration, Geography, and the Durability of the AI Boom
AI adoption is described as high across industries, with some sectors embracing the technology faster than others but overall uptake robust enough to support continued investment.
The author argues that this breadth of adoption means the “AI market collapse” some fear is unlikely. While valuations may “de-froth,” actual demand for AI tools is anchored in real use and improving models.
Recent advances from Anthropic, xAI and Google are cited as evidence that core models are still rapidly improving, giving enterprises more incentive to invest in AI infrastructure.
Geographically, OpenAI’s products are being adopted fastest by firms in Australia, Brazil, the Netherlands, and France.
Germany and the United Kingdom are OpenAI’s largest ChatGPT Enterprise markets outside the United States by number of customers, suggesting Europe more broadly is making “a huge bet on AI.”
This European embrace of AI is juxtaposed with the EU’s assertive regulatory stance, implying a complex but deeply engaged relationship between European institutions, their economies, and American AI providers.
Valuation, Resilience, and Broader Implications
The author notes uncertainty over whether OpenAI’s valuation should be $250 billion, $500 billion, or even $1 trillion, but concludes that it is “worth a tower of cash regardless.”
Crucially, the company already has enough revenue and growth momentum to survive even if the broader tech market sours, reducing the risk that it is merely a bubble phenomenon.
The broader implication is that AI is now embedded enough in business processes—and advancing quickly enough—that both investors and policymakers should treat it as structural infrastructure, not a passing fad.
Europe’s heavy adoption of OpenAI products raises the question of whether this will translate into improved productivity and stronger GDP growth for the EU over time, making AI deployment a macroeconomic as well as a corporate strategy issue.
Key Takeaways
OpenAI is actively repositioning itself in public discourse as a B2B powerhouse, not just a consumer app maker.
Enterprise adoption is both quantitatively large (millions of seats) and qualitatively meaningful (reported time savings and performance gains).
The breadth of industry and geographic uptake supports the thesis that AI demand is durable even if valuations correct.
Europe’s simultaneous role as a strict regulator and aggressive adopter of AI will be a critical test case for how AI influences productivity, competitiveness, and policy worldwide.
Disney CEO on $1 billion investment in OpenAI: ‘This is a good investment for the company’
Youtube • CNBC Television • December 11, 2025
AI•Funding•Media•CorporateStrategy•GenerativeAI
The CEO of The Walt Disney Company has publicly affirmed the strategic rationale behind the company’s $1 billion investment in OpenAI, framing it as a forward-looking move to secure a competitive advantage in the rapidly evolving media and entertainment landscape. The executive emphasized that this is not a speculative bet but a calculated investment in foundational technology that is expected to transform how content is created, distributed, and personalized for global audiences. The statement positions Disney as an active participant in the AI revolution, seeking to leverage OpenAI’s capabilities across its vast portfolio.
Strategic Rationale and Integration
The investment is portrayed as a core component of Disney’s long-term technology strategy. The CEO highlighted several key areas where AI integration is anticipated:
Enhancing Creativity and Production: AI tools are expected to assist Imagineers, animators, and storytellers, potentially streamlining complex production processes and enabling new forms of creative expression.
Personalizing Consumer Experiences: A major focus is on deepening audience engagement by using AI to tailor content recommendations, marketing, and interactive experiences across streaming platforms, theme parks, and consumer products.
Improving Operational Efficiency: The technology could be applied to backend operations, from data analysis for content strategy to optimizing supply chains and customer service.
The CEO explicitly stated that the goal is to “stay at the forefront” of technological innovation, suggesting that AI is seen as an existential imperative for legacy media companies. The investment secures Disney a strategic partnership and early access to cutting-edge developments from one of the field’s leading organizations.
Financial and Competitive Context
The $1 billion commitment, while substantial, is presented as a prudent allocation within Disney’s broader capital expenditure framework. The CEO characterized it as “a good investment for the company,” implying a calculated assessment of potential return on investment (ROI) through both direct applications and defensive positioning. This move places Disney alongside other major tech and media corporations making large-scale bets on generative AI, signaling an industry-wide arms race to adopt and integrate these capabilities.
The investment also serves as a public declaration of Disney’s innovation agenda to shareholders, potentially aiming to bolster confidence in the company’s ability to navigate digital disruption. It underscores a shift from viewing AI purely as a cost-saving tool to recognizing it as a driver of future growth and value creation.
Implications and Future Outlook
This strategic partnership suggests a future where AI is deeply embedded in the entertainment ecosystem. For consumers, it could lead to more immersive and interactive content, hyper-personalized streaming services, and innovative theme park attractions. For the industry, it raises questions about the future of creative jobs, intellectual property in the age of AI-generated content, and the competitive dynamics between tech-savvy incumbents and new entrants.
The CEO’s framing indicates that Disney views controlling and guiding this technological integration as critical to maintaining its brand identity and creative legacy. The success of this investment will ultimately be measured by its tangible impact on Disney’s product offerings, customer satisfaction, and financial performance in the coming years.
Nvidia Wins US Approval to Sell H200 Chips to China | Bloomberg Tech 12/9/2025
Bloomberg • December 9, 2025
AI•Tech•Nvidia•China•ExportControls
Bloomberg’s Caroline Hyde and Ed Ludlow discuss President Donald Trump’s decision to allow Nvidia Corp. to ship its H200 artificial intelligence chip to China, in exchange for a 25% surcharge on those sales. The move marks a significant shift in U.S. export policy after years of tightening controls on advanced AI accelerators bound for Chinese customers. Nvidia, which had previously been barred from selling its most powerful data-center GPUs such as the A100, H100 and H200 into China, now regains partial access to what was once one of its most important markets.
The arrangement allows Nvidia to sell the H200 to “approved customers” in China under a structure where 25% of revenue from those exports will be paid to the U.S. government. The administration argues this preserves national security by maintaining oversight and financial leverage, while still supporting U.S. chipmakers and domestic jobs. Supporters say the deal could help ensure American companies remain central to the global AI supply chain, rather than ceding ground to rival suppliers from other countries.
The program also raises questions about how Beijing and Chinese technology companies will respond. China has been pushing aggressively for semiconductor self‑sufficiency amid U.S. export curbs, and officials have encouraged local firms to reduce reliance on U.S. hardware. At the same time, the H200’s performance advantages for data‑center and AI workloads may be difficult for Chinese cloud providers and AI developers to ignore, potentially setting up a tension between industrial policy goals and practical computing needs.
Hyde and Ludlow also turn to the escalating battle for Warner Bros., where competing offers from Netflix and Paramount Skydance are drawing close scrutiny from U.S. antitrust authorities. Regulators are weighing how further consolidation in streaming and entertainment could affect competition, consumer choice and pricing, especially as large platforms seek to secure premium content libraries.
In addition, the program highlights Microsoft’s announcement that it will commit $17.5 billion over four years to bolster cloud and AI infrastructure in India. The investment aims to expand data‑center capacity, support local AI development and training, and deepen Microsoft’s presence in one of the world’s fastest‑growing technology markets. The initiative underscores how global cloud and AI providers are racing to establish or enlarge strategic footholds outside the U.S. and China, particularly in large, rapidly digitizing economies.
AI giveth and taketh away and nuclear gets hot
Ft • December 10, 2025
AI•Tech•NuclearEnergy•Jobs•GeoPolitics
The article examines the dual-edged impact of artificial intelligence on the global workforce and the concurrent resurgence of nuclear energy as a critical power source for data centers and industry.
AI’s Impact on Jobs and Productivity
The central theme explores how AI is simultaneously creating and eliminating jobs, leading to significant economic and social disruption. While AI boosts productivity and creates new roles in tech development and system maintenance, it also displaces workers in sectors like customer service, content creation, and administrative support. This creates a complex challenge for policymakers who must manage the transition for displaced workers while capitalizing on new economic opportunities. The analysis suggests that the net effect on employment remains uncertain and will vary significantly by industry and region.
The Nuclear Power Renaissance
A major focus of the piece is the renewed global interest in nuclear energy, driven by the immense power demands of AI data centers and the push for carbon-free electricity. Large language models and AI training require vast amounts of energy, making reliable, high-capacity baseload power essential. Nuclear power, particularly next-generation small modular reactors (SMRs), is positioned as a leading solution to meet this demand without exacerbating climate change. The article notes increased investment and policy support for nuclear projects in several countries, signaling a potential long-term shift in energy infrastructure planning.
Geopolitical and Economic Implications
The convergence of these two trends carries profound geopolitical weight. Nations that can successfully integrate AI innovation with a robust, clean energy supply chain may gain a significant strategic and economic advantage. This dynamic is influencing international competition, particularly between major powers like the United States and China. The article implies that energy policy is no longer just an environmental or economic issue but a core component of technological sovereignty and national security in the AI era.
Key Takeaways and Analysis
The AI revolution is not a purely digital phenomenon; its physical requirements, especially for energy, are reshaping global infrastructure and industrial policy.
The workforce transition prompted by AI will require substantial investment in retraining and social safety nets to mitigate inequality and social unrest.
Nuclear energy’s revival, while promising for decarbonization and powering tech growth, brings its own challenges, including high costs, long development timelines, and public concerns over safety and waste.
The interplay between AI development and energy capacity is creating new axes of international competition, where technological prowess is directly linked to energy independence.
The overarching conclusion is that we are entering a period where technological advancement and energy infrastructure are inextricably linked. Success in the AI-driven future will depend not only on software breakthroughs but also on the ability to power them sustainably and reliably.
The AI Value Gap: Where Does the $7,000 Per Seat Go?
Tomtunguz • December 8, 2025
AI•Work•Productivity•Pricing•SaaS
A new study from OpenAI shows AI saves the average white-collar worker 54 minutes per day. Where does all that value go?
BLS data shows median weekly earnings for full-time workers hit $1,165 in Q3 2024, or $60,580 annually. Across 2,080 working hours, hourly compensation equals $29.13. Rounding the reported 54 minutes up to one hour saved per day, 250 working days yields 250 hours per year, or $7,282 in recovered productivity per seat.
Current AI pricing captures between 3% & 5% of this value.
Typically, vendors capture 10-15% of value, leaving employers & employees with 85-90% value capture.
This gap suggests significant pricing power remains untapped since the total value capture for these tools is only 5%. Microsoft announced price increases last week, with Microsoft 365 E3 rising from $36 to $39 per user monthly starting July 2026 - an 8.3% increase, closer to a COLA adjustment than a value-based price increase.
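The 3-5% capture figure can be reproduced from public list prices. A minimal sketch, assuming ChatGPT Plus at $20/user/month and Microsoft 365 Copilot at $30/user/month (list prices that may change):

```python
# Reproducing the back-of-envelope value-capture math from the article.
# Assumed list prices: ChatGPT Plus $20/user/mo, M365 Copilot $30/user/mo.

WEEKLY_EARNINGS = 1_165   # BLS median weekly earnings, Q3 2024 ($)
HOURS_PER_YEAR = 2_080    # 52 weeks x 40 hours
WORKING_DAYS = 250        # approximate working days per year

hourly = WEEKLY_EARNINGS * 52 / HOURS_PER_YEAR   # ~$29.13/hour
value_per_seat = hourly * WORKING_DAYS           # ~$7,282/yr at 1 hr/day saved

for name, monthly_price in [("ChatGPT Plus", 20), ("M365 Copilot", 30)]:
    annual_price = monthly_price * 12
    capture = annual_price / value_per_seat
    print(f"{name}: ${annual_price}/yr captures {capture:.1%} of value")
```

Both products land in the 3-5% band, well below the 10-15% vendors typically capture in SaaS.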
Bundling complicates the picture.
Microsoft 365 Copilot & Google Workspace AI include access to email, spreadsheets, & presentation software. ChatGPT Plus does not. Will enterprises pay for standalone AI applications on top of their existing Copilot or Workspace subscriptions? Must standalone AI tools also bundle these features?
Unbundling is already happening in some categories too:
Gamma, a $2.1B unicorn building AI presentation software, charges $480 per seat annually, 33% more than Microsoft 365 Copilot, which already includes PowerPoint with AI features.
Gamma has 600,000+ paying subscribers, suggesting a market exists for best-in-class vertical tools even when bundled alternatives exist.
In the SaaS ecosystem, bundling & unbundling were both motifs. AI doesn’t seem any different, aside from the significant productivity gains that create room for both strategies to win.
Data sources: OpenAI productivity study (Dec 2025), Microsoft/Google public pricing, Bureau of Labor Statistics Q3 2024 median weekly earnings
SoftBank and Nvidia reportedly in talks to fund Skild AI at $14B, nearly tripling its value
Techcrunch • December 8, 2025
AI•Funding•Skild AI•Robotics•SoftBank
SoftBank Group and Nvidia are in talks to lead an investment of over $1 billion at a $14 billion valuation in Skild AI, a software company building a foundational robotics model, Reuters reported.
The nearly three-year-old startup was last valued at $4.7 billion in May when it raised $500 million in a round led by SoftBank along with the participation of LG Technology Ventures, Samsung, Nvidia, and others, according to PitchBook data. Skild didn’t immediately respond to a request for comment. SoftBank and Nvidia declined to comment.
Unlike other heavily funded startups, Skild AI is not building proprietary hardware. Instead, it’s developing a robot-agnostic foundation model that can be customized for various types of robots and use cases.
The company unveiled its general-purpose robot model Skild Brain in July with videos showing robots picking up dishes and climbing up and down the stairs. The company has secured strategic partnerships with LG CNS and Hewlett Packard Enterprise to develop its ecosystem.
Investor interest in AI robotics has been steadily growing. Physical Intelligence, another company developing “brains” for a broad range of robots, reportedly raised $600 million recently at a $5.6 billion valuation in a round led by CapitalG. One investor who evaluated but declined to fund Physical Intelligence told TechCrunch that its model is still in the early stages of development.
In September, Figure, a company developing a humanoid robot, raised more than $1 billion at a massive $39 billion valuation. Meanwhile, 1X, another humanoid robot developer, was in talks to secure as much as $1 billion at a $10 billion valuation, The Information reported several months ago.
Why the A.I. Boom Is Unlike the Dot-Com Boom
Nytimes • December 9, 2025
AI•Tech•DotComBoom•Valuations•Infrastructure
Silicon Valley is again betting everything on a new technology. But the mania is not a reboot of the late-1990s frenzy.
The current artificial intelligence boom, which has sent the stock market soaring and minted a new generation of billionaires, is often compared to the dot-com bubble of the late 1990s. Both eras are defined by a widespread belief that a foundational new technology will reshape the economy, accompanied by eye-popping valuations for companies that promise to harness its power. Yet for all the surface similarities, the A.I. boom is fundamentally different in ways that suggest it could have a more lasting impact—or an even more spectacular crash.
The dot-com bubble was built on a new infrastructure—the internet—but many of the companies that rode it to enormous valuations were thin on actual business plans and profits. They were bets on a future that was broadly understood but not yet realized. Today’s leading A.I. companies, by contrast, are often divisions of enormous, profitable tech giants like Microsoft, Google, and Meta, which are pouring billions into building and deploying the technology. The infrastructure itself—vast data centers, advanced chips, and foundational A.I. models—requires colossal capital investment, creating a much higher barrier to entry.
Furthermore, the applications of A.I. are being integrated into existing, massive industries and products almost immediately, from search engines and office software to healthcare research and manufacturing. This rapid deployment into the core functions of the global economy gives the A.I. boom a tangible, revenue-generating foundation that the early commercial internet largely lacked. The risk, however, is that the astronomical costs of competing in A.I. could lead to a brutal consolidation, leaving only a few well-funded survivors if the expected productivity gains and new markets fail to materialize quickly enough.
OpenAI Unveils More Advanced Model as Race With Google Heats Up
Bloomberg • Rachel Metz • December 11, 2025
AI•Tech•OpenAI•Google•Competition
OpenAI is rolling out a new artificial intelligence model designed to make ChatGPT better at coding, science and a wide range of work tasks, weeks after Alphabet Inc.’s Google put the startup on the defensive with the well-received launch of Gemini 3.
The new model, called GPT-5.2, is a more advanced version of the technology that powers the startup’s popular chatbot. It will be available to paying ChatGPT users starting Thursday, OpenAI said in a blog post. The company said the model is “more capable” than its predecessor and is better at tasks like writing computer code, solving math problems and summarizing documents.
OpenAI’s announcement comes as the AI industry is in the midst of a heated race to develop and deploy increasingly powerful models. Google’s Gemini 3, which was unveiled in late November, was widely praised by AI researchers and developers for its capabilities. That launch put pressure on OpenAI to respond with its own advancements.
The startup said GPT-5.2 is also more efficient than previous models, meaning it can perform tasks faster and at a lower cost. This could help OpenAI attract more business customers who are looking to use AI for a variety of applications, from customer service to content creation.
A new product, a new customer, a new financing! Introducing Superpower
X • bscholl • December 9, 2025
X•AI
Note: the choice of natural gas positions Superpower as a dispatchable, high-capacity power source that can run continuously, unlike intermittent renewables, though with carbon-emissions implications.
The Launch of “Superpower”: A 42MW Gas Turbine Built for the AI Boom
Key Takeaway: A new company is launching a massive, purpose-built natural gas turbine designed specifically to power AI data centers, securing a foundational 1.21GW order from CrusoeAI. This signals a major move to build specialized energy infrastructure for the insatiable power demands of artificial intelligence.
The Announcement: The thread announces a triple milestone: a new product, a new customer, and new financing.
The Product: “Superpower” is a 42-megawatt (MW) natural gas turbine. It’s not a general-purpose generator; it’s explicitly optimized for AI data centers. The technology is based on the company’s existing “supersonic” tech platform.
The Launch Customer & Order: The product launches with a massive anchor order from @CrusoeAI for 1.21 gigawatts (GW). This is a fleet-scale commitment, not a pilot.
[Image: rendering of the new “Superpower” 42MW natural gas turbine.]
Context & Backstory (Implied from Thread Start):
The company is entering a market defined by an urgent need for power-dense, scalable energy to feed growing AI data centers.
Traditional grid power or less optimized generation may not suffice for the scale and reliability required.
The “supersonic technology” base suggests a focus on high efficiency and performance in a compact form factor.
The deal with CrusoeAI, a company known for leveraging energy for compute, validates the product-market fit for dedicated AI infrastructure.
Discussion Points & Nuance:
Scale of Demand: A 1.21GW launch order highlights the enormous power appetite of AI companies. This is equivalent to the output of a large nuclear reactor unit or powering nearly 1 million homes.
Infrastructure Specialization: The move goes beyond software and chips—it’s about building physical, energy infrastructure tailored for a single, high-growth industry (AI).
Strategic Partnership: CrusoeAI’s role as launch customer suggests deep collaboration between energy providers and AI operators to co-design solutions.
Financing: Mention of “new financing” indicates significant capital is flowing into this niche of climate-tech/energy-infrastructure for AI.
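The fleet implied by the order is easy to size from the figures in the thread. A rough sketch, with the household comparison assuming an average continuous load of ~1.2 kW per US home (a common ballpark, not from the thread):

```python
# Rough sizing of the CrusoeAI order from the figures in the thread.
ORDER_GW = 1.21
TURBINE_MW = 42

turbines = ORDER_GW * 1000 / TURBINE_MW   # ~28.8, so roughly 29 units
print(f"Fleet size: about {turbines:.0f} turbines")

# Sanity check on the "nearly 1 million homes" comparison, assuming an
# average continuous household load of ~1.2 kW (US ballpark, assumed).
AVG_HOME_KW = 1.2
homes = ORDER_GW * 1_000_000 / AVG_HOME_KW
print(f"Equivalent household load: ~{homes / 1e6:.1f} million homes")
```

At 42MW apiece, the 1.21GW commitment works out to a fleet of roughly 29 turbines, consistent with the "nearly 1 million homes" framing above.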
Interactions API: A unified foundation for models and agents
Blog • Ali Çevik • December 11, 2025
AI•Tech•APIs•Google•Development
Google’s Interactions API is a unified interface for interacting with Gemini models and agents. It simplifies the process of building applications that leverage AI by providing a single, consistent way to send requests and handle responses, regardless of whether you’re working with a core model or a specialized agent.
The API is designed to streamline development workflows. Instead of managing different endpoints and protocols for various AI capabilities, developers can use the Interactions API as a common foundation. This reduces complexity and accelerates the integration of advanced features like multi-step reasoning, tool use, and long-context interactions into applications.
A key aspect of the Interactions API is its support for both synchronous and asynchronous operations. This flexibility allows developers to choose the right interaction pattern for their use case, whether it’s a quick, real-time query or a longer-running task that requires background processing. The API also handles state management for conversational agents, making it easier to build coherent, multi-turn dialogues.
By offering a standardized interface, the Interactions API aims to foster a more modular and interoperable ecosystem for AI-powered applications. Developers can more easily swap components, experiment with different models or agent configurations, and maintain their code as the underlying AI technology evolves. This approach aligns with the broader industry trend towards abstraction and developer-friendly tooling in machine learning.
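To make the "one interface for models and agents" idea concrete, here is an illustrative sketch. The class and method names are invented for explanation and are not the real Interactions API SDK surface; the point is the shape: one call pattern, stateful turns, the same object whether the target is a base model or an agent.

```python
# Illustrative sketch only: the names below are invented for explanation
# and are NOT the real Interactions API SDK surface.
from dataclasses import dataclass, field

@dataclass
class Interaction:
    """One stateful conversation against either a model or an agent."""
    target: str                        # a model id or an agent id
    history: list = field(default_factory=list)

    def send(self, message: str) -> str:
        # A real client would hit one unified endpoint here, the same way
        # for base models and agents; we echo a reply for demonstration.
        self.history.append(("user", message))
        reply = f"[{self.target}] response to: {message}"
        self.history.append(("assistant", reply))
        return reply

# The same call shape serves a base model or a specialized agent, and the
# object carries conversation state across turns:
chat = Interaction(target="base-model")
chat.send("Summarize this document")
chat.send("Now shorten it")
print(len(chat.history))  # 4 entries: two user turns, two replies
```

Because the interface is uniform, swapping `target` from a model to an agent requires no change to calling code, which is the interoperability benefit the post describes.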
Bezos and Musk Race to Bring Data Centers to Space
Wsj • December 10, 2025
AI•Tech•Space•Infrastructure•Competition
Two of the world’s most prominent tech billionaires, Jeff Bezos and Elon Musk, are now competing to extend their rivalry beyond Earth’s atmosphere, with a new focus on building data centers in space. This ambitious vision aims to address the surging energy demands of artificial intelligence and cloud computing by leveraging the unique advantages of orbital infrastructure.
The Drivers Behind the Orbital Ambition
The primary catalyst for this space race is the explosive growth of AI, which requires immense computational power and energy. Terrestrial data centers are becoming increasingly constrained by land availability, local energy grids, and environmental concerns. Orbital data centers present a potential solution with several key advantages:
Uninterrupted Solar Power: In space, satellites can harness solar energy 24 hours a day without atmospheric interference or night cycles, offering a potent, consistent power source.
Natural Cooling: The cold vacuum of space provides a highly efficient medium for dissipating the enormous heat generated by computing hardware, potentially reducing or eliminating the need for energy-intensive cooling systems.
Global Latency Benefits: A constellation of orbital data centers could position computational resources optimally to serve global markets, potentially improving data transmission speeds for certain applications.
Diverging Corporate Strategies
While both entrepreneurs see the potential, their companies are approaching the challenge from different angles, reflecting their core competencies.
Elon Musk’s SpaceX is leveraging its proven prowess in launch reliability and cost reduction through reusable rockets. The company is reportedly in early discussions to launch data center modules as soon as 2025, potentially using its massive Starship vehicle. This approach focuses on using frequent, affordable launches to deploy and possibly service hardware in orbit.
Jeff Bezos’s Blue Origin is pursuing a more long-term and foundational strategy. The company is developing the “Orbital Reef” concept in partnership with Sierra Space—a vision for a scalable, mixed-use business park in low-Earth orbit. Within this ecosystem, data centers would be one of several commercial operations. Blue Origin is also investing heavily in next-generation space infrastructure, including advanced solar panels and wireless power transmission technology, which would be critical for large-scale orbital operations.
Significant Challenges and Skepticism
Despite the compelling theoretical benefits, the path to operational space-based data centers is fraught with monumental technical and economic hurdles. The industry faces intense skepticism from many experts.
Extreme Costs: The initial capital required to launch and assemble heavy, delicate computing hardware into orbit is astronomically high compared to building on Earth.
Maintenance and Reliability: Servicing and repairing hardware in space is currently prohibitively difficult and risky. Radiation in space can also degrade sensitive electronic components much faster than on Earth, requiring more robust and expensive hardware.
Data Transmission: Beaming vast quantities of data back to Earth reliably and securely through the atmosphere presents its own set of complex engineering challenges.
The overarching question remains whether the benefits of unlimited solar power and natural cooling can ever outweigh these formidable costs and complexities. Proponents believe that as AI’s energy appetite grows and launch costs continue to fall, a tipping point will be reached. Critics argue it is a solution in search of a problem, diverting resources from improving the efficiency and sustainability of terrestrial data centers.
This new frontier in the Bezos-Musk rivalry underscores a broader trend of looking to space to solve Earth-bound limitations. Whether orbital data centers become a niche for specific applications or evolve into a major pillar of global computing infrastructure will depend on which company—or perhaps both—can first overcome the steep physics and economics of doing business in orbit.
Adobe Integrates With ChatGPT
WSJ • December 10, 2025
AI•Tech•GenerativeAI•SoftwareIntegration•CreativeTools
Adobe has announced a significant integration between its suite of creative and productivity tools and OpenAI’s ChatGPT platform. This partnership will make Adobe Photoshop, Adobe Express, and Adobe Acrobat directly accessible within the ChatGPT interface. The move allows users to perform complex creative and document tasks through conversational prompts, effectively turning the AI chatbot into a powerful assistant for Adobe’s ecosystem.
Core Functionality and User Workflow
The integration is designed to streamline workflows by eliminating the need to switch between applications. Users will be able to ask ChatGPT to perform specific tasks, and the chatbot will leverage Adobe’s tools to execute them. For example, a user could instruct ChatGPT to “remove the background from this photo” or “create a social media post for a summer sale,” and the request would be carried out using the underlying Adobe software. This represents a shift from AI generating content from scratch to AI orchestrating and operating professional-grade creative software based on natural language commands.
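The orchestration pattern described above generally works through tool calling: the chat model emits a structured request, and the host application routes it to the underlying software. A minimal Python sketch of that pattern (every tool name and field below is invented for illustration; this is not Adobe’s or OpenAI’s actual schema):

```python
# Hypothetical tool definition in the common function-calling style.
# The name "photoshop_remove_background" and its fields are invented
# for illustration only.
remove_background_tool = {
    "name": "photoshop_remove_background",
    "description": "Remove the background from an uploaded image.",
    "parameters": {
        "type": "object",
        "properties": {
            "image_id": {"type": "string", "description": "ID of the uploaded image"},
        },
        "required": ["image_id"],
    },
}

def dispatch(tool_call):
    """Toy dispatcher: the chat model emits a tool call; the host app
    routes it to the underlying editor and returns a result payload."""
    if tool_call["name"] == "photoshop_remove_background":
        image_id = tool_call["args"]["image_id"]
        return {"status": "ok", "result_image_id": image_id + "_cutout"}
    return {"status": "error", "reason": "unknown tool"}

print(dispatch({"name": "photoshop_remove_background", "args": {"image_id": "img42"}}))
```

The point of the sketch is the division of labor: the model only interprets the natural-language request and fills in the schema; the professional software still does the actual editing.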
Strategic Implications for the AI and Creative Markets
This partnership is a strategic maneuver for both companies in the highly competitive generative AI landscape. For Adobe, it embeds its industry-standard tools into a massively popular AI platform, potentially expanding its user base and reinforcing the relevance of its software as AI-native workflows emerge. It also represents a defensive play against pure-play AI image generators that compete with its core creative products. For OpenAI, the integration adds significant, trusted enterprise functionality to ChatGPT, enhancing its utility for professional use cases beyond text generation and moving it closer to being a comprehensive AI operating system.
Analysis of Potential Impact and Challenges
The collaboration could democratize advanced design and document editing, making complex software more accessible to non-experts through a simple chat interface. However, it also raises questions about the future of traditional software interfaces and the depth of control users will retain when operating through an AI intermediary. The success of the integration will depend on the precision and reliability of ChatGPT’s interpretation of user requests and its ability to leverage Adobe’s tools effectively without constant manual correction. Furthermore, it sets a precedent for other major software providers to form similar alliances with leading AI platforms, potentially reshaping how software is accessed and used.
LeCun’s Alternative Future: A Gentle Guide to World-Model AI
Artificial intelligence made simple • Devansh • December 6, 2025
AI•Tech•WorldModels•LeCun•CoCreation
Barak Epstein has been a senior technology leader for over a decade. He has led efforts in cloud computing and infrastructure at Dell and now at Google, where he currently leads work to bring parallel filesystems to AI and HPC workloads on Google Cloud. Barak and I have had several interesting conversations about infrastructure, strategy, and how investments in large-scale computing can introduce new paradigms for next-gen AI (instead of just enabling more of the same, which has been the current approach). Some of you may remember his excellent guest post last year, where he argued that we need to go beyond surface-level discussions of AI and think about how advancing AI capabilities will redefine our relationship with it (and even our sense of our own identity and abilities).
Barak combines his experience as an educator and product manager to present an accessible mental model for how next-generation AI might work and how humans might collaborate with it. The piece breaks down joint embeddings, energy-based models, and world-model planning in a way anyone can follow, and it frames them around a useful idea: how to think like an “AI co-creator” instead of a casual user. It will help you develop an intuition for how the next generation of AI may work, laying the foundation for our eventual deep dives.
This is a refined and updated version of a post from my blog, Tao of AI, where we discuss the interaction between deep technical evolution in AI and its impact on social domains such as business, government, defense, and the professions. I’m delighted to be posting again at Artificial Intelligence Made Simple. Please come join our conversation.
Yann LeCun, Meta’s Chief AI Scientist, has spent several years evangelizing, and then developing (1, 2, 3), an architectural alternative to LLMs that he argues will help define the future of AI. Perhaps another day I’ll comment on whether he’s right; today I want to apply a more pragmatic lens: assuming LeCun is right, how would that change the recommendations we’ve made about how to become an AI co-creator? The goal is to think about how we would optimally interact with a specific, novel underlying model architecture. First, we’ll get to know the innovations LeCun promotes. Then we’ll apply the lens of the “AI co-creator,” discussed in my recent post on The New Literacy, to these new architectures.
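For readers who want a concrete handle on the vocabulary before the deep dives, here is a toy numerical sketch of the joint-embedding, energy-based idea. The weights, dimensions, and tanh encoders below are arbitrary stand-ins, not LeCun’s actual JEPA architecture; the intuition is that compatibility between a context and a candidate outcome is scored by an energy function in a shared embedding space, and planning means searching for low-energy outcomes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint-embedding setup: two encoders map inputs into a shared
# 8-dimensional space (all weights are random placeholders).
W_ctx = rng.normal(size=(8, 16))   # context encoder
W_tgt = rng.normal(size=(8, 16))   # target encoder
W_pred = rng.normal(size=(8, 8))   # predictor operating in embedding space

def encode(W, x):
    return np.tanh(W @ x)

def energy(x_context, y_target):
    """Energy-based compatibility score: low energy means the target
    embedding predicted from the context is close to the actual
    target embedding."""
    z_ctx = encode(W_ctx, x_context)
    z_tgt = encode(W_tgt, y_target)
    z_pred = np.tanh(W_pred @ z_ctx)          # predict the target embedding
    return float(np.sum((z_pred - z_tgt) ** 2))

x = rng.normal(size=16)   # "what I observe"
y = rng.normal(size=16)   # "a candidate outcome"
print(energy(x, y))       # non-negative scalar; a planner would search for y minimizing it
```

Note that the prediction happens in embedding space, not pixel or token space — that is the core departure from generative LLMs that the article unpacks.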
Foundation Model Consolidation Is No Longer a Forecast — It’s a Mechanical Outcome
FourWeekMBA • December 7, 2025
AI•Funding•Foundation Models•Consolidation•Compute Economics
Every hype cycle eventually hits physics. In the foundation model layer, that moment arrives faster, and with far more force, than most founders expect. What looks like “competition” today — 20+ players, dozens of emergent labs, a thriving open-source frontier — is structurally misleading. The deeper mechanics of capital, compute, and talent all point toward a narrow set of winners and a long tail of specialized, derivative, or commoditized players.
What initially feels like a vibrant, permissionless race is in fact a capital expenditure arms race with compounding advantages. The entities that can consistently raise tens of billions, secure frontier compute at scale, and attract and retain dense clusters of world-class research talent start to pull away from the rest of the field. Over time, they don’t just outspend; they out-learn, out-optimize, and out-distribute, creating feedback loops that make genuine catch-up increasingly implausible.
Meanwhile, the surface-level diversity in model APIs, benchmarks, and marketing narratives obscures how similar the underlying economics are. Foundation models demand enormous, lumpy investments up front, with uncertain and highly skewed payoffs. This dynamic resembles other infrastructure-heavy industries where consolidation around a few dominant platforms has been the rule, not the exception. The more the ecosystem matures, the more this mechanical logic asserts itself.
As costs rise and performance deltas narrow, many nominal “competitors” are pushed into roles as customers, fine-tuners, or distribution channels for the true foundation-layer incumbents. Open-source ecosystems remain powerful, but they increasingly orbit around weights, research, and tooling seeded—directly or indirectly—by the few actors that control the largest training runs. In this world, differentiation shifts away from training yet another general-purpose model and toward owning data, workflows, domain distribution, or regulatory positioning.
For founders, investors, and policymakers, the implication is clear: treating the foundation model landscape as a broad, level playing field is a category error. The apparent plurality we see today is a transient phase in a process whose endpoint is much more concentrated. Consolidation is not just likely; it is baked into the physics of the problem.
The Rise of Neolabs: Where the Next AI Breakthroughs Will Come From & 11 AI Labs to follow
The AI Opportunity • December 7, 2025
AI•Funding•FrontierLabs•GenerativeAI•AIResearch
Today I want to explore one type of AI company that is playing a huge role in the development of AI and is key to staying ahead of the curve: AI Neolabs.
Neolabs are not companies in the traditional sense. They operate more like private research institutions (founded by former OpenAI, DeepMind, Anthropic and Google Brain researchers) with the freedom to explore ideas that would be impossible inside a typical startup or large lab.
Their goal is not to ship a product quickly but to widen the space of what AI can do.
Here is a clear overview of the 11 Neolabs you need to know:
1. Black Forest Labs
The founding team is led by Robin Rombach, Andreas Blattmann, and Patrick Esser. Their background is rooted in years of university research that directly led to the foundation of modern visual AI. They are globally known as the original co-creators of the Stable Diffusion models. Their work defined the initial frontier for open-source image generation.
Black Forest Labs was created to translate that research into a commercial lab. The core technical goal is to move beyond simple image output towards a unified “visual intelligence” that integrates perception, generation, and reasoning. Their flagship model, FLUX, delivers high-resolution image generation and multi-reference editing, with a critical focus on visual consistency across complex scenes.
Founding Team
Original co-creators of Stable Diffusion: Robin Rombach, Andreas Blattmann (Linkedin), and Patrick Esser.
Mission
Develop frontier generative models for image and video (FLUX), focusing on the evolution toward unified visual intelligence.
Funding & Valuation
Funding: $300M in 2024;
Valuation: $3.25B.
2. Humans&
Eric Zelikman has spent years exploring how models can reason about intent rather than simply output text. At Stanford, his work on self-reflective reasoning pushed models to critique their own intermediate steps. At xAI, he deepened this line of research with colleagues who shared the belief that true alignment comes from modeling human values, not from post-processing techniques.
Humans& is the direct continuation of this path. The team is building models designed to infer user intent, long-term preferences and patterns of decision-making.
Their goal is not assistants that execute commands, but systems that collaborate, anticipate and adapt. The lab exists because Zelikman’s academic and industry experience converge toward the same conclusion: intelligence becomes useful when it becomes human-aware.
Founding Team
Founded by Eric Zelikman (Linkedin), former xAI researcher and Stanford PhD student.
Mission
Developing more human-aligned models able to understand intentions, values and context.
Funding & Valuation
Funding: $1B in 2025;
Valuation: ~$4B.
3. Isara
Eddie Zhang spent his time at OpenAI focused on safety, control and how agents behave at scale. He worked on systems meant to supervise complex model behaviors and early prototypes of multi-agent coordination. His belief was consistent: real-world tasks are handled better by many small agents working together than by a single general model.
Isara grows directly out of that conviction. The lab is building infrastructure where large networks of agents can collaborate on operational workflows like customer support, commerce automation and internal process handling.
Everything is built around orchestration, monitoring and dynamic correction. It is a natural extension of the work Zhang led inside OpenAI, now turned into a full-scale research effort.
Founding Team
Co-founded by Eddie Zhang (Linkedin), former OpenAI safety researcher.
Mission
AI that understands large volumes of human conversations and builds infrastructure for large-scale AI agent coordination.
Funding & Valuation
Funding: not disclosed; reportedly hundreds of millions in 2025;
Valuation: ~$1B.
4. Richard Socher’s Lab
Richard Socher has lived the entire arc of applied AI. From Stanford research to founding MetaMind, from leading AI at Salesforce to running You.com, he experienced the same limit repeatedly: improving models depends on slow, human-driven experimentation cycles. Architecture variations, tuning, dataset design and evaluation always hit throughput constraints.
His new lab is designed to accelerate that loop. Instead of automating small steps, it aims to assist the full research stack: generating model ideas, organizing experiments, running controlled comparisons and surfacing the most promising directions.
The purpose is not to replace researchers but to give them a faster way to explore the space of possible models. It’s the natural outcome of Socher’s decade spent wrestling with slow iteration.
Founding Team
Created by Richard Socher (Linkedin), former chief scientist at Salesforce and founder of MetaMind.
Mission
Automating parts of AI research itself, with systems aimed at accelerating model development.
Funding & Valuation
Funding: $1B under discussion in 2025;
Valuation: not disclosed.
OpenAI says it’s turned off app suggestions that look like ads
TechCrunch • December 7, 2025
AI•Tech•ChatGPT•Advertising•UserExperience
While OpenAI continues to insist that there are currently no ads — or tests for advertising — live in ChatGPT, the company’s chief research officer Mark Chen also acknowledged that the company “fell short” with recent promotional messages and is working to improve the experience.
Chen and other OpenAI executives were responding to posts from ChatGPT’s paying subscribers who complained about seeing promotional messages for companies like Peloton and Target.
In response, the company said it was only testing ways to show apps built on the ChatGPT app platform that it announced in October, with “no financial component” to those suggestions. (One of the users who’d complained initially about the ads responded skeptically, writing, “Bruhhh… Don’t insult your paying users.”)
“I’m in ChatGPT (paid Plus subscription), asking about Windows BitLocker and it’s F-ing showing me ADS TO SHOP AT TARGET. Yeah, screw this. Lose all your users. pic.twitter.com/2Z5AG8pnlJ” — Benjamin De Kraker (@BenjaminDEKR), December 3, 2025
Similarly, ChatGPT head Nick Turley posted Friday that he was “seeing lots of confusion about ads rumors in ChatGPT.”
“There are no live tests for ads – any screenshots you’ve seen are either not real or not ads,” Turley wrote. “If we do pursue ads, we’ll take a thoughtful approach. People trust ChatGPT and anything we do will be designed to respect that.”
Earlier that same day, however, Chen responded in a more apologetic tone, acknowledging that the controversy isn’t just a matter of user confusion.
“I agree that anything that feels like an ad needs to be handled with care, and we fell short,” he wrote. “We’ve turned off this kind of suggestion while we improve the model’s precision. We’re also looking at better controls so you can dial this down or off if you don’t find it helpful.”
Earlier this year, former Instacart and Facebook executive Fidji Simo joined OpenAI as CEO of Applications and was widely expected to build up the company’s advertising business. However, the Wall Street Journal reported this week that a recent memo from OpenAI CEO Sam Altman declared a “code red,” prioritizing work to improve the quality of ChatGPT and pushing back other products, including advertising.
China
Chinese AI in 2025, Wrapped
ChinaTalk • Irene Zhang • December 11, 2025
China•Technology•AI•Semiconductors•OpenSource
The year 2025 was a transformative period for Chinese artificial intelligence, marked by the global ascendance of its open-source models, intense geopolitical maneuvering in the semiconductor sector, and a significant shift in domestic policy and corporate ambition towards AGI. The year began with the seismic release of DeepSeek-R1 and concluded with Chinese models like Qwen becoming foundational to Silicon Valley’s startup ecosystem, fundamentally reshaping perceptions of global AI competition.
The DeepSeek Moment and the Open-Source Paradigm
The January release of DeepSeek-R1, a cost-efficient model using a Mixture-of-Experts (MoE) architecture, forced a global re-evaluation of China’s frontier AI capabilities and the economics of model scaling. Funded by a Hangzhou-based quantitative trading firm and built by domestic engineering talent, DeepSeek demonstrated that world-class models could emerge from outside the traditional Silicon Valley ecosystem. Its success catalyzed an open-source race dominated by Chinese companies. As noted in the article, “Nearly every notable model released by Chinese companies in 2025 has been open source,” with engineers and executives crediting DeepSeek for setting this orientation. This wave included models like Kimi’s K2 in July, followed by releases from Z.ai, Qwen, and MiniMax, establishing open source as a primary strategy for expanding technical influence globally.
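For context on why MoE makes a model like DeepSeek-R1 cost-efficient: only a few “expert” subnetworks run per token, so compute scales with the number of active experts rather than the total parameter count. A toy top-k routing sketch (the dimensions, gating, and experts here are invented for illustration, not DeepSeek’s implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def moe_forward(x, experts, gate_w, k=2):
    """Toy top-k MoE layer: route the input to its k highest-scoring
    experts and mix their outputs by softmax-normalized gate weights.
    The other experts are never evaluated, which is the compute saving."""
    scores = gate_w @ x                      # one gating score per expert
    top = np.argsort(scores)[-k:]            # indices of the top-k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                 # normalize over the chosen experts
    return sum(w * experts[i](x) for w, i in zip(weights, top))

d = 8
# Four placeholder "experts", each just a random linear map (closure
# capture keeps each expert's weight matrix fixed).
experts = [(lambda W: (lambda x: W @ x))(rng.normal(size=(d, d))) for _ in range(4)]
gate_w = rng.normal(size=(4, d))             # gating network: 4 experts
out = moe_forward(rng.normal(size=d), experts, gate_w)
print(out.shape)  # (8,)
```

With 4 experts and k=2, half the expert compute is skipped on every call; frontier MoE models push this ratio much further, which is what changed the economics of scaling.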
Corporate Ambition and the AGI Discussion
The year saw a pronounced “vibe shift” within Chinese tech, with industry leaders beginning to frame their work as pivotal to the nation’s destiny. A key moment was Alibaba CEO Eddie Wu’s landmark speech at the Yunqi Conference in September, which sketched a prophetic vision for transformative AI. This corporate rhetoric aligned with high-level political attention, including a Politburo “study session” on AI where the invited experts signaled a focus on transformative, AGI-aligned research rather than purely applied work. The debate over whether China genuinely “believes in” AGI was a recurring theme, with arguments presented from both believers and skeptics.
The Volatile Chip War
The US-China technology conflict, particularly over advanced semiconductors, was characterized by dramatic policy swings throughout 2025. A complex timeline of actions and reactions unfolded:
January: The Biden administration issued its “AI diffusion” export rule.
April: The Bureau of Industry and Security (BIS) closed loopholes in chip export controls.
July: The “Summer of Jensen” saw Nvidia’s CEO secure permission to resume H20 chip sales to China, followed by a backlash from Chinese regulators concerned about remote “kill switches,” and a subsequent US-China deal where the US government would receive 15% of revenue from such sales.
August: Reports emerged of the US embedding trackers in high-end chips to prevent diversion to China.
October: A Trump-Xi summit deal temporarily suspended new US “Affiliates Rule” restrictions in exchange for China pausing new rare earth export controls.
December: The Trump administration announced it would permit Nvidia to sell more advanced H200 chips to China.
Amid this turbulence, Huawei continued its push to build an alternative ecosystem to Nvidia’s CUDA, while China’s pursuit of indigenous High-Bandwidth Memory (HBM) advanced in the face of lithography export controls.
Policy: Domestic Integration and Global Governance
Beijing articulated a clear, two-pronged policy vision. Domestically, the State Council’s “AI+ Plan,” released in August, was a landmark document pushing for comprehensive AI diffusion across all economic sectors and government ministries, framing it as a national strategic priority. It notably endorsed “emotional consumption” as a valid AI application. Internationally, China released a “Global AI Governance Action Plan” in July, aiming to position itself as a leader in setting AI standards, particularly for the developing world, and warning against global technological fragmentation. In contrast, the Cyberspace Administration of China’s (CAC) AI-generated content labeling requirements, enacted in September, were largely ineffective in practice, with widespread non-compliance on major platforms like Xiaohongshu and WeChat.
Robotics and Embodied AI
Robotics emerged as a major focus, buoyed by its first-ever mention in the Chinese Government Work Report and a white-hot competitive landscape with at least ten companies releasing humanoid robot models. The field sits at the intersection of China’s manufacturing prowess and advances in vision-language models. However, concerns about a potential investment bubble persist due to a lack of clear business models, even as Western policymakers begin to fret about the market share of firms like Unitree.
Implications and Looking Ahead
The events of 2025 suggest Chinese AI is pursuing a distinct path: leveraging open-source models for global influence, aggressively navigating chip restrictions, and aligning corporate AGI ambition with state policy for sector-wide integration. The international expansion of companies like Manus, which relocated to Singapore to access global capital, highlights the tension between Chinese technological roots and global market ambitions. As Chinese models become deeply embedded in global developer workflows, the coming years will test the resilience of this strategy against evolving geopolitical and regulatory headwinds.
Trump Allows H200 Sales to China, The Sliding Scale, A Good Decision
Stratechery • Ben Thompson • December 10, 2025
Geo Politics•USA•Semiconductors•Export Controls•China
The article analyzes the Trump administration’s decision to allow the sale of Nvidia’s H200 AI chips to China, framing it as a significant reversal of Biden-era export controls and a return to a more traditional U.S. policy framework. The author argues this is a strategically sound move that acknowledges the practical limitations of a full embargo while aiming to maintain a competitive edge for American technology.
The Policy Reversal and the “Sliding Scale”
The core of the decision is a shift from the Biden administration’s broad, restrictive approach to a more nuanced “sliding scale” strategy. The previous policy sought to severely limit China’s access to advanced semiconductors, particularly those crucial for training cutting-edge AI models. The new approach permits the sale of the H200—a powerful but not the absolute most advanced chip—while presumably withholding the next-generation Blackwell architecture (B100/B200) chips. This creates a calculated gap, allowing U.S. companies like Nvidia to generate revenue from the Chinese market while attempting to keep China’s AI development at least one step behind the frontier.
Economic Realities and the Futility of a Full Blockade
A central argument supporting the decision is the practical impossibility of a complete technological blockade. The author suggests that a total ban simply drives China to accelerate its own domestic chip development and find alternative suppliers, ultimately fostering the independence it seeks to prevent. By allowing sales of current-generation technology, the U.S. maintains economic leverage and keeps Chinese firms tethered to the American ecosystem, generating profits that can be reinvested into the next cycle of innovation. The policy accepts that China will obtain advanced computing power but aims to control the pace.
A Return to Historical Precedent
The article positions this not as a novel concept but as a reversion to a longstanding, successful U.S. strategy used during the Cold War. The historical model involved consistently staying several generations ahead of rivals in key technologies (like jet engines), rather than attempting an unenforceable total ban. This “sliding scale” or “moving target” approach is seen as more sustainable and effective than a static wall, which the adversary is inevitably motivated and eventually able to breach.
Analysis and Implications
The author concludes that this is a “good decision” because it aligns policy with reality. It balances national security concerns with economic interests, recognizing that American tech leadership is fueled by global market success. The decision also imposes a clearer strategic cost on China: it can access powerful, but not frontier, technology, forcing it to choose between using readily available U.S. chips or spending vast resources to duplicate slightly inferior products. The risk, however, is that the permitted technology may still be sufficient for China to make significant advances in applied AI, and managing the “sliding scale” requires continuous and precise calibration to be effective.
Interview of the Week
How Capitalism Can Save Capitalism: The Case for Stakeholder Capitalism
Keen On • December 9, 2025
Venture•Interview of the Week
The article presents a conversation with venture capitalist Seth Levine, focusing on the evolution of capital and the shifting dynamics within the venture capital industry. Levine argues that the traditional VC model is undergoing significant transformation, driven by changes in the sources of capital, the strategies of funds, and the broader economic environment. The discussion centers on how these evolutions are reshaping investment theses, founder expectations, and the very structure of the venture asset class.
The Changing Landscape of Capital Sources
A primary theme is the shift in where venture capital comes from. Levine highlights the growing influence of non-traditional players, particularly large asset managers and crossover funds, which have moved aggressively into late-stage private company investing. This influx has altered market dynamics, often compressing the time between funding rounds and inflating valuations. Concurrently, there’s a noted trend of more capital being concentrated in fewer, larger funds, creating a “barbell effect” in the industry. Levine also points to several related dynamics:
The rise of “tourist capital” from public market investors during boom cycles, which can retreat quickly during downturns, adding volatility.
An increased focus on the role of Limited Partners (LPs) and their changing appetite for venture risk and liquidity timelines.
The impact of quantitative easing and low interest rates in the past decade, which fueled the growth of mega-funds and mega-rounds.
Evolution of Fund Strategy and Founder Relations
Levine discusses how successful VC firms are adapting their strategies in response to these market changes. The conversation moves beyond simply writing checks to emphasizing the value-add components of venture capital. Firms are increasingly differentiated by their ability to provide operational support, talent networks, and strategic guidance. Furthermore, Levine touches on the evolving relationship between VCs and founders, noting a trend toward more founder-friendly terms and a greater emphasis on alignment, especially as the competition for top-tier deals remains fierce.
Implications for the Future of Venture Capital
The analysis suggests the venture industry is maturing and segmenting. Levine implies a future where there is clear stratification between large, multi-stage asset managers and smaller, niche-focused firms that compete on specialized expertise and access. The cycle of capital availability is also a critical focus, with the interview conducted in a period of market correction following the exuberance of 2021. This leads to a discussion on the return to fundamentals, with a renewed emphasis on unit economics, sustainable growth, and paths to profitability, as opposed to growth-at-all-costs.
The overarching conclusion is that venture capital is not a static industry. Its evolution is a natural response to market forces, technological change, and the lifecycle of the asset class itself. For entrepreneurs, this means a more complex but potentially more supportive landscape. For investors, it demands adaptation, specialization, and a long-term perspective that can navigate the cyclical nature of capital availability. The “evolution” referenced in the title is framed as an ongoing process essential for the health and relevance of venture capitalism itself.
A reminder for new readers. Each week, That Was The Week, includes a collection of selected essays on critical issues in tech, startups, and venture capital.
I choose the articles based on their interest to me. The selections often include viewpoints I can't entirely agree with. I include them if they make me think or add to my knowledge. Click on the headline, the contents section link, or the ‘Read More’ link at the bottom of each piece to go to the original.
I express my point of view in the editorial and the weekly video.