Contents
Editorial: Is China the New America? Is there an AI Race?
Essay
China vs US
Bubble?
AI
Venture
Interview of the Week
Startup of the Week
Post of the Week
Editorial: Is China the New America?
China’s economy has been growing as a percentage of the global economy for decades. Over ten years ago, it overtook the United States in GDP measured by purchasing power parity (PPP) — a metric that compares what each country’s currency can actually buy. In nominal terms, China’s share of global GDP now stands at nearly 17%, compared with the U.S. at 26%.
PPP reflects real domestic consumption and production capacity, not just exchange-rate value. On a per-capita basis, however, the gap remains large: the U.S. stands at about $86,000 in GDP per person, against roughly $13,000 for China in nominal terms, or about $27,000 adjusted for PPP (the U.S. figure is essentially unchanged on a PPP basis, since the comparison is benchmarked to American prices). The European Union sits between them, at roughly $54,000 per person.
Historical Parallels
The last time the world witnessed such a shift in global economic weight was between 1870 and 1918, when the United States caught up to and then surpassed the British Empire in share of world GDP. That transformation marked the beginning of a new global order — one driven by industrialization, innovation, and scale.
These transitions are cyclical. Economies rise, mature, and then slow as others accelerate. High-investment economies typically expand faster than established ones, benefiting from new technologies and lower production costs.
Today, China plays that same role. Its major cities are thriving industrial and financial centers. Investment rates remain well above those of the U.S., and the country excels in manufacturing capacity, infrastructure, and energy production — especially in renewables. As AI accelerates demand for compute power and energy, China’s advantages in these foundational sectors are growing more pronounced.
The Competing Narratives
This week Nvidia CEO Jensen Huang predicted that “China will win the AI race”, citing Beijing’s regulatory flexibility, industrial-scale mobilization, and cheaper energy costs for data centers. Conversely, Silicon Valley figures such as David Sacks, Marc Andreessen, and Ben Horowitz argue that the U.S. can retain leadership if it re-embraces open innovation and pro-growth energy policies.
In the a16z conversation “How America Wins the AI Race Against China”, several key ideas stand out:
AI & Crypto Synergy: AI requires distributed computing and trustworthy data; crypto provides incentives and verification.
Regulatory Capture: U.S. incumbents like Anthropic lobby for tight regulation that could freeze open-source competitors out.
Permissionless Innovation: The U.S. must preserve its culture of experimentation that birthed the internet and microprocessor revolutions.
Energy & Infrastructure: Expanding nuclear and natural gas capacity, and cutting NIMBY-style data center restrictions, is essential.
Open Source Advantage: Open models remain America’s “secret weapon” for adaptability and resilience.
This view aligns with the historic American formula: openness, creativity, and individual initiative. Yet it contrasts sharply with the Financial Times analysis by John Thornhill and Caiwei Chen, who describe a China already outpacing the U.S. in AI adoption and diffusion.
China’s Execution Model
According to Thornhill and Chen, China has the means, motive, and opportunity to dominate AI. It holds 70% of global AI patents and more than 22% of academic citations, while the U.S. share in top-tier AI research is slowly declining. Domestic policy pushes AI literacy into every layer of education and governance, ensuring an AI-ready workforce.
China’s leading models — DeepSeek-V3 and Alibaba’s Qwen 2.5-Max — match or surpass Western systems in algorithmic efficiency. Unlike U.S. tech giants that guard proprietary systems, China has leaned into open-weight models, driving broad diffusion across fintech, logistics, robotics, and drones.
Huang’s view supports this: Beijing’s pragmatic deregulation, lower energy costs, and unified national strategy are catalyzing mass-scale AI deployment. By contrast, the U.S. faces policy fragmentation and escalating energy prices that constrain growth.
Beyond the AI Race Frame
It is easy to feel whiplash from the dueling headlines — each predicting either a Chinese ascendance or an American resurgence. But framing this as a zero-sum race misunderstands the nature of AI itself.
AI is not a discrete contest with a finish line. It is a continuum of innovation, an evolving infrastructure that will shape productivity, culture, and society for decades. Both the U.S. and China are building toward different strengths: America in frontier innovation and software ecosystems, China in industrial-scale implementation and manufacturing integration.
As Box CEO Aaron Levie noted, his U.S.-based company already uses Alibaba’s Qwen open model in production systems. Likewise, OpenAI’s projected $100 billion in 2027 revenue will generate value for developers and enterprises worldwide, including in China. The global AI ecosystem is becoming deeply interdependent.
Meanwhile, China continues to advance domestically. The release of MiniMax 2, a new Chinese reasoning model surpassing DeepSeek in accuracy and efficiency, exemplifies the speed of iteration. It is open-source, free, and capable of running on consumer hardware.
The Bigger Picture
Both the Financial Times and a16z camps are right — but only partially. China is executing horizontally at scale; the U.S. innovates vertically at the frontier. One focuses on systems, the other on breakthroughs. Both are indispensable.
The truth is that AI will never be “finished.” It will evolve continuously, with contributions from thousands of organizations across nations. The benefits will not be confined by borders. Every new model, tool, and application advances global productivity, from Shenzhen to Palo Alto.
And none of this will halt China’s steady rise in global GDP share or its climb in per-capita wealth. Closing the per-capita gap with the U.S. will take time — perhaps decades — but it will happen. By then, America will likely have moved further ahead in quality of life and innovation capacity. Economic and technological leadership are not singular; they coexist and cycle.
Closing Thought
The real opportunity lies not in rivalry but in reciprocity. If the 20th century rewarded nations that industrialized, the 21st will reward those that humanize intelligence — aligning technology with global well-being rather than national dominance. Let’s celebrate AI’s capacity to enhance human life, no matter its origin, and embrace a world where collaboration, not competition, defines progress.
Essay
The AI Shift: should LLMs be allowed in the classroom?
Ft • November 6, 2025
Education•Curriculum•LLMs•AcademicIntegrity•PersonalizedLearning•Essay
The central question is framed starkly: “Is the technology a dangerous crutch or a personal tutor?” The debate over large language models in classrooms turns on whether they erode independent thinking or unlock individualized, scalable support. The core tension is not whether students will use AI, but how educators can channel its capabilities toward learning rather than outsourcing cognition. At stake are assessment integrity, equity of access, and the future design of curricula that incorporate AI literacy as a foundational skill alongside reading, writing, and numeracy.
Why LLMs Appeal in Education
Personalized scaffolding: LLMs can adapt explanations to a student’s level, offer step-by-step guidance, and provide multiple modalities (examples, analogies, Socratic prompts) on demand.
Feedback at scale: Instant commentary on drafts, code, and problem-solving approaches can reduce turnaround times and free teachers to focus on higher-order coaching.
Inclusion and accessibility: For multilingual learners and students with disabilities, AI can translate, rephrase, summarize, or generate accessible formats, mitigating barriers to participation.
Teacher productivity: Lesson planning, rubric generation, formative quiz creation, and differentiation can be accelerated, potentially improving instructional quality and teacher wellbeing.
Risks of Dependency and Misdirection
Cognitive offloading: Routine reliance on AI for first drafts or problem solutions may dilute practice in core skills—argumentation, proof construction, and original synthesis.
Hallucinations and bias: Fabricated facts or subtly biased outputs can mislead learners, especially when students over-trust fluent prose as authority.
Academic integrity: Invisible assistance complicates authorship, authenticity, and fairness; conventional plagiarism detection is poorly matched to generative outputs.
Unequal access: Device quality, connectivity, and paid features risk widening attainment gaps if AI becomes a prerequisite for success.
Privacy and safety: Student data, prompt histories, and sensitive contexts require robust guardrails and transparent data handling.
Pedagogical Redesign, Not Prohibition
AI literacy: Teach students to critique prompts, verify claims, identify hallucinations, and document AI contributions—treating tools as sources to be evaluated.
Assessment reform: Shift weight to process evidence (planning notes, prompt logs, version histories), in-class performance, oral defenses, and authentic, project-based tasks.
Transparent norms: Specify “allowed, encouraged, and prohibited” uses by assignment type; require disclosure statements outlining what AI did and what the student did.
Model alignment with curriculum: Constrain tools to course materials where possible to improve relevance and reduce error; prefer institutionally governed systems over public chatbots for sensitive work.
Governance and Implementation
Data protection and safety-by-design: Minimize data collection, disable training on student prompts, and prefer on-device or institution-hosted models when feasible.
Quality assurance: Establish accuracy, bias, and robustness checks; require clear documentation of limitations; integrate human review for high-stakes feedback.
Professional development: Equip educators with practical workflows—co-writing rubrics, generating exemplars with annotations, and building question banks that probe reasoning, not recall.
Equity and Access Considerations
Guarantee baseline access (devices, accounts, and support) to prevent AI from becoming a new private tutor for the already advantaged.
Leverage AI for universal design: On-demand translation, reading-level adjustment, and multimodal explanations can raise the floor if implemented institution-wide.
Practical Classroom Guidelines
Start with low-stakes, formative contexts; require citations or links for claims AI provides; encourage “explain your reasoning” prompts.
Mandate reflection: Students should summarize what they learned with and without the tool, clarifying how AI shaped their understanding.
Monitor and iterate: Collect evidence on learning outcomes, not tool usage volume, and adjust policies accordingly.
Bottom Line
Used passively, LLMs can become a crutch that masks gaps in understanding and undermines trust in assessment. Used deliberately—with transparency, verification, and redesigned pedagogy—they function as a tireless personal tutor and teacher’s aide. The decisive factor is institutional choice: clear norms, equitable access, and assessments that reward thinking rather than mere output.
Key takeaways:
Treat AI as augmentative, not substitutive.
Rebuild assessment to value process and explanation.
Center equity, privacy, and teacher training in any rollout.
How AI Browsers Sneak Past Blockers and Paywalls
Cjr • Aisvarya Chandrasekar and Klaudia Jaźwińska • October 30, 2025
Media•Publishing•AI•Paywalls•Journalism•Essay
Last week, OpenAI released Atlas, which joins a growing wave of AI browsers, including Perplexity’s Comet and Microsoft’s Copilot mode in Edge, that aim to transform how people interact with the Web. These AI browsers differ from Chrome or Safari in that they have “agentic capabilities,” or tools designed to execute complex, multistep tasks such as “look at my calendar and brief me for upcoming client meetings based on recent news.”
AI browsers present new problems for media outlets, because agentic systems are making it even more difficult for publishers to know and control how their articles are being used. For instance, when we asked Atlas and Comet to retrieve the full text of a nine-thousand-word subscriber-exclusive article in the MIT Technology Review, the browsers were able to do it. When we issued the same prompt in ChatGPT’s and Perplexity’s standard interfaces, both responded that they could not access the article because the Review had blocked the companies’ crawlers.

Atlas and Comet were able to read the article for two reasons. The first is that, to a website, Atlas’s AI agent is indistinguishable from a person using a standard Chrome browser. When automated systems like crawlers and scrapers visit a website, they identify themselves using a digital ID that tells the site what kind of software is making the request and what its purpose is. Publishers can selectively block certain crawlers using the Robots Exclusion Protocol—and indeed many do.
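To make that identification mechanism concrete, here is a minimal sketch, in Python, of the Robots Exclusion Protocol check a self-identifying crawler performs before fetching a page. The publisher URL and the user-agent strings are hypothetical placeholders, not the identifiers used by any specific product.

```python
# A minimal sketch of the Robots Exclusion Protocol check a self-identifying
# crawler performs before fetching a page. The URL and user-agent strings are
# hypothetical examples, not the identifiers any specific product uses.
from urllib import robotparser

ROBOTS_URL = "https://example-publisher.com/robots.txt"

parser = robotparser.RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # download and parse the site's robots.txt

article = "https://example-publisher.com/2025/11/subscriber-exclusive-feature"

for user_agent in ["SomeAICrawlerBot", "AnotherAIBot", "Mozilla/5.0 (Chrome)"]:
    allowed = parser.can_fetch(user_agent, article)
    print(f"{user_agent!r}: {'allowed' if allowed else 'blocked'}")

# The protocol is purely advisory and only applies to visitors that announce
# themselves. An agentic browser reporting a standard Chrome user agent is
# never matched by crawler-specific rules, which is the loophole described above.
```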
But as TollBit’s most recent State of the Bots report states, “The next wave of AI visitors [are] increasingly looking like humans.” Because AI browsers like Comet and Atlas appear in site logs as normal Chrome sessions, blocking them might also prevent legitimate human users from accessing a site. This makes it much more difficult for publishers to detect, block, or monitor these AI agents.
Furthermore, the MIT Technology Review, like many publishers including National Geographic and the Philadelphia Inquirer, uses a client-side overlay paywall: the text loads on the page but is hidden behind a pop-up that asks a user to subscribe or log in. While this content is invisible to humans, AI agents like Atlas and Comet can still read it. Other outlets like the Wall Street Journal and Bloomberg use server-side paywalls, which don’t send the full text to the browser until a user’s credentials are verified. Once a user is logged in, the AI browser can read and interact with the article on their behalf.
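The difference between the two paywall designs can be sketched from the publisher's side. The hypothetical handlers below (plain Python, with made-up field names and no particular web framework) show why an overlay paywall leaves the full text readable to any agent that can see the page, while a server-side paywall never sends it to an unauthenticated visitor in the first place.

```python
# A schematic comparison of the two paywall designs described above.
# Function names, fields, and the overlay flag are illustrative only.

def serve_server_side(article: dict, is_subscriber: bool) -> dict:
    """Server-side paywall: the full text never leaves the server
    unless the reader's credentials check out."""
    if is_subscriber:
        return {"headline": article["headline"], "body": article["body"]}
    return {"headline": article["headline"], "body": article["teaser"]}


def serve_client_side(article: dict, is_subscriber: bool) -> dict:
    """Client-side overlay paywall: the full text is always in the
    response; a flag merely tells the page to draw a subscribe pop-up."""
    return {
        "headline": article["headline"],
        "body": article["body"],          # present in the page either way
        "show_overlay": not is_subscriber,
    }


article = {
    "headline": "Subscriber-exclusive feature",
    "teaser": "First two paragraphs...",
    "body": "Full nine-thousand-word text...",
}

# A browser-based agent can read whatever the response contains. With the
# overlay design, that includes the full body even for non-subscribers.
print(serve_server_side(article, is_subscriber=False)["body"])  # teaser only
print(serve_client_side(article, is_subscriber=False)["body"])  # full text
```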

OpenAI says that, by default, it does not train its large language models on the content users encounter in Atlas unless they opt in to “browser memories.” Pages that have blocked OpenAI’s scraper will still not be used for training, but “ChatGPT will remember key details from content you browse.” As Geoffrey Fowler of the Washington Post wrote last week, “the details of what Atlas will or won’t remember get confusing fast.” It remains unclear how much OpenAI is learning from paywalled content that users are unlocking for the agents to read.
We did find that Atlas seems to avoid reading content from media companies that are currently suing OpenAI. (We did not observe the same with Comet.) However, when we prompted Atlas to interact with these publications, it employed various work-arounds to try to satisfy our requests.
For instance, when we prompted Atlas to summarize an article from PCMag, whose parent company Ziff Davis sued OpenAI for copyright infringement in April, the agent produced a composite summary, drawing on tweets about the article, syndicated versions, citations in other outlets, and related coverage across the Web. Online research expert Henk van Ess first documented this behavior in July, observing that AI agents can reverse-engineer an article using “digital breadcrumbs.”

When we asked Atlas to summarize an article from the New York Times, which is also suing OpenAI, it took a different approach. Instead of reconstructing the article, it generated a summary based on reporting from four alternative outlets—The Guardian, the Washington Post, Reuters, and the Associated Press—three of which have licensing agreements with OpenAI.
By reframing the user’s request from a specific article to a general topic, the agent reshapes what that user ultimately reads. Even when a media outlet is able to prevent the agent from accessing its content, it faces a catch-22: the agent simply suggests alternative coverage.
China vs US
The State of AI: is China about to win the race?
Ft • November 3, 2025
GeoPolitics•Asia•AI Competition•Industrial Policy•Essay•China vs US
Overview and Central Claim
The piece argues that while global attention fixates on the United States’ commanding position in artificial intelligence, Beijing possesses the “means, motive, and opportunity” to overtake. It frames the AI race as a strategic contest whose outcome will shape economic power, security architectures, and technological standards for the next decade.
Why Attention Skews to the United States
The discussion acknowledges the US lead in frontier model labs, top-tier research universities, venture funding, and an ecosystem of cloud, chips, and talent.
High-visibility breakthroughs and commercial deployments centered in US companies have reinforced a perception of durable American dominance, guiding investor sentiment and policy debates worldwide.
Beijing’s “Means”
State alignment: Central and provincial authorities are portrayed as mobilizing capital, procurement, and regulatory support to accelerate AI adoption across industry and government.
Scale advantages: A vast domestic market, dense manufacturing base, and large developer communities create feedback loops for rapid iteration and deployment.
Integration capability: The piece highlights China’s strength in converting research into applied systems across logistics, finance, e-commerce, and urban services, compressing the path from pilot to mass rollout.
Beijing’s “Motive”
Strategic autonomy: Reducing reliance on foreign technologies and supply chains is presented as a core driver, with AI seen as foundational to economic security and national resilience.
Economic upgrading: AI is positioned as a lever to escape margin pressure by moving up the value chain—from assembly to intelligent design, production, and services.
Security competition: The article situates AI within broader great-power rivalry, where leadership influences cyber capabilities, intelligence, and defense technologies.
Beijing’s “Opportunity”
Policy windows: Industrial policy, public-sector demand, and city-level experimentation create testbeds that can scale quickly once solutions prove effective.
Adoption-led edge: Even if constrained on the most advanced chips, faster diffusion into real-world workflows can yield productivity gains that compound over time.
Emerging markets: Partnerships, infrastructure projects, and platform expansion in the Global South present channels for standard-setting and market share beyond the West.
Constraints and Uncertainties
Hardware bottlenecks: Access to cutting-edge semiconductors and advanced manufacturing tools could slow the training of frontier models and certain high-performance applications.
Research openness: Limits on cross-border collaboration and information flows may affect publication pipelines and talent exchange.
Regulatory trade-offs: Efforts to steer content and mitigate societal risks might impose friction on experimentation or model generality, depending on how rules are enforced.
Implications
Economic: Leadership in applied AI could tilt global supply chains, with knock-on effects for pricing power, services exports, and platform ecosystems.
Standards and governance: Whichever bloc diffuses its technologies more widely will shape technical norms, safety benchmarks, and interoperability.
Policy responses: For other economies, the analysis implies choices around immigration, R&D incentives, compute access, and cross-border alliances to avoid dependence on a single power’s stack.
Key Takeaways
US leadership is real but not absolute; China’s coordinated push could narrow or reverse gaps in strategically important layers.
Speed of adoption and system-level integration may matter as much as frontier model performance.
Global outcomes will hinge on chips, capital intensity, talent mobility, and the ability to translate research into scaled deployments—areas where policy can rapidly shift the balance.
Sacks, Andreessen & Horowitz: How America Wins the AI Race Against China
Youtube • a16z • November 3, 2025
GeoPolitics•USA•AICompetition•Semiconductors•IndustrialPolicy•Essay•China vs US
Thesis and Context
The discussion centers on how the United States can sustain leadership in artificial intelligence amid intensifying competition with China. The speakers frame AI as a general-purpose technology with national security, economic, and cultural stakes, arguing that victory depends on building at unprecedented scale across compute, energy, talent, data, and deployment—not on any single policy lever.
Compute, Chips, and Supply Chains
A core argument is that compute capacity is the new industrial base. The U.S. must expand domestic manufacturing of advanced semiconductors, packaging, and memory, while diversifying away from single-point-of-failure geographies.
Recommended actions include accelerating fab buildouts, streamlining permitting for data centers and power, and aligning export controls with a long-term industrial strategy so they don’t inadvertently cede international markets.
They highlight advanced packaging, specialty tooling, and high-bandwidth memory as choke points; suggest public–private procurement to de-risk first-of-a-kind facilities; and emphasize resilient logistics for critical components.
Energy as the Bottleneck
AI buildout is constrained by electricity. The conversation calls for an “all-of-the-above” approach—grid upgrades, long-duration storage, advanced nuclear, and faster interconnection queues—paired with regulatory modernization to shorten timelines from years to months.
Stable, low-cost baseload is positioned as a comparative advantage that compounds over time, enabling denser AI clusters and cheaper inference.
Talent, Immigration, and Education
Winning requires attracting and retaining the world’s top AI researchers, chip designers, and power engineers. Proposals include fast-track visas for specialized roles, streamlined green cards for STEM graduates, and apprenticeships spanning model training, robotics, and power systems.
They advocate scaling AI literacy across universities and community colleges, plus targeted fellowships for co-ops in fabs, robotics labs, and defense primes to translate research into deployment.
Data, Models, and Open Innovation
The panel underscores lawful access to broad, high-quality datasets for training and the importance of clear rules for text, code, images, and public web data.
On model strategy, they argue for a dynamic ecosystem where both open and proprietary approaches thrive, with incentives for safety evaluations, red-teaming, and reproducible benchmarks—shifting the focus from headline parameters to reliability, latency, and total cost of ownership.
Robotics, Embodied AI, and Dual-Use Tech
The U.S. is urged to move faster on embodied AI and robotics to close gaps in industrial automation and defense-relevant platforms. Priorities include domestic capacity for actuators, sensors, and edge compute; shared simulation environments; and test ranges that speed validation for logistics, manufacturing, and public-safety use cases.
Defense procurement should favor iterative, software-upgradable systems and leverage commercial off-the-shelf components to compress deployment cycles.
Regulation, Standards, and Global Adoption
The speakers argue for “pro-innovation” governance—clear liability and safety expectations without prescriptive design mandates. Standards should emphasize transparency on model provenance, evaluation practices, and incident reporting.
Internationally, they stress that the U.S. must win by adoption: aligning allies on interoperable APIs, cloud-to-edge toolchains, and IP protections so the American AI stack becomes the default choice for developers, enterprises, and governments.
Capital Formation and Public–Private Partnerships
To bridge the “trough of build risk,” they recommend targeted loan guarantees, accelerated depreciation for AI infrastructure, and outcome-based government procurement. This de-risks frontier projects in compute, power, and advanced manufacturing while crowding in private capital.
Key Takeaways
Scale beats slogans: compute, energy, and supply-chain capacity determine the pace of AI progress.
Talent is policy: immigration and training pipelines are as decisive as subsidies.
Ship, measure, secure: deployment, evaluation, and safety must advance together.
Win the world’s developers: interoperability and reliable economics drive global adoption of the U.S. AI stack.
Implications
If executed, this agenda positions the U.S. to compound advantages across chips, power, and software, ensuring both economic growth and strategic deterrence. Failure risks path dependence on foreign hardware, slower deployment, and diminished influence over AI norms and safety practices.
Nvidia’s Jensen Huang says China ‘will win’ AI race with US
Ft • November 5, 2025
GeoPolitics•Asia•AI Competition•Data Centers•Energy Policy•Essay•China vs US
Overview
A leading US chipmaker’s chief executive argues that China is positioned to outpace the West in artificial intelligence, linking the outcome to policy choices and operating conditions rather than pure technical prowess.
He criticises what he calls Western “cynicism,” framing it as a cultural and regulatory drag on experimentation, infrastructure build‑out, and commercialization.
In contrast, Beijing is loosening rules and cutting energy costs for data centres—two levers that directly improve the economics and speed of AI deployment.
What the CEO is signaling
The remarks highlight a belief that mindset and policy, not just talent, determine winners in AI. “Cynicism” suggests risk aversion, slower approvals, and heightened skepticism around safety and misinformation that can translate into delays.
By calling out Western attitudes, he implies that even with cutting‑edge silicon and research, a more restrictive environment can blunt competitive advantage if compute capacity and market adoption lag.
Regulatory and cost asymmetries
Looser regulations in China can reduce friction for AI model training, product launches, and integration in public and enterprise services. Faster permitting for data centres and fewer procedural choke points shorten time‑to‑market.
Cuts to energy costs directly lower the total cost of ownership (TCO) for data centres, where electricity is one of the largest recurring expenses. For training large models, energy pricing materially influences whether projects are economically viable (a rough illustration of the arithmetic follows this list).
When combined, lighter regulation and cheaper power can accelerate capital formation, improve utilization of GPUs at scale, and enable more ambitious AI deployments.
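As a back-of-the-envelope illustration of why electricity pricing shows up so directly in data-centre TCO, the sketch below compares annual energy spend for a single facility under two tariffs. Every number (cluster size, utilization, prices) is an invented placeholder, not a figure from the article.

```python
# Back-of-the-envelope electricity cost for a hypothetical AI data centre
# under two power prices. Every number here is an illustrative assumption.

CLUSTER_POWER_MW = 100      # assumed total IT plus cooling load
UTILIZATION = 0.80          # assumed average utilization
HOURS_PER_YEAR = 8_760


def annual_energy_cost(price_per_kwh: float) -> float:
    kwh_per_year = CLUSTER_POWER_MW * 1_000 * UTILIZATION * HOURS_PER_YEAR
    return kwh_per_year * price_per_kwh


for label, price in [("higher-cost grid", 0.12), ("subsidised grid", 0.05)]:
    cost = annual_energy_cost(price)
    print(f"{label}: about ${cost / 1e6:,.0f}M per year at ${price:.2f}/kWh")

# 100 MW at 80% utilization is roughly 700 GWh a year, so the gap between
# $0.12/kWh and $0.05/kWh is on the order of $49M annually for one facility,
# which is why energy pricing feeds straight into data-centre TCO.
```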
Infrastructure dynamics
AI competitiveness is increasingly constrained by infrastructure: data centre capacity, power availability, and cooling. Policy choices that clear land, grid connections, and power purchase agreements can unlock growth.
Cheaper energy and predictable policy reduce the risk premium for building new facilities, encouraging both domestic investment and partnerships with global suppliers of chips, networking, and memory.
These conditions support the rapid scaling of inference services, not just training—expanding AI into consumer apps, industrial automation, and public services.
Implications for the West
If Western markets are seen as slower or more skeptical adopters, capital and talent may chase faster paths to production elsewhere, even if core research leadership remains diversified.
Companies may lobby for clearer, more enabling frameworks that separate high‑risk AI from low‑risk applications, streamline permitting, and align incentives for clean, abundant power dedicated to compute.
The critique invites reassessment of how to balance safety and innovation: over‑correction could cede ground; under‑correction risks societal harms. The competitive benchmark is shifting from model breakthroughs to deployment velocity and unit economics.
Strategic takeaways
AI leadership hinges on three controllables: regulatory clarity, energy affordability, and infrastructure speed. China’s current stance—loosening rules and lowering data‑centre power costs—addresses all three.
Western “cynicism,” if it manifests as prolonged uncertainty and fragmented rules, can slow scaling and dull returns on frontier chips and research.
Policymakers and industry alike face a strategic choice: reduce friction in approvals and power provisioning, or accept slower diffusion of AI capabilities and potential loss of market share.
Key takeaways
Policy and cost structures, not just technology, will decide the AI race.
Looser rules plus cheaper energy materially improve AI deployment economics.
Western skepticism can translate into slower adoption and diminished competitiveness.
Infrastructure speed—power, land, and permitting—has become a primary moat alongside algorithms and hardware.
Bubble?
Are bubbles good, actually?
Ft • November 5, 2025
Essay•AI•AIMania•Bubble?
Thesis
The piece explores the argument that speculative “mania” around artificial intelligence can be beneficial, framing enthusiasm and rapid capital inflows as drivers of technological progress rather than distortions to be feared.
It presents a defense of intense investment cycles and exuberant expectations, contending that such periods compress time, mobilize resources, and attract talent toward a general-purpose technology with wide spillover effects.
Why “Mania” Can Be Productive
Innovation flywheels: Surges in attention and funding create feedback loops—more experiments, faster iteration, and a broader base of infrastructure that subsequent innovators can reuse.
Market formation: Early, sometimes over-optimistic demand signals help establish standards, platforms, and supply chains that would not materialize under cautious, linear investment.
Talent magnet: High-profile enthusiasm draws engineers, researchers, and entrepreneurs who might otherwise remain in incumbents or adjacent fields, expanding the frontier of what is attempted.
Counterpoints Acknowledged
Misallocation risk: Bubbles can funnel capital into weak ideas, creating waste and subsequent backlash when expectations reset.
Cyclical hangovers: Overbuild and hype can lead to consolidation, layoffs, and diminished public trust if promised breakthroughs lag.
Governance gaps: Rapid deployment raises questions around safety, privacy, intellectual property, and competitive fairness that outpace existing frameworks.
Strategic Interpretation
Measured optimism: A pragmatic stance is to welcome accelerated investment while demanding transparency on model capabilities, compute usage, and safety testing.
Infrastructure bias: Directing exuberance toward shared infrastructure—data pipelines, compute efficiency, evaluation benchmarks—can minimize deadweight loss and maximize long-run productivity.
Outcome over narratives: The ultimate test for “mania” is durable productivity gains, not short-term valuation spikes; focusing on real adoption, unit economics, and verifiable performance is key.
Implications
For builders: Use the window of abundant capital and attention to tackle bottlenecks with compounding value—tooling, safety, and reliability—rather than thin wrappers.
For investors: Expect a power-law distribution of outcomes; cultivate stage-appropriate discipline while staying exposed to outlier upside that bubbles make possible.
For policymakers: Prepare for non-linear diffusion of AI by modernizing competition, data, and safety regimes without reflexively throttling experimentation that seeds future gains.
Key Takeaways
“AI mania” is reframed as a catalyst for progress when channeled into platforms and standards.
The productive path blends aggressive investment with rigorous evaluation and governance.
Long-term value depends on translating hype-funded exploration into measurable productivity improvements.
The Benefits of Bubbles
Stratechery • Ben Thompson • November 5, 2025
Essay•AI•AIInfrastructure•Bubble?
Thesis
The central claim is that we are currently in an AI bubble, and the right question isn’t “Is this a bubble?” but “Will the bubble be worth it?” The answer hinges on whether today’s exuberant capital and activity leave behind durable physical infrastructure and catalyze coordinated innovation that persist after valuations normalize. In other words, bubbles can be socially productive if they finance assets and ecosystems whose long-term spillovers exceed the short-term mispricing and waste they entail.
What the bubble finances
Capital intensity: AI requires extraordinary spending on compute, data centers, networking gear, memory, power, and cooling. Bubble conditions accelerate this buildout by lowering effective capital costs and concentrating investor attention.
Supply chain expansion: Surges in demand pull forward capacity in semiconductors, advanced packaging, optics, and specialized power equipment. These “lumpy” investments often need a bubble to get financed quickly.
Talent and tooling: Bubbles over-hire and over-tool, but they also create dense clusters of engineers, researchers, and operators, and proliferate frameworks, libraries, and deployment tooling that later generations use more efficiently.
Why coordinated innovation matters
Platform alignment: Major actors—chipmakers, cloud providers, model labs, enterprise software, and startups—must align roadmaps (hardware cycles, model releases, APIs, safety and compliance layers) so that progress compounds rather than fragments.
Standardization: Shared interfaces for inference, training, data governance, and evaluation reduce integration friction, letting complements (apps, agents, vertical solutions) scale beyond pilot purgatory.
Market formation: Bubbles can create early demand signals that help define categories (AI copilots, customer service automation, design tools, scientific discovery) and establish reference customers, enabling follow-on adoption after hype cools.
The benefit calculus: assets that outlast hype
Durable assets: Data centers, grid interconnects, substations, high-voltage upgrades, heat reuse systems, and fiber backbones persist and can be repurposed for future compute waves.
Cost curves: If capex today shifts learning curves—cheaper inference per token, better energy efficiency per FLOP—then downstream productivity gains can dwarf initial overinvestment.
Option value: Bubble-fueled experimentation surfaces new use cases and architectural breakthroughs (model compression, retrieval, agent orchestration) that wouldn’t be tried in leaner times.
Risks and potential misallocation
Power scarcity and siting: Data-center growth can strain local grids, exacerbate congestion, and entrench fossil-heavy capacity if procurement outpaces clean generation and transmission planning.
Lock-in and concentration: Overreliance on a narrow set of suppliers or clouds can reduce competition and slow open innovation, leaving less resilient ecosystems post-bubble.
Talent hoarding: High salaries and signing frenzies can drain adjacent sectors, delaying diffusion of benefits to the broader economy.
Historical analogies and their lesson
Railroads and telegraphy, electrification waves, and the dot-com era show a recurring pattern: speculative excess finances overbuild; after the shakeout, society enjoys cheaper access (transport, power, bandwidth) that enables new business models. The question today is whether AI’s capex similarly leaves behind abundant compute and robust infrastructure that lower barriers for future innovators.
Signals to watch to judge “worth it”
Utilization and flexibility: Rising average utilization of GPUs/accelerators and ease of repurposing compute for new workloads.
Falling total cost of ownership: Sustained declines in cost per token/inference and energy per FLOP, not just transient price promotions.
Ecosystem breadth: Growth of independent developers, open-source components, and interoperable standards alongside proprietary stacks.
Diffusion: Meaningful productivity gains beyond frontier tech firms—SMBs, public services, and industrial incumbents adopting AI at scale.
Energy mix and grid upgrades: Evidence that data-center demand pulls clean generation and transmission forward rather than crowding it out.
Bottom line
Yes, it’s a bubble—but bubbles can be investments in public option value. If today’s frenzy yields a denser compute fabric, more resilient supply chains, interoperable standards, and widely diffused productivity, then the social return can exceed the financial losses inherent to bubbles. The decisive variable is whether the capital surge translates into enduring physical infrastructure and coordinated innovation that compound long after the hype fades.
Elon Musk Wins $1 Trillion Tesla Pay Package
Nytimes • Rebecca F. Elliott, Jack Ewing and Reid J. Epstein • November 6, 2025
Regulation•USA•Executive Compensation•Shareholder Vote•Tesla
Overview
Shareholders have approved a performance-based equity plan that could grant Elon Musk shares valued at nearly $1 trillion, contingent on achieving ambitious milestones. The central premise is to directly link compensation to extraordinary outcomes, with one of the headline requirements being a vast expansion of the company’s overall stock market valuation. The package is designed to reward measurable, market-driven success rather than time in role, framing executive pay as a function of value creation rather than traditional salary and bonus structures.
What Was Approved
A plan to issue shares worth close to $1 trillion, but only if a predefined set of aggressive targets is met.
The goals include substantially increasing the company’s market capitalization, among other performance thresholds.
The structure concentrates compensation in equity, meaning potential rewards scale with shareholder outcomes and are realized only upon meeting the specified milestones.
Rationale and Support
Proponents argue the plan aligns the CEO’s incentives with long-term shareholder value: if shareholders benefit from a dramatically larger company, the CEO participates proportionally.
The package functions as a retention mechanism during a period when the company’s strategic priorities—product innovation, manufacturing scale, autonomy, and energy—require consistent leadership and risk-taking.
Backers view the high bar for achievement as self-selecting: the value transfers only occur if exceptional results materialize, mitigating concerns about paying for underperformance.
Concerns and Risks
Size and dilution: Issuing a large volume of new shares may dilute existing shareholders’ stake. While rising market value could offset dilution, the sheer magnitude raises governance questions about proportionality (a simplified worked example follows this list).
Precedent-setting: A package of this scale could encourage other firms to pursue similarly outsized, market-cap–linked awards, escalating executive pay norms and complicating board oversight.
Goal design and measurement: Tying rewards primarily to market valuation can expose pay outcomes to factors beyond managerial control, such as macroeconomic cycles or sector-wide multiple expansion.
Governance balance: The plan concentrates potential ownership and influence with the CEO. Some investors may worry about checks and balances, board independence, and minority shareholder protections.
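A simplified worked example of the dilution trade-off, using invented share counts and valuations rather than Tesla's actual figures or the plan's tranche structure:

```python
# Hypothetical illustration of dilution being offset by market-cap growth.
# Share counts and valuations are invented for arithmetic clarity only.

shares_before = 3.2e9         # assumed shares outstanding today
new_award_shares = 0.4e9      # assumed shares issued if all milestones are hit
mcap_before = 1.0e12          # assumed starting market capitalization
mcap_if_targets_met = 8.0e12  # assumed market cap required by the milestones

price_before = mcap_before / shares_before
price_after = mcap_if_targets_met / (shares_before + new_award_shares)

# Each existing share becomes a smaller fraction of the company...
stake_shrink = 1 - shares_before / (shares_before + new_award_shares)

print(f"price per share before: ${price_before:,.2f}")
print(f"price per share if targets are met: ${price_after:,.2f}")
print(f"existing holders' proportional stake shrinks by {stake_shrink:.1%}")

# ...but if the milestones are genuinely reached, the per-share value is far
# higher despite the dilution, which is the pay-for-performance argument.
```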
How the Targets Drive Behavior
Emphasis on long-term value: Because compensation depends on meeting ambitious, multi-year milestones, management is nudged toward initiatives that can sustainably elevate the company’s scale and competitiveness.
Risk appetite: The design encourages bold bets—new products, technologies, and manufacturing methods—consistent with the goal of dramatically expanding market value.
Execution pressure: Hitting aggressive thresholds typically requires operational excellence across product development, supply chain, cost structure, and capital allocation.
Implications for Investors and the Market
For shareholders, the plan clarifies the performance bar and makes future dilution conditional on success. If the milestones are reached, investors may benefit from a significantly more valuable company, though they also accept dilution as part of that growth.
For boards and compensation committees elsewhere, the approval may influence pay design toward larger, outcome-contingent equity grants, intensifying debates about fairness, proportionality, and oversight.
For regulators and governance watchdogs, the package will likely remain a focal point in discussions about executive compensation frameworks, disclosure, and alignment with shareholder interests.
Key Takeaways
Shareholders approved a contingent, equity-heavy compensation plan worth nearly $1 trillion at potential value.
Rewards depend on meeting ambitious goals, notably a dramatic increase in market capitalization.
Supporters cite alignment, retention, and pay-for-performance; critics point to dilution, governance concentration, and precedent risks.
The decision could shape executive pay practices across the market, with continued attention from investors, boards, and regulators.
AI
Satya Nadella & Sam Altman: Inside the OpenAI–Microsoft Pact, OpenAI investment, the restructure, IPO, circular financing, layoffs, AGI full breakdown
Theaiopportunities • Guillermo Flor • November 2, 2025
AI•Funding•OpenAI•Microsoft•AGI
Overview
An extended conversation between Sam Altman and Satya Nadella—moderated by Brad Gerstner—maps the Microsoft–OpenAI relationship, its new governance, and the economics powering AI’s next phase. Set against a “historic tech week” (Nvidia at ~$5T market cap, strong big-tech prints, and U.S. tailwinds like rate cuts, trade deals, and $80B for nuclear fission), the pair frame compute and power as the new strategic resources and outline how exclusivity, revenue sharing, and a verification process for AGI aim to keep the partnership both ambitious and bounded by governance.
Deal structure and governance
Microsoft has invested roughly $13–14B since 2019 and now owns ~27% of OpenAI on a fully diluted basis (down from ~33% after dilution).
OpenAI is reorganized with a nonprofit OpenAI Foundation above a Public Benefit Corporation. The foundation is capitalized with ~$130B in OpenAI stock and plans an initial $25B program focused on health, AI security, and resilience.
Altman says capitalism is the best engine for value creation, but some aims—curing disease, AI-driven science, societal stability—require non-market mechanisms the foundation can fund.
Exclusivity, revenue share, and the AGI clause
Azure has exclusive rights to distribute OpenAI’s frontier models (e.g., GPT‑4/5) until about 2030–2032 or until an independently verified AGI event.
OpenAI can distribute open-source and certain non-frontier assets elsewhere, but “stateless APIs” for frontier capabilities remain Azure-exclusive.
Microsoft receives roughly a 15% revenue share from OpenAI until 2032 or AGI verification. If OpenAI claims AGI, a third‑party expert “jury” would validate before key clauses change. Nadella: “Nobody’s even close.”
Scale, revenue, and compute economics
Gerstner cites ~$13B OpenAI revenue; Altman replies it’s “well more than that.”
OpenAI plans ~$1.4T in compute commitments over 4–5 years: ~$500B to Nvidia, ~$300B to AMD, ~$250B to Azure, with the remainder to Oracle and others—asserting demand growth justifies the scale.
The bottleneck has shifted from chips to power and data‑center capacity. Nadella notes “idle GPUs” due to insufficient power/cooling and elevates “tokens per dollar per watt” as the core productivity metric (a toy calculation follows this list).
Both foresee non‑linear supply cycles (temporary gluts), falling unit costs, and potential “infrastructure bubble” shakeouts as prices decline.
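As a toy illustration of how a “tokens per dollar per watt” style metric might be computed, the sketch below combines throughput, cost, and power into a single ratio. All inputs are invented for illustration; nothing here comes from the conversation itself.

```python
# Toy illustration of a "tokens per dollar per watt"-style efficiency metric.
# All inputs are invented placeholders; only the shape of the ratio matters.

tokens_per_second = 2_000_000   # assumed fleet-wide inference throughput
fleet_power_watts = 5_000_000   # assumed power draw (5 MW)
cost_per_hour_usd = 1_500       # assumed all-in hourly cost of the fleet

tokens_per_hour = tokens_per_second * 3_600
tokens_per_dollar = tokens_per_hour / cost_per_hour_usd
tokens_per_dollar_per_watt = tokens_per_dollar / fleet_power_watts

print(f"tokens per dollar: {tokens_per_dollar:,.0f}")
print(f"tokens per dollar per watt: {tokens_per_dollar_per_watt:.4f}")

# Anything that raises throughput (newer silicon, better batching) or lowers
# the denominators (cheaper power, higher utilization) improves the metric,
# which is why idle GPUs waiting on power and cooling show up so badly.
```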
Devices, agents, and the future of SaaS
Altman expects consumer devices to run GPT‑5/6 class models locally, complemented by edge inference and new form factors (e.g., wearable agents).
Nadella emphasizes a “fungible fleet” that fluidly reallocates training and inference across generations (H200→H300) for cloud efficiency.
Software architecture is shifting from tightly coupled data+logic+UI to an agent layer mediating between them. “Agents are the new seats”: enterprise monetization will tilt toward usage/consumption tied to agent output, while consumer monetization remains nascent. GitHub Copilot “auto‑mode” already routes tasks across models based on feedback.
IPO, financing optics, and Microsoft’s P&L
Rumors suggest a 2026–2027 OpenAI IPO; Altman denies concrete plans, prefers growth funded by revenue, but likes broader retail ownership “someday.”
Gerstner projects $100B+ OpenAI revenue by 2028–29, implying a potential $1T valuation; Nadella stresses Microsoft’s strategic upside regardless, citing exclusive model access and ecosystem pull (“We have a frontier model for free.”).
On “circular financing” concerns, Nadella says Microsoft’s $13.5B was equity, not recognized as revenue; vendor financing exists across the sector, but sustainability hinges on real end-demand.
Microsoft consolidates roughly $4B in quarterly losses from OpenAI, which Nadella frames as worthwhile given Azure’s growth (up ~39% YoY on a ~$93B run rate) that could have been higher with more compute.
Regulation, reindustrialization, and outlook
With federal preemption removed from the Senate AI bill, Altman warns of a 50‑state patchwork (e.g., Colorado AI Act). Both advocate a coherent federal framework to avoid EU‑style overregulation and protect startup viability.
Gerstner and Nadella cast AI infrastructure as a capex super‑cycle: ~$4T across U.S. hyperscalers (a “10× Manhattan Project”), complemented by $80B in fission and reshoring trends. Microsoft’s Wisconsin data center is cited as a local anchor as the U.S. exports “compute factories.”
Near‑term breakthroughs expected in AI‑driven science (first small discoveries by 2026), multi‑day coding agents, and robotics—ushering in a “macro‑delegation, micro‑steering” UI paradigm and a broad productivity uplift rather than net job destruction.
In sum, the partnership’s clarified governance, Azure exclusivity, and rev‑share align incentives while power, not silicon, becomes the gating factor. If demand matches Altman’s thesis and unit costs fall as Nadella expects, agents and edge inference could redefine SaaS economics—setting up both firms to capture outsized value while the U.S. races to build the world’s compute backbone.
Oops, I got emotionally attached to this $429 AI pet
Fastcompany • November 3, 2025
AI•Tech•Robotics•AICompanions•Moflin
It’s 10 a.m. on an October morning, and I’m in the middle of a one-on-one Zoom interview when a sudden trilling sounds from behind me. I try to ignore it, but several other strange noises follow. My eyes glaze over as I commit myself to feigning complete obliviousness to my sonic surroundings. It’s easier than explaining that the noises are coming from my AI-powered pet.
This awkward encounter came thanks to Moflin, a $429 AI pet built by the electronics company Casio. According to Casio’s official description, the Moflin is “a smart companion powered by AI, with emotions like a living creature.” This robot friend looks a bit like a Star Trek tribble, in that it’s an amorphous blob covered in fur. It comes in either gold or silver.
For ’90s kids, the device is perhaps best described as a modern-day Furby. Like a Furby, the Moflin speaks its own language of chirps and trills that change over time; but unlike a Furby, its learning is actually molded by an AI model that allows it to become “attached” to its owner. According to the pet’s makers, the Moflin learns to recognize its owner’s voice and preferences, and it slowly develops new ways of moving and vocalizing to express a bond with the user.
As of this writing, I’ve had my Moflin for close to three weeks, and I’m going to make a bold claim: This device might just be one of the first “AI companions” that’s actually useful.
The graveyard of AI companions past
Over the past several months, we’ve seen many companies try and fail to sell users on a variety of AI wearables.
That includes devices like the Humane AI Pin and the Rabbit R1, both of which debuted to a chorus of scathingly negative reviews after users found that neither could reliably do many of the tasks it was supposed to handle. Currently, the hottest topic in the AI wearable space is the Friend AI necklace from entrepreneur Avi Schiffmann, which is billed as an “AI companion” that’s always listening to its users’ surroundings.
Launches like these have made it clear that, as of right now, most AI companions are just “promiseware,” or devices that make a lot of claims about their capabilities that simply aren’t there at launch. I think that the Moflin lands solidly outside of this unfortunate category, primarily because it doesn’t try to make any lofty claims about changing the world or altering everyday habits: it’s just meant to look cute, sound silly, and make users feel a little bit better.
How A.I. Is Transforming Dating Apps
Nytimes • November 3, 2025
AI•Tech•Dating Apps•Recommender Systems•User Experience
“Meet your artificial intelligence matchmakers.” The central idea is that new A.I. tools are reshaping the dating-app experience so people spend less time swiping through endless profiles and more time engaging with a smaller set of higher-quality introductions. Instead of manual browsing, these systems act like a digital concierge that filters, prioritizes, and frames potential connections, aiming to reduce choice overload and improve the path from discovery to conversation and, ultimately, to meeting in real life.
What’s Changing in the Experience
From scrolling to curation: Rather than presenting a continuous deck of profiles, A.I. can rank and surface a handful of candidates designed to fit a user’s stated preferences and inferred tastes.
From static bios to dynamic guidance: A.I. can summarize profiles, highlight compatibility signals, and suggest conversation starters to lower the friction of the first message.
From effortful search to assisted discovery: Users are nudged toward actionable next steps—sending a note, proposing a time to meet—versus aimlessly browsing.
How A.I. “Matchmakers” Typically Work
Preference learning: Models infer fine-grained tastes from micro-behaviors (what you linger on, who you like or pass, how conversations flow).
Contextual matching: Systems weigh shared interests, communication style, and practical constraints (location, availability) to prioritize matches most likely to lead to a response (a minimal scoring sketch follows this list).
Assistive features: Drafting prompts, icebreakers, and replies; proposing dates and venues; and smoothing logistics to shorten the gap between match and meetup.
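A minimal sketch of the kind of preference-weighted ranking such systems might apply. The feature names, weights, and candidates are invented for illustration and do not describe any particular app's model.

```python
# Minimal sketch of preference-weighted candidate ranking for a dating app.
# Feature names, weights, and candidates are illustrative, not any real system.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    shared_interests: float   # 0..1 overlap with the user's stated interests
    reply_likelihood: float   # 0..1 predicted chance of a response
    distance_km: float


# Weights a preference-learning model might infer from a user's behaviour.
WEIGHTS = {"shared_interests": 0.5, "reply_likelihood": 0.4, "proximity": 0.1}


def score(c: Candidate) -> float:
    proximity = max(0.0, 1.0 - c.distance_km / 50.0)  # closer scores higher
    return (WEIGHTS["shared_interests"] * c.shared_interests
            + WEIGHTS["reply_likelihood"] * c.reply_likelihood
            + WEIGHTS["proximity"] * proximity)


candidates = [
    Candidate("A", shared_interests=0.9, reply_likelihood=0.4, distance_km=8),
    Candidate("B", shared_interests=0.5, reply_likelihood=0.9, distance_km=3),
    Candidate("C", shared_interests=0.7, reply_likelihood=0.6, distance_km=40),
]

# Surface a short, ranked shortlist instead of an endless deck of profiles.
for c in sorted(candidates, key=score, reverse=True):
    print(f"{c.name}: {score(c):.2f}")
```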
Benefits for Users
Time savings: Fewer, more relevant options help users avoid fatigue and the paradox of choice.
Better first impressions: Smart prompts and profile summaries can spark quicker, more substantive conversations.
Confidence and momentum: Guided steps reduce anxiety around what to say or do next, creating a clearer progression from match to date.
Risks and Trade-offs
Bias and fairness: If models learn from historical swiping patterns, they may amplify existing biases; platforms need monitoring and corrective feedback loops.
Transparency and control: Users may want to know why certain profiles surface—explanations and adjustable settings can build trust.
Privacy and security: Fine-grained preference learning depends on sensitive behavioral data; strong data governance and consent are essential.
Authenticity concerns: Over-assistance in messaging might make interactions feel scripted unless tools are designed to preserve the user’s voice.
Product and Market Implications
Differentiation: Apps that excel at curation and momentum may compete less on profile volume and more on match quality and outcome rates.
Monetization: Premium tiers could offer deeper personalization, enhanced explainability, and concierge-like planning features.
Metrics shift: Success may move from counting matches to tracking response quality, conversation depth, and successful date conversions.
Design Principles to Watch
Human-in-the-loop controls: Let users tune the “degree of curation,” from lightly assisted browsing to fully guided introductions.
Explainable recommendations: Brief, readable “why this match” notes maintain agency and reduce mystery.
Safety-by-design: Built-in detection for bots, harassment, and scams, along with tools for easy reporting and boundary-setting.
Progress scaffolding: Structured nudges—icebreakers, scheduling helpers—encourage forward motion without feeling pushy.
Key Takeaways
A.I. is repositioning dating apps from infinite swiping toward curated, higher-intent matchmaking.
The promise is less cognitive load, better conversations, and faster movement from interest to action.
Success depends on balancing personalization with transparency, safeguarding privacy, and ensuring that assistance enhances—not replaces—authentic human connection.
OpenAI signs $38 billion deal with Amazon, partnering with cloud leader for the first time
Youtube • CNBC Television • November 3, 2025
AI•Funding•CloudComputing•OpenAI•Amazon
Overview
The video reports a landmark agreement in which an AI leader enters a multi‑year, $38 billion partnership with a major cloud provider, described as “partnering with [a] cloud leader for the first time.”
The deal positions the cloud company as a primary infrastructure partner for large‑scale AI training and inference, signaling a step‑change in compute access, cost structure, and go‑to‑market reach for both sides.
Headline significance centers on the dollar value and the first‑time nature of the partnership, implying a strategic realignment in the competitive landscape for AI infrastructure.
What the Deal Likely Covers
Multi‑year cloud spend commitment across compute, networking, and storage to support training and serving frontier models at scale.
Access to GPU capacity and the provider’s custom AI accelerators, enabling diversified supply and potential cost/performance gains for training and inference workloads.
Closer technical collaboration on model deployment stacks, security, and reliability, with an emphasis on enterprise‑grade SLAs and global footprint.
Joint commercialization motions—distribution through the cloud provider’s marketplace and integrations with its data, analytics, and MLOps services—targeting rapid enterprise adoption.
Strategic Context and Rationale
Scale and cost: Frontier model development is compute‑intensive; a $38B commitment indicates long‑horizon capacity planning to stabilize unit economics amid volatile chip supply.
Vendor diversification: Aligning with another hyperscaler reduces single‑cloud dependency, improves bargaining power, and hedges against capacity bottlenecks.
Silicon competition: By tapping both GPUs and the provider’s in‑house accelerators, the AI company can benchmark price/performance and optimize for training vs. inference phases.
Enterprise channel: The cloud partner brings a vast enterprise customer base, compliance tooling, and data locality options—key for regulated sectors considering generative AI.
Implications for the Market
Competitive pressure on rival hyperscalers to match long‑term capacity guarantees, specialized silicon roadmaps, and bundled economics for large AI tenants.
Potential shifts in AI stack standardization as model hosting, vector databases, and orchestration tools consolidate around the cloud partner’s services.
Accelerated pace of feature releases in safety, observability, and governance to satisfy enterprise risk requirements at scale.
Possible regulatory attention on cloud concentration and the interplay between compute commitments and competition in foundational AI markets.
What to Watch Next
Timelines for migrating or augmenting training runs and inference endpoints on the partner’s infrastructure, and any announcements on co‑engineered hardware or control planes.
Benchmarks comparing cost, latency, and reliability across GPU and custom accelerator options as workloads scale.
New enterprise offerings—pre‑integrated connectors, data governance features, and domain‑specific model variants—surfacing through the cloud’s marketplace.
Any ripple effects on existing partnerships, including interoperability, data portability, and multicloud strategies.
Key Takeaways
This is framed as a first‑of‑its‑kind partnership between a leading AI model developer and a top cloud provider, with a headline value of $38B.
The agreement underscores how compute access, specialized silicon, and enterprise distribution have become decisive levers in the AI platform race.
Expect intensified competition among hyperscalers, faster enterprise adoption pathways, and closer scrutiny of cost, safety, and compliance in large‑scale AI deployments.
AI may fatally wound web’s ad model, warns Tim Berners-Lee
Ft • November 5, 2025
AI•Tech•Advertising•Digital Economy•Business Models
The inventor of the World Wide Web has issued a stark warning that artificial intelligence could fundamentally undermine the dominant advertising model that has powered the internet for decades. Tim Berners-Lee cautions that the growing reliance on AI agents to find information and complete tasks online poses an existential threat to the multibillion-dollar revenue streams of tech giants like Google and Meta.
The Shift from Search to AI Agents
Berners-Lee’s central concern revolves around the transition from traditional search engines to AI-powered personal agents. Instead of users visiting websites through search results and being exposed to advertisements, AI agents will increasingly gather information directly and present synthesized answers. This bypasses the traditional web browsing experience where advertising generates revenue.
The web inventor explained that when AI agents provide direct answers without requiring users to click through to websites, the entire ecosystem of online advertising is disrupted. This represents a fundamental challenge to the business models that have sustained major platforms, as users interacting primarily with AI interfaces will see far fewer ads.
Economic Impact on Tech Giants
The economic implications of this shift are substantial for companies that have built their fortunes on digital advertising:
• Google’s parent company Alphabet and Meta collectively generate hundreds of billions in annual revenue primarily from advertising
• The current web advertising model supports not only tech platforms but also countless publishers and content creators
• As AI agents become more sophisticated, they could dramatically reduce the number of occasions when users view advertisements
Berners-Lee emphasized that this transition threatens to “cut off the revenue” that has fueled the growth of these technology behemoths and the broader digital content ecosystem.
Broader Implications for the Web Ecosystem
The potential disruption extends beyond just the largest tech companies. The entire digital economy that has developed around web advertising faces significant challenges:
• Content publishers who rely on advertising revenue may struggle to sustain their operations
• The relationship between users, platforms, and advertisers would need to be renegotiated
• New business models would need to emerge to support quality content creation online
This transformation represents one of the most significant shifts since the commercialization of the internet, potentially requiring a fundamental restructuring of how value is created and captured online.
The Future of Web Economics
While Berners-Lee identifies the threat to existing advertising models, he also points to the need for innovation in how the web is funded. The development poses critical questions about sustainable models for supporting online content and services in an AI-dominated landscape.
The warning serves as a call to action for the technology industry to develop new approaches that can maintain a healthy digital ecosystem while embracing the benefits of artificial intelligence. The transition to AI agents may ultimately require reinventing the economic foundations of the web itself.
Perplexity to pay Snap $400M to power search in Snapchat
Techcrunch • November 6, 2025
AI•Funding•Snapchat•Perplexity•Chatbot
The article announces a significant partnership in which an AI-powered search company will power a new chatbot for a major social platform, with the AI firm agreeing to pay $400 million in a mix of cash and equity. The central themes are the deepening convergence of social media and AI, the monetization of conversational search within consumer apps, and the use of hybrid deal structures (cash plus equity) to align long-term incentives between technology providers and distribution-heavy platforms.
What’s Being Agreed
The social platform has signed a deal to integrate the AI firm’s search-driven capabilities into a new chatbot experience.
As part of the arrangement, the AI company will pay $400 million to the platform, structured as cash and equity.
The structure suggests not only immediate value transfer but also a shared upside if the integration drives usage and revenue growth.
Deal Structure and Signals
Cash plus equity indicates two things: near-term compensation for access, distribution, or IP, and a bet on the platform’s future performance via equity.
The $400 million figure underscores the perceived value of embedding AI-native search into a high-frequency consumer environment like messaging and social discovery.
Equity participation can help both sides remain aligned on roadmap, performance metrics, and the speed of feature rollouts.
Strategic Rationale for Each Side
For the social platform:
Accelerates time-to-market for an AI chatbot that can enhance engagement, retention, and search utility within the app.
Converts platform distribution into immediate financial return while trialing differentiated user experiences.
For the AI search provider:
Gains mass consumer exposure and real-world interaction data critical for improving conversational retrieval and answer quality.
Establishes a marquee integration that can serve as proof-of-concept for additional partnerships.
Product and User Experience Implications
Users may see more accurate, conversational answers embedded directly in chat or search surfaces, reducing friction between query and action.
If designed well, the chatbot could streamline content discovery, recommendations, and in-chat assistance (e.g., answering questions, pulling references, or guiding search flows).
Clear UX cues and privacy safeguards will be essential to maintain trust, given AI-generated responses in a social context.
Competitive and Monetization Context
The partnership reflects an industry trend: social platforms are racing to embed AI that can both delight users and unlock new monetization vectors (sponsored answers, affiliate results, premium features).
For competitors, the bar for AI-native search and assistance within consumer apps rises, potentially triggering similar cash/equity deals to secure differentiated models or data advantages.
Key Takeaways
A large, strategically structured payment highlights the value placed on AI-enhanced search within consumer social ecosystems.
The equity component aligns long-term incentives and suggests confidence in mutual growth.
Success will hinge on execution quality: relevance of answers, latency, safety, and seamless UX integration.
Who’s right about AI: economists or technologists?
Ft • November 6, 2025
AI•Tech•Productivity
“Forecasting the impact of artificial intelligence has become fraught, with evangelists pitched against sceptics.” This line captures a widening debate over whether AI will deliver a step-change in productivity and prosperity or prove another overhyped technology that underperforms outside demos and carefully staged showcases. The central tension hinges on how fast AI diffuses from prototypes to everyday workflows, and whether complementary investments in skills, data, and organizational change can unlock its potential. The article frames the discussion as a clash between technologists, who see compounding capability gains, and economists, who look for measurable outcomes in growth, jobs, and productivity before rendering judgment.
The Central Divide
Evangelists argue AI is a general-purpose technology that will rewire knowledge work, compress R&D cycles, and raise creative and analytical throughput across sectors.
Sceptics counter that most gains are narrow, brittle, or offset by integration costs, quality risks, and the slow pace of organizational adoption.
Economically minded observers seek hard evidence in macro data and firm-level productivity, while technologists emphasize rapid capability improvements and user enthusiasm.
What’s at Stake
Productivity growth: Will AI lift output per worker, or will benefits be limited to a few frontier firms?
Jobs and skills: Will AI automate routine cognitive tasks while complementing higher-value roles, or displace broad swaths of white-collar work?
Inequality: Do gains accrue to capital, top talent, and data-rich incumbents, or diffuse widely through cheaper services and tools?
Corporate strategy: How should companies prioritize AI experiments versus core operations, and when do pilots scale?
Public policy: How to balance innovation incentives with safeguards on accuracy, safety, and market power?
Why Predictions Are Hard
Diffusion lags: Transformative technologies often face an adoption “J-curve,” where upfront costs and retooling precede measurable gains.
Complementarities: Real value hinges on data quality, process redesign, and workforce training—investments that are expensive and time-consuming.
Measurement gaps: Many AI benefits—quality improvements, faster iteration, reduced errors—are hard to capture in near-term statistics.
System risks: Model brittleness, hallucination, security vulnerabilities, and governance challenges can slow deployment or necessitate costly guardrails.
A Possible Synthesis
Near term: Expect selective wins in tasks with clear interfaces and abundant data; wide performance variance across teams and sectors.
Medium term: As tools stabilize and practices mature, adoption broadens; productivity lifts appear first in firms that reorganize around AI-first workflows.
Long term: Outcomes depend on institutions—education, safety standards, competition policy—that shape diffusion speed and who benefits.
Signals to Watch
Firm-level metrics: sustained improvements in throughput, error rates, and cycle times in AI-integrated processes.
Investment patterns: spending on data infrastructure, tooling, and reskilling as leading indicators of future productivity.
Labor market shifts: changing task compositions, wage premiums for AI-augmented roles, and demand for complementary skills.
Operational scale: movement from pilots and prototypes to enterprise-wide deployments in non-tech sectors.
Implications for Leaders
Treat AI as an organizational change program, not a plug-in: redesign processes, governance, and incentive structures.
Prioritize data readiness and responsible-use frameworks to reduce risk and accelerate scaling.
Invest in human capital: pair automation with upskilling to capture complementarity gains and maintain trust.
Use staged experimentation with clear success metrics to separate durable value from hype.
Conclusion
The question “who’s right—economists or technologists?” frames a necessary tension. Technologists spotlight the realm of the possible; economists insist on the discipline of measured outcomes. The most realistic path blends both: ambitious exploration coupled with rigorous evaluation. Whether AI becomes a broad productivity engine or a narrower set of tools will hinge less on headline demos and more on the slow, practical work of integration, training, and governance.
Sam Altman says OpenAI is not ‘trying to become too big to fail’
Ft • November 6, 2025
AI•Funding•OpenAI•Sam Altman•Compute Infrastructure
Core message
OpenAI’s chief executive states the company is not “trying to become too big to fail” and is not seeking a US federal financial backstop for what is framed as a $1.4 trillion investment program. The remarks aim to distance the company’s growth ambitions from any expectation of taxpayer support, even as the scale of planned spending underscores the capital intensity of advanced AI development.
Scale and context
The reference to a $1.4 trillion “investment binge” signals a transformational build-out across the AI stack—encompassing compute, data centers, networking, and power. Such a figure, if pursued over time through partnerships and private financing, would place AI infrastructure among the largest industrial undertakings of the digital era. It reflects the mounting costs of training and deploying frontier models and the global race to secure scarce inputs such as cutting-edge chips, clean energy, and specialized talent.
Financing signals
By explicitly rejecting a federal financial backstop, the company positions its funding strategy around private capital markets and commercial partnerships rather than public guarantees. That message is designed to reassure policymakers and the public that the risks of large-scale AI investment will be borne by investors, suppliers, and customers—not by the government. It also communicates confidence that sufficient private liquidity exists to underwrite the multi-year pipelines required for compute and energy procurement.
Policy and regulatory implications
Declining a government backstop is meant to counter “moral hazard” concerns associated with entities deemed too systemically important to fail. The statement draws a line between OpenAI and sectors such as banking, which rely on formal safety nets. It may soften regulatory pushback by signaling that expansion will proceed without expectation of rescue, even if project timelines, costs, or market conditions shift. At the same time, the extraordinary magnitude of planned investment invites scrutiny on competition, supply-chain concentration, energy demand, and national security—issues that often trigger government interest regardless of financial guarantees.
Market and infrastructure impact
A multi-trillion-dollar build-out would reverberate through semiconductors, cloud providers, utilities, real estate, and equipment manufacturers. Chipmakers and data-center operators could see demand visibility stretching years out, while utilities and grid planners would confront new baseload and peak requirements tied to AI clusters. For enterprise customers, the message suggests continued advances in model capability and availability, potentially lowering unit costs over time but raising near-term capital intensity for providers racing to keep pace.
Risks and open questions
Execution risk looms large: procurement of advanced chips, construction of hyperscale campuses, power contracting (including renewables and transmission), and supply-chain resilience. Financial risk includes interest-rate exposure, partnership dependencies, and potential cost overruns. Strategically, the company must balance rapid capacity expansion with responsible deployment, safety, and governance—areas under growing public and regulatory scrutiny. Finally, even without a federal backstop, the systemic importance of AI infrastructure could still prompt policy responses if failures or bottlenecks spill over into broader economic activity.
Key takeaways
The company rejects a US federal financial backstop and disavows any strategy to become “too big to fail.”
The cited $1.4 trillion investment ambition highlights unprecedented capital needs across chips, data centers, networks, and power.
Funding will be sought from private markets and partnerships, signaling confidence in non-governmental financing capacity.
Regulatory scrutiny is likely to focus on competition, energy usage, supply-chain concentration, and national security—not just on financial guarantees.
Success hinges on disciplined execution and governance as AI infrastructure scales to meet surging demand.
OpenAI Races to Quell Concerns Over Its Finances
Nytimes • Mike Isaac • November 6, 2025
AI•Funding•GovernmentAid•AI Bubble•Public Subsidies
The article centers on growing unease about the financial underpinnings of a leading A.I. company after a senior executive publicly floated the idea of government assistance. The suggestion sparked pushback, highlighting a broader debate over whether the A.I. sector’s rapid expansion is sustainable or drifting into bubble territory. The piece frames the controversy as a test of how cutting-edge technology should be financed, who bears the risks, and how public policy might shape an industry moving at breakneck speed.
What happened
A top executive proposed the possibility of government aid to support the company’s ambitions.
The idea met immediate resistance, reflecting skepticism about subsidizing a fast-growing private enterprise.
The pushback unfolded against mounting concerns that the A.I. industry could be inflating a dangerous bubble, fueled by high expectations and intense capital flows.
Why the suggestion matters
Public aid raises questions about fairness and moral hazard: should taxpayers underwrite the risks of private companies pursuing frontier technologies?
It foregrounds the scale and cost of A.I. development, implicitly acknowledging that private financing alone may be stretched by the industry’s capital intensity.
The debate could shape norms for public–private collaboration, including whether A.I. development should be treated as strategic infrastructure.
Signals of a looming bubble
Rapid valuation growth and exuberant investment often detach from near-term cash flows, creating vulnerabilities if expectations reset.
Market sentiment can be self-reinforcing: headlines about government support may initially buoy confidence but also invite scrutiny about underlying economics.
Pushback to public aid is a barometer of political and social tolerance for A.I. risk-taking, suggesting limits to the “growth at all costs” narrative.
Stakeholder perspectives
Critics worry that public subsidies would socialize losses while privatizing gains, distorting competition and rewarding scale over discipline.
Supporters might argue that government involvement can reduce systemic risk, align national priorities, and accelerate innovation with broad economic spillovers.
Investors are likely to parse the episode for clues about funding durability, execution risk, and the potential for policy-driven constraints or tailwinds.
Policy and governance implications
If government aid becomes part of the A.I. financing toolkit, policymakers will face design choices about oversight, transparency, and performance metrics to ensure public value.
Conditions could include commitments on safety, workforce development, or energy and environmental standards, linking funding to measurable outcomes.
The controversy could catalyze broader frameworks for A.I. risk management, including disclosure norms, stress testing of business models, and contingency planning.
What to watch next
Whether the company clarifies its stance—doubling down on private financing, outlining specific public–private mechanisms, or stepping back from the idea altogether.
The market’s reaction: shifts in sentiment toward A.I. sustainability narratives, capital availability, and the cost of financing.
Policy discourse: proposals that define eligibility, guardrails, and accountability for any A.I.-related public support—signaling how governments might balance innovation with fiscal prudence.
Bottom line
The pushback to a top executive’s mention of government aid crystallizes a pivotal question: can the A.I. industry finance its ambitions through private means alone, or will public support be necessary—and acceptable—to bridge the gap? With bubble concerns rising, the episode underscores that funding structures, not just technical breakthroughs, will determine how responsibly and sustainably A.I. scales from promise to durable economic impact.
Big Tech’s market dominance is becoming ever more extreme
Ft • October 31, 2025
AI•Tech•Market Concentration•BigTech•Economic Impact
The dominance of major technology companies in global markets has reached unprecedented levels, with recent developments highlighting how extreme this concentration has become. Nvidia’s market valuation reaching $5 trillion serves as a powerful indicator of how the artificial intelligence boom is reshaping market dynamics and investor exposure patterns. Even investors who believe they are diversified through broad global indexes are finding themselves heavily concentrated in a handful of technology giants driving the AI revolution.
Market Concentration Dynamics
The current market landscape demonstrates remarkable concentration in the technology sector, with a small number of companies commanding extraordinary market power and valuation. This trend has accelerated with the AI boom, as companies positioned to benefit from artificial intelligence infrastructure and applications have seen their valuations soar. The phenomenon affects even the most diversified investment portfolios, as major indexes increasingly reflect the performance of these dominant tech firms.
The implications of this concentration extend beyond mere market statistics. When a single company like Nvidia reaches a $5 trillion valuation, it represents a significant portion of major market indexes, meaning that investors tracking these benchmarks automatically maintain substantial exposure to these tech giants. This creates a self-reinforcing cycle where index funds and passive investment strategies continually channel capital toward the largest companies, potentially exacerbating market concentration.
Structural Implications
This extreme market dominance raises important questions about market stability, competition, and economic resilience. The concentration of value in a few technology companies creates systemic risks where market movements become increasingly dependent on the performance of these firms. Additionally, the ability of these companies to invest in research and development at scales that dwarf potential competitors creates significant barriers to entry in critical technology sectors.
The AI boom has particularly accelerated these trends, as the computational requirements and infrastructure investments needed to compete in artificial intelligence have created natural monopolies and oligopolies. Companies controlling key components of the AI ecosystem—from semiconductor manufacturing to cloud computing infrastructure to foundational AI models—have established positions that may be difficult for new entrants to challenge.
Broader Economic Impact
The dominance of big tech extends beyond financial markets into the broader economy. These companies influence innovation patterns, talent distribution, and even geographic economic development as regions compete to host tech hubs and related industries. The concentration of technological advancement within a small number of corporations may also shape the direction of innovation, potentially prioritizing areas that benefit incumbent firms over broader societal needs.
As artificial intelligence continues to evolve and integrate across industries, the companies controlling key AI technologies and infrastructure are positioned to capture disproportionate value from this transformation. This suggests that without significant changes in market dynamics or regulatory approaches, the current trend of increasing concentration may continue, with profound implications for competition, innovation, and economic equity.
Is OpenAI Becoming Too Big to Fail?
Wsj • November 2, 2025
AI•Tech•Regulation•Systemic Risk•OpenAI
OpenAI’s growing integration with major technology companies and its central role in the artificial intelligence ecosystem are raising questions about whether the company has become systemically important to the U.S. economy. Under CEO Sam Altman’s leadership, the company has strategically positioned itself through a complex web of partnerships and investments that connect it to numerous critical sectors.
Strategic Partnerships and Integration
The company’s approach involves deep integration with major tech platforms, creating interdependencies that extend far beyond simple vendor-customer relationships. Microsoft’s multi-billion dollar investment represents one of the most significant corporate partnerships in recent technology history, embedding OpenAI’s technology directly into Microsoft’s core products including Office, Windows, and Azure cloud services. This creates a situation where Microsoft’s substantial business operations now depend on OpenAI’s continued stability and performance.
Beyond Microsoft, the company has established relationships with other technology giants and thousands of businesses through its API services. These integrations mean that disruptions to OpenAI’s operations could potentially affect everything from customer service operations to software development workflows across multiple industries.
Economic Implications and Systemic Risk
The concern about OpenAI becoming “too big to fail” stems from its positioning at the center of the rapidly expanding AI economy. As businesses across sectors increasingly build their operations around AI capabilities, the failure of a central provider like OpenAI could create cascading effects throughout the economy. This mirrors historical concerns about financial institutions whose failure could trigger broader economic crises.
The company’s technology now underpins critical functions in sectors including healthcare, finance, education, and government services. This widespread adoption creates a form of structural dependency where alternative providers cannot easily replace OpenAI’s capabilities at scale, potentially creating a single point of failure in the AI ecosystem.
Regulatory and Market Considerations
The concentration of AI capability within one dominant company raises questions about market competition and innovation. While OpenAI’s technology has accelerated AI adoption, some experts worry that its dominant position could stifle competition and create barriers to entry for smaller AI companies. This market dynamic echoes previous technology cycles where platform companies achieved such scale that they became essential infrastructure.
The situation presents regulatory challenges, as policymakers must balance encouraging innovation with ensuring market stability. There are ongoing debates about whether companies achieving this level of systemic importance should face additional oversight or requirements to ensure their stability and continued operation.
Future Outlook and Industry Impact
Looking forward, OpenAI’s trajectory suggests its economic importance will only increase as AI becomes more deeply embedded in business processes and consumer products. The company’s continued expansion into new domains and applications strengthens its position while simultaneously increasing the potential impact of any operational disruptions.
The fundamental question remains whether the technology industry and broader economy have become so dependent on a single AI provider that its failure would create unacceptable systemic risk. This development represents a significant test for how emerging technologies achieve critical mass and what responsibilities come with that position in the modern digital economy.
This Is How AI Will Reshape Startups Forever | Henry Ward, Co-Founder of Carta
Neweconomies • Ollie Forsyth • November 2, 2025
AI•Startups•Venture Capital•Technology•Innovation
Welcome to BIG IDEAS by NEW ECONOMIES — a show where we learn how the most iconic founders have turned crucible moments into global companies. Throughout these episodes, we also discover the trends our guests are most excited about, and the types of businesses those trends could go on to create.
In this week’s episode, we sit down with Henry Ward, co-founder and CEO of Carta, for an in-person conversation recorded in San Francisco during Tech Week — with thanks to Founders Inc for hosting us.
Carta supports thousands of founders with their cap tables and is now expanding into the private equity space. Across the conversation we cover the current AI hype and where AI goes next, whether we’re in an AI bubble, how founders can land their first users, Carta’s transformation over the years, the growing importance of secondaries, the state of emerging managers, lessons from raising $1 billion and from Marc Andreessen, and Carta’s future plans.
Venture
Will Wall Street keep gobbling up secondary activity
Linkedin • Ben Black • November 6, 2025
Venture
Will Wall Street Keep Gobbling Up FOF & Secondary Venture Platforms?
Goldman Sachs and Morgan Stanley both made bold moves into private markets this month. Goldman acquired Industry Ventures. Morgan Stanley bought EquityZen.
At first glance, both signal a push for greater access to private markets. But the similarity ends there.
The Goldman deal is a masterstroke. Industry Ventures sits at the center of the venture ecosystem, with more than 1,000 investments and reporting on thousands of companies. That gives Goldman a real-time view of who’s raising, who’s buying, and who’s growing. In venture, better data drives better investing—and Goldman just bought one of the richest information engines in the market. That visibility fuels secondaries, co-investments, and wealth-management opportunities. Goldman didn’t just buy a venture firm; it bought data infrastructure for private markets.
The Morgan Stanley deal is harder to fathom. EquityZen has 800K+ retail investors, but its business thrives on small, fragmented transactions—$50K here, $100K there. These trades are thin on information, and buyers often purchase a logo, not a company they truly understand. EquityZen works as middleware between private companies and smaller investors, but it doesn’t scale to institutional flows or compliance standards. How that meshes with Morgan Stanley’s compliance culture is a mystery to me—unless the bank is playing a long game around retail infrastructure or data aggregation.
Still, both deals underscore one thing: Wall Street is taking venture capital very seriously.
We’ve seen this before. In 2021, StepStone acquired Greenspring Associates, adding $9.5B in AUM for roughly $185M in cash and $600M in incentives. The Goldman–Industry Ventures tie-up feels eerily similar, and it may prove a canary in the coal mine for further consolidation.
The message is clear: Big banks now see top-tier venture assets as central to their wealth management franchises. Private markets have become the battleground for client retention, data advantage, and differentiated returns.
If you’re the CEO of J.P. Morgan, UBS, or Morgan Stanley—and Goldman just bought Industry Ventures—what’s your move? The list of major venture platforms with fund-of-funds, directs, and secondaries is short: Sapphire Ventures ($11B AUM), Top Tier Capital ($7B), and TrueBridge Capital ($7B) come to mind, though none rival Industry Ventures’ dominance in secondaries or its panoramic view of the venture ecosystem.
For a truly transformational play, however, there’s only one obvious target: StepStone Group, valued at around $7B as a public company. And most VCs would agree—StepStone takes meetings just to download your entire data room, so its information asset is premier.
Wall Street’s appetite for venture exposure is only growing. The question isn’t whether more acquisitions are coming, but which firms will be next.
How Many Seeded Companies Progress?
Linkedin • Peter Walker • November 6, 2025
Venture
Summary
The piece argues that the long‑held “50% drop-off between rounds” rule is breaking down for seed-to‑Series A transitions in the post-2021 market. Using a dataset of 13,466 US startups that raised seed rounds, it highlights a sharp deterioration in graduation rates beginning with the 2021 cohort. Most strikingly, only 24% of companies that raised seed in 2022 have reached Series A so far, well below historical expectations. The author contrasts this with the pre‑2020 period—when 6 in 10 seed‑backed companies progressed—and suggests that today’s environment remains challenging despite robust attention to AI startups. The thesis: founders should recalibrate expectations and milestones; investors should recognize that the funnel from seed to Series A has materially tightened since the 2021 boom in company formation.
Key data points
Historic benchmark: In VC-backed companies, it’s often assumed half of seed-stage startups will advance to Series A.
2016–2018 cohorts: About 60% of seed‑stage startups ultimately raised Series A. The author notes this may be slightly inflated due to “sample heuristics” when the dataset provider was younger and the overall market was strong.
2020 cohort: 51% have raised at least a Series A, and 85% of the cohort sits at seed or Series A today—indicating many are still in the early pipeline rather than clearly failed or advanced further.
Post-2021 shift: Companies that raised seed in 2021 were “much less likely” to reach Series A than prior years, and the difficulty has persisted for subsequent cohorts.
2021 denominator shock: The number of seed‑stage companies “ballooned” in 2021, expanding the denominator and contributing to reduced graduation rates.
2022 cohort snapshot: Only 24% have progressed to Series A so far, underscoring how markedly the conversion rate has fallen relative to the old 50% heuristic (the sketch below shows how these rates are computed).
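To make the cohort arithmetic concrete, here is a minimal Python sketch of how a seed-to-Series A graduation rate is computed. The counts are hypothetical placeholders chosen only so the resulting percentages echo the roughly 60%, 51%, and 24% figures cited above; they are not the underlying Carta dataset.

# Hypothetical cohort counts; only the resulting percentages mirror the figures cited above.
cohorts = {
    # year: (companies_that_raised_seed, companies_that_later_raised_series_a)
    2017: (1_000, 600),   # stand-in for the 2016-2018 cohorts (~60%)
    2020: (1_500, 765),   # ~51%
    2022: (3_000, 720),   # ~24% so far
}

for year, (seeded, graduated) in sorted(cohorts.items()):
    rate = graduated / seeded
    print(f"{year} cohort: {graduated:,}/{seeded:,} reached Series A ({rate:.0%})")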
Interpretation and drivers
Oversupply from the 2021 boom: Abundant capital and a wave of new formations created intense competition for Series A dollars, pushing up the bar for traction and product‑market fit.
Data maturity and market quality effects: Earlier high conversion rates partly reflect a strong market and evolving data coverage; the current environment is simultaneously tougher and better measured.
AI caveat: While there’s a risk of replaying 2021 exuberance, the author argues today’s AI funding is flowing into fewer early‑stage companies, concentrating resources rather than inflating the entire denominator.
Implications for founders
Recalibrate milestones: Plan for a longer seed journey and a higher traction threshold for Series A. Extend runway, prioritize efficient growth, and validate monetization earlier.
Strategic positioning: Sharpen differentiation and proof points (retention, gross margins, sales efficiency) to stand out in a crowded seed landscape.
Psychological readiness: As the author puts it, after celebrating the seed round, “about half” of peers won’t reach the next checkpoint—prepare a plan B and maintain investor communication discipline.
Implications for investors
Funnel management: Expect lower seed‑to‑A conversion and adjust pacing, reserves, and follow‑on strategies accordingly.
Selection and support: Emphasize post‑seed company building—go‑to‑market, hiring, and unit economics—to help portfolio companies cross the Series A bar.
AI concentration risk: Fewer AI companies capturing more capital may produce a barbell outcome; diversify across enabling infrastructure and defensible application layers.
Bottom line
The seed‑to‑Series A path has grown materially harder since 2021, with the 2022 cohort’s 24% progression a stark signal. With more companies chasing selective Series A checks, success hinges on disciplined execution, capital efficiency, and unmistakable traction. Respect to the founders pressing on—this market rewards clarity, focus, and evidence over momentum.
Venture Capital is Not Competitive
Credistick • @credistick • November 6, 2025
Venture
Thesis
Early-stage venture capital bifurcates into two distinct modes: a collaborative, positive-sum discipline or an adversarial, zero-sum contest. The determining factor is whether investors accept the illusion of a stable consensus on “good deals.” When that illusion takes hold, competition and access become performative, pushing the industry toward brand-driven oligopoly, rent-seeking, and a finite-game mindset that shrinks overall opportunity rather than expanding it.
How VC Becomes Oligopolistic
When firms believe in a consensus on quality, they optimize for brand compounding, relationship dominance, and “scale,” concentrating capital and influence in a small set of actors.
This dynamic transforms a network of boutiques into an “oligopolistic mob,” rewarding distribution power over differentiated judgment and eroding the field’s diversity of ideas.
The downstream effect is a class of followers whose conviction atrophies, leaving them as passive participants reliant on the tastes of a few powerful leaders rather than genuine sourcing and conviction-driven underwriting.
Contrary Evidence Ignored
Evidence undermining the consensus myth is abundant yet dismissed: performance persistence across funds is weak; relationships fail to explain what persistence exists; consensus investments underperform; and scale erodes returns.
These findings challenge the notion that bigger platforms, warmer access, and club deals lead to superior outcomes; instead, they suggest diminishing returns to scale and herding penalties when everyone chases the same patterns.
Finite vs. Infinite Games
The essay reframes “competition” using James Carse’s finite versus infinite games: finite games prize winning fixed contests; infinite games maximize the scale and durability of play.
VC feels “more competitive” today only because it is more adversarial. As power concentrates around a few gatekeepers and the hunt is for a narrow set of “best” deals, the system becomes less genuinely competitive in the infinite-game sense.
The basketball analogy clarifies this: a finite player fixates on the next championship; an infinite player cares about the infrastructure—roughly 42,000 U.S. city parks with hoops, ~40% of teens playing pickup, and ~18,000 high schools fielding teams—because that expands the pool of talent and outcomes.
US vs. Europe: Divergent Equilibria
In the U.S., boutiques are being crowded out by large agglomerators who, despite not always competing for the same LP dollars, increasingly define norms and incentives for the whole ecosystem.
Europe exemplifies a more entrenched finite-game equilibrium: venture capital arrived as an import, captured by opportunists rather than ecosystem builders. Without a vision for abundance, the market remained relatively small, limiting growth and prosperity.
“City Parks” for Entrepreneurship
An infinite-game orientation prioritizes access and a healthy “interface” between capital and talent.
Agglomerators address this interface through scout programs optimized for partner approval, funneling capital to the same geographies (SF/NY), credentials (Stanford/Harvard), pedigrees (ex-Mag7), and themes (AI tooling). The opportunity set narrows, even if the winners aim to capture a larger share.
Boutique investors expand the interface by scouring campuses and builder communities for outliers who might not even be seeking funding yet—growing the pool and surface area of opportunity.
Rothenberg’s Paradox and the Extraction Trap
A paradox arises: why call it a finite game when it attracts ever-larger pools of capital? Because successful infinite games create rent-seeking opportunities for finite-game players; the scale of capital itself becomes the prize, fueling financial engineering and wealth extraction rather than new company creation.
The industry can appear booming while its positive-sum engine—the habit of expanding the playing field—quietly stalls.
Key Takeaways and Implications
Incentive design matters: optimize for origination breadth and conviction rather than brand signaling and access hoarding.
Healthy competition in VC is measured by the expansion of opportunity (infinite game), not by victory in a handful of “hot” rounds (finite game).
Ecosystem builders should prioritize “city park” equivalents for entrepreneurship—wider awareness, lower barriers, and more onramps—to restore positive-sum dynamics and long-run returns.
Sequoia Plans for More AI, Less Partisanship Under Grady and Lin
Bloomberg • November 5, 2025
Venture
Overview
Sequoia Capital’s new co-stewards, Pat Grady and Alfred Lin, are planning a strategic reset centered on two pillars: doubling down on artificial intelligence and recalibrating the firm’s external posture to appear less politically partisan. The shift follows concerns voiced by some partners about the leadership approach under Roelof Botha, according to people familiar with the matter. The combined message is that Sequoia aims to be both technologically aggressive—especially in AI—and culturally neutral in ways that keep the focus on founders and company-building rather than politics.
Strategic Priorities: Deepening the AI Focus
The firm intends to intensify its attention on AI across the stack, from foundational technologies to applied use cases, signaling a view that AI remains the defining platform shift for the next decade.
A deeper AI focus typically translates into more active scouting, larger reserves for follow-on capital, and tighter partnerships with technical founders—moves that can improve win rates in competitive deals.
Expect greater emphasis on category-creating startups rather than incremental features, consistent with a strategy that prioritizes durable moats built on data, distribution, and defensible models.
Reputation and Positioning: “Less Partisanship”
The recalibration toward a less partisan image suggests a desire to minimize political signaling that could alienate founders, customers, or limited partners. The goal appears to be a reputation anchored in execution and outcomes, not ideology.
For a firm of Sequoia’s stature, perceived neutrality can be a strategic asset: it broadens access to deal flow across geographies and founder communities and reduces reputational risk in polarized environments.
Internally, a de-emphasis on political identity may help align partners around shared investment theses and governance norms, reducing friction that can arise when personal political views spill into firm strategy.
Leadership Dynamics and Governance
With Grady and Lin as co-stewards, Sequoia is signaling a collaborative leadership model intended to blend complementary strengths while addressing partner concerns about prior decision-making dynamics.
Co-stewardship can distribute responsibilities across sourcing, portfolio support, and firm-building, while creating multiple trusted interfaces for founders and LPs.
Referencing concerns about earlier leadership acknowledges internal feedback loops; responding with a strategic and cultural reset communicates institutional adaptability—critical for long-lived venture franchises.
Implications for Founders, LPs, and the Market
Founders: A clearer AI mandate can accelerate diligence and conviction for technical teams, potentially leading to faster decisions and stronger post-investment support (hiring, go-to-market, partnerships).
Limited Partners: A focus on AI combined with reputational neutrality may be read as risk-managed offense—pursuing high-growth themes while limiting non-investment controversies that can distract from returns.
Competitors: The move raises the competitive bar in AI deals; rival firms may need tighter theses, deeper technical benches, or specialized funds to compete for top-tier allocations.
Ecosystem: A major firm signaling “more AI, less partisanship” could influence norms across Sand Hill Road, encouraging firms to emphasize domain depth and founder-centric neutrality over political branding.
Key Takeaways
Sequoia plans to make AI an even more central investment theme while reorienting its public posture to be less politically partisan, according to people familiar with the situation.
The leadership handoff to Pat Grady and Alfred Lin is coupled with a cultural message of focus and neutrality, likely aiming to unify partners, attract broad founder pipelines, and reassure LPs.
If executed well, the strategy could reinforce Sequoia’s edge in the most consequential technology cycle while reducing reputational noise—positioning the firm for sustained influence in the next wave of AI-driven company formation.
The Great VC Concentration
Youtube • Carta • November 6, 2025
Venture
The venture capital industry is experiencing unprecedented concentration, with a small number of elite firms capturing an increasingly dominant share of capital, deals, and returns. This concentration phenomenon is reshaping the startup ecosystem, creating a tiered system where access to top-tier VC funding has become a critical determinant of startup success. The trend represents a fundamental shift from the more distributed venture landscape of previous decades.
The Data Behind the Concentration
Recent data reveals the dramatic scale of this concentration. A mere 1% of venture capital firms now account for over 50% of the total capital raised in the industry. This elite group consistently participates in the most promising funding rounds and captures a disproportionate share of successful exits. The analysis shows that companies backed by these top-tier firms demonstrate significantly higher survival rates and valuation growth compared to those funded by other venture investors.
The concentration extends beyond just capital allocation to deal flow access. Founders of promising startups increasingly prioritize securing investment from this exclusive group, creating a self-reinforcing cycle where the best opportunities flow to the same small set of firms. This dynamic has created what industry observers describe as a “winner-take-most” environment in venture capital.
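As a toy illustration of what a “1% of firms, 50% of capital” claim means mechanically, the Python sketch below sums the capital share held by the largest 1% of funds in a synthetic, heavy-tailed sample. The fund sizes are invented for illustration and are not the data behind the video; the exact share printed will vary with the random draw.

import random

random.seed(0)
# Synthetic, heavy-tailed proxies for fund sizes (not real AUM figures).
fund_sizes = [random.paretovariate(1.1) for _ in range(5_000)]

fund_sizes.sort(reverse=True)
top_1_percent = fund_sizes[: len(fund_sizes) // 100]
share = sum(top_1_percent) / sum(fund_sizes)
print(f"Top 1% of funds hold {share:.0%} of total capital in this synthetic sample")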
Implications for Startups and Founders
For entrepreneurs, this concentration creates both challenges and opportunities. Startups that secure backing from top-tier firms benefit from:
Enhanced credibility and signaling effects that attract additional investors, talent, and customers
Access to extensive networks of potential hires, customers, and follow-on investors
Larger funding rounds that provide longer runways and competitive advantages
However, the concentration also means that the bar for securing funding from elite firms has risen dramatically. Many promising companies that would have secured venture funding in previous eras now find themselves unable to access the capital needed to scale, potentially stifling innovation and limiting entrepreneurial diversity.
Impact on the Broader Venture Ecosystem
The concentration trend has ripple effects throughout the entire startup investment landscape. Emerging fund managers and regional venture firms face increased competition for allocation in attractive deals. Limited partners (LPs) are increasingly concentrating their venture allocations with the perceived top performers, creating challenges for newer or smaller funds trying to raise capital.
This dynamic has also influenced investment strategies across the industry. Some firms are responding by specializing in specific sectors or stages where they can develop competitive advantages. Others are focusing on geographic regions or underrepresented founder groups that may be overlooked by the largest firms.
Future Outlook and Structural Shifts
The current concentration appears to be a structural rather than cyclical phenomenon, driven by the scaling benefits and network effects that accrue to successful venture firms. As the industry evolves, several potential outcomes emerge. The concentration could intensify further, creating an even more stratified ecosystem. Alternatively, new models of venture investing may emerge to serve the segments and founders currently underserved by the concentrated system.
The long-term implications for innovation and economic growth remain uncertain. While concentration may efficiently direct capital to the most promising opportunities, it also risks creating blind spots where valuable innovations outside the view of top firms struggle to secure funding. The evolution of this trend will likely shape the next generation of technology companies and the future direction of technological innovation.
Sequoia’s shake-up may be a VC turning point
Ft • November 6, 2025
Venture
Overview
Leadership churn across top venture firms and tech groups underscores how difficult it is to navigate the sharp swings of financial and technology cycles.
The latest shake-ups suggest a broader reassessment of firm structures, partner roles, and risk tolerances after an era of easy money and rapid growth.
Drivers of Turnover
Capital cycle whiplash: The shift from near-zero rates to a higher-for-longer environment has compressed valuations, extended exit timelines, and exposed strategy mismatches.
Portfolio drag: Slower IPO and M&A markets mean partners spend more time on triage and bridge financing, creating internal pressure over capital allocation and follow-ons.
Governance strain: Larger multi-stage platforms must balance seed-to-growth mandates, often sparking debates over reserves, write-downs, and performance accountability.
Tech platform shifts: AI infrastructure and compute costs, changing distribution, and platform risk demand new expertise, pushing firms to rotate leadership toward operators or specialists.
Impact on the Venture Model
Fund pacing and size: Expect tighter fundraises, smaller vehicles for earlier stages, and more disciplined deployment tied to concrete milestones rather than momentum.
Economics and incentives: Carried interest and clawback dynamics become more salient as paper gains are marked down, sharpening scrutiny on partner performance.
Secondaries and liquidity: GP- and LP-led secondaries grow in importance as firms seek liquidity solutions without waiting for public markets to reopen fully.
Implications for Start-ups and Investors
For founders: Governance, efficient burn, and path-to-profitability matter more than top-line growth; board composition and investor value-add become differentiators.
For LPs: Greater dispersion between managers; due diligence shifts from brand to demonstrated realizations, DPI over TVPI, and consistency across cycles.
For the ecosystem: Consolidation likely among platforms that cannot adapt; specialist and smaller, focused funds may outperform generalists during transition periods.
What to Watch Next
Partner realignments and new fund theses centered on AI, capital efficiency, and durable unit economics.
Recovery signals in IPO and M&A windows, and how firms adjust reserve strategies.
Whether leadership changes catalyze a more sustainable cadence in venture—less momentum chasing, more underwriting rigor, and clearer accountability through the cycle.
Private capital zombie firms will pile up in next decade, says EQT chief
Ft • November 1, 2025
Venture
Per Franzén, the chief executive of Swedish private equity firm EQT, has issued a stark warning about the future of the private capital industry, predicting that a significant number of “zombie firms” will emerge over the next decade. These are private equity and venture capital firms that will be unable to raise new funds but will continue managing their existing portfolios without the capacity for new investments or growth.
The Zombie Firm Phenomenon
According to Franzén, the current market environment is creating conditions where many private capital firms will become effectively defunct while maintaining a semblance of operation. He stated, “There will be a lot of zombie firms in the next 10 years. They will not be able to raise a new fund, but they will still be managing their old funds.” This situation arises from the combination of high interest rates, challenging fundraising conditions, and a backlog of unsold assets accumulated during the boom years of cheap money.
The inability to raise fresh capital means these firms will be trapped managing legacy investments without the resources to pursue new opportunities or properly support their existing portfolio companies. This creates a scenario where firms continue operating but lack the vitality and growth potential that defines healthy investment organizations.
Market Conditions and Structural Challenges
Several factors are contributing to this emerging crisis in private capital:
• High interest rates have dramatically altered the investment landscape, making leverage more expensive and reducing the attractiveness of private equity deals
• Difficult fundraising environment has left many firms struggling to secure commitments from institutional investors
• Overcrowded market with too many firms chasing limited capital and investment opportunities
• Accumulation of unsold assets from the previous low-rate environment that now cannot be exited profitably
Franzén noted that the industry expanded rapidly during the era of cheap money, with numerous firms launching that may not be sustainable in the current economic climate. The normalization of interest rates has exposed fundamental weaknesses in many business models that relied on continuous access to inexpensive capital.
Implications for the Industry
The proliferation of zombie firms could have significant consequences for the broader private capital ecosystem. These firms may be forced to hold assets longer than originally planned, potentially leading to suboptimal outcomes for both investors and portfolio companies. Without fresh capital to invest, they cannot take advantage of new opportunities or provide follow-on funding to struggling portfolio companies.
The situation also creates challenges for limited partners who may find their capital tied up in underperforming funds with limited prospects for successful exits. This could lead to reduced returns and longer investment horizons than originally anticipated.
EQT’s Position and Strategy
Despite the challenging environment, EQT appears positioned to weather the storm better than many competitors. As one of Europe’s largest private equity firms with approximately €242 billion in assets under management, EQT has the scale and resources to continue fundraising and investing. Franzén’s comments suggest that larger, more established firms may benefit from industry consolidation as weaker players struggle to survive.
The current market dislocation could create opportunities for well-capitalized firms to acquire assets from distressed sellers or partner with struggling firms to manage their portfolios. This potential consolidation may ultimately strengthen the remaining players in the industry.
The warning from EQT’s chief executive highlights a fundamental shift in the private capital landscape, moving from an era of abundant capital and easy fundraising to a more challenging environment that will test the sustainability of many business models in the industry.
Interview of the Week
Why Tech Billionaires Are So Angry: Elon Musk and the Gilded Rage of Silicon Valley
Keenon • November 6, 2025
Media•Social•Interview of the Week
If money is supposed to make you happy, then why do tech billionaires like Elon Musk, Peter Thiel and Marc Andreessen seem so miserably angry? That’s the question at the heart of Jacob Silverman’s new book, Gilded Rage, an exposé of Silicon Valley’s angry plutocracy. The weird thing is that many of these billionaires behave little differently from the apoplectic lumpen commentariat on X or Reddit. Sure, they might own X, but they share all the right-wing conspiracy theories infecting the online mob, from trollish racism and antisemitism to a bro-style paranoia about female power. According to Silverman, their rage is a form of exhaustion with the world itself. These men don’t just want to own everything—they want to exit society entirely, by inventing new cities, buying private islands, and founding Martian colonies. Unlike the Gilded Age robber barons who happily built universities and libraries, today’s miserable tech elites sit in their palatial basements and rage against society. Maybe we should take away their money. It might cheer them up.
The Radicalization is Real and Different
This isn’t just typical Silicon Valley disruption rhetoric. Silverman argues we’re witnessing an unprecedented fusion of corporate power and government under Trump, with tech CEOs like Musk acting as virtual co-candidates rather than mere donors. Unlike previous eras of money in politics, this represents CEOs directly occupying the political stage.
Childhood Trauma Shapes Billionaire Rage
Musk’s abusive upbringing in apartheid South Africa, Thiel’s grievances dating back to Stanford, and personal family conflicts (like Musk’s estrangement from his trans daughter) have profoundly shaped these men’s worldviews. Their “woke mind virus” obsession often traces directly to feeling their children have been turned against them by progressive institutions.
The Apartheid Connection Matters
The South African origins of key PayPal mafia members—Musk, Thiel, and David Sacks—aren’t coincidental. Growing up in a “highly engineered chauvinist racist society” has influenced their authoritarian instincts, comfort with hierarchy, and reactionary politics. Musk’s companies have faced multiple racial discrimination lawsuits, suggesting these patterns persist.
They’re Literary Fundamentalists, Not Intellectuals
These billionaires obsessively reference science fiction and fantasy (Musk’s Asimov fixation, Thiel’s endless Tolkien companies), but they read these works as blueprints rather than allegories. They lack humor, self-reflection, and genuine intellectual growth—Thiel still nurses the same grievances he aired in his 1995 book “The Diversity Myth.”
Startup of the Week
Charles Schwab to buy private shares platform Forge Global in $660 million deal
Published Thu, Nov 6, 2025, 6:42 AM EST
Charles Schwab on Thursday agreed to buy private shares platform Forge Global in a deal valued at $660 million as Wall Street firms seek to meet growing investor demand for access to high-growth startups.
Major financial institutions are expanding services that give clients access to private companies and liquidity in pre-IPO shares, seeking to tap rising investor demand for early exposure to fast-growing startups.
The per-share deal value of $45 represents a premium of roughly 72% over the stock’s last closing price. Shares of Forge Global surged 65% in premarket trading on the news.
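As a quick sanity check on the deal arithmetic, the short sketch below works backward from the reported $45 per-share offer and roughly 72% premium to the implied prior closing price. The implied figure is derived from those two reported numbers, not a quoted market price.

offer_per_share = 45.00        # reported per-share deal value
premium = 0.72                 # reported premium over the last close (approximate)

implied_prior_close = offer_per_share / (1 + premium)
print(f"Implied prior closing price: ${implied_prior_close:.2f}")  # roughly $26

def premium_over(close: float, offer: float) -> float:
    # Premium of an offer price over a closing price, e.g. premium_over(26.16, 45.0) ~= 0.72
    return offer / close - 1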
Some of the world’s most valuable startups are choosing to stay private for longer, raising capital in private markets rather than pursuing initial public offerings.
Companies such as ChatGPT-maker OpenAI, Elon Musk’s SpaceX and TikTok parent ByteDance now hold valuations that rival or exceed several major S&P 500 companies, narrowing the gap in influence between public and private markets.
Investors are seeking exposure to these fast-growing companies, and financial services firms are moving to cater to that demand.
Last month, Wall Street giant Morgan Stanley agreed to acquire private shares platform EquityZen, a rival to Forge Global.
Forge Global went public in 2021 through a merger with a special purpose acquisition company, or blank-check firm. Its stock has jumped 87% this year amid media reports that the company had put itself up for sale and received takeover proposals.
It operates a trading marketplace through which investors have bought and sold more than $17 billion in private company shares.
Charles Schwab offers wealth management, securities brokerage, banking and financial advisory services to customers. It has a market capitalization of roughly $170 billion and manages around $11.6 trillion in client assets.
The companies expect the deal to close in the first half of 2026.
Post of the Week
Just how much are we spending on AI?
A reminder for new readers. Each week, That Was The Week includes a collection of selected essays on critical issues in tech, startups, and venture capital.
I choose the articles based on their interest to me. The selections often include viewpoints I can't entirely agree with. I include them if they make me think or add to my knowledge. Click on the headline, the contents section link, or the ‘Read More’ link at the bottom of each piece to go to the original.
I express my point of view in the editorial and the weekly video.