
Media: Forever Blowing Bubbles?

While the Next Enlightenment is Being Built

Contents


Media: Forever Blowing Bubbles


West Ham United of the English Premier League have an anthem sung by fans at the start of a game:


I’m forever blowing bubbles
Pretty bubbles in the air
They fly so high
They reach the sky

And like my dreams they fade and die!
Fortune’s always hiding
I’ve looked everywhere
I’m forever blowing bubbles
Pretty bubbles in the air!

The so-called “AI bubble” isn’t a bubble at all, but the media is singing the bubble song.

In reality it is a new productivity revolution in its earliest stages of development, one in which infrastructure is being built for the future. The bubble narrative misses the key fact of history: the entirety of modern society was built by speculation meeting scientific breakthroughs. Speculation is the only method by which major innovation has historically been funded. That is also true in art, where spending on something yet to be born typifies creative effort.

The term “bubble” betrays a failure to distinguish between speculative excess and speculation with productive intent. As Scott Galloway recently observed,

“Seventy-five percent of market gains trace back to a handful of AI-exposed companies.”

That concentration invites fears of fragility, but it also signals where new value is being born — in the infrastructure layer of the next economy. Scale is only possible because of the concentration of value in hands prepared to deploy capital.

Chamath Palihapitiya’s deep dive into Amazon’s metamorphosis from retailer to infrastructure provider reads like a prelude to this moment: the Mag7 aren’t speculators; they’re platform architects. When the world’s largest firms redirect hundreds of billions toward computation, data, and models, they’re not chasing mirages. They’re rebuilding the substrate of productivity itself.

Steve Blank warns that “no science means no startups,” lamenting that the foundational research engine is being starved even as applied AI accelerates.

He’s right about the imbalance — but wrong to conflate it with hollowness. What we’re witnessing is the engineering phase of decades of accumulated science: algorithms born in academia now scaled into global utility.

As Mike Loukides writes in Enlightenment, AI doesn’t extinguish reason; it “extends cognition into the external world.” The capital surge into AI isn’t detaching from value; it’s funding the re-enlightenment of individual and collective human enterprise — embedding intelligence into every system that touches human decision-making. We shouldn’t fear the boom. We should manage it — and recognize it for what it is: the most consequential redirection of capital since the Industrial Revolution.


The Media’s “Bubble” Reflex

Financial media thrives on spectacle. Every era of transformative technology — from the railway boom to the internet — was framed as mania before being recognized as infrastructure. “Bubbles” sell headlines; “compounding productivity gains” don’t. The same playbook is being applied to AI. Bloomberg warns of “valuation hysteria,” The Economist cautions against “AI intoxication,” and even The New York Times calls this “a replay of the dot-com euphoria.”

But the story these outlets miss is that the froth sits atop decades of compounding investment in compute, algorithms, and data infrastructure. The signal — not the noise — is that the productivity base of the global economy is shifting toward intelligence as a utility.

The press frames NVIDIA’s trillion-dollar valuation as speculative exuberance, yet that same framing would have called Amazon’s AWS “a risky sideline” in 2008. The problem isn’t overstatement of price; it’s understatement of consequence.

Journalistic cycles demand a clickbait narrative, not a systemic view. In this sense, the media is performing its traditional role as volatility amplifier — compressing long-term technological progress into short-term emotional swings. As Galloway wrote, “Narratives, not numbers, move markets.” The dominant narrative right now is fear of excess; the missing one is faith in compounding capability.


From Clicks to Comprehension

Mike Loukides’s reminder in Enlightenment — that technology’s purpose is to “extend reason” — can be inverted as critique: the media’s purpose should be to extend understanding, not anxiety. But modern coverage optimizes for attention, not accuracy. When cable news cuts from OpenAI’s model release to layoffs at a chip factory, the implicit narrative is chaos. Yet the actual dynamic is creative destruction: productivity migrates faster than old accounting frameworks can measure it. The “bubble” framing is a cognitive shortcut — a substitute for analysis. It confuses velocity of investment with fragility of value.

The irony is that the media itself is being transformed by the very forces it misunderstands. AI is already reshaping reporting, editing, and distribution. Newsrooms run on models; analysts use LLMs to summarize earnings. The industry warning about AI disruption is already running on it. The “bubble” narrative is therefore self-referential — a fear story told by an industry being automated by the thing it fears. “Help! They are spending money to replace me.”


From Science to Scale

Steve Blank’s hierarchy of innovation — basic science → applied research → engineering → startups — remains the right lens. What we’re seeing now is the transition from the middle layers to the market. For forty years, government and academic labs laid the foundations: neural nets, backpropagation, transformers. Private capital is now scaling those discoveries into real systems. The irony is that this “commercial frenzy” is what finally forces investment back down the chain. As capital demands efficiency, it will fund new chips, new physics, and new models of energy use. The flywheel between science and startup doesn’t break; it accelerates.

If anything, the AI boom is the missing bridge between fundamental science and systemic deployment. The Manhattan Project and the Space Race were publicly funded; this time, the private sector is financing the frontier. Highly concentrated wealth is the engine of innovation. Capitalism makes that possible in today’s world. The long-term problem will not be over-investment but the very wealth concentration that makes it possible. We need OpenAI and NVIDIA and all the others to build future wealth. We also need policy to distribute the gains and ensure we get systemic uplift and not a new class of feudal overlords. Sam Altman’s Worldcoin project is a clue as to how that might happen.


Platform Capitalism and the Mag7 Playbook

Chamath Palihapitiya’s analysis of the Mag7 — Amazon, Apple, Microsoft, Google, Meta, NVIDIA, and Tesla — clarifies what’s happening. These are not “AI stocks” in the narrow sense. They are infrastructure behemoths capturing the productivity dividend of intelligence at scale. Amazon didn’t become the world’s utility by selling books; it did so by selling everything, including compute. Likewise, the firms leading this wave are transforming intelligence into infrastructure — AI as a service layer, a universal API for cognition.

The investment surge into these firms reflects a rational understanding of where value accrues in platform shifts. In the industrial era, value aggregated around energy and transport. In the information era, around data and networks. In the AI era, it aggregates around context — the ability to synthesize knowledge, predict behavior, and automate reasoning. Calling that a bubble is like calling the electrification of cities “hype.”


The New Enlightenment

Mike Loukides’s Enlightenment essay reframes the philosophical dimension: the Enlightenment was defined by reason, empiricism, and the belief that knowledge could improve the human condition. AI challenges us to extend those principles, not abandon them. By externalizing parts of cognition, we’re not ending the age of reason; we’re upgrading it. The machine is not replacing the mind — it’s reflecting it back at scale. If the first Enlightenment created the scientific method, the second may create the computational method: a world where human discovery and human creativity are accelerated by AI code.

This is why the AI boom feels chaotic — it’s a cognitive revolution colliding with financial markets. Every great shift in human understanding — printing press, electricity, internet — created excess, fraud, and euphoria. But history judges these periods not by their bubbles, but by their aftermaths. The winners of this cycle will be those who understand that intelligence itself is becoming an asset class.


From Speculation to Construction

We should stop debating whether this is a bubble and start asking what the speculative investment is building. Behind every overheated valuation lies an investment in compute, data, and capability. The correction will come, as always. And some will be losers. But what remains will be the foundation of a new industrial layer — knowledge infrastructure. Those who frame AI as a speculative mania miss the deeper story: capital is being redeployed from consumption to knowledge and understanding. That’s not froth. That’s progress.

Round Trip?

The other narrative circulating recently suggests that the players in AI (Nvidia, OpenAI, AMD, Oracle, and others) are spending the same money with each other in a closed loop. This narrative leaves out the investment capital coming in from the outside, and it also leaves out the massive revenues being earned by Nvidia, OpenAI, Anthropic, Microsoft, and others. There is no closed circular system here. There is huge growth in both infrastructure and revenue.


Essays

How Does the End Begin?

Profgalloway • October 17, 2025

Essay•AI•S


Overview

The piece argues that U.S. markets and near-term economic growth have become unusually dependent on a narrow cohort of AI-linked companies. It highlights extreme concentration at the top of the S&P 500 and attributes the bulk of recent index returns, profit expansion, capital spending, and even GDP growth to AI. The underlying question is whether such narrow leadership is sustainable — and how an “end” to the boom typically begins when expectations and capital intensity outpace broad-based value creation.

Market Concentration and Narrow Breadth

  • The article opens with a stark data point: “The top 10 stocks in the S&P 500 account for 40% of the index’s market cap.” This level of concentration means index performance increasingly reflects the fortunes of a handful of firms rather than the economy at large.

  • It then ties market gains directly to AI: since the launch of a popular chatbot in November 2022, “AI-related stocks have registered 75% of S&P 500 returns.” In plain terms, three quarters of recent equity gains have been driven by a small, thematically linked set of companies.

  • Concentration amplifies volatility. If leadership falters, passive investors tracking the index could face synchronized drawdowns. Conversely, continued outperformance by the leaders can mask underlying weakness in the broader market (a simple weighting sketch follows this list).
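
As a simple illustration of how a 40% top-10 weight translates into index-level moves, here is a minimal sketch; the 40% weight echoes the article, but the return figures below are invented for illustration only.

```python
# Illustrative only: how a 40% top-10 weight translates into index-level returns.
# The 40% weight echoes the article; the return assumptions below are invented.

top10_weight = 0.40
rest_weight = 1.0 - top10_weight

def index_return(top10_ret: float, rest_ret: float) -> float:
    """Weighted index return from the two segments."""
    return top10_weight * top10_ret + rest_weight * rest_ret

# If the AI leaders gain 30% while everything else is flat, the index still rises 12%.
print(f"{index_return(0.30, 0.00):.1%}")   # 12.0%
# If the same leaders fall 20%, the index drops 8% even with the rest unchanged.
print(f"{index_return(-0.20, 0.00):.1%}")  # -8.0%
```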

Earnings and Investment Engine

  • Profit growth is just as skewed: “80% of earnings growth” is attributed to AI-related names. That suggests multiple compression elsewhere is being offset by premium valuations in AI, while non-AI sectors lag.

  • Corporate investment has tilted even more: “90% of capital spending growth” is coming from AI-linked firms. This indicates an investment supercycle in data centers, chips, and model infrastructure, with spillovers across energy, real estate, and cloud services — but also the risk of overbuild if demand expectations prove too optimistic.

  • A narrow capex boom can crowd out other priorities, leaving the economy exposed to a single technological thesis. If the return on invested capital falls as projects scale, the feedback loop that has been rewarding leaders could reverse quickly.

Macro Dependence on AI

  • The article claims that “AI investments accounted for nearly 92% of the U.S. GDP growth this year.” If accurate, this implies the broader economy’s incremental growth is tied disproportionately to one vector of spending.

  • Such dependence can be double-edged. If AI delivers genuine productivity gains — faster software development, automated support, improved scientific discovery — growth can broaden and become self-sustaining. If not, GDP may recoil when the investment impulse slows, contracting orders for hardware, power, and construction linked to data infrastructure.

How Endings Typically Begin

  • Endings often start with a funding or narrative shift: a modest earnings miss, guidance cut, regulatory surprise, or cost overrun that reframes the growth story. In concentrated markets, small cracks at the top can propagate quickly through passive flows, risk models, and credit conditions.

  • Another path is saturation: as capex soars, marginal returns diminish, and scrutiny rises on monetization versus hype. Firms then confront tougher trade-offs — continue spending to keep pace, or slow investments and risk losing the arms race.

  • Policy and politics can also catalyze regime change. New rules on data usage, AI safety, export controls, or power-grid constraints could raise costs and slow deployment, testing valuations built on rapid scale-up.

Key Takeaways

  • The leadership of a few mega-cap AI firms now dominates index performance, earnings growth, and capex trends.

  • AI-driven investment appears to be the principal engine of recent U.S. GDP growth, signaling both powerful momentum and macro fragility.

  • The cycle likely ends not with a single dramatic event but with a sequence of disappointments — softer demand, rising costs of capital, regulatory friction — that gradually undermines the narrative.

  • For investors and policymakers, the challenge is to encourage diffusion of productivity gains beyond the top cohort while stress-testing scenarios where AI investment growth normalizes.

Implications

If AI’s promise converts into widespread productivity, today’s concentration could be a staging point for broader prosperity. If not, the economy and markets may have over-indexed to a single theme, leaving portfolios, jobs, and public revenues vulnerable to a synchronized unwind. In this framing, the “end” begins when the marginal dollar of AI spending no longer buys commensurate growth — and the market, intensely focused on the few, remembers the many.

Read More

America’s future could hinge on whether AI slightly disappoints

Noahpinion • October 12, 2025

Essay•AI•Tariffs•AIBubble•Macroeconomy


Overview

The piece argues that the current resilience of the U.S. economy rests disproportionately on an AI-driven investment boom, making America dangerously exposed if AI merely underperforms rather than fails outright. Despite weak manufacturing, soft payrolls, and consumer sentiment “at Great Recession levels,” growth is still positive and labor markets remain historically tight. Competing explanations exist—tariffs may be less binding than feared; weak sentiment could be partisan “vibecession”—but a growing body of evidence suggests AI-related capex is propping up GDP, stocks, and political narratives around policy competence. If that pillar wobbles, the author warns, the consequences would extend beyond markets to reshape public judgment of the current presidency and the broader policy regime supporting it.

The economic picture: weak signals vs. resilient aggregates

  • Manufacturing is “hurting badly” under Trump-era tariffs; payroll growth looks weak; consumer confidence has slumped.

  • Offsetting indicators: unemployment is “rising a little” but remains very low; prime-age employment is near all-time highs; New York Fed GDP nowcast is a bit over 2% while the Atlanta Fed’s is higher.

  • The divergence invites two stories: either the economy is broadly fine (tariff exemptions, overestimated tariff harms, sector-specific issues), or tariffs are a drag being masked by a powerful AI upswing.

AI as the single pillar

  • Citing the Financial Times, Pantheon Macroeconomics estimates H1 growth would have been just 0.6% annualized without AI spending—“half the actual rate.” Paul Kedrosky’s estimates align, while Jason Furman’s back-of-the-envelope suggests an even larger AI contribution.

  • Equity markets are similarly concentrated: Ruchir Sharma notes “AI companies have accounted for 80 per cent of the gains in US stocks so far in 2025,” and over one-fifth of S&P 500 market cap sits in Nvidia, Microsoft, and Apple—two being direct AI bellwethers.

  • The Economist cautions that “beyond AI” the economy looks sluggish: flat real consumption since December, weak job growth, slumping housing, and non-AI business investment down.

Policy insulation and exposure

  • Despite broad tariff actions, Joey Politano observes the administration “left AI and its supply chain mostly untouched,” effectively ring-fencing the sector as a national growth bet.

  • The author argues this tacit industrial policy heightens macro and political risk: if AI stumbles, tariffs’ drag would be unmasked, potentially tipping the economy into recession.

Why a crash could come from ‘mild disappointment’

  • The likely vulnerability is an “industrial bubble,” a term Jeff Bezos uses for episodes where real-economy overbuild meets optimistic mispricing. Unlike purely financial manias, the pain here flows through debt, capex, and loan defaults tied to overstated technological payoffs.

  • Bloomberg’s roundup captures the froth: record pace of spending on a still-unproven profit model; an MIT study finding “95% of organizations saw zero return” on AI initiatives; Harvard/Stanford researchers labeling output “workslop”—content that looks like work but “lacks the substance to meaningfully advance a given task.”

  • Technical headwinds add to the risk: diminishing returns to scaling laws; underwhelming model upgrades (OpenAI’s latest got “mixed reviews” after GPT-5 hype); and hard constraints from power grids as data-center electricity demand surges.

  • Crucially, the tech need not “fail”—it only needs to fall short of the most ardent expectations for equity values, capex plans, and financing structures to face a painful reset.

Political stakes

  • If AI is the economy’s load-bearing wall, a bust would “flip the narrative” on Trump’s presidency, analogous to how the 2008 crash indelibly marked George W. Bush. Given how “transformative” a second term appears, the sector’s trajectory could “determine the entire fate of the country,” in the author’s framing.

Key takeaways

  • AI’s macro lift: multiple estimates attribute roughly half or more of recent growth—and most equity gains—to AI-related spending and expectations.

  • Concentration risk: capex, market cap, and policy protection are clustered in a narrow AI supply chain, amplifying systemic exposure.

  • Industrial-bubble mechanics: lending and overbuild tied to optimistic use-cases could transmit even “mild disappointment” into defaults, retrenchment, and recession.

  • Non-AI weakness: flat consumption, soft jobs, and weak non-AI investment suggest limited cushions if the AI tide ebbs.

  • Narrative fragility: the political credit for resilience is contingent; an AI stumble could rapidly recast economic and electoral judgments.

Implications

The U.S. has, by omission and emphasis, made a national bet on AI as a growth engine. That bet may pay off over the long arc, but the near-term macro math is unforgiving: if returns to investment arrive slower than markets and policymakers have implicitly assumed, the unwind could be sharper and more politically consequential than standard equity corrections. The prudent stance, the author implies, is to treat AI as promising but not preordained—and to diversify growth drivers before disappointment turns cyclical softness into a narrative and policy crisis.

Read More

No Science, No Startups: The Innovation Engine We’re Switching Off

Steve Blank • October 13, 2025

Essay•Education•SciencePolicy•Startups•ResearchFunding


Tons of words have been written about the Trump Administration’s war on Science in Universities. But few people have asked what, exactly, is science? How does it work? Who are the scientists? What do they do? And more importantly, why should anyone (outside of universities) care?

(Unfortunately, you won’t see answers to these questions in the general press – it’s not clickbait enough. Nor will you read about it in the science journals – it’s not technical enough. You won’t hear a succinct description from any of the universities under fire, either – they’ve long lost the ability to connect the value of their work to the day-to-day life of the general public.)

In this post I’m going to describe how science works, how science and engineering have worked together to build innovative startups and companies in the U.S.—and why you should care.

(In a previous post I described how the U.S. built a science and technology ecosystem and why investment in science is directly correlated with a country’s national power. I suggest you read it first.)*

How Science Works

I was older than I care to admit when I finally understood the difference between a scientist, an engineer, an entrepreneur and a venture capitalist; and the role that each played in the creation of advancements that made our economy thrive, our defense strong and America great.

Scientists

Scientists (sometimes called researchers) are the people who ask lots of questions about why and how things work. They don’t know the answers. Scientists are driven by curiosity, willing to make educated guesses (the fancy word is hypotheses) and run experiments to test their guesses. Most of the time their hypotheses are wrong. But every time they’re right they move the human race forward. We get new medicines, cures for diseases, new consumer goods, better and cheaper foods, etc.

Scientists tend to specialize in one area – biology, medical research, physics, agriculture, computer science, materials, math, etc. — although a few move between areas. The U.S. government has supported scientific research at scale (read billions of $s) since 1940.

Scientists tend to fall into two categories: Theorists and Experimentalists.

Theorists

Theorists develop mathematical models, abstract frameworks, and hypotheses for how the universe works. They don’t run experiments themselves—instead, they propose new ideas or principles, explain existing experimental results, and predict phenomena that haven’t been observed yet. Theorists help define what reality might be.

Read More

Enlightenment

Oreilly • October 14, 2025

Essay•AI•Enlightenment•Education•CriticalThinking

Enlightenment

In a fascinating op-ed, David Bell, a professor of history at Princeton, argues that “AI is shedding enlightenment values.” As someone who has taught writing at a similarly prestigious university, and as someone who has written about technology for the past 35 or so years, I had a deep response.

Bell’s is not the argument of an AI skeptic. For his argument to work, AI has to be pretty good at reasoning and writing. It’s an argument about the nature of thought itself. Reading is thinking. Writing is thinking. Those are almost clichés—they even turn up in students’ assessments of using AI in a college writing class. It’s not a surprise to see these ideas in the 18th century, and only a bit more surprising to see how far Enlightenment thinkers took them. Bell writes:

The great political philosopher Baron de Montesquieu wrote: “One should never so exhaust a subject that nothing is left for readers to do. The point is not to make them read, but to make them think.” Voltaire, the most famous of the French “philosophes,” claimed, “The most useful books are those that the readers write half of themselves.”

And in the late 20th century, the great Dante scholar John Freccero would say to his classes “The text reads you”: How you read The Divine Comedy tells you who you are. You inevitably find your reflection in the act of reading.

Read More

The Amazon Primer: How It Started, Where It Stands, What’s Next

Chamath • October 15, 2025

Essay•Venture


Overview

The piece revisits a long-held thesis about Amazon, anchored in a 2016 assessment—when the company’s market value was roughly $300B—that it was a “multi-trillion-dollar monopoly hiding in plain sight.” Framed as a fresh, data-informed primer drawing on nearly two decades of observation, it promises to synthesize how Amazon’s strategic architecture, operating model, and compounding advantages have translated into outsized market power and long-term value creation. The author sets an evaluative baseline with two concrete markers: the 2016 valuation context and the enduring conviction captured in the quote, “multi-trillion-dollar monopoly hiding in plain sight,” signaling a view that Amazon’s trajectory was systematically underappreciated at the time.

Core Thesis and Context

  • The central idea is that Amazon’s economic engine has been structurally mispriced by markets at various points, particularly in 2016, because investors and commentators underestimated the compounding effects of scale, integration, and data.

  • The primer’s intent is to reexamine that call with updated evidence, clarifying where the initial thesis held, where it needs nuance, and what new drivers have emerged.

  • The author positions a multi-year research arc—“studying Amazon for nearly two decades”—as the foundation for separating transient narratives from durable signals about the company’s moat and future optionality.

Analytical Lenses Likely Employed

  • Market structure and monopoly dynamics: not merely share in a single product category, but the orchestration of a multi-sided platform—retail marketplace, logistics, cloud infrastructure, and subscription—yielding network effects and cost advantages.

  • Unit economics and flywheel effects: a recurring theme in Amazon analysis is the reinvestment of scale benefits into lower prices, faster delivery, and broader selection, which in turn deepens customer lock-in.

  • Optionality via infrastructure: capabilities built for internal needs (e.g., fulfillment and compute) become externalized as services, opening new profit pools and reinforcing the core.

  • Time horizon arbitrage: a willingness to trade near-term margins for strategic positioning, with value recognition lagging operational reality—mirroring the 2016 “hiding in plain sight” framing.

What the Primer Seeks to Clarify

  • How and why the 2016 call proved prescient relative to subsequent value creation.

  • Which components of the business model (e.g., marketplace, logistics, cloud, subscriptions, advertising) most strongly contributed to the step-change from a $300B company to a multi-trillion-dollar platform.

  • The interplay between data, scale, and customer experience in sustaining advantage over time.

  • The boundary conditions of “monopoly” in a platform context—where cross-subsidies and integrated services complicate traditional market definitions.

Implications

  • For investors: understanding Amazon requires looking through GAAP snapshots to the underlying flywheel, cash generation in high-ROIC segments, and the compounding nature of infrastructure-led optionality.

  • For operators: Amazon’s playbook illustrates how building reusable capabilities (logistics, compute, identity, payments) can be leveraged across business lines to reduce marginal costs and accelerate innovation.

  • For policymakers: defining market power in ecosystems challenges conventional antitrust frameworks, which often consider narrow product markets rather than platform-level interdependencies.

Notable Quote and Data Points

  • “In 2016, when Amazon was a $300B company, I called it a multi-trillion-dollar monopoly hiding in plain sight.” This quote encapsulates both the valuation anchor and the thesis’ foresight.

  • Time-in-market: “studying Amazon for nearly two decades” underscores a longitudinal approach, implying that the primer is less a hot take than a cumulative synthesis.

Key Takeaways

  • The article offers an updated, evidence-driven validation of a bold 2016 thesis about Amazon’s latent scale and monopoly-like dynamics.

  • It centers on how integrated infrastructure, network effects, and reinvestment discipline translated into durable advantages and substantial value creation.

  • The work aims to map which levers mattered most to the company’s evolution from a $300B valuation to multi-trillion-dollar territory, and why many observers underestimated that compounding.

  • Beyond Amazon, the primer provides a template for analyzing platform companies where cross-subsidies, data feedback loops, and infrastructure externalization blur traditional market boundaries.

Read More

Bubble?

AI’s double bubble trouble

Ft • October 16, 2025

AI•Funding•Bubble•AIInfrastructure•LLMs•Bubble?


Overview

The piece argues that today’s artificial intelligence boom contains a “double bubble”: a surge of genuinely productive investment alongside pockets of exuberant speculation. The core distinction is between long-lived, economically useful assets and capabilities versus momentum-driven bets whose pricing far outstrips tangible cash flows. It contends that both phenomena are unfolding at once, making it harder for investors, operators, and policymakers to separate signal from noise and allocate capital wisely.

Where Spending Looks Like Investment

  • Picks-and-shovels: Build-outs of data centers, high-bandwidth networking, specialized chips, cooling, and especially reliable power are framed as foundational. These expenditures create durable capacity that benefits multiple AI use cases over years rather than quarters.

  • Capability compounding: Tooling, MLOps, data pipelines, and safety/observability layers are seen as productivity enablers that improve model reliability and lower total cost of ownership over time.

  • Enterprise integration: Spending that embeds AI into existing workflows (search, customer support, code assistance, analytics) can unlock measurable efficiency gains, stickier contracts, and defensible moats through data flywheels and switching costs.

  • Infrastructure economics: Even where near‑term margins are thin, the assets (power contracts, facilities, supply-chain access) are scarce and appreciate as demand grows, making returns more plausible on a multi‑year horizon.

Where Behavior Looks Like Speculation

  • Valuations untethered from unit economics: Application-layer companies priced on usage “hockey sticks” despite high inference costs and unclear gross margins.

  • Me‑too product risk: Thin “wrapper” apps dependent on the same foundation models face commoditization, weak moats, and distribution challenges.

  • Overpromised timelines: Claims of imminent artificial general intelligence or fully autonomous agents risk pulling forward expectations, while reliability, safety, and governance gaps persist.

  • Capital chasing narratives: Rapid fundraising rounds and secondary market activity can inflate prices before product–market fit or enterprise security compliance is proven.

How to Tell the Difference

  • Cash flow path: Investments tied to capacity, cost reduction, and contracted demand have clearer routes to payback than user‑growth stories with negative gross margins.

  • Scarcity and control points: Ownership or preferential access to power, advanced packaging, unique data, or distribution channels signals durability.

  • Substitution risk: Offerings that can be replicated by a prompt, plug‑in, or model parameter change are fragile relative to assets that require time, permits, or capex to duplicate.

  • Measured progress: Credible operators publish benchmarks that align with customer outcomes (latency, accuracy, compliance), not just model leaderboard wins.

Implications

  • For investors: Expect dispersion—some infrastructure and workflow players compound, while crowded consumer apps mean‑revert. Prioritize unit economics under stress scenarios (model price cuts, context limits, power constraints).

  • For operators: Secure energy and supply chains early; design for inference cost discipline; build proprietary data advantages; and align go‑to‑market with compliance and governance needs.

  • For policymakers: Grid expansion, permitting reform, and data/privacy standards will shape where productive investment clusters and whether speculative excess spills into systemic risks.

Key Takeaways

  • Both good investment and bad speculation are happening simultaneously.

  • Durable returns are likelier in scarce, capital‑intensive infrastructure and deeply embedded enterprise use cases.

  • The riskiest exposures are thin apps with weak moats, aggressive promises, and fragile unit economics.

  • Disciplined evaluation—of costs, scarcity, and substitution risk—helps navigate the “double bubble.”

Read More

AI has a cargo cult problem

Ft • October 16, 2025

AI•Funding•InvestmentBubble•CargoCult•CapitalEfficiency•Bubble?

AI has a cargo cult problem

Overview

The piece argues that parts of today’s artificial intelligence boom resemble a “cargo cult”: enthusiastic imitation of the trappings of success—massive spending, grand announcements, and headline-grabbing pilots—without the underlying causal ingredients that produce durable breakthroughs or profits. It warns that “Spending vast sums and inflating an investment bubble is no guarantee of unleashing technological magic,” framing a cautionary stance toward exuberant capital allocation and expectation-setting around AI.

What “cargo cult” means in this context

  • Borrowed from anthropological shorthand, “cargo cult” behavior arises when actors copy surface rituals of prior success, expecting similar outcomes, while missing the unseen mechanisms that actually drive results.

  • In AI, this manifests as a belief that bigger budgets, more GPUs, and sweeping “AI-first” slogans will automatically yield competitive moats, productivity surges, or scientific leaps—despite weak evidence of causality in specific use cases.

Symptoms in the current AI cycle

  • Escalating capital expenditures on compute and data centers billed as inevitable prerequisites for leadership.

  • Top-down mandates to “add AI” to products or workflows, independent of user need, data readiness, or MLOps maturity.

  • Valuation premia assigned to AI narratives rather than cash flows, unit economics, or defensible data advantages.

  • Demos and proofs-of-concept that impress in isolation but fail to translate into reliable, cost-effective production systems.

Why spending alone won’t deliver “technological magic”

  • Diminishing returns from naïve scaling when data quality, labeling, and feedback loops are the real bottlenecks.

  • Fragility in real-world deployment: hallucinations, brittleness under distribution shift, and safety/abuse risks.

  • Operational drag: inference costs, latency, energy constraints, and integration complexity across legacy systems.

  • Organizational gaps: scarce talent for prompt engineering, evaluation, red-teaming, and post-deployment monitoring.

  • Governance friction: privacy, IP, and compliance exposures that can erase thin margins or stall rollouts.

Principles to avoid cargo-cult AI

  • Start with the problem, not the model: define target tasks, success metrics, and alternatives (automation, UX redesign, rules).

  • Prove value early with small, measurable pilots; track ROI at a use-case level, not by platform spend.

  • Invest in data flywheels—collection, curation, human feedback, and domain ontologies—that compound over time.

  • Treat evaluation as a product: establish robust test suites, baselines, and continuous monitoring rather than relying on anecdotal demos (a minimal harness sketch follows this list).

  • Be model-agnostic: mix heuristics, retrieval, smaller specialist models, or fine-tunes when they beat frontier systems on cost/performance.

  • Build guardrails: access controls, privacy-preserving setups, content filters, and incident response.
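
To ground the "evaluation as a product" principle above, here is a hedged, minimal sketch of a regression-style eval harness; the test cases, pass threshold, and the call_model stub are hypothetical placeholders, not any particular vendor’s API.

```python
# Minimal eval-harness sketch (illustrative): run a fixed test suite against a model
# and compare the pass rate to a stored baseline before shipping a change.

from typing import Callable

# Hypothetical test cases: (prompt, substring the answer must contain)
TEST_SUITE = [
    ("What is 2 + 2?", "4"),
    ("Name the capital of France.", "Paris"),
    ("Convert 100 cm to metres.", "1 m"),
]

def evaluate(model: Callable[[str], str], baseline_pass_rate: float) -> bool:
    """Return True if the model meets or beats the recorded baseline pass rate."""
    passed = sum(1 for prompt, expected in TEST_SUITE if expected in model(prompt))
    pass_rate = passed / len(TEST_SUITE)
    print(f"pass rate: {pass_rate:.0%} (baseline {baseline_pass_rate:.0%})")
    return pass_rate >= baseline_pass_rate

def call_model(prompt: str) -> str:
    """Stub standing in for a real model call; swap in an actual client here."""
    return "4" if "2 + 2" in prompt else "Paris"

ship_it = evaluate(call_model, baseline_pass_rate=0.66)
```

In practice the suite would be domain-specific and versioned alongside the baseline, so model or prompt changes are gated on measured regressions rather than on demos.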

Implications for stakeholders

  • Investors: favor evidence of durable data/process moats and efficient customer acquisition over raw compute announcements. Look for cohort-level retention, cost-to-serve trends, and margins resilient to model/provider switching.

  • Executives: resist “AI theater.” Tie budgets to unit-economic improvements and decommission pilots that fail to clear thresholds.

  • Policymakers: calibrate incentives to real productivity gains (standards, safety, workforce training) rather than broad subsidies that fuel misallocation.

  • Builders: pursue niches where domain context and proprietary data trump scale-idolatry; optimize for reliability, latency, and cost curves observable in production.

Key takeaways

  • Cargo-cult dynamics arise when capital and hype substitute for causal understanding of what makes AI deliver value.

  • Scale helps, but only if paired with the right data, evaluation, and operations; otherwise, costs compound faster than benefits.

  • Sustainable advantage is more likely to come from problem-fit, data systems, and disciplined deployment than from headline spending.

  • Prudent skepticism and rigorous measurement are antidotes to AI bubble mechanics—and the best path to real, compounding impact.

Read More

The Frothiest AI Bubble Is in Energy Stocks

Wsj • October 15, 2025

AI•Funding•EnergyStocks•Bubble?

The Frothiest AI Bubble Is in Energy Stocks

Overview

The article argues that the hottest “AI trade” isn’t in chips or software but in energy companies pitching future solutions to meet AI’s massive power needs. These are largely “concept stocks” that are pre-revenue yet command soaring market valuations on the promise of powering next-generation data centers. As the piece puts it, “Concept stocks with no revenue have soaring valuations.” It adds that the biggest of these energy firms has the backing of OpenAI’s CEO: “OpenAI CEO Sam Altman has backed the biggest of these energy companies.” Together, these signals have pulled significant speculative capital into early-stage energy technologies tied to the AI boom.

What “concept stocks” means in this context

  • Companies at the pre-commercialization stage, often with novel or unproven technologies.

  • No current revenue, limited operating history, and heavy reliance on external financing.

  • Valuation driven less by near-term cash flows and more by long-dated narratives about market size, strategy, and perceived inevitability of demand.

Why AI is pulling energy valuations higher

  • The model-training and inference cycles that underpin AI require large, reliable, and low-cost electricity supplies.

  • Investors extrapolate accelerating compute demand into future electricity demand, bidding up the value of companies that claim they can deliver abundant power.

  • Narratives around grid constraints and the need for resilient, clean, baseload energy for data centers amplify perceived urgency and potential pricing power.

The signaling effect of high-profile backers

  • Having OpenAI’s leader associated with the largest of these energy plays serves as a powerful credibility and attention catalyst.

  • Such involvement can accelerate fundraising, lift private and public comparables, and attract strategic partners.

  • It also introduces reflexivity: perceived endorsement begets capital, which raises valuations, which in turn appears to validate the thesis—even absent operating revenues.

How these valuations are being justified

  • Total addressable market framing anchored to the projected growth of AI workloads and data center buildouts.

  • Mentions of prospective offtake agreements, long-term power purchase structures, or partnerships with hyperscalers—even when non-binding—are treated as traction.

  • Expectations of regulatory support, incentives, and fast-tracked permitting for critical power projects are factored into models despite timing and execution uncertainty.

Key risks and what to watch

  • Technology risk: performance, scalability, and reliability must be proven at commercial scale.

  • Timeline risk: long development cycles can collide with investor patience; delays raise financing needs and dilution.

  • Regulatory and grid interconnection risk: permits, siting, and transmission remain persistent bottlenecks.

  • Financing risk: capital intensity is high; cost-of-capital moves can materially reset valuations.

  • Commercial risk: non-binding MOUs may not translate into contracted revenues; watch for binding offtakes, firm PPAs, and first meaningful revenues.

  • Milestones to track: commissioning of pilot plants, third-party performance validation, unit economics, and progress toward positive gross margins.

Implications for investors and operators

  • The AI-energy linkage is real, but valuations may be front-running evidence. Without revenue, price discovery leans on sentiment and celebrity endorsements.

  • Portfolio construction should reflect binary outcomes: position sizing, diversification across technologies, and clear milestone-based underwriting are critical.

  • Operators should prioritize transparent technical disclosures, credible roadmaps to commercialization, and binding customer commitments to move beyond narrative-driven pricing.

Key takeaways

  • “Concept stocks with no revenue have soaring valuations,” centered on the promise of serving AI’s power demand.

  • The largest of these firms is bolstered by the involvement of OpenAI’s CEO, intensifying investor interest.

  • Valuations hinge on future demand narratives, offtake expectations, and policy tailwinds rather than current financials.

  • Scrutiny should focus on binding contracts, verified performance, and the timeline to first revenue and scale.

Read More

Chipmaker TSMC reports nearly 40% surge in its net profit, thanks to AI

Fastcompany • Associated Press • October 16, 2025

AI•Tech•TSMC•Semiconductors•AIChips•Bubble?

Chipmaker TSMC reports nearly 40% surge in its net profit, thanks to AI

Earnings surge and AI tailwinds

Taiwan’s largest chipmaker reported a near-40% jump in quarterly net profit to a record 452.3 billion New Taiwan dollars (about $15 billion) for the July–September period, beating analyst expectations. Revenue rose 30% year over year, underscoring how demand for advanced processors tied to artificial intelligence continues to power top- and bottom-line growth. As the world’s biggest contract semiconductor manufacturer and a key supplier to Apple and Nvidia, the company is benefitting from a structural upgrade cycle in data centers and devices that increasingly hinge on AI-accelerated computing.

Key figures and context

  • Net profit: NT$452.3 billion (~$15 billion), a company record.

  • Profit growth: nearly 40% year over year.

  • Revenue growth: up 30% year over year for the quarter.

  • Customer mix: supplies leading-edge chips to Apple and Nvidia, anchoring demand visibility.

  • Geographic expansion: active fab projects in the United States and Japan.

What’s driving growth

The performance reflects an “AI supercycle” that is pulling forward orders for cutting-edge nodes used in training and inferencing. As hyperscalers and device makers race to integrate AI capabilities, they are prioritizing access to the most advanced manufacturing capacity. Morningstar analysts captured the backdrop succinctly: “Demand for TSMC’s products is unyielding... We expect AI demand to stay resilient.” That resilience is visible not only in server-class accelerators but also in premium smartphones and PCs where on-device AI features are becoming a differentiator, reinforcing mix improvements and utilization rates at advanced nodes.

Expansion and supply-chain resilience

To hedge geopolitical and supply-chain risk, the company is building fabrication plants in the U.S. and Japan. These moves diversify production away from a single geography while keeping the most sophisticated process know-how under tight control. The firm has committed $100 billion in U.S. investments, including new factories in Arizona, in addition to $65 billion pledged earlier—an investment scale designed to satisfy U.S. customer proximity requirements and policy incentives while maintaining global competitiveness.

Policy pressure and industry politics

Amid ongoing U.S.–China tensions, Washington has amplified calls to rebalance global chip production. The article notes that the U.S. Commerce Secretary, Howard Lutnick, proposed splitting production 50–50 between Taiwan and the U.S., an idea Taiwan rejected given its current manufacturing concentration and strategic advantages. While such proposals signal policy ambition to localize more capacity, the company’s dominant position, capital intensity, and hard-earned manufacturing yields make rapid geographic reallocation challenging. Morningstar’s view—doubting tariffs would materially hinder the company—highlights the pricing power and indispensability conferred by technological leadership.

Implications for customers and competitors

  • For mega-buyers like Nvidia and Apple, the results validate that leading-edge capacity remains the constraining resource in the AI era. Priority access can translate into product leadership and gross margin leverage.

  • For rivals and second-source foundries, catching up requires massive capital, ecosystem depth, and proven yields at the most advanced nodes—barriers that are difficult to overcome quickly.

  • For systems makers and cloud providers, diversified fab footprints in the U.S. and Japan may improve supply assurance and align with “friend-shoring” goals, but cost structures and ramp timelines will shape how much production ultimately shifts.

Outlook

The quarter’s outperformance, combined with robust AI order books and ongoing capacity additions, suggests momentum should continue into subsequent periods. The blend of structural AI demand, premium customer mix, and geographic diversification supports sustained investment and margin durability, even as macro and geopolitical uncertainties persist. In short, the firm is consolidating its role as the keystone of the AI hardware stack—translating technology leadership into record financial results while methodically addressing supply-chain and policy headwinds.

  • Key takeaways:

  • Record profit and strong revenue growth powered by AI-driven demand.

  • Strategic U.S. and Japan fabs aim to mitigate geopolitical risk and meet customer proximity needs.

  • Policy debates over production localization continue, but technological leadership and scale underpin the company’s resilience.

Read More

From Sports to AI, America Is Awash in Speculative Fever. Washington Is Egging It On.

Wsj • Greg Ip • October 16, 2025

Essay•Regulation•RetailInvesting•SportsBetting•Crypto•Bubble?

From Sports to AI, America Is Awash in Speculative Fever. Washington Is Egging It On.

Overview

The piece argues that a broad “speculative fever” has gripped the United States, extending from stock markets to sports wagering and cryptocurrencies. It highlights how “working-class investors are flocking to stocks, betting and crypto,” framing this surge either as the flowering of democratized finance or as late-arrival exuberance that could end painfully. The narrative juxtaposes enthusiasm for risk with the possibility that easy access, cultural normalization, and policy choices have collectively stoked behavior more akin to gambling than long-term investing.

What’s Driving the Boom

  • Frictionless access: Zero-commission trading, slick mobile apps, and instant deposits make risk-taking feel casual and continuous.

  • Normalization of betting: The mainstreaming of sports gambling creates a cultural bridge between wagering and short-term trading tactics.

  • Narrative momentum: AI-fueled growth stories and viral online commentary provide simple, compelling reasons to chase rallies.

  • Low barriers for new investors: Retail-friendly platforms and fractional shares invite small-dollar participation that can scale quickly in aggregate.

The Role of Policy and Washington’s Influence

  • Government encouragement of risk-taking shows up indirectly through stimulus-era liquidity, the tolerance of easy money conditions for extended periods, and public-sector enthusiasm for strategic technologies such as AI.

  • Regulatory posture is presented as ambivalent: while investor protection is cited as a goal, policy signals can legitimize speculative venues and assets by integrating them into the mainstream financial conversation.

  • Public support for innovation—tax incentives, grants, and procurement—can be read as green lights for risk, even if the intention is industrial strategy rather than market froth.

Risks and Potential Downside

  • Late-cycle dynamics: When participation broadens to less affluent households, losses can be socially regressive if conditions reverse.

  • Blurred lines between investing and gambling: Momentum-chasing, leverage, and rapid turnover amplify volatility and behavioral pitfalls.

  • Fragile expectations: If growth narratives falter or liquidity tightens, today’s democratization could flip into disillusionment, eroding trust in markets and institutions.

Key Quotes and Framing

  • The article captures the ambivalence succinctly: “working-class investors are flocking to stocks, betting and crypto, beneficiaries of a new age of democratic finance—or the last invitees to a party that’s going to end.” This encapsulates both the promise of broader access and the peril of arriving at the peak of a cycle.

Implications

  • Policymakers face a trade-off between fostering innovation and curbing excess: calibrating rules to protect newcomers without smothering growth narratives.

  • Market educators and platforms will need stronger guardrails—clear disclosures, default risk limits, and tools to mitigate impulsive behavior.

  • For households, the central challenge is distinguishing durable investment from ephemeral speculation, especially when cultural cues and interface design reward speed over patience.

Read More

AI Economics Are Brutal. Demand Is the Variable to Watch.

Wsj • Steven Rosenbush • October 14, 2025

AI•Data•Tokens•AIUsage•Demand•Bubble?

AI Economics Are Brutal. Demand Is the Variable to Watch.

Overview

The piece argues that the defining variable for near‑term AI economics is not model breakthroughs or chip supply alone, but demand—specifically, how much AI people actually use. The most practical proxy for that demand is “tokens,” the unit that measures how much text a model processes. The article’s core message is succinct: “Keep an eye on usage of AI, measured in units known as ‘tokens.’ It’s soaring.” This framing positions token consumption as a leading indicator for revenue growth, cost pressure, and market sustainability in an otherwise volatile, bubble‑prone environment.

What Tokens Capture

Tokens represent slices of text (or text‑like data) processed during prompts and outputs. Rising token counts signal that users are submitting more queries, longer prompts, or interacting with AI systems more frequently and in more complex ways. Because tokens accrue with every interaction, they tie directly to compute consumption, billable usage in many pricing models, and the stress placed on infrastructure. In short, token flow converts abstract “AI interest” into measurable demand.
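
As a rough, hedged illustration of what a token count captures (not the article’s methodology), the sketch below approximates usage with a common characters-per-token heuristic; the four-characters-per-token figure and the example strings are assumptions.

```python
# Rough illustration only: approximate token usage with a crude heuristic
# (~4 characters per token for English text). Real tokenizers vary by model.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Very rough token estimate; actual counts depend on the model's tokenizer."""
    return max(1, round(len(text) / chars_per_token))

prompt = "Summarize this earnings call transcript in five bullet points."
completion = "1. Revenue grew 30% year over year, driven by AI-related demand..."

usage = {
    "prompt_tokens": estimate_tokens(prompt),
    "completion_tokens": estimate_tokens(completion),
}
usage["total_tokens"] = usage["prompt_tokens"] + usage["completion_tokens"]
print(usage)  # every interaction adds to this meter, which is what billing tracks
```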

Why Demand Is the Decider

  • Revenue mapping: In many business models, more tokens translate into higher billable usage, making token growth a direct proxy for top‑line expansion.

  • Capacity and cost: Surging tokens increase inference workloads, testing data center capacity and driving operational expenses.

  • Pricing power: If usage grows faster than capacity, providers may sustain or raise prices; if capacity outpaces demand, price competition can intensify.

  • Adoption depth: Token trends reveal whether AI is moving from experimentation to embedded workflows—longer, more frequent interactions suggest deeper integration.

The Harsh Economics

The article frames AI economics as “brutal” because costs scale with usage, while monetization depends on converting that usage into paying, repeat behavior. High fixed costs to build and maintain models and infrastructure meet variable inference costs each time tokens are processed. If token growth concentrates in low‑monetization use cases, margins compress. If it occurs in premium, enterprise, or mission‑critical workflows, margins can expand. Thus, demand quality matters as much as demand quantity.
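
To make the point about demand quality concrete, a back-of-the-envelope sketch follows; every figure in it (subscription price, token volumes, inference price, fixed costs) is an invented placeholder, not a number from the article.

```python
# Back-of-the-envelope unit economics for an AI feature (all numbers invented).

price_per_user_month = 20.00         # assumed subscription price, $/user/month
tokens_per_user_month = 2_000_000    # assumed monthly token consumption per user
cost_per_million_tokens = 5.00       # assumed blended inference cost, $/1M tokens
fixed_costs_per_month = 50_000.00    # assumed model/infra overhead, $/month
users = 10_000

variable_cost = users * (tokens_per_user_month / 1_000_000) * cost_per_million_tokens
revenue = users * price_per_user_month
margin = (revenue - variable_cost - fixed_costs_per_month) / revenue

print(f"revenue ${revenue:,.0f}, inference ${variable_cost:,.0f}, margin {margin:.0%}")
# Doubling per-user token consumption doubles the variable cost line; unless that
# usage converts into better-monetized workflows, the margin compresses.
```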

Signals to Watch

  • Growth rate of tokens per user and per organization—evidence of intensifying engagement versus casual trials.

  • Mix of short versus long prompts and outputs—indicating complexity and potential value of tasks.

  • Conversion from free to paid token consumption—proof that usage is monetizable.

  • Elasticity to pricing and limits—whether usage holds up when guardrails or costs rise.

  • Token usage during peak periods—stress‑tests for reliability and user tolerance for latency.

Implications

  • For builders: Optimize for workflows that consistently consume tokens and deliver measurable ROI, not one‑off demos.

  • For operators: Align infrastructure spend with observed token trajectories to avoid overbuild or bottlenecks.

  • For investors: Track token volume, growth durability, and monetization pathways to distinguish sustainable adoption from hype.

  • For policymakers and enterprises: Demand‑driven scaling will surface issues around access, reliability, and responsible use as token loads climb.

Key Takeaways

  • Demand, expressed through token consumption, is the cleanest real‑time gauge of AI adoption and market health.

  • Token trends simultaneously illuminate revenue potential and cost exposure, explaining why AI economics can feel unforgiving.

  • Sustainable value will accrue where rising tokens align with sticky, high‑value use cases and resilient monetization.

Read More

Atlassian CEO, Mike Cannon-Brookes on Why Everything is Overvalued & Are We in an AI Bubble

Youtube • 20VC with Harry Stebbings • October 13, 2025

AI•Tech•Valuations•Bubble?


Overview

A wide-ranging conversation examines whether today’s tech markets are overvalued, how real an “AI bubble” might be, and what durable business building looks like through platform shifts. The discussion spans valuation sanity checks, AI’s impact on product design and pricing, the future of engineering work, and leadership principles from decades of company-building. Context on scale underscores the stakes: Atlassian serves 300,000+ customers and generates $5B+ in annual revenue, framing a practitioner’s view rather than a purely investor take. (thetwentyminutevc.com)

Are Markets Overvalued? The AI Bubble Question

The guest bluntly contends that “most of the things are vastly overvalued,” while allowing that a subset is actually undervalued and will be worth far more over time. He stresses the gap between “magical demos” and delivered enterprise value, arguing adoption will take longer than many expect as organizations rework processes, data and security. He has revised his own priors on AI platform scale, now seeing a plausible path for frontier model companies to reach multi‑trillion outcomes, even if the exact magnitude is uncertain. (texttube.ai)

What Changes In AI And SaaS Business Models

  • Multimodel by design: Rather than training proprietary models, the approach is to integrate and swap among best‑in‑class models quickly as the landscape evolves, making adaptability a core competency. (texttube.ai)

  • Design over data determinism: In platform shifts, great product design becomes a primary differentiator; AI will transform applications, but specialized tools will persist where they serve jobs‑to‑be‑done better than generic chat interfaces. (podcosmos.com)

  • Pricing is in flux: Classic per‑seat pricing strains under AI‑driven productivity. Expect hybrid models blending value‑based and consumption elements, with customer budget predictability still essential. Early AI margins are thin as costs outpace pricing power; defensibility must exceed “wrapper on a model.” (texttube.ai)

Talent, Tools, And Productivity

Despite “vibe coding” and code‑assist tools, the expectation is more engineers in the future, not fewer, because technology creation is not output‑bound; AI accelerates creation, spawning new products, features, and complexity that require expert builders. Internally, teams trial multiple AI dev tools (e.g., Cursor, GitHub Copilot), chosen for complementary strengths; coding is only one part of the engineer’s job alongside problem framing, debugging, integration, and operations. The practical constraint on AI tool pricing is low switching cost: vendors must prove outsized, measurable productivity to command premiums. (businessinsider.com)

Operating In Uncertainty: Strategy And Cadence

Strategy is framed as a series of explicit bets—formed with conviction, reviewed quarterly, and revised without sunk‑cost attachment. Customer conversations are treated as the stabilizing signal through noise; talking to the “right” customers, in sufficient number, builds internal confidence on what’s real versus hype. This cadence enables moving fast without being whipsawed by fashion. (texttube.ai)

Leadership And Company‑Building Lessons

  • Creative survival: To endure multiple platform transitions, companies must keep creating—sometimes killing their own products—rather than defending incumbency. The existential risk is losing creativity. (podcosmos.com)

  • Co‑leadership to solo leadership: Lessons from two decades of co‑CEO partnership emphasize mutual respect, complementary strengths, and clear “swim lanes”; the mindset carries into bolder AI‑era moves as a solo CEO. (podcosmos.com)

  • Strengths as dual‑edged: “Being unreasonable” is cast as both superpower and liability—useful for pushing boundaries, hazardous if untempered—illustrating how leadership traits are two sides of one coin. (texttube.ai)

Key Takeaways

  • Valuations: Much is overpriced, some is underappreciated; separating signal from hype requires evidence of durable unit economics and moats beyond model access. (texttube.ai)

  • AI trajectory: Bubble dynamics may be present, but enterprise value accrues on slower timelines than demos suggest; design and workflow change determine real ROI. (podcosmos.com)

  • Pricing evolution: Expect blended per‑value and consumption models as AI turns productivity into outcomes that must be measured and priced. (texttube.ai)

  • Talent outlook: AI shifts what engineers do and how they do it, but increases demand for core technologists as creation expands. (businessinsider.com)

Read More

The A.I. Bubble Looks Real

Nytimes • October 14, 2025

Essay•AI•SpeculativeBubble•DotComBust•HousingCrisisParallels•Bubble?


Thesis

The piece argues that today’s artificial-intelligence surge resembles a classic speculative bubble. Like the dot-com boom and the mid-2000s housing rise, exuberant expectations and rapid capital flows are inflating valuations far beyond what current fundamentals justify. When sentiment reverses, the deflation will be sharp and broadly painful.

Parallels to Past Bubbles

  • Dot-com echo: As with late-1990s internet stocks, investors are pricing in transformative future profits before sustainable business models are proven, rewarding growth narratives over cash flows.

  • Housing-crisis rhyme: Easy financing, herd behavior, and the belief that “this time is different” create a self-reinforcing cycle, until credit tightens and confidence breaks.

Mechanisms Inflating the Boom

  • Narrative momentum: Sweeping claims about productivity, automation, and “general intelligence” attract capital faster than real-world deployment can absorb.

  • Capital concentration: Money is clustering in a narrow set of platforms and suppliers, pushing a small cohort of names to outsized valuations while masking fragility beneath headline indices.

  • Costly infrastructure bets: Massive spending on compute, data centers, and specialized chips presumes sustained demand growth; if adoption lags, these fixed costs become a drag.

Signals of Overheating

  • Valuation stretch: Prices embed optimistic assumptions about market size, margins, and speed of monetization that would require near-perfect execution across multiple layers of the stack.

  • Copycat investment: Me-too product launches and speculative funding rounds suggest capital is chasing themes rather than defensible moats.

  • Disconnect with usage: Hype cycles can outpace durable user engagement and willingness to pay, revealing revenue shortfalls when promotional spend slows.

How the Pop Hurts

  • Corporate retrenchment: Overbuilt capacity and unmet revenue targets lead to write-downs, layoffs, and a wave of consolidation.

  • Financing squeeze: Risk appetite contracts, starving earlier-stage firms and compressing valuations across the ecosystem.

  • Real-economy spillovers: Suppliers, contractors, and regional economies tied to data-center and chip buildouts face demand shocks when projects pause.

Implications

Investors should separate durable utility from speculative narratives, stress-test assumptions about adoption curves, and scrutinize unit economics beyond headline growth. Operators need discipline on capex and a focus on verifiable productivity gains for customers. Policymakers and market overseers should be alert to leverage pockets and correlated exposures that could amplify a downturn. The central message: technological promise can be real while pricing becomes unreal—when expectations reset, the damage extends well beyond the most speculative corners.

Read More

AGI or Bust, OpenAI’s $1 Trillion Gamble, Apple’s Next CEO?

Youtube • Alex Kantrowitz • October 13, 2025

AI•Funding•AGI•OpenAI•AppleLeadership•Bubble?


Big Picture

The video frames a high-stakes moment for the tech industry around the pursuit of artificial general intelligence and what it would take—financially, organizationally, and societally—to get there. It uses the lens of a potential trillion-dollar-scale capital commitment to explore how leading AI firms might secure compute, energy, and talent at unprecedented levels, and what happens if the bet pays off—or doesn’t. It also considers what kind of leadership will be required at major platform companies to navigate this next era, using Apple’s eventual CEO succession as a case study in how strategy and operational discipline must evolve for an AI-first world.

AGI or Bust: Paths and Trade-offs

  • Explores why AGI has become a binary rallying cry: either push aggressively for step-change capability gains or risk falling behind in a compounding race for model quality, data, and distribution.

  • Highlights core bottlenecks—compute availability, high-quality data, and power—arguing that progress hinges on solving these together, not sequentially.

  • Surfaces safety and governance questions: capability scaling versus controllability, evaluation rigor, and the need for transparent benchmarks and third-party audits as systems begin to act autonomously across tools and networks.

  • Examines how productization might shift from single-turn chat to agentic workflows that plan, call tools, and interface with enterprise systems—raising new reliability, liability, and UX challenges.

The $1 Trillion Gamble: What It Buys

  • Breaks down the components of a trillion-dollar push: long-term compute contracts, custom silicon programs, data center build-outs, power procurement (including alternative and renewable sources), and strategic stakes across the supply chain.

  • Argues that scale alone is insufficient; the wager must unlock defensible distribution: integration into operating systems, productivity suites, developer platforms, and vertical enterprise stacks.

  • Discusses monetization pressure: from per-seat subscriptions and usage-based APIs to higher-value outcomes pricing tied to automation and decision support, with gross margin sensitivity to inference costs.

  • Notes the financing mosaic likely required—corporate cash flows, partnerships, pre-purchase agreements, sovereign/strategic investors—balanced against governance safeguards and public trust.

Apple’s Next CEO: Succession in an AI-First Era

  • Uses Apple’s leadership trajectory to illustrate how the next top executive at a platform giant must combine operational excellence with a coherent AI and silicon strategy.

  • Emphasizes on-device intelligence as a differentiator for privacy, latency, and battery, and the role of tightly coupled hardware, neural accelerators, and software to deliver reliable, everyday AI features.

  • Considers partner ecosystem implications: aligning incentive structures for developers, content owners, and accessory makers as generative features reshape creation, search, and services revenue streams.

Implications and Signals to Watch

  • Capital intensity will privilege firms that can secure long-duration power and compute; expect deeper vertical integration and unusual alliances between chipmakers, utilities, and hyperscalers.

  • Regulation will likely shift from post-hoc content rules to pre-deployment model oversight, evaluation standards, and incident reporting—impacting release cadence and competitive dynamics.

  • For enterprises, value will migrate from pilots to production-grade agents embedded in workflows; procurement will scrutinize observability, data residency, and total cost of ownership.

  • Consumer experiences will hinge on reliability and trust more than raw model IQ; brands that make AI disappear into intuitive interfaces may capture durable loyalty.

Key Takeaways

  • The pursuit of AGI concentrates financial, technical, and governance risk into a few pivotal bets whose outcomes will shape platform power for a decade.

  • A trillion-dollar commitment is less about headline capex and more about locking in enduring advantages across silicon, energy, and distribution.

  • Leadership at major platforms must marry supply-chain mastery with responsible AI strategy; succession choices will signal how aggressively each company plans to steer into agentic computing.

Read More

BlackRock CEO Larry Fink on the ‘AI bubble’

Youtube • CNBC Television • October 14, 2025

AI•Tech•LarryFink•AIBubble•DataCenters•Bubble?


Overview

In this clip, BlackRock’s Larry Fink weighs in on whether today’s market enthusiasm amounts to an “AI bubble.” His core message separates short‑term equity froth from a long‑duration, real‑economy investment cycle: he argues AI’s fundamentals are anchored in physical buildouts—compute, data centers, power, cooling, and networks—rather than the debt‑fueled speculation that characterized past bubbles. The result: even if valuations correct, the capex supercycle underpinning AI is likely to persist.

What’s driving the cycle

  • Compute demand and widespread adoption are pushing an unprecedented buildout of data centers. As Fink has noted on CNBC, “as we democratize AI, we need more data centers,” with marquee facilities scaling to roughly 1 gigawatt and costing on the order of “$50 billion per data center.” (cnbc.com)

  • Power is the binding constraint. In his 2025 investor letter, Fink highlights that a single hyperscale site can draw about 1 GW—roughly the electricity needed to power a major U.S. city on a peak day—forcing urgent investment in generation, transmission, and permitting reform. (blackrock.com)

Risks and market dynamics

  • Concentration: For now, AI remains dominated by large, cash‑rich platforms because of extraordinary upfront costs, though Fink expects costs to fall over time, broadening access and use cases. (benzinga.com)

  • Macro spillovers: He has warned that massive AI‑related capex could introduce new inflationary pressures. If inflation reaccelerates, long rates (e.g., the 10‑year Treasury) could revisit or exceed the 5%–5.5% range—an outcome that would “shock” equities and reset risk premia. (benzinga.com)

How this differs from a classic bubble

  • Cash vs. credit: The current wave is largely financed by corporate cash flows and equity, not highly leveraged balance sheets—making a drawdown less likely to trigger systemic stress.

  • Tangible assets: Spending is flowing into physical infrastructure (power, land, chips, cooling) that can retain value and utility across cycles, even if some AI software valuations compress.

Implications for investors

  • Near term: Expect valuation volatility in AI leaders and suppliers as rates, power availability, and GPU supply ebb and flow.

  • Medium term: “Picks and shovels” exposures—semiconductors, foundry and packaging, power generation and transmission, advanced cooling, specialized REITs, and project finance/private credit—stand to benefit as AI capacity scales.

  • Policy watch: Permitting, grid interconnects, and energy mix decisions will determine the pace and geography of deployment, influencing relative winners across regions and asset classes. (blackrock.com)

Key takeaways

  • The market can be frothy, but the AI buildout is a multi‑year capex theme rooted in hard assets.

  • Power and permitting are now as critical as GPUs to AI’s trajectory.

  • Rate sensitivity is high: an inflation surprise and higher long yields could reprice AI equities even as infrastructure spend continues. (benzinga.com)

Read More

When the Bubbles Burst

Mahablog • maha • October 14, 2025

Essay•GeoPolitics•RareEarths•Bubble?


Here’s an attention grabber — Why China Can Collapse the U.S. With One Decree. It’s by David Dayen at The American Prospect, so I’m inclined to take it seriously,

“When Donald Trump reacted to China’s export restrictions on rare earth minerals—a group of 17 chemical elements used in almost all electronics—by threatening a new 100 percent tariff on Chinese goods, Wall Street investors who had yawned at most of his erratic announcements for six months finally took notice. It was the same stock-tanking pressure that led Trump to climb down from his Liberation Day tariffs; sure enough, by Sunday, the TACO (Trump always chickens out) vibes started kicking in.”

“I have a great relationship with President Xi … he’s a great leader for their country, and I think we’ll get it set,” Trump told reporters on Air Force One on the way to Israel. “Don’t worry about China, it will all be fine!” he exclaimed on Truth Social. Next thing you know he’ll be apologizing in Mandarin like John Cena.

Here’s an explanation of the John Cena reference if you’re like me and didn’t get it.

“Holding the president of the United States on a tight leash is certainly a lucrative asset. But investors should probably be more wary of the situation. America has made an unusually directional economic bet that is at this moment totally dependent on Chinese rare earth exports. The circumstances that brought us here long predate Trump and are rooted in decades-long failures to retain our technological know-how and channel it into industrial production. It’s never too late for a wake-up call, but the country is in a terribly vulnerable position where China can snap its fingers and snuff out the only thing propping up our economy.”

Our economy is currently being propped up by what’s looking like an AI bubble. “Close to half of the gain in gross domestic product this year will come from data center construction, and around 80 percent of stock market gains are attributable to a handful of AI-heavy tech companies,” Dayen writes. And then he goes on to describe how we got here. This is the accumulation of a lot of business and government decisions going back many decades. But now that business karma is getting ripe, we’ve got a perfect moron in charge of our economy to steer us through the crisis.

Paul Krugman has a post up about technology bubbles that’s partly hidden behind a paywall. But here’s another article that explains what Krugman wrote.

“In his newsletter on Monday, Krugman said that Trump’s tariff announcements six months ago were ‘a massive betrayal of the world’s trust,’ noting that previous tariff reductions were achieved ‘through many rounds of international negotiations, in which the U.S. and other nations solemnly agreed not to backtrack.’”

“Krugman said Trump now appears surprised that other countries are retaliating, referring to China’s new export controls on rare earths, which include several vital inputs for U.S. industry.”

“Reacting to the administration’s hypocrisy on the matter, Krugman said, ‘Gosh. Aggressive unilateral trade action is a ‘moral disgrace.’ Who knew?’”

“Krugman said that there is ‘one big difference’ between the trade strategies of the two countries, and it is that, unlike the U.S., ‘the Chinese appear to know what they’re doing.’”

Read More

OpenAI makes five-year business plan to meet $1tn spending pledges

Ft • October 14, 2025

AI•Funding•OpenAI•Infrastructure•DataCenters•Bubble?


Overview

OpenAI has drawn up a five‑year business plan to sustain more than $1 trillion in pledged spending for AI models and infrastructure, outlining new revenue lines, debt financing, and further fundraising to bridge the gap between massive capital needs and current cash flow. The plan seeks to diversify beyond today’s core subscriptions, while sequencing infrastructure buildouts and partnerships to reduce upfront costs and align payments with usage. (reuters.com)

What the plan includes

  • New products and revenue streams: bespoke AI services for governments and large enterprises; shopping and commerce tools; video creation via Sora; autonomous AI “agents”; and potential entry into online advertising. OpenAI is also weighing consumer hardware developed with Jony Ive as a way to create a differentiated, tightly integrated AI experience. (ft.com)

  • Funding approach: a mix of additional equity raises, new debt facilities, and structured partnerships in which infrastructure providers shoulder a large share of up‑front capex, with OpenAI paying over time through leases or usage‑based agreements. (reuters.com)

  • Compute and hardware: continued expansion of the Stargate data‑center initiative and deeper collaboration with Nvidia, AMD and Oracle to co‑develop and secure next‑generation AI hardware at scale, reducing per‑unit compute costs over time. (ft.com)

Scale of commitments and context

The plan follows a year of unprecedented infrastructure announcements around Stargate and related projects. Public remarks from leadership indicate tens of gigawatts of new capacity and capital outlays that could run into the high hundreds of billions of dollars, underscoring why a multiyear financing blueprint is necessary. “This is what it takes to deliver AI,” CEO Sam Altman said while unveiling large data‑center buildouts, framing them as a response to surging demand and the sector’s energy‑intensive compute needs. (cnbc.com)

Current business profile

OpenAI’s annualized revenue is roughly $13 billion, with subscriptions to ChatGPT accounting for a majority of sales. While only a small fraction of ChatGPT’s very large user base pays today, the plan envisions raising conversion and expanding lower‑cost offerings in emerging markets to widen the funnel. This consumer base is complemented by rapid enterprise uptake as companies integrate copilots and domain‑specific models into workflows. (ft.com)

Key financing mechanics

To reconcile near‑term losses with outsize capex, OpenAI is leaning on partner‑financed capacity (e.g., Oracle‑hosted compute), staggered payment schedules, and potential debt secured against contracted demand. The company is also exploring monetization of intellectual property and model licensing to third parties, and could supply compute directly as Stargate campuses come online. Investors and counterparties view falling unit costs and steep demand curves as making the economics workable over a five‑ to ten‑year horizon, though skeptics warn of “circular” financing risks across closely linked partners. (ft.com)

Risks, constraints, and external dependencies

  • Energy and siting: Multi‑gigawatt campuses face power, land, and water constraints; timelines depend on grid interconnections and new generation, including nuclear and renewables. (cnbc.com)

  • Market structure: Heavy reliance on a few chip and cloud suppliers raises concentration and pricing risk; co‑development aims to mitigate this but may not fully neutralize supply shocks. (ft.com)

  • Financial durability: Sustained negative free cash flow during buildout years requires continued access to capital markets and partner balance sheets; execution slippage could amplify financing costs. (reuters.com)

  • Systemic spillovers: Because many large tech and infrastructure players are now financially entangled with OpenAI’s roadmap, a shortfall in delivery could ripple across suppliers, lessors, and customers. (techcrunch.com)

Why it matters

If successful, the plan would lock in a multi‑year runway for frontier model development while pushing AI services deeper into consumer, enterprise, and public‑sector channels. It also signals a structural shift in how internet‑scale platforms are financed—away from lightly capitalized software toward capital‑intensive, utility‑like AI infrastructure—potentially redefining the economics of the sector for the next decade. (reuters.com)

Key takeaways

  • OpenAI has a five‑year strategy to fund and stage over $1 trillion in AI infrastructure and model development. (reuters.com)

  • The plan hinges on diversified revenues (agents, video, commerce, ads, hardware), partner‑financed data centers, and new debt/equity raises. (ft.com)

  • Leadership argues demand growth and falling compute costs will make the economics work; critics flag concentration and circular financing risks. (cnbc.com)

Read More

BlackRock, GIP and Nvidia in $40bn data centre takeover to power AI growth

Ft • October 15, 2025

AI•Funding•DataCenters•AlignedDataCenters•AIInfrastructure•Bubble?


Overview

A buyer consortium featuring BlackRock, Global Infrastructure Partners (GIP), Nvidia, MGX, and Microsoft is pursuing a roughly $40bn data-centre takeover aimed at powering the next phase of AI growth. The group intends to expand Aligned Data Centers to meet surging computing demand, positioning the platform as a scaled hub for AI training and inference workloads. In the article’s words, the consortium “plans to expand Aligned Data Centers to meet computing demand,” underscoring an immediate buildout agenda focused on capacity, power, and speed-to-deploy.

Who is involved and why it matters

  • BlackRock and GIP bring deep infrastructure capital, long-duration investment horizons, and experience structuring complex energy and digital-asset deals.

  • Nvidia adds strategic technology leadership—its accelerated computing platforms underpin modern AI clusters—plus potential ecosystem advantages in system design and supply coordination.

  • Microsoft represents anchor demand as a top hyperscale cloud provider and AI platform operator, aligning compute requirements with long-term offtake for data-centre capacity.

  • MGX extends access to additional capital and AI-centric partnerships, reinforcing global reach and multi-region expansion potential.

What Aligned Data Centers brings

Aligned Data Centers is a proven multi-tenant operator known for modular builds and power-efficient designs. As AI shifts data-centre specifications toward high-density racks, liquid cooling, and low-latency interconnects, Aligned’s ability to standardize, replicate, and scale campuses becomes a critical lever for meeting timelines that hyperscalers and enterprise AI programs demand.

Strategic rationale and synergies

  • Capital plus compute: Pairing infrastructure investors with a leading chipmaker and a hyperscaler marries financing certainty with technology roadmaps and consumption visibility.

  • Acceleration of build schedules: Coordinated procurement of power equipment, transformers, switchgear, and advanced cooling can compress time-to-live for new halls and campuses.

  • Optimization for AI: Facilities can be purpose-built for GPU clusters—supporting multi-hundred-kilowatt per rack densities, liquid-to-chip cooling, and fiber-rich topologies for distributed training.

  • Energy strategy: Scale enables long-term power purchase agreements, grid interconnect upgrades, and integration of renewable and firming resources to stabilize cost per compute.

Market context and implications

AI demand is reshaping digital infrastructure economics. As model training and inference expand, the limiting factor often shifts from chips to power, land, and construction velocity. A $40bn platform signals that data centres are now core to AI’s value chain, not a downstream utility. Expect more joint ventures that bind together capital, compute silicon, and cloud demand to de-risk multi-billion-dollar buildouts.

For enterprises, this consolidation could improve access to AI-ready colocation capacity, shortening deployment cycles and enabling hybrid strategies where sensitive data remains within controlled environments while tapping cloud-adjacent AI services. For cities and utilities, larger campus footprints will intensify focus on transmission upgrades, water use for cooling, and permitting—areas where scale and long-term investment partners can help align stakeholders.

Risks and watch items

  • Power scarcity and interconnection queues remain the gating factor; project timelines will hinge on utility partnerships and grid upgrades.

  • Regulatory reviews may examine market concentration, cross-supplier relationships, and data-sovereignty implications, especially where AI workloads intersect with sensitive sectors.

  • Supply-chain tightness for GPUs, electrical gear, and cooling systems could affect delivery schedules and costs.

  • Sustainability scrutiny will rise, requiring credible pathways to low-carbon energy and transparent reporting on efficiency and water stewardship.

Key takeaways

  • A high-profile consortium is backing a roughly $40bn data-centre move to “expand Aligned Data Centers to meet computing demand,” aligning investment scale with AI’s infrastructure needs.

  • Integrating investors, a chipmaker, and a hyperscaler aims to synchronize financing, technology roadmaps, and consumption, reducing execution risk for rapid capacity growth.

  • Purpose-built AI campuses—high-density, liquid-cooled, and power-optimized—are becoming the standard for next-generation workloads.

  • Execution hinges on power availability, permitting, and supply chains; success would set a template for future AI infrastructure partnerships.

Read More

Venture

F*** the Big VCs

Youtube • 20VC with Harry Stebbings • October 16, 2025

Venture


Core message

A sharp, punchy critique of brand-name, multi-stage venture firms and the incentives they bring into early-stage company building. The video argues that founders routinely over-index on prestige and logo-chasing, when what matters is whether the individual partner will move fast, take real risk at the stage you are in, and show up when it’s hard. The provocation—“forget the big VCs”—isn’t anti-capital; it’s a call to reject default playbooks that optimize for fund deployment and brand optics rather than founder outcomes.

Why “big VC” dynamics can hurt early stages

  • Fund math and check-size gravity: Very large funds are structurally pressured to write larger checks and seek outsized ownership, which can push premature scaling, unnecessary burn, and cap-table distortion at seed and Series A. That pressure can also slow decisions (more committees) and reduce the appetite for messy, pre–product-market-fit risk.

  • Signaling and optionality: When multi-stage platforms lead or crowd a seed, their behavior at the next round becomes a signal. If they don’t follow on, other investors infer negative information—hurting price and momentum. Founders unintentionally trade optionality for branding.

  • Governance creep: Big platforms often introduce heavier terms and more formal governance early—protective provisions, information rights, board control—which can constrain pivots and slow execution in the phase where speed and iteration matter most.

  • Portfolio-level optimization: A platform’s rational choice is to maximize fund-level returns, not any single company. That can manifest as pushing “swing-for-the-fences” strategies that fit the portfolio power-law but may not fit your market timing or founder risk profile.

What founders should optimize for instead

  • Choose a partner, not a logo: Prioritize the specific partner’s conviction, speed, and history of backing founders through rough patches. References from founders who failed are often the most revealing.

  • Stage–strategy fit: Seek investors whose fund size and reserves are aligned to your stage. Smaller or specialist funds and angels often make faster, cleaner decisions and calibrate help to the stage.

  • Clean terms and control of your arc: Prefer simple terms, right-sized checks, and boards that form only when truly additive. Optimize for runway to learn, not runway to market.

  • Clarity on reserves and follow-on behavior: Understand how much a fund reserves, what triggers follow-on, and whether they pre-commit to pro rata. Ambiguity here is where signaling risk begins.

Practical diligence questions

  • Decision velocity: “What is your fastest yes from first meeting to money wired in the last 12 months, and what made it possible?”

  • Post-check cadence: “How often will we speak? Who on your team does hands-on work, and what have they shipped for founders recently?”

  • Reserves and follow-on: “What percentage of the fund is reserved for follow-ons? Under what circumstances do you not follow on in a winner?”

  • References: “Introduce me to two founders you backed who did not work out. I want to learn how you behaved.”

  • Term simplicity: “Will you lead with a standard seed document and no exotic provisions? What must be on the term sheet and what is nice-to-have?”

Implications for fundraising strategy

Founders should design a capital plan that protects speed and learning in the earliest phase, layering talent and traction before inviting heavier governance. Treat large platforms as powerful but situational tools—well-suited once product-market fit emerges and the growth equation is clear. Until then, assemble a consortium that maximizes conviction, time-to-close, and runway per unit of dilution.

Key takeaways

  • Big-platform incentives often conflict with pre–product-market-fit needs; don’t let fund math dictate your roadmap.

  • The strongest predictor of value is the partner’s behavior, not the firm’s brand.

  • Avoid early signaling traps by clarifying follow-on intent and keeping terms simple.

  • Optimize for decision speed, clean cap tables, and aligned reserves at your stage.

Read More

Why I No Longer Care About Startup Valuation When I Invest (Except for These Four Reasons)

Medium • Hunter Walk • October 12, 2025

Venture


56 investments into Homebrew IV(ever), the pivot we made in 2022 to investing our own personal dollars in an evergreen fashion, we’re in a reflective mood. The almost four years of operating in this new normal represents a ‘fund cycle’ of sorts, so we probably have enough early data to reflect on this style of venture investing versus the more traditional LP-backed deployment of our first decade. One of the most clear differences is how we treat startup valuation in our entry point. And it’s a meaningful change!

Whereas before, as a lead seed VC in a portfolio model structure, the negotiation would be a tradeoff (for us) between ownership target, check size, current fund size and total number of investments we wanted to reach for that vehicle. Now we offer a (mostly) consistently sized supporting check, are stage agnostic (although 90% of the 56 have been pre-seed/seed), possess no ownership requirement, and have an open-ended timeline. Essentially, pricing, as a top-line absolute concern, is less of a modeled variable for us and more of a signal to inform our decision. Valuation contributes to our conversation around four questions we ask ourselves:

A. What were the founders optimizing for? Maximizing valuation? Minimizing dilution? Enough capital to cleanly execute to next milestones? Or maybe a bit underfunded? All things being equal, we understand the market dictates prices, but just like a startup’s strategy can differ whether their strategic true north KPI is growth or margin or customer count or something else, so will the goals of a round lead you to different results. An initial financing can often be one of the first telling data points on what matters to a founding team, whom, if we invest, we hope to be able to support for years to come.

B. Did the founders’ decision around pricing help or hurt the quality of the cap table? ‘Best’ investors or just the auction winners? Were there folks we believe are good partners to early stage companies who walked because of terms? Is there a lead investor and if so, are they underwriting to the same type of outcome, on the same timelines, that we’re seeking? Obviously impossible to fully know unless the founders are super transparent, and potentially subjective, but on our checklist.

Read More

Highest Count Of New Unicorns Join Crunchbase Board In Over 3 Years As Exits Also Gain Steam

Crunchbase • October 15, 2025

Venture

Gené Teare, @geneteare


A total of 26 companies joined The Crunchbase Unicorn Board in September, the largest new monthly cohort in three years, Crunchbase data shows. The surge in new unicorns follows on the heels of a very slow August, when only four companies joined the board.

Collectively, the new September unicorns added $38 billion in value to the board.

Of the 26 companies, 18 new unicorns came from the U.S. Two are U.K.-based, and Finland, Singapore, Hong Kong, Korea, Australia and Mexico each minted one new unicorn last month.

The highest valued among the new entrants were London-based data center provider Nscale, valued at $3.2 billion, and Utah-based legal tech startup Filevine at $3 billion.

Exits

Unicorn exits also picked up last month, with 11 companies leaving the board. Six of those companies went public, including Sweden-based Klarna, Santa Clara, California-based Netskope, and San Francisco-based Figure. Five companies were acquired, including Statsig by OpenAI and Thirty Madison by Remedy Meds.

New unicorns

While healthcare represented the largest cohort among the new unicorns, with five companies from that sector joining the board, new additions last month also hailed from sectors including aerospace, semiconductors and fintech. AI was woven through as a theme in many of the companies.

Here’s a closer look at September’s 26 new unicorns.

Healthcare

  • Strive Health, a kidney care startup that partners with healthcare providers to provide early detection and preventative care for patients, raised a $300 million Series D led by New Enterprise Associates. The 7-year-old Denver-based company was valued at $1.8 billion.

  • Ultragreen.ai, a provider of fluorescent imaging for surgery, raised a $188 million private equity round led by Vitruvian Partners and Temasek’s 65 Equity Partners. The 1-year-old Singapore-based company was valued at $1.3 billion.

  • Lila Sciences, a developer of AI tools for scientific research, raised a $235 million Series A led by Braidwell and Collective Global Management. The company seeks to experiment with AI for diagnostics, material science, compute and energy. The 3-year-old Cambridge, Massachusetts-based company was valued at $1.2 billion.

  • Enveda Biosciences uses AI to analyze the molecules in nature for medicines. It raised a $150 million Series D led by Premji Invest that valued the 6-year-old Boulder, Colorado-based company at $1 billion.

  • Cancer care provider Thyme Care raised a $97 million Series D from strategic and venture investors at a $1 billion valuation. The 5-year-old Nashville, Tennessee-based company partners with health plans and providers to support patients with cancer.

AI

  • AI infrastructure provider Baseten raised a $150 million Series D led by Bond. The 6-year-old San Francisco-based company, which aims to make AI inference reliable for applications, was valued at $2.2 billion.

  • AI data company Invisible Technologies, a competitor to Scale AI, raised a $100 million round led by Vanara Capital. The 10-year-old San Francisco-based company was valued at $2 billion.

  • AI consulting firm Distyl AI, which aims to help Fortune 500 companies become AI-native, raised a $175 million Series B led by Khosla Ventures and Lightspeed Venture Partners. The 3-year-old San Francisco-based company, founded by Palantir Technologies alumni, was valued at $1.8 billion.

  • You.com, a company that supports enterprises looking to adopt AI, raised a $100 million Series C led by Cox Enterprises. The 5-year-old Palo Alto, California-based company was valued at $1.5 billion.

Fintech

  • Tide, a company that supports SMEs with banking, invoicing, loans and payments, among other services, raised $120 million in private equity led by TPG’s The Rise Fund. Tide entered the India market in 2022 and supports 800,000 small businesses in that region and just under that number in the U.K. The 10-year-old London-based company was valued at $1.5 billion.

  • Banking-as-a-service provider Lead raised a $70 million Series B led by Andreessen Horowitz and Khosla Ventures at a $1.47 billion valuation for the 4-year-old company. Lead is a chartered bank based in Kansas City, Missouri, and was acquired by Luna in 2022 to provide banking services to fintech companies.

  • Kapital, a technology-first bank with customers in Mexico, Colombia and the U.S., raised a $100 million Series C led by Pelion Venture Partners and Tribe Capital. The 5-year-old Mexico City-based bank serves small and medium-sized businesses. It was valued at $1.4 billion.

AI data center

  • AI data center provider Nscale raised a $1.1 billion Series B led by Norway-based industrial investment company Aker with participation from Nvidia and Nokia. The 2-year-old London-based company was valued at $3.1 billion. Its customers include Nvidia, Microsoft and OpenAI. Since the Series B announcement, Nscale has raised a further $433 million SAFE toward a Series C.

  • AI data center Firmus Technologies raised $220 million in private equity led by Ellerston Capital with participation from Nvidia. An operator of data centers in Singapore and Tasmania, the 6-year-old Tasmania-headquartered company was valued at $1.2 billion.

AI cloud

  • Modular, a startup building an AI software compute layer that is agnostic to the chips it runs on, raised a $250 million Series C led by US Innovative Technology Fund. The 3-year-old Palo Alto, California-based company was valued at $1.6 billion.

  • Modal Labs, a company that allows developers to run AI without managing infrastructure, raised an $87 million Series B led by Lux Capital. The 4-year-old New York-based company was valued at $1.1 billion.

Legal tech

  • Legal tech startup Filevine, a company that unifies case management, communication and billing with AI, raised a $260 million Series E extension led by Accel, Insight Partners and Utah-based Halo Experience Co. The 11-year-old Salt Lake City-based company says it has 6,000 customers using its platform. It was valued at $3 billion.

  • Eve, a legal tech startup to support plaintiff law firms, raised a $103 million Series B led by Spark Capital. The 5-year-old San Francisco-based company was valued at $1 billion.

Web3

  • Zerohash, a crypto and stablecoin infrastructure provider, raised a $104 million Series D led by Interactive Brokers Group. The 8-year-old Chicago-based company was valued at $1 billion.

  • RedotPay, a stablecoin payments solution provider, raised a $47 million round led by Coinbase Ventures. The 1-year-old Hong Kong-based company was valued at $1 billion.

Developer platform

  • Developer tooling company PostHog, a service that helps engineers with feature releases and tracking impact, raised a $75 million Series E led by Peak XV Partners. The 5-year-old San Francisco-based company was valued at $1.4 billion.

Hardware

  • Independent smartphone device-maker Nothing raised a $200 million Series C led by Tiger Global Management. The 5-year-old London-based company aims to reinvent the smartphone with AI and was valued at $1.3 billion.

Material science

  • Periodic Labs, a startup that plans to build material science applications using AI, launched from stealth to announce a $300 million seed round led by Andreessen Horowitz. The less than 1-year-old Menlo Park, California-based company was valued at $1 billion.

Aerospace

  • Satellite manufacturer Apex raised a $200 million Series D led by Interlagos Capital. The 3-year-old Los Angeles-based company was valued at $1 billion.

Read More

Goldman Sachs is acquiring Industry Ventures for up to $965M as alternative VC exits surge

Techcrunch • October 13, 2025

Venture


Goldman Sachs agreed to buy Industry Ventures, the 25-year-old, San Francisco-based investment firm with $7 billion in assets under management, CNBC first reported Monday. The move highlights how secondary markets and buyouts are gaining importance as traditional venture exits remain slow.

The bank will pay $665 million in cash and equity, with as much as $300 million more tied to performance through 2030, according to Goldman. The deal is slated to close in the first quarter of next year, and all 45 Industry Ventures employees are expected to join Goldman.

The acquisition comes as venture firms increasingly pursue non-traditional exits amid a prolonged IPO drought that only now shows signs of easing. On TechCrunch’s StrictlyVC Download podcast earlier this year, Industry Ventures founder and CEO Hans Swildens said tech buyout funds now account for about 25% of all liquidity in the venture ecosystem — “a huge chunk of liquidity,” he noted.

Swildens said venture managers are adjusting their playbooks. Simply backing companies and waiting for an IPO or strategic M&A “probably won’t work anymore,” he said, arguing that VCs need to build alternative liquidity solutions.

As of April, he added, at least five major venture firms had hired full-time staff to manufacture non-traditional exits, including secondaries, continuation funds and buyouts, with brand-name funds “staffing and thinking through liquidity structures.”

Goldman is making the acquisition to reinforce its $540 billion alternatives investment platform, a business line the bank has identified as a key growth engine.

Industry Ventures says it has made more than 1,000 investments, holds stakes in over 700 venture firms and reports an internal rate of return of 18%.

Read More

How A16Z’s Returns Show Just What an Epic Year and Epic Time 2021 Was. Will 2028 Be Next?

Saastr • October 13, 2025

Venture


2021 wasn’t just a great year. It was the greatest year in venture … ever. But AI makes it all feel small. Will 2028 dwarf 2021?

In 2021, Andreessen Horowitz returned more cash to their LPs (their own investors) than in the entire previous decade combined.

Look at the numbers:

  • 2021 alone: $12.5B net to LPs (out of $15.1B total liquidity)

  • 2011–2020 combined: $12.4B net to LPs

Ten years of returns—from 2011 through 2020—equaled almost exactly what A16Z did in the single year of 2021: 2021 represented 50% of A16Z’s all-time LP distributions through 2025 YTD.

One year. Half of all-time returns. For a firm that had been investing for a decade.

After 2021, liquidity fell off a cliff. For … a little while:

  • 2022: $872M (down 94% from 2021)

  • 2023: $3.0B (still down 80% from 2021)

  • 2024: $5.2B (down 66% from 2021)

  • 2025 YTD: $4.0B (tracking to maybe $6B for the year, still down 60%)

Even as markets “recovered” over 2023–2025, we’re not even close to 2021 levels. The combined liquidity of 2022–2024 ($9.1B over three years) doesn’t match the single year of 2021 ($15.1B).

But it wasn’t just A16Z—the entire VC industry had a once-in-a-generation year. The numbers are staggering: Total U.S. VC exit value in 2021 was $774B, a 167% increase over 2020 ($290B), and more than 2019 and 2020 combined. Of that total, $681.5B came from public listings (IPOs and SPACs). 296 VC-backed companies went public in 2021—a 114.5% year-over-year increase—with 88% of all VC-backed exit activity happening through public markets.

Read More

AI

Sex could become the next big business opportunity for AI companies

Fastcompany • October 17, 2025

AI•Tech•OpenAI


ChatGPT will be able to have kinkier conversations after OpenAI CEO Sam Altman announced the artificial intelligence company will soon allow its chatbot to engage in “erotica for verified adults”.

OpenAI won’t be the first to try to profit from sexualized AI. Sexual content was a top draw for AI tools almost as soon as the boom in AI-generated imagery and words erupted in 2022.

But the companies that were early to embrace mature AI also encountered legal and societal minefields and harmful abuse as a growing number of people have turned to the technology for companionship or titillation.

Will a sexier ChatGPT be different? After three years of largely banning mature content, Altman said Wednesday that his company is “not the elected moral police of the world” and ready to allow “more user freedom for adults” at the same time as it sets new limits for teens.

“In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here,” Altman wrote on social media platform X, whose owner, Elon Musk, has also introduced an animated AI character that flirts with paid subscribers.

For now, unlike Musk’s Grok chatbot, paid subscriptions to ChatGPT are mostly pitched for professional use. But letting the chatbot become a friend or romantic partner could be another way for the world’s most valuable startup, which is losing more money than it makes, to turn a profit that could justify its $500 billion valuation.

“They’re not really earning much through subscriptions so having erotic content will bring them quick money,” said Zilan Qian, a fellow at Oxford University’s China Policy Lab who has studied the popularity of dating-based chatbots in the U.S. and China.

There are already about 29 million active users of AI chatbots designed specifically for romantic or sexual bonding, and that’s not counting people who use conventional chatbots in that way, according to research published by Qian earlier this month.

It also doesn’t include users of Character.AI, which is fighting a lawsuit that alleges a chatbot modeled after “Game of Thrones” character Daenerys Targaryen formed a sexually abusive relationship with a 14-year-old boy and pushed him to kill himself. OpenAI is facing a lawsuit from the family of a 16-year-old ChatGPT user who died by suicide in April.

Qian said she worries about the toll on real-world relationships when mainstream chatbots, already prone to sycophancy, are primed for 24-hour availability serving sexually explicit content.

“ChatGPT has voice chat versions. I would expect that in the future, if they were to go down this way — voice, text, visual — it’s all there,” she said.

Humans who fall in love with human-like machines have long been a literary cautionary tale, from popular science fiction of the last century to the ancient Greek legend of Pygmalion, obsessed with a woman he sculpted from ivory. Creating such a machine would seem like an unusual detour for OpenAI, founded a decade ago as a nonprofit dedicated to safely building better-than-human AI.

Altman said on a podcast in August that OpenAI has tried to resist the temptation to introduce products that could “juice growth or revenue” but be “very misaligned” with its long-term mission. Asked for a specific example, he gave one: “Well, we haven’t put a sexbot avatar in ChatGPT yet.”

Read More

Generative AI in the Real World: Context Engineering with Drew Breunig

Oreilly • October 16, 2025

AI•Data•ContextEngineering•RAG•Evals


Overview

The conversation explores “context engineering” for generative AI: the discipline of shaping, selecting, formatting, and governing the information that surrounds a model’s prompt so it can produce reliable, efficient results. Host Ben Lorica and guest Drew Breunig discuss what’s working in production systems, where current approaches are failing, and how teams can evolve their architectures and processes. A central theme is that simply increasing model context windows has not eliminated the hard problems of retrieval quality, reasoning under constraints, and cost/latency; instead, it has shifted where those problems appear and how they are diagnosed. The episode also emphasizes rigorous evaluation and testing practices as the backbone of trustworthy deployments, and looks ahead to more structured, data-centric ways of feeding models the right evidence at the right time.

What’s Working Today

  • Structured context pipelines: Teams that treat context as a first-class artifact—curated, versioned, and logged—see better reliability than those relying on ad‑hoc prompts.

  • Retrieval with metadata: RAG setups that exploit document structure, timestamps, entity IDs, and embeddings tuned to the domain outperform naive full‑text retrieval.

  • Tight prompt contracts: Explicit schemas (JSON, tool call parameters, and function signatures) reduce ambiguity and make outputs easier to validate and chain; see the validation sketch after this list.

  • Observability by design: Capturing inputs, retrieved chunks, model versions, and response metrics enables root‑cause analysis when outputs go wrong.
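
To make the “tight prompt contracts” point above concrete, here is a minimal sketch of validating a model’s JSON output against an explicit contract before it is chained downstream. The schema, field names, and checks are hypothetical illustrations, not anything specified in the episode.

```python
import json

# Hypothetical contract: the model must return these fields with these types.
CONTRACT = {"answer": str, "sources": list, "confidence": float}

def validate_output(raw: str) -> dict:
    """Parse model output and enforce the prompt contract before chaining."""
    data = json.loads(raw)  # fails loudly on malformed JSON
    for field, expected_type in CONTRACT.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"{field} should be {expected_type.__name__}")
    if not 0.0 <= data["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return data

# A well-formed response passes; anything else is rejected (and, ideally, logged).
ok = validate_output('{"answer": "42", "sources": ["doc-7"], "confidence": 0.8}')
print(ok["answer"], ok["sources"], ok["confidence"])
```

Logging the raw output, the contract version, and any validation failure alongside the retrieved chunks is what makes the “observability by design” point above actionable.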

Where Things Break Down

  • Bigger windows, bigger noise: Enlarged context windows admit more irrelevant or conflicting snippets, degrading attention and increasing hallucination risk; token costs and latency compound the issue.

  • Retrieval drift: Indexes and embeddings age; if freshness and de‑duplication are not enforced, models cite stale or contradictory facts.

  • Chunking pathologies: Poor segmentation splits key tables, formulas, or definitions across chunks, making it harder for the model to reconstruct meaning.

  • Hidden dependencies: Prompt templates accumulate brittle heuristics; minor copy changes or reordered fields can silently change outcomes.

Why Huge Context Windows Aren’t a Silver Bullet

  • Relevance vs. recall trade-off: Dumping “everything” into the window rarely beats high-precision retrieval and summarization. The model still must infer salience under token and attention constraints.

  • Cost/latency ceilings: Even with better hardware and caching, many enterprise workloads cannot afford to pass hundreds of kilobytes per request at scale.

  • Evaluation complexity: Larger inputs expand the combinatorial space of failure modes; without tight evals, teams misattribute gains to window size rather than better retrieval or instructions.

Evals and Testing: Don’t Skip the Hard Work

  • Layered evals: Combine unit‑style tests for prompts and tools, retrieval quality evals (precision/recall on labeled queries), and end‑to‑end task success rates; a retrieval‑eval sketch follows this list.

  • Domain goldens: Hand‑curated question–answer pairs, reasoning traces, and red‑team probes specific to the business domain catch regressions earlier than generic benchmarks.

  • Continuous regression suites: Treat prompts, retrievers, and system messages like code; add tests for each bug you fix and run them on every model or data change.

  • Human‑in‑the‑loop checks: Calibrate automatic metrics with periodic expert review to prevent metric gaming or overfitting to a narrow test set.
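
As a minimal illustration of the retrieval-quality layer described above, the sketch below computes per-query precision and recall for a retriever against a small labeled set. The queries, document IDs, and stub retriever are placeholders, not material from the conversation.

```python
def precision_recall(retrieved: list[str], relevant: set[str]) -> tuple[float, float]:
    """Precision/recall for one query: retrieved doc IDs vs. labeled relevant IDs."""
    hits = sum(1 for doc_id in retrieved if doc_id in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Tiny labeled "golden" set; questions and IDs are hypothetical.
golden = {
    "What is our refund window?": {"policy-12", "faq-3"},
    "Which regions support SSO?": {"sso-overview"},
}

def run_eval(retrieve) -> None:
    for query, relevant in golden.items():
        p, r = precision_recall(retrieve(query, k=5), relevant)
        print(f"{query!r}: precision={p:.2f} recall={r:.2f}")

# `retrieve` would be the production retriever; a stub stands in here.
run_eval(lambda q, k: ["policy-12", "pricing-1"])
```

Running the same suite on every model, index, or prompt change turns retrieval quality into a regression test rather than an anecdote.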

Design Principles for Next-Gen Context Engineering

  • Data-first pipelines: Normalize sources, attach provenance, and preserve structure (tables, graphs, geospatial layers) so the retriever can reason about relationships—not just text similarity.

  • Retrieval as routing: Use intent classification and entity linking to select specialized indexes or tools, then assemble a compact, evidence‑rich context (see the routing sketch after this list).

  • Summarize with accountability: Generate intermediate, cite‑back summaries that include source pointers, timestamps, and confidence to support downstream decisions.

  • Cost-aware optimization: Track per-token spend and latency by component; prefer smaller, faster models for retrieval/summarization and reserve large models for final synthesis.
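
To ground “retrieval as routing”, here is a toy sketch in which a query’s intent selects a specialized index before context assembly. The intent labels, keyword classifier, and index contents are assumptions standing in for a trained classifier and real vector or keyword stores.

```python
# Hypothetical intent labels mapped to specialized indexes.
INDEXES = {
    "billing": ["invoice-2024-07", "refund-policy"],
    "product": ["release-notes-3.2", "api-reference"],
}

def classify_intent(query: str) -> str:
    """Toy keyword router standing in for an intent classifier."""
    billing_terms = ("invoice", "refund", "charge")
    return "billing" if any(t in query.lower() for t in billing_terms) else "product"

def route_and_retrieve(query: str, k: int = 3) -> list[str]:
    """Search only the specialized index, then return a compact evidence set."""
    intent = classify_intent(query)
    return INDEXES[intent][:k]

print(route_and_retrieve("Why was I charged twice on my invoice?"))
```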

Implications

Treating context as an engineered product, not a byproduct of prompts, is emerging as the differentiator for real-world AI. Organizations that invest in retrieval quality, structured knowledge, and disciplined evals will achieve accuracy, predictability, and lower unit economics—even as raw model capabilities evolve. Conversely, leaning on ever-larger context windows without improving data quality or measurement only pushes complexity downstream. The path forward blends better information architecture with rigorous testing and selective use of model capacity, yielding systems that are both more trustworthy and more cost-effective.

Key Takeaways

  • Bigger context windows help, but they do not replace precise retrieval, structure preservation, and summarization.

  • Robust evals—unit, retrieval, and end‑to‑end—are essential to sustain quality and catch regressions.

  • Data modeling and provenance (not just prompt tweaks) drive reliability at scale.

  • Optimize for cost and latency by routing tasks and constraining context to the most relevant evidence.

Read More

OpenAI wants to own it all

Ft • October 16, 2025

AI•Tech•OpenAI•PlatformStrategy•Antitrust


Thesis and Context

The piece argues that the maker of ChatGPT is pursuing an expansive, end‑to‑end strategy in artificial intelligence—aiming to shape not only core models, but also the layers of distribution, tooling, and consumer and enterprise experiences that sit on top of them. The central tension is whether such scope can be executed without “indigestion”: operational overload, strategic drift, regulatory pushback, and fraying partnerships. The analysis frames a classic platform dilemma—own more of the stack to capture value and quality control, but risk alienating the very ecosystem that fuels growth.

Strategy: Owning More of the Stack

  • Vertical reach: from frontier model development to developer platforms, enterprise offerings, and consumer-facing assistants.

  • Distribution ambition: embedding AI into daily workflows and devices to reduce reliance on third parties.

  • Quality and safety rationale: tighter integration promises better reliability, security, and rapid iteration.

  • Ecosystem calculus: expanding into adjacent layers can centralize standards and accelerate adoption, but compresses partner opportunity and introduces channel conflict.

Execution Risks and Operational Limits

  • Scale and reliability: running ever-larger models and fleets of AI agents requires robust infrastructure, disciplined release practices, and sustained uptime under surging demand.

  • Talent bandwidth: simultaneous pushes across research, product, safety, and compliance strain leadership focus and organizational coherence.

  • Governance and safety trade‑offs: shipping faster across many surfaces increases the burden on alignment, red‑team testing, and incident response.

  • Supply constraints: compute, energy, and data center capacity remain bottlenecks; scarcity forces prioritization and can curb product breadth.

Ecosystem Dynamics and Platform Risk

  • Partner friction: as the platform moves up the stack, API partners and startups face disintermediation risk, prompting diversification to alternative models and open‑source options.

  • Standards and switching: the more proprietary the stack, the stronger the short‑term moat—but the higher the incentive for customers to hedge, multi‑home, or push for open standards.

  • Developer trust: predictable pricing, model availability, and roadmap clarity become decisive; perceived volatility can depress long‑term platform commitment.

Regulatory and Policy Exposure

  • Antitrust scrutiny: attempts to bundle models with distribution channels, default placements, or exclusive integrations invite gatekeeper concerns.

  • Data and IP: broad ambitions magnify exposure to privacy, provenance, and copyright disputes; mechanisms for consent, opt‑outs, and revenue sharing become strategic.

  • Safety and accountability: as AI agents touch more user data and automate actions, regulators will expect auditable controls, incident disclosures, and meaningful user choice.

Business Model Tensions

  • Unit economics: inference costs, latency targets, and service‑level commitments challenge margins, especially for consumer tiers and always‑on assistants.

  • Enterprise vs. consumer balance: landing large organizations demands compliance, customization, and integration depth that can slow consumer product velocity.

  • Pricing power vs. openness: higher capture at the platform layer risks spurring commoditization lower in the stack; openness can grow the pie but dilute direct monetization.

What Success and “Indigestion” Look Like

  • Signals of success: stable performance at scale; durable developer adoption; clear, predictable monetization; and a cadence of product improvements without major regressions.

  • Signs of indigestion: outages or quality regressions across multiple products; partner defection; regulatory probes into bundling or data use; and costly pivots that reset roadmaps.

Implications and What to Watch

  • For users: convenience from deeply integrated assistants, offset by concerns over lock‑in, data handling, and reliability.

  • For developers: near‑term reach via a dominant platform, but heightened platform risk; multi‑model strategies gain appeal.

  • For competitors: opportunity to differentiate on openness, cost, or specialized verticals; alliances and open ecosystems may accelerate.

  • For policymakers: an expanding surface area for oversight—competition policy, data rights, safety standards, and transparency requirements.

Key Takeaways

  • Ambition to “own it all” promises coherence and speed, but magnifies operational, ecosystem, and regulatory risks.

  • The balance between integration and openness will determine whether the platform compels loyalty or catalyzes alternatives.

  • Durable advantage likely hinges on trust: reliability, governance, pricing clarity, and respectful treatment of partners and creators.

Read More

AI Is Juicing the Economy. Is It Making American Workers More Productive?

Wsj • October 13, 2025

AI•Work•WorkerProductivity•EconomicGrowth•CapitalInvestment

AI Is Juicing the Economy. Is It Making American Workers More Productive?

Overview

The article argues that artificial-intelligence spending has become a visible engine for U.S. economic momentum, even as measurable improvements in day-to-day worker productivity lag. In essence: “Investment in AI ignited a fire under the U.S. economy. But the technology hasn’t yet fulfilled the promise of making humans work more efficiently.” This juxtaposition underscores a classic diffusion problem for general-purpose technologies: capital flows and expectations can move quickly, while organizational change, skills, and process redesign move slowly. The near-term result is faster growth in AI-related investment and valuations than in per-worker output.

What’s accelerating

  • Capital formation tied to AI—spanning software, compute infrastructure, and related services—is rising, fueling broader economic activity through supply chains, construction, and professional services.

  • Firms are launching pilots and limited deployments that expand demand for cloud capacity and specialized tools, creating a stimulus-like effect before end-user productivity gains fully materialize.

  • Investor and executive expectations about AI’s long-run payoff spur hiring in technical roles and increased spending on complementary assets, which shows up in macro momentum before granular efficiency gains appear.

Why worker productivity lags (so far)

  • Integration friction: To translate model outputs into real productivity, companies must rebuild workflows, connect tools to data, and redesign roles—work that is expensive, time-consuming, and culturally disruptive.

  • Learning curves: Employees need time to master prompts, verification, and oversight. Early use often saves minutes rather than hours and requires double-checking, limiting near-term net gains (a rough arithmetic sketch of this trade-off follows this list).

  • Measurement issues: Output quality and speed improvements in knowledge work are hard to capture with standard metrics. Early benefits may manifest as higher service quality, more iterations, or reduced backlog rather than immediate throughput gains.

  • Complementarity gap: AI’s impact depends on complementary investments—data governance, security, domain-specific fine-tuning, and change management. Until these complements are in place, productivity improvements remain patchy.

  • Risk and compliance brakes: Concerns about accuracy, bias, privacy, and IP slow deployment in regulated or brand-sensitive contexts, keeping tools in “assistive” rather than “autonomous” modes.
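
Here is that rough arithmetic sketch. All numbers are hypothetical; the point is only that when each assisted task saves a few minutes but still requires review and occasional rework, the net gain per worker stays small until the verification burden falls.

```python
# Minimal sketch with hypothetical numbers: the net productivity gain from an AI
# assist shrinks once verification and rework overhead are counted.

def net_minutes_saved(tasks_per_day: int,
                      minutes_saved_per_task: float,
                      verification_minutes_per_task: float,
                      error_rate: float,
                      rework_minutes_per_error: float) -> float:
    """Net minutes saved per worker per day after review and rework costs."""
    gross = tasks_per_day * minutes_saved_per_task
    review = tasks_per_day * verification_minutes_per_task
    rework = tasks_per_day * error_rate * rework_minutes_per_error
    return gross - review - rework

# Early adoption: small per-task savings, heavy double-checking.
early = net_minutes_saved(tasks_per_day=20, minutes_saved_per_task=3.0,
                          verification_minutes_per_task=2.0,
                          error_rate=0.10, rework_minutes_per_error=8.0)

# After workflow redesign and trust-building: larger savings, lighter review.
later = net_minutes_saved(tasks_per_day=20, minutes_saved_per_task=6.0,
                          verification_minutes_per_task=0.5,
                          error_rate=0.03, rework_minutes_per_error=8.0)

print(f"early net minutes/day: {early:.1f}")   # 60 - 40 - 16  = 4.0
print(f"later net minutes/day: {later:.1f}")   # 120 - 10 - 4.8 = 105.2
```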

Where gains could show up next

  • Customer-facing workflows: AI-assisted support, sales enablement, and marketing content can compress response times and personalization cycles once integrated into CRM and ticketing systems.

  • Back-office processes: Document processing, analytics, and routine reporting can be partially automated, with humans validating outputs—an approach that scales as confidence and tooling improve.

  • Software and data work: Code generation, testing, and data wrangling already benefit from AI assistance; as teams standardize prompts and guardrails, the share of time spent on rote tasks should fall.

Implications for businesses

  • Focus on process redesign, not just tool procurement. Map tasks end-to-end and embed AI where it eliminates handoffs or rework, pairing automation with clear human review points.

  • Invest in human capital. Training, prompt libraries, and role-specific playbooks accelerate learning curves and reduce verification overhead.

  • Build robust data pipelines. Clean, well-permissioned data and clear governance often drive more value than marginal model improvements.

  • Start with measurable use cases. Prioritize workflows with quantifiable time savings or error reduction, then scale to adjacent processes.

Implications for the economy and labor markets

  • Short-run divergence between investment-led growth and measured productivity is consistent with previous tech waves; diffusion typically takes years as organizations adapt.

  • Wage and employment effects will vary by occupation. Roles heavy on routine information synthesis may shift first toward supervision and exception handling, while complementarity can boost demand for domain experts who guide and verify AI outputs.

  • Policy levers that speed diffusion—workforce training, support for data infrastructure, and clear regulatory guidance—can convert macro investment momentum into broad-based productivity gains.

Key takeaways

  • AI is acting as a near-term macro stimulus via investment, even without immediate, widespread per-worker efficiency gains.

  • The bottlenecks are organizational: integration, skills, governance, and measurement—not model capability alone.

  • Sustainable productivity growth will depend on complementary investments and process redesign that embed AI deeply into daily work.

  • Expect a staggered rollout of gains: early wins in codified, high-volume workflows; slower progress in complex, high-stakes domains until trust and tooling mature.

Read More

Airtable Hires A CTO From OpenAI And Buys An AI ‘Superagent’ Startup

Upstartsmedia • October 13, 2025

AI•Tech•Airtable•DeepSky•Acquisition


Airtable CEO Howie Liu with his longtime customer and now CTO, David Azose. Credit: Airtable

The Upshot

Airtable CEO Howie Liu sees the new wave of vibe-coding startups like Lovable and Replit racing to add features that can take their outputs from prototypes and proofs-of-concept to enterprise-friendly apps.

He likes his chances to reach the same finish line from the opposite direction.

“Those products are trying to build underneath their magical agent layer, and it’s with components that Airtable already has,” Liu says. “We have all that scalability, reliability, components that you need to build a reliable business app very quickly.”

Airtable declared its intentions in this category a little over three months ago, when Liu unveiled Omni, an AI-powered app builder, alongside an AI assistant that helps customers work with its no-code database software. Airtable’s AI tools are now active for “the vast majority” of Airtable customers, which include 80% of the Fortune 100, the company says.

But Airtable has bigger ambitions in AI than just launching its own app builder. So Liu has brought in a new chief technology officer from OpenAI, and made his largest acquisition to date of a venture-backed AI agents startup, as part of his push.

Airtable has tapped David Azose, until now the engineering lead of ChatGPT’s business products, as its CTO, Upstarts can exclusively report.

A longtime Seattle resident who previously worked in engineering leadership roles at DoorDash, Uber and Microsoft, Azose started last week and will be commuting regularly to Airtable’s San Francisco base.

“I joke with folks that at this point, I know the flight numbers by heart,” Azose says.

Arriving at Airtable at the same time: DeepSky.

Formerly known as Gradient, DeepSky has raised about $40 million in venture funding to date. It pivoted from a Palantir-style business that helped companies develop AI tools into a “superagent” builder offering investable, money-moving research for customers such as investors (and became a hit with Stanford MBAs).

DeepSky’s founding trio of Chris Chang, Mark Kim-Huang and Forrest Moret, along with a dozen staffers (most of the team) will now operate it as Airtable’s second standalone product, with Chang reporting directly to Liu.

What’s the through line? Upstarts asked Liu late last week.

“In this era where things are moving fast. Speed of execution is paramount, and technical depth of how you execute on products is key,” Liu replies. “You can’t just go and build a bunch of stuff, ship it and say, ‘job done.’”

More on his strategy, why Azose picked Airtable from OpenAI, and how DeepSky’s agents can unlock more upside for Airtable (and, Liu hopes, its customers), below.

Read More

OpenAI x Broadcom — The OpenAI Podcast Ep. 8

Youtube • OpenAI • October 13, 2025

AI•Tech•Semiconductors•AIInfrastructure•Partnerships


Overview

This episode brings together leaders from an AI research organization and a major semiconductor company for a conversation about building and scaling the compute required for modern AI systems. The discussion centers on how model advances and hardware progress reinforce one another, what it takes to translate research breakthroughs into reliable, large-scale deployment, and how partnerships between model developers and chip designers are shaping the next generation of AI infrastructure. The conversation highlights the practical realities of moving from proofs-of-concept to data-center-scale systems, focusing on cost, power, latency, and reliability as first-class concerns alongside raw performance.

Hardware–Software Co-Design

A core theme is co-design: aligning model architectures, compiler stacks, kernels, and runtime with the underlying silicon roadmap. The guests describe how choices in attention mechanisms, quantization, sparsity, and memory layout ripple into requirements for on-die cache, high-bandwidth memory, and interconnect topology. Conversely, hardware constraints feed back into model and training strategy—affecting sequence lengths, parallelization schemes, and batch sizing. The episode emphasizes short feedback loops between research teams and hardware engineers to unlock step-change efficiency gains rather than incremental tweaks.

Scaling, Cost, and Energy

As training runs and inference fleets grow, efficiency becomes existential. The conversation explores how to reduce cost per token and energy per inference through better utilization, kernel fusion, mixed precision, and exploiting model characteristics (e.g., activation sparsity). It also touches on power delivery, thermal envelopes, and the importance of performance-per-watt at the rack and cluster level—not just at the chip. The guests outline how data-center layout, cooling, and firmware/driver maturity can be as decisive as teraflops on a spec sheet.
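
A back-of-envelope sketch (illustrative figures, not vendor numbers) shows why utilization and facility overhead can matter as much as chip speed: the same rack, run at higher utilization, serves markedly cheaper tokens.

```python
# Back-of-envelope sketch (illustrative numbers only): how rack power,
# utilization, and throughput translate into electricity cost per token.

def cost_per_million_tokens(rack_power_kw: float,
                            tokens_per_second: float,
                            utilization: float,
                            electricity_usd_per_kwh: float,
                            pue: float = 1.3) -> float:
    """USD of electricity per million tokens served by one rack."""
    effective_tps = tokens_per_second * utilization
    tokens_per_hour = effective_tps * 3600
    kwh_per_hour = rack_power_kw * pue  # include facility overhead via PUE
    usd_per_hour = kwh_per_hour * electricity_usd_per_kwh
    return usd_per_hour / tokens_per_hour * 1_000_000

baseline = cost_per_million_tokens(rack_power_kw=40, tokens_per_second=50_000,
                                   utilization=0.45, electricity_usd_per_kwh=0.08)
tuned = cost_per_million_tokens(rack_power_kw=40, tokens_per_second=50_000,
                                utilization=0.70, electricity_usd_per_kwh=0.08)

print(f"baseline: ${baseline:.4f} per 1M tokens")  # low utilization, higher cost
print(f"tuned:    ${tuned:.4f} per 1M tokens")     # same rack, better utilization
```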

Packaging, Memory, and Interconnects

Memory bandwidth and capacity are recurring bottlenecks. The episode discusses advanced packaging, chiplets, and high-bandwidth memory as pathways to feed compute units efficiently, alongside the trade-offs in yield, cost, and thermal density. Interconnects—both on-package and across nodes—are framed as critical to training and serving large models, with collective ops, topology-aware schedulers, and congestion management determining how close real-world performance gets to theoretical peak.
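
The standard way to reason about these bottlenecks is the roofline model: attainable throughput is capped either by peak compute or by memory bandwidth multiplied by a kernel's arithmetic intensity. A minimal sketch with placeholder numbers (no specific chip implied) illustrates why decode-time attention tends to be bandwidth-bound while large GEMMs can reach the compute roof.

```python
# Minimal roofline-style check (placeholder numbers, no specific chip implied):
# a kernel is bandwidth-bound when its arithmetic intensity (FLOPs per byte
# moved) falls below peak_flops / peak_bandwidth.

def attainable_tflops(peak_tflops: float,
                      bandwidth_tb_per_s: float,
                      arithmetic_intensity_flops_per_byte: float) -> float:
    """Roofline model: min(compute roof, bandwidth roof)."""
    bandwidth_roof = bandwidth_tb_per_s * arithmetic_intensity_flops_per_byte
    return min(peak_tflops, bandwidth_roof)

peak_tflops = 1000.0   # hypothetical accelerator peak
hbm_tb_per_s = 3.0     # hypothetical HBM bandwidth

# Decode-phase attention with a large KV cache reuses little data: few FLOPs per byte.
print(attainable_tflops(peak_tflops, hbm_tb_per_s, 10))    # 30.0 TFLOPS, far below peak
# Large-batch GEMMs reuse operands heavily: high intensity, compute-bound.
print(attainable_tflops(peak_tflops, hbm_tb_per_s, 500))   # 1000.0 TFLOPS, hits the roof
```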

Networking and Systems Integration

Beyond chips, the full stack includes networking fabrics, storage hierarchies, and orchestration. The speakers address how network diameter, link bandwidth, and congestion control influence time-to-train and tail latency. They consider reliability engineering—from error detection/correction to graceful degradation—and why predictable performance can matter more than occasional bursts of peak throughput in production services.

Software Tooling and Developer Experience

The episode highlights compilers, graph optimizers, and profiling tools that expose bottlenecks to researchers, enabling rapid iteration without deep hardware expertise. Standardized kernels, portable abstractions, and automatic parallelization are presented as enablers for teams to target diverse accelerators while preserving performance. The conversation underscores the importance of open interfaces and documentation to reduce integration friction across the ecosystem.

Security, Supply, and Roadmaps

Security and supply assurance enter the discussion through topics such as firmware integrity, secure boot, and provenance of components. The guests outline how long-term roadmaps—spanning process nodes, packaging advances, and networking generations—must align with anticipated model sizes and workloads so that capacity arrives when needed. They stress that forecasting datasets, product demand, and compliance needs is as crucial as benchmarking kernel speed.

Implications

Listeners come away with a picture of AI progress as a coordination challenge across research, silicon, systems, and operations. The key message is that sustained gains require tight partnerships: early sharing of workload characteristics, iterative tuning, and readiness to redesign assumptions at each layer. The payoff is not only faster models but also more dependable, efficient, and accessible AI services.

  • Co-design beats siloed optimization across the AI stack

  • Efficiency and reliability are as vital as raw performance

  • Memory and interconnects remain the central bottlenecks

  • Tooling that abstracts hardware while preserving performance accelerates innovation

  • Long-term alignment on roadmaps, security, and supply is essential for scaling

Read More

OpenAI and Broadcom, ChatGPT and XPUs, AMD and Nvidia

Stratechery • Ben Thompson • October 14, 2025

AI•Tech•CustomSilicon•XPUs•Broadcom


Overview

The core argument is that a bespoke hardware partnership between an AI model operator and a semiconductor specialist is strategically logical when the operator has deep insight into its own workload profile. In this case, the assertion is straightforward: “OpenAI’s deal with Broadcom makes perfect sense, because OpenAI already knows exactly what workloads it needs to optimize.” That knowledge—spanning prompt processing, inference latency, memory bandwidth needs, interconnect pressure, and model serving patterns—enables silicon co-design that trades generality for targeted efficiency. The result is potential gains in cost per inference, power consumption, and throughput that commodity accelerators may not fully unlock at OpenAI’s scale.

Why Workload Clarity Matters

  • Knowing the distribution of sequence lengths, token throughput, and batch-size dynamics lets designers right-size compute arrays, memory hierarchies, and on-chip networks.

  • Observability into live serving (cache hit rates, KV cache growth, and context-window behavior) informs SRAM vs. HBM decisions and interposer bandwidth.

  • Understanding model evolution paths (e.g., larger context windows, multimodal fusion, tool-use) guides extensibility for future instruction sets or accelerators.

This closed loop—production telemetry feeding silicon design—reduces guesswork and accelerates the path from architecture to measurable TCO improvements.
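
As one concrete example of telemetry feeding design, the well-known transformer KV-cache formula turns observed context lengths and batch sizes into a memory-capacity target; the model dimensions and traffic figures below are hypothetical.

```python
# Sketch of serving telemetry feeding memory sizing: the standard transformer
# KV-cache footprint, computed for hypothetical model dimensions.
# kv_bytes = 2 (K and V) * layers * kv_heads * head_dim * seq_len * bytes/elem * batch

def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 seq_len: int, batch: int, bytes_per_elem: int = 2) -> float:
    """KV-cache size in GiB for one serving replica (fp16/bf16 by default)."""
    bytes_total = 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem * batch
    return bytes_total / 2**30

# Telemetry says: p95 context around 32k tokens, typically 8 concurrent requests.
print(f"{kv_cache_gib(layers=80, kv_heads=8, head_dim=128, seq_len=32_768, batch=8):.1f} GiB")
```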

From GPUs to XPUs

  • General-purpose GPUs excel at parallel workloads but carry overhead for flexibility. “XPU” is a catch-all for specialized accelerators that can be tuned to mixed workloads—training, inference, retrieval, and compression—without being locked to a single operation.

  • A Broadcom collaboration can emphasize custom interconnects, packetized fabrics, or offload engines (e.g., attention, GEMM variants, or sparsity primitives) that reflect ChatGPT-class serving bottlenecks.

  • The goal is system-level balance: compute density aligned with memory bandwidth, low-latency interconnect for distributed inference, and firmware/runtime stacks that map OpenAI’s actual kernels to hardware efficiently.

Strategic Rationale vs. Commodity Alternatives

  • Supply diversification: Depending solely on a single GPU vendor risks price and availability constraints. A custom path hedges capacity risk while negotiating leverage.

  • Cost structure: Inference dominates production costs for popular assistants. Even single-digit percentage improvements in joules per token or tokens per dollar compound across billions of queries (a back-of-envelope sketch follows this list).

  • Software leverage: Owning the runtime and compiler layers that target a custom XPU allows faster iteration when models, quantization schemes, or caching strategies change.

  • Ecosystem optionality: Custom accelerators can coexist with Nvidia and AMD inventories; schedulers can route workloads to the best-fit backend given queue depth, SLA, and price.
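
The back-of-envelope sketch referenced above, with every figure assumed purely for illustration, shows how even modest per-token efficiency gains compound at scale.

```python
# Back-of-envelope sketch (all figures hypothetical): a single-digit improvement
# in cost per token compounds across billions of daily queries.

daily_queries = 2_000_000_000          # assumed query volume
tokens_per_query = 1_200               # assumed prompt + completion tokens
cost_per_million_tokens_usd = 0.40     # assumed blended serving cost

daily_tokens = daily_queries * tokens_per_query
daily_cost = daily_tokens / 1_000_000 * cost_per_million_tokens_usd

for improvement in (0.03, 0.07):       # 3% and 7% efficiency gains
    annual_savings = daily_cost * improvement * 365
    print(f"{improvement:.0%} gain -> ~${annual_savings / 1e6:.0f}M saved per year")
```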

Risks and Trade-offs

  • NRE and time-to-market: Custom silicon requires upfront design costs and long lead times; any miss in workload forecasting can reduce payoff.

  • Toolchain maturity: Compiler, kernel libraries, and observability must reach GPU-like stability to avoid developer friction.

  • Fragmentation: Introducing a new target increases operational complexity in orchestration, placement, and autoscaling, especially for multimodal models.

Implications for Model Products

  • Faster, cheaper inference can expand context windows, enable richer multimodal interactions, and support more aggressive on-device/off-device hybrid strategies.

  • Lower marginal costs encourage experimentation with agentic behaviors and tool use that previously incurred prohibitive latency or cost.

  • Control over the hardware roadmap aligns incentives: features that matter for end-user experience (latency tail, reliability, energy) get first-class treatment in silicon.

Key Takeaways

  • Workload intimacy is the scarce asset; when an operator truly understands its serving and training profiles, custom accelerators become rational.

  • XPUs aim to optimize the full system—compute, memory, interconnect, and software—not just FLOPS.

  • A Broadcom partnership can deliver TCO, latency, and capacity advantages while diversifying supply, but success hinges on toolchain maturity and accurate workload forecasting.

  • The move signals a shift from “best universal GPU” to “best-for-our-stack” silicon, redefining competition across Nvidia, AMD, and custom paths.

Read More

You can make ChatGPT your personal shopper and deal hunter. Here’s how.

Washingtonpost • October 14, 2025

AI•ECommerce•PersonalizedRecommendations•PriceComparison•ShoppingTips


Overview

AI assistants like ChatGPT can act as on-demand shopping concierges by translating your needs into concrete product recommendations and surfacing price differences across retailers. The core value is personalization: instead of searching through dozens of pages, you describe your budget, preferences, and constraints, and the assistant proposes shortlists you can refine in conversation. In parallel, these tools can structure price comparisons and highlight trade-offs—features versus cost, brand reliability versus warranty terms—so you make decisions faster and with more confidence. The practical approach is to treat the assistant as both a product researcher and a deal analyst, guiding it with clear parameters and iterating until the options match your priorities.

How to Get Useful, Personalized Suggestions

  • Provide a clear brief: intended use, must-have features, nice-to-haves, budget ceiling, and any brand or ecosystem preferences.

  • Share context that affects performance: room size for appliances, device compatibility, fit measurements for apparel, or travel dates for luggage and accessories.

  • Ask for tiered options: request good/better/best picks to visualize feature trade-offs and diminishing returns.

  • Include constraints beyond price: sustainability preferences, repairability, return policies, and warranty coverage.

  • Use follow-ups to refine: prune features you won’t use, ask for lighter, quieter, or more durable alternatives, or target models with stronger owner satisfaction.

Price Comparison and Deal-Hunting

  • Have the assistant lay out a side-by-side comparison that includes base price, shipping or installation fees, tax estimates, and any required accessories to surface true total cost (a worked example of this calculation follows the list below).

  • Request time-sensitive purchase advice: ask when prices typically dip (seasonal sales, model-year changeovers) and what substitute products deliver similar value at lower cost.

  • Explore value angles: open-box, refurbished, previous-generation models, bundle discounts, or loyalty-program perks that improve effective price.

  • Ask for “best alternative” alerts: if your top choice is overpriced, prompt the assistant to flag a comparable model within a set percentage of your target budget.
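
Here is that worked example, with made-up figures for two hypothetical models; the point is that the lower sticker price is not automatically the cheaper purchase once fees, accessories, and consumables are counted.

```python
# Minimal sketch of the "true total cost" comparison described above,
# using made-up figures for two hypothetical models.

def total_cost(base_price: float, shipping: float, tax_rate: float,
               required_accessories: float, first_year_consumables: float) -> float:
    """All-in first-year cost: sticker price plus fees, tax, accessories, consumables."""
    taxable = base_price + required_accessories
    return taxable * (1 + tax_rate) + shipping + first_year_consumables

model_a = total_cost(base_price=499, shipping=0,  tax_rate=0.0875,
                     required_accessories=79, first_year_consumables=60)
model_b = total_cost(base_price=449, shipping=35, tax_rate=0.0875,
                     required_accessories=129, first_year_consumables=120)

print(f"Model A all-in: ${model_a:.2f}")
print(f"Model B all-in: ${model_b:.2f}")  # the lower sticker price is costlier all-in
```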

Prompt Patterns That Work

  • “Create a shortlist of 5 options under $X for [use case], must include [features], and exclude [features]. Rank by value and explain the trade-offs in one sentence each.”

  • “Compare [Model A] vs [Model B] vs [Model C] on price, performance, reliability, warranty, and total cost of ownership. Highlight the best pick for [my priority].”

  • “Find budget alternatives within 10% of [Model A]’s performance but under $Y. Note any missing features I might care about.”

  • “Propose a starter kit for [activity] capped at $Z, allocating the budget across essentials and noting where to save vs. where to splurge.”
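
Patterns like these can also be scripted rather than typed by hand. The sketch below wires the first template into a request using the official OpenAI Python SDK; the model name, budget, and product criteria are placeholders, and it assumes an API key is set in the environment.

```python
# Sketch of scripting the first pattern above with the official OpenAI Python SDK
# (pip install openai). Model name, budget, and criteria are placeholders;
# set OPENAI_API_KEY in your environment before running.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

brief = (
    "Create a shortlist of 5 options under $800 for a quiet home-office laptop, "
    "must include 16GB RAM and a 14-inch screen, and exclude gaming designs. "
    "Rank by value and explain the trade-offs in one sentence each."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model tier you have access to
    messages=[
        {"role": "system", "content": "You are a careful shopping researcher. "
                                      "Always list model numbers so claims can be verified."},
        {"role": "user", "content": brief},
    ],
)

print(response.choices[0].message.content)
```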

Quality Checks and Good Practices

  • Verify prices before purchase: ask the assistant to produce a structured checklist of items and claimed deals so you can confirm with retailers.

  • Request sources and linkable product identifiers (model numbers, SKUs) to avoid confusion with similarly named items.

  • Consider long-term costs: consumables, subscription fees, energy use, and expected lifespan can outweigh small upfront savings.

  • Be mindful of privacy: share only the personal details necessary for sizing or compatibility; avoid sensitive information in shopping chats.

Implications for Shoppers

Using an AI assistant shifts effort from endless scrolling to guided decision-making. You gain speed through tight shortlists and transparent trade-offs, and you can compress research that once took hours into a few iterative prompts. The upside is particularly strong for complex or bundle purchases, where total cost, compatibility, and timing matter. The remaining responsibility is validation: confirm availability and prices, watch for hidden costs, and calibrate recommendations against your actual needs. Approach it as a collaborative process—your specificity steers the assistant, while its structured comparisons and scenario testing help you buy with clarity and confidence.

Key takeaways

  • Personalize with precise constraints to improve recommendation quality.

  • Compare total cost, not just sticker price, including fees and accessories.

  • Use tiered and alternative lists to balance value and features.

  • Validate with model numbers and retailer checks before purchasing.

  • Protect privacy and avoid oversharing during the shopping process.

Read More

Introducing Veo 3.1 and advanced capabilities in Flow

Blog • Jess Gallegos • October 15, 2025

AI•Tech•Veo•Flow•VideoEditing

Introducing Veo 3.1 and advanced capabilities in Flow

Overview

A new release introduces Veo 3.1 alongside advanced capabilities in Flow, centering on richer, more flexible ways to edit short video clips. The announcement emphasizes creative control and streamlined editing, signaling a push to help creators iterate faster without sacrificing polish or intent. As the post states, “Today, we’re introducing new and enhanced creative capabilities to edit your clips.” This positions the update as an evolution of existing tooling rather than a wholesale redesign, with a focus on tangible improvements to day‑to‑day editing tasks and the creative process.

Key Updates at a Glance

  • Enhanced creative tools for editing clips, aimed at making fine‑grained adjustments easier and more intuitive.

  • Veo 3.1 version milestone, indicating iterative improvements to capability, quality, and reliability.

  • “Advanced capabilities in Flow,” suggesting a more powerful orchestration layer for moving from idea to edited output with fewer steps.

  • A clear emphasis on creative empowerment—giving users more options to shape pacing, style, and narrative in short-form video.

What This Means for Creators

For editors, marketers, and everyday storytellers, the additions promise more precise control over how clips are refined, sequenced, and presented. The improvements in Veo 3.1 point to faster iteration loops: creators can try variations, compare outputs, and select the strongest version without friction. Meanwhile, expanded capabilities in Flow imply a smoother path from first draft to finished cut, reducing context switching and the need for external tools during key moments of editing. Collectively, these enhancements should help creators maintain momentum—polishing color, timing, transitions, or composition while staying aligned with the original creative intent.

Workflow and Collaboration Implications

  • Greater creative consistency: With more nuanced editing options, teams can standardize looks and narrative tempos across campaigns or channels.

  • Lowered revision overhead: Fine-grained controls can make it easier to accommodate feedback and rapidly produce alternate cuts for different platforms.

  • Faster experimentation: Stronger clip-editing tools invite more A/B testing of hooks, transitions, and formats—vital for short-form performance.

  • Integrated flow: Advanced capabilities in Flow suggest fewer handoffs and smoother transitions between ideation, assembly, and refinement.

User Experience Considerations

These updates appear designed to reduce the “tool burden” often felt in clip editing. By centering editability—rather than one-off generation—the release puts control firmly in the creator’s hands. That means the system likely prioritizes:

  • Intuitive adjustments over complex, multi-step manipulations.

  • Non-destructive editing that lets users explore creative directions without fear of losing progress.

  • Clear affordances that map to common storytelling needs: trim, re-time, re-frame, and stylize.

Strategic Context

The focus on “new and enhanced creative capabilities to edit your clips” underscores a broader shift from pure generation to editable, production-ready outputs. As short-form video continues to dominate discovery and engagement, the ability to rapidly refine clips—while preserving creative nuance—becomes a strategic differentiator. Veo 3.1 and the upgraded Flow aim to meet that need by tightening the loop between inspiration, draft, and delivery. For organizations, this can translate into faster content cycles, more consistent brand expression, and improved performance across platforms.

Key Takeaways

  • The update advances practical, creator-centric editing—more control, less friction.

  • Veo 3.1 signals steady iteration focused on quality and reliability.

  • Flow’s advanced capabilities point to streamlined end-to-end workflows.

  • Emphasis on clip-level refinement aligns with how audiences consume and evaluate content today.

  • Expect more experimentation, faster revisions, and stronger creative consistency across outputs.

Read More

Walmart and Sam’s Club will let you shop directly with ChatGPT as retail giant announces deal with OpenAI

Fastcompany • Michael Grothaus • October 14, 2025

AI•ECommerce•Walmart•OpenAI•AgenticCommerce

Walmart and Sam’s Club will let you shop directly with ChatGPT as retail giant announces deal with OpenAI

America’s largest brick-and-mortar retailer is partnering with the country’s most prominent AI firm in the clearest signal yet that companies are hoping to boost their sales with artificial intelligence-assisted shopping tools.

Today, Walmart and OpenAI announced a new partnership that will allow ChatGPT users to buy Walmart products directly from within the chatbot itself.

Here’s what you need to know about the news, and how Walmart’s stock price is reacting.

The Walmart-OpenAI deal explained

Today, retail giant Walmart Inc. (NYSE: WMT) announced a major new deal with ChatGPT maker OpenAI. The deal will see the artificial intelligence firm’s chatbot gain the ability to make purchases through Walmart and Sam’s Club on a customer’s or member’s behalf.

This AI shopping experience will be done through natural language interaction with ChatGPT.

In other words, you tell ChatGPT what you want to buy from Walmart or Sam’s Club, and ChatGPT will add the item to your cart, and your preferred payment method will be charged—all without leaving the ChatGPT interface.

This type of shopping, referred to as “agentic commerce” because it is powered by a generative AI chatbot, utilizes OpenAI’s Instant Checkout and Agentic Commerce Protocol, which the company launched last month.

“This marks the next step in agentic commerce, where ChatGPT doesn’t just help you find what to buy, it also helps you buy it,” OpenAI said when introducing Instant Checkout in September.

Walmart’s adoption of OpenAI’s technology underscores how the largest retailers on the planet are betting that consumers will increasingly turn to AI chatbots to help fulfill their shopping needs.

When can Walmart shoppers start using ChatGPT?

Despite announcing the OpenAI deal today, Walmart did not give a date for when users could begin shopping through their normal ChatGPT conversations, only saying that the deal would allow this interaction “soon.”

The company also wasn’t shy about promising how transformative agentic commerce will be—for shoppers and for artificial intelligence itself.

“At the center of this transformation are the everyday moments that define how people shop,” the company said. “This is agentic commerce in action: where AI shifts from reactive to proactive, from static to dynamic. It learns, plans, and predicts, helping customers anticipate their needs before they do.”

Read More

Media

Spotify to Start Putting Video Podcasts on Netflix

Bloomberg • October 14, 2025

Media•Broadcasting•Spotify•Netflix•Podcasts

Spotify to Start Putting Video Podcasts on Netflix

Overview

Spotify will begin distributing select video podcasts on Netflix, marking a cross-platform expansion that extends Spotify’s podcast reach from its own app into one of the world’s largest streaming video services. The initial slate will feature certain shows from Spotify Studios and The Ringer, including Bill Simmons’ podcast, and episodes will be published simultaneously on both platforms starting in 2026. The arrangement signals a shift toward treating video podcasts as full-fledged streaming content that can live alongside series and films, rather than solely within audio-first ecosystems.

What’s Changing

  • Simultaneous release: Episodes from a curated set of Spotify Studios and The Ringer shows will debut on Spotify and Netflix at the same time, reducing windowing and platform exclusivity.

  • Video-first emphasis: By placing video podcasts on a TV-centric platform, creators can meet audiences where they already watch long-form video, potentially improving completion rates and time spent.

  • Flagship inclusion: The lineup includes The Bill Simmons Podcast, a tentpole for The Ringer with a large, loyal audience—useful for testing adoption and discoverability in Netflix’s interface.

Strategic Rationale

For Spotify, distributing video podcasts on Netflix broadens audience reach without abandoning its own app. It reframes podcasts as multi-surface IP that can be monetized and discovered across services. For Netflix, video podcasts add a steady cadence of low-cost, talk-driven programming that can complement tentpole releases and keep engagement high between seasons of major series. The move also aligns with a broader industry trend of blending talk shows, recap formats, and cultural commentary into streaming libraries, letting viewers watch discussions about sports, entertainment, and news in the same place they watch the underlying content.

Implications for Creators and Rights

  • Audience growth: Creators gain exposure to Netflix’s global user base while preserving existing Spotify listeners, potentially lifting total reach across both platforms.

  • Format flexibility: Video podcasts can lean into studio segments, highlight reels, and guest clips designed for lean-back viewing, expanding beyond the constraints of mobile-first consumption.

  • Rights and distribution: Simultaneous publishing indicates rights structures that allow cross-platform distribution, suggesting a template other podcasts might follow if the rollout performs well.

User Experience and Discovery

Bringing video podcasts to Netflix could surface talk content within recommendations tied to users’ viewing habits—e.g., surfacing a sports podcast to viewers of sports documentaries. This converged discovery may reduce friction: viewers no longer need to switch apps to watch conversational or analysis-driven shows. However, success will depend on how prominently Netflix features podcast episodes within rows and search results, and whether episodic cadence fits viewers’ expectations compared with bingeable series.

Monetization Considerations

While specifics are not detailed, simultaneous distribution opens multiple monetization levers: advertising, brand integrations, and potential premium tiers or early-access windows down the line. The key test will be whether aggregated cross-platform metrics can demonstrate incremental reach and engagement that justify investment in video-first podcast production. For advertisers, the proposition is a single piece of content addressable across two massive platforms—appealing, but contingent on measurement clarity and brand-safety assurances.

Competitive Landscape

This partnership adds pressure on other streamers and audio platforms to rethink content silos. If video podcasts perform well on Netflix, rival services may court established podcast networks or experiment with their own simultaneous-release models. At the same time, Spotify’s move suggests that platform exclusivity is less critical than total footprint for certain formats, especially talk and analysis shows with high episode frequency.

Key Takeaways

  • Select shows from Spotify Studios and The Ringer, including The Bill Simmons Podcast, will publish simultaneously on Spotify and Netflix beginning in 2026.

  • The collaboration positions video podcasts as streaming-native content, expanding reach and discovery beyond audio apps.

  • Success will hinge on Netflix’s UI placement, cross-platform measurement, and creator-friendly rights structures.

  • If effective, the model could accelerate cross-platform podcast distribution and reshape how talk content is produced, monetized, and found by viewers.

Read More

Crypto

Ripple Pays $1 Billion for GTreasury to Enter Corporate Treasury

Bloomberg • October 16, 2025

Crypto•Blockchain•M

Ripple Pays $1 Billion for GTreasury to Enter Corporate Treasury

Overview

Crypto company Ripple announced it has agreed to acquire treasury management software provider GTreasury for $1 billion. Ripple said the deal is aimed at expanding its footprint into corporate treasury, positioning the firm to offer enterprises a unified stack that pairs traditional cash management capabilities with blockchain-enabled settlement. The headline price signals a decisive move beyond cross-border payments infrastructure into the broader workflows that finance teams use every day—cash positioning, liquidity forecasting, bank connectivity, payments execution, and risk controls—potentially bringing crypto-native rails closer to the core of enterprise finance operations.

Deal Scope and What GTreasury Brings

  • GTreasury is a provider of treasury management software (TMS), used by corporate finance teams to centralize cash visibility, manage liquidity, and orchestrate bank accounts and payments.

  • By paying $1 billion, Ripple is effectively purchasing an installed base of enterprise customers, established integrations with banks and ERPs, and a mature workflow product that sits at the center of CFO organizations.

  • Ripple said it “has agreed to buy” GTreasury, underscoring that the transaction is announced and priced; integration and closing would typically follow customary approvals and closing conditions.

Strategic Rationale

  • Embedding blockchain settlement into TMS workflows: Treasury systems are the control tower for cash. Ownership here allows Ripple to insert crypto-enabled cross-border settlement and near-real-time funding into day-to-day treasury processes, rather than asking enterprises to adopt standalone crypto tools.

  • Distribution and data: A TMS provides continuous visibility into cash balances, currency exposures, and payment timings. With GTreasury, Ripple can route payments across its network where speed or cost advantages are clearest and measure impact through embedded analytics.

  • Product completeness: Enterprises often prefer end-to-end solutions that cover initiation, approval, execution, reconciliation, and reporting. Pairing GTreasury’s orchestration layer with Ripple’s payment rails could reduce fragmentation and accelerate enterprise adoption.

Implications for Corporates and Banks

  • For corporate treasurers: If Ripple integrates blockchain-based settlement natively, treasurers could access faster cross-border payments, potentially lower fees, improved liquidity timing, and automated reconciliation—without switching their primary operating system.

  • For banks and payment partners: A TMS-centered approach may increase pressure to offer instant or near-instant rails and tighter API integrations, while also expanding opportunities for banks that partner on faster payout corridors.

  • For fintech competitors: The acquisition places Ripple in more direct competition with TMS providers, cross-border specialists, and payments platforms that have been embedding treasury features. The differentiator will be whether blockchain-native settlement can translate into measurable cost and working-capital gains inside standard treasury workflows.

Execution Considerations

  • Integration complexity: Unifying bank connectivity, compliance, controls, and crypto settlement inside a regulated enterprise environment requires careful product and risk design. Migration paths for existing GTreasury customers will be crucial.

  • Governance and controls: Treasury is a control function. Embedding new rails must maintain rigorous approval hierarchies, audit trails, and segregation of duties while keeping user experience familiar.

  • Interoperability: Success will hinge on breadth of ERP/bank integrations, support for multiple currencies, and the ability to operate in heterogeneous regulatory regimes.

What to Watch Next

  • Closing milestones and initial integration roadmap following Ripple’s announcement that it agreed to the $1 billion purchase.

  • Early use cases offered to existing GTreasury customers—e.g., faster cross-border vendor payments, real-time wallet-to-account funding, or automated FX routing.

  • Evidence of tangible outcomes: reduced settlement times, lower payment costs, improved cash forecasting accuracy, and accelerated period-end reconciliation.

Key Takeaways

  • Ripple said it will buy GTreasury for $1 billion, marking a major bet on owning the corporate treasury workflow layer.

  • The move could bring crypto-enabled settlement directly into enterprise finance operations, with potential benefits in speed, cost, and cash visibility.

  • Execution risk centers on integration, compliance, and user trust, but success would reposition Ripple from a payments network to a broader enterprise treasury platform.

Read More

The Next $600B of Real-World Blockchain Adoption: Why Pantera Invested in TransCrypts, Meanwhile and Coinflow

Veradiverdict • Paul Veradittakit • October 16, 2025

Venture•Crypto

The Next $600B of Real-World Blockchain Adoption: Why Pantera Invested in TransCrypts, Meanwhile and Coinflow

Pantera highlights a consistent thesis: blockchain’s biggest wins come from solving mainstream problems without forcing users to “go crypto.” Its latest bets—TransCrypts (decentralized identity), Meanwhile (bitcoin-denominated life insurance), and Coinflow (cross-chain stablecoin payments)—target large, broken markets where cryptographic rails deliver visible gains in cost, speed, security, and access. The firms collectively address hundreds of billions in spend across identity verification, long-term savings/insurance, and cross-border commerce, which Pantera frames as the next wave of real-world adoption.

TransCrypts: User-Owned Digital Identity

  • Problem: centralized databases are costly honeypots and poorly interoperable, with identity fraud expected to exceed $50 billion in 2025.

  • Solution: a tamper-proof, user-controlled credential backbone (employment, health, education) anchored on-chain, minimizing mass-breach risk and enabling cryptographic verification as AI-driven fraud escalates.

  • Market: digital identity valued at $40 billion in 2025, projected to reach $203 billion by 2034; decentralized identity segments are growing at 53–89% annually.

  • Quote: “Our goal is simple: to give people 100% control of their identity… making verification secure, efficient, and fraud-resistant.” (Zain Zaidi, TransCrypts CEO)

Implication: By returning data ownership to individuals while simplifying verification for institutions, TransCrypts could reduce fraud, compliance costs, and onboarding friction across HR, healthcare, and education.

Meanwhile: Bitcoin-Denominated Life Insurance

  • Problem: traditional policies erode with inflation and face cross-border friction; life insurance represents 7.1% of global GDP, making inflation a pervasive drag on savings.

  • Solution: Bermuda Monetary Authority–regulated policies priced and paid in bitcoin, with policyholders able to borrow up to 90% of BTC value; yield generated by lending to regulated institutions, with transparent, auditable reserves.

  • Traction: bitcoin under management grew over 200% year-to-date; company raised $82 million to scale globally.

  • Quote: “We’re bringing [insurers’] long-term capital role to Bitcoin… helping families protect wealth in BTC and enabling compliant, scalable bitcoin-indexed products.” (Zac Townsend, CEO)

Implication: If BTC continues to mature as “digital gold,” inflation-resistant, jurisdiction-agnostic policies could reframe life insurance and annuities as programmable, borderless savings instruments.

Coinflow: Cross-Chain Stablecoin Payments for Merchants

  • Problem: cross-border payments (~$220B today, projected ~$320B by 2030) remain slow and expensive—often ~7% fees and 2–3 day settlement—especially for U.S. e-commerce selling into Africa and Asia.

  • Solution: instant, cross-chain settlement (e.g., Solana↔Ethereum), fiat on/off-ramps, and AI-driven fraud prevention delivered via a unified stack so merchants avoid wallet and chain complexity.

  • Traction: 23x revenue growth since the 2024 seed round; payment reach in 170+ countries; multi-billion-dollar annual transaction run-rate; $25M Series A led by Pantera to expand payouts across 100+ countries in Asia and Latin America.

  • Quote: “The payment layer of the internet should be instant, secure, and truly global.” (Daniel Lev, Founder)

Implication: By abstracting chains and automating compliance/fraud, Coinflow positions stablecoins as invisible infrastructure that lowers cost of sales and accelerates cash cycles.
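
A rough sketch of that cost-and-cash-cycle arithmetic, using the ~7% fee and 2–3 day settlement figures cited above together with an assumed merchant volume and stablecoin fee:

```python
# Rough sketch of the cost and cash-cycle arithmetic. The merchant volume and
# stablecoin fee are assumptions; the ~7% legacy fee and 2-3 day settlement
# window come from the figures cited above.

annual_cross_border_sales = 5_000_000      # hypothetical merchant volume, USD
legacy_fee = 0.07                          # ~7% all-in cost on legacy rails
stablecoin_fee = 0.01                      # assumed all-in cost on stablecoin rails
legacy_settlement_days = 2.5               # midpoint of the 2-3 day range
stablecoin_settlement_days = 0.0           # near-instant settlement

fee_savings = annual_cross_border_sales * (legacy_fee - stablecoin_fee)
cash_freed = annual_cross_border_sales / 365 * (legacy_settlement_days - stablecoin_settlement_days)

print(f"annual fee savings:       ${fee_savings:,.0f}")
print(f"working capital unlocked: ${cash_freed:,.0f} no longer sitting in transit")
```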

Why This Matters

Pantera argues the “killer app” is not a new token but infrastructure that millions use without noticing it’s blockchain. Meanwhile protects generational wealth against inflation; TransCrypts hardens identity against fraud while restoring ownership; Coinflow compresses payment costs and settlement times for global commerce. Together they represent a combined addressable opportunity north of $570 billion and a credible path to mainstream utility.

Broader Market Signals

  • Business: ICE reportedly takes a multi‑billion stake in Polymarket as event markets scale; Coinbase and Mastercard eye stablecoin infra (BVNK), and research dubs Solana “crypto’s financial bazaar,” underscoring on-chain economic depth.

  • Regulation: Bybit secures the UAE’s first full SCA Virtual Asset Platform Operator License, signaling regulatory maturation in a key hub.

  • Funding/Product: Meanwhile’s $82M and Coinflow’s $25M reinforce investor appetite for compliance‑forward, revenue‑scaling crypto infrastructure.

Key Takeaways

  • Real-world adoption hinges on abstracting crypto complexity while delivering measurable benefits (cost, speed, security).

  • Identity, savings/insurance, and cross-border payments are near-term beachheads with tangible pain points and large TAMs.

  • Traction metrics (200%+ AUM growth; 23x revenue; 170+ country coverage) suggest product–market fit beyond crypto-native users.

  • Regulatory alignment (BMA, UAE SCA) is a competitive moat for scaling compliant, global offerings.

Read More

Apple

Matthew Belloni Interviews Eddy Cue on ‘The Town’

Theringer • John Gruber • October 17, 2025

Media•TV•AppleTVPlus•MLS•Formula•Apple


Speaking of Eddy Cue, he was the guest on Matthew Belloni’s excellent podcast, The Town, this week. (Overcast link.) Just a great interview in general. Cue doesn’t do many interviews but he’s my favorite Apple executive to hear speak, because he’s the least rehearsed and most straightforward. If he doesn’t want to answer a question (Belloni tried, mightily, to press him on subscriber and viewership numbers), Cue just says he’s not going to answer that question, rather than dance around it with a non-answer answer.

My two big takeaways:

Everyone in Hollywood is spooked about what Apple’s intentions “really are” regarding original movies and series. They’re worried it’s some sort of play to polish Apple’s brand, and that Apple is going to get bored or tired of losing money, and pull up stakes and leave the game. Cue emphasized that the answer is simple: Apple thinks it’s a great business to be in (and he also made the point that Apple’s brand needed no polishing) and they’re in this business for that reason, and for the long haul.

Apple is serious about sports rights, but they don’t want to dabble. They want to own the rights to entire sports. Friday Night Baseball was, effectively, a learning experiment. Apple TV’s MLS deal — and the F1 US deal announced today — are the sort of deals Apple wants. (That’s going to make it hard for Apple to get involved with the NFL, because the NFL strategically wants to spread its games across all the major TV networks and streaming services.) Cue is a huge sports fan (as is Tim Cook), and Apple wants to deliver sports on Apple TV that cater to fans.

Read More

★ The Just Plain M5 Chip Launches in Three Updated Products: 14-Inch MacBook Pro, iPad Pro (Both Sizes), and Some Sort of Headset Thingamajig Called Vision Pro

Daringfireball • John Gruber • October 15, 2025

AI•Tech•Apple•M


Apple Newsroom, today:

Apple today announced M5, delivering the next big leap in AI performance and advances to nearly every aspect of the chip. Built using third-generation 3-nanometer technology, M5 introduces a next-generation 10-core GPU architecture with a Neural Accelerator in each core, enabling GPU-based AI workloads to run dramatically faster, with over 4× the peak GPU compute performance compared to M4. The GPU also offers enhanced graphics capabilities and third-generation ray tracing that combined deliver a graphics performance that is up to 45 percent higher than M4. M5 features the world’s fastest performance core, with up to a 10-core CPU made up of six efficiency cores and up to four performance cores. Together, they deliver up to 15 percent faster multithreaded performance over M4. M5 also features an improved 16-core Neural Engine, a powerful media engine, and a nearly 30 percent increase in unified memory bandwidth to 153GB/s. M5 brings its industry-leading power-efficient performance to the new 14-inch MacBook Pro, iPad Pro, and Apple Vision Pro, allowing each device to excel in its own way. All are available for pre-order today.

Some thoughts and observations:

14-INCH MACBOOK PRO

Apple Newsroom: “Apple Unveils New 14‑Inch MacBook Pro Powered by the M5 Chip, Delivering the Next Big Leap in AI for the Mac”.

The 14-inch MacBook Pro with the no-adjective M-series chip has always been an odd duck in the MacBook lineup. This “Pro”-but-not-pro spot in the MacBook lineup goes back to the Intel era, when there was a 13-inch MacBook Pro without a Touch Bar. That was the MacBook Pro that, in 2016, Phil Schiller suggested as a good choice for those who were then holding out for a MacBook Air with a retina display. (The first retina MacBook Air didn’t ship for another two years, in late 2018.) It’s more like a MacBook “Pro” than a MacBook Pro. The truly pro-spec’d MacBook Pros have M-series Pro and Max chips, and are available in both 14- and 16-inch sizes. The base 14-inch model, with the no-adjective M-series chip, is for people who probably would be better served with a MacBook Air but who wrongly believe they “need” a laptop with “Pro” in its name.

Here’s a timeline of no-adjective M-series chips and when they appeared in the 14-inch MacBook Pro:

  • M1 13-inch MacBook Pro: 10 November 2020. This MacBook Pro was one of the three Macs that debuted with the launch of Apple Silicon — the others were the MacBook Air and Mac Mini. The hardware looked exactly like the last generation Intel MacBook Pro. The M1 Pro and M1 Max models didn’t ship for another year (well, 11 months later), and those models brought with them the new form factor design that’s still with us today with the new M5 MacBook Pro.

  • M2 13-inch MacBook Pro: 6 June 2022. This model also stuck with the older Intel-era form factor, including the 13-inch, not 14-inch, display size.

  • M3 14-inch MacBook Pro: 30 October 2023. The “Scary Fast” event. This model debuted alongside the pro-spec’d M3 Pro and M3 Max 14- and 16-inch models.

  • M4 14-inch MacBook Pro: 30 October 2024. Exactly one year after the M3, and also alongside the M4 Pro and M4 Max models. What was different in 2024 with the M4 generation is that the M4 iPad Pros debuted back in early May, all by themselves.

  • M5 14-inch MacBook Pro: 15 October 2025 (today). What’s different with today’s announcement is that it is not alongside the M5 Pro and M5 Max models, but is alongside the M5 iPad Pros.

This raises the question of when to expect those M5 Pro/Max models. The rumor mill suggests “early 2026”. I suspect that’s right, based on nothing other than the fact that if they were going to be announced this year, Apple almost certainly would have announced the entire M5 generation MacBook lineup together.

Basically, this is just a speed bump upgrade over the just-plain M4 MacBook Pro. But annual — or at least regular — speed bumps are a good thing. The alternative is years-long gaps between hardware refreshes.

IPAD PROS

Apple Newsroom: “Apple Introduces the Powerful New iPad Pro With the M5 Chip”:

Featuring a next-generation GPU with a Neural Accelerator in each core, M5 delivers a big boost in performance for iPad Pro users, whether they’re working on cutting-edge projects or tapping into AI for productivity. The new iPad Pro delivers up to 3.5× the AI performance than iPad Pro with M4 and up to 5.6× faster than iPad Pro with M1. N1, the new Apple-designed wireless networking chip, enables the latest generation of wireless technologies with support for Wi-Fi 7 on iPad Pro. The C1X modem comes to cellular models of iPad Pro, delivering up to 50 percent faster cellular data performance than its predecessor with even greater efficiency, allowing users to do more on the go.

I think the N1 wireless chip and C1X modem are more interesting generation-over-generation improvements than the M5 chip. Thanks to the N1, these iPad Pro models support Wi-Fi 7 — today’s new M5 14-inch MacBook Pro does not. I would wager rather heavily that the upcoming M5 Pro and M5 Max MacBook Pro models will support Wi-Fi 7 (probably via the N1 chip, or perhaps even an “N1X” or something).

Other than that, this too is a speed bump upgrade.

VISION PRO

Apple Newsroom: “Apple Vision Pro Upgraded With the M5 Chip and Dual Knit Band”:

Read More

Apple Is the Exclusive New Broadcast Partner for Formula 1 in the U.S.

Apple • John Gruber • October 17, 2025

Media•Broadcasting•AppleTV•Formula•Apple


Apple and Formula 1® today announced a five-year partnership that will bring all F1 races exclusively to Apple TV in the United States beginning next year.

The partnership builds on Apple’s deepening relationship with Formula 1 following the global success of Apple Original Films’ adrenaline-fueled blockbuster F1 The Movie, the highest-grossing sports movie of all time. United by a commitment to innovation and fan experience, the partnership sets the stage for Formula 1’s continued growth in the U.S.

Apple TV will deliver comprehensive coverage of Formula 1, with all practice, qualifying, Sprint sessions, and Grands Prix available to Apple TV subscribers. Select races and all practice sessions will also be available for free in the Apple TV app throughout the course of the season. In addition to broadcasting Formula 1 on Apple TV, Apple will amplify the sport across Apple News, Apple Maps, Apple Music, and Apple Fitness+. Apple Sports — the free app for iPhone — will feature live updates for every qualifying, Sprint, and race for each Grand Prix across the season, with real-time leaderboards, season driver and constructor standings, Live Activities to follow on the Lock Screen, and a designated widget for the iPhone Home Screen.

F1 TV Premium, F1’s own premier content offering, will continue to be available in the U.S. via an Apple TV subscription only and will be free for those who subscribe.

“We’re thrilled to expand our relationship with Formula 1 and offer Apple TV subscribers in the U.S. front-row access to one of the most exciting and fastest-growing sports on the planet,” said Eddy Cue, Apple’s senior vice president of Services. “2026 marks a transformative new era for Formula 1, from new teams to new regulations and cars with the best drivers in the world, and we look forward to delivering premium and innovative fan-first coverage to our customers in a way that only Apple can.”

“This is an incredibly exciting partnership for Apple and the whole of Formula 1 that will ensure we can continue to maximize our growth potential in the U.S. with the right content and innovative distribution channels,” said Stefano Domenicali, Formula 1’s president and CEO. “We are no strangers to each other, having spent the past three years working together to create F1 The Movie, which has already proven to be a huge hit around the world. We have a shared vision to bring this amazing sport to our fans in the U.S. and entice new fans through live broadcasts, engaging content, and a year-round approach to keep them hooked. I want to thank Tim Cook, Eddy Cue, and the entire Apple team for their vision and passionate approach to delivering this partnership, and we are looking forward to the next five years together.”

Read More

Tokenization

BlackRock CEO Larry Fink: We’re at the beginning of the tokenization of all assets

Youtube • CNBC Television • October 14, 2025

Crypto•Blockchain•Tokenization

BlackRock CEO Larry Fink: We're at the beginning of the tokenization of all assets

Overview

BlackRock CEO Larry Fink asserts that financial markets are entering “the beginning of the tokenization of all assets,” framing tokenization as a structural shift that will rewire how ownership, settlement, and access work across equities, bonds, funds, real estate, and alternative assets. In short remarks, he positions tokenization as the next phase of digital market infrastructure, following the rise of electronic trading and ETFs, with the potential to compress costs, accelerate settlement, and expand participation through fractionalized ownership and 24/7 market access. His comments imply that large, regulated institutions will be central to translating crypto-native rails into mainstream finance, aligning on-chain records with compliance, investor protections, and established custody practices.

What Fink Emphasizes

  • “We’re at the beginning” signals a multi‑year buildout rather than an overnight transition, suggesting pilots and incremental integrations before broad adoption.

  • “Tokenization of all assets” highlights the breadth: not just cryptocurrencies but traditional securities and real‑world assets (RWAs) represented on blockchains.

  • Institutional guardrails matter: robust KYC/AML, regulated custodians, and clear governance are prerequisites for scale.

  • Efficiency gains are a core promise: near‑instant, atomic settlement; programmable compliance; and reduced reconciliation across intermediaries.

How Tokenization Could Change Markets

Tokenized instruments can embed rules (whitelists, transfer restrictions, tax logic) directly into assets, shrinking operational overhead while preserving regulatory controls. Settlement finality on shared ledgers could reduce counterparty risk and capital charges linked to T+ settlement cycles. Fractionalization may open exposure to previously illiquid or high‑denomination assets (e.g., private credit, infrastructure, real estate), broadening distribution and enabling smaller ticket sizes without bespoke structures. Programmability also supports features like automated payouts, real‑time NAV updates for funds, and composable collateral in lending, repo, or derivatives.
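The mechanics this summary points to (whitelists, transfer restrictions, and atomic delivery-versus-payment settlement) are easier to see in code. The sketch below is purely illustrative and assumes hypothetical TokenizedAsset and atomic_dvp abstractions rather than any real tokenization platform or ledger API; it shows how a compliance check can live inside the asset itself and how a two-leg trade can settle on an all-or-nothing basis.

```python
# Illustrative sketch only: hypothetical classes, not a real platform's API.
# Shows (1) compliance rules embedded in the asset ("programmable compliance")
# and (2) atomic delivery-versus-payment, where both legs succeed or neither does.

class TokenizedAsset:
    def __init__(self, symbol, whitelist):
        self.symbol = symbol
        self.whitelist = set(whitelist)  # only approved (KYC'd) accounts may hold the asset
        self.balances = {}

    def mint(self, account, amount):
        if account not in self.whitelist:
            raise PermissionError(f"{account} is not whitelisted for {self.symbol}")
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender, receiver, amount):
        # The transfer restriction is enforced by the asset itself,
        # not by an intermediary reconciling records after the fact.
        if receiver not in self.whitelist:
            raise PermissionError(f"{receiver} is not whitelisted for {self.symbol}")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount


def atomic_dvp(security, cash, seller, buyer, quantity, price):
    """Delivery-versus-payment: deliver the security and pay in tokenized cash
    as one atomic step, removing the counterparty risk of a T+ settlement lag."""
    snapshot = (dict(security.balances), dict(cash.balances))
    try:
        security.transfer(seller, buyer, quantity)        # security leg
        cash.transfer(buyer, seller, quantity * price)    # cash leg
    except Exception:
        security.balances, cash.balances = snapshot       # roll back both legs
        raise


if __name__ == "__main__":
    bond = TokenizedAsset("T-BOND-2030", whitelist=["alice", "bob"])
    usd = TokenizedAsset("USD-TOKEN", whitelist=["alice", "bob"])
    bond.mint("alice", 100)
    usd.mint("bob", 200_000)

    atomic_dvp(bond, usd, seller="alice", buyer="bob", quantity=10, price=1_000)
    print(bond.balances)  # {'alice': 90, 'bob': 10}
    print(usd.balances)   # {'bob': 190000, 'alice': 10000}
```

In a production setting these rules would sit in smart contracts on a permissioned or public ledger, but the design choice is the same: the compliance logic and the settlement guarantee travel with the asset.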

Implications for Institutions and Investors

For large asset managers, tokenization can streamline fund administration, transfer agency, and distribution, while creating new share classes native to blockchain rails. Exchanges, transfer agents, and broker‑dealers may see roles evolve toward permissioned on‑chain marketplaces and identity‑aware wallets. For investors, benefits include faster settlement, lower fees, and around‑the‑clock markets, alongside improved transparency via immutable audit trails. However, enterprise adoption will hinge on interoperability between chains, integration with bank‑grade custody, and privacy layers that protect sensitive trading data while satisfying regulators.

Risks and Open Questions

Key hurdles include regulatory clarity across jurisdictions, standards for identity and compliance on-chain, and cybersecurity of custodial and smart‑contract systems. Market structure questions persist: which ledgers become systemically important; how permissioned networks interconnect with public blockchains; and how tokenized cash (stablecoins, bank tokenized deposits, or potential CBDCs) will power atomic settlement against tokenized securities. Governance of tokenized assets—upgrades, key management, dispute resolution—must be durable and auditable to win institutional trust.

Key Takeaways

  • Tokenization is framed as a mainstream, institution‑led evolution of market plumbing, not a niche crypto experiment.

  • Efficiency, programmability, and broader access are the headline benefits; compliance by design is the enabler.

  • Early traction will likely center on RWAs like money market funds, Treasuries, and private credit before expanding to more complex instruments.

  • Success depends on standardization, interoperability, and regulated cash rails to match tokenized assets with real‑time settlement.

  • Fink’s stance signals that top‑tier incumbents plan to be architects—not bystanders—of the on‑chain financial stack.

Read More

GeoPolitics

Technological sovereignty with American characteristics

Ft • October 11, 2025

GeoPolitics•USA•Technology•Semiconductors•IndustrialPolicy


The US has decided it can no longer afford to outsource its chipmaking future. This strategic shift represents a fundamental rethinking of America’s approach to technological sovereignty and industrial policy.

For decades, the United States pursued a model of globalized semiconductor production that saw manufacturing capacity migrate to Asia, particularly Taiwan and South Korea. This approach prioritized cost efficiency and specialized expertise but created critical vulnerabilities in the nation’s supply chain security.

The realization that advanced semiconductors form the bedrock of modern economic and military power has prompted a dramatic policy reversal. Semiconductors enable everything from artificial intelligence systems and cloud computing to advanced weapons and communications infrastructure, making them essential to national security and economic competitiveness.

Recent legislation, including the CHIPS and Science Act, represents the most significant industrial policy intervention in generations. The act provides substantial subsidies and tax incentives to rebuild domestic semiconductor manufacturing capacity and strengthen America’s position in the global technology competition.

This new approach combines elements of protectionism, strategic investment, and international partnership building. The goal is not complete self-sufficiency but rather ensuring that the United States maintains control over critical segments of the semiconductor supply chain while reducing dependence on geopolitical rivals.

The shift reflects broader concerns about technological competition with China and the recognition that leadership in advanced computing requires domestic manufacturing capabilities. As artificial intelligence and quantum computing become increasingly central to economic and military advantage, semiconductor sovereignty has emerged as a national security imperative.

Read More

Regulation

Governor Newsom vetoed a bill restricting kids’ access to AI chatbots. Here’s why

Fastcompany • Associated Press • October 14, 2025

Regulation•USA•California

Governor Newsom vetoed a bill restricting kids’ access to AI chatbots. Here’s why

California Gov. Gavin Newsom on Monday vetoed landmark legislation that would have restricted children’s access to AI chatbots.

The bill would have banned companies from making AI chatbots available to anyone under 18 years old unless the businesses could ensure the technology couldn’t engage in sexual conversations or encourage self-harm.

“While I strongly support the author’s goal of establishing necessary safeguards for the safe use of AI by minors, (the bill) imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors,” Newsom said.

The veto came hours after he signed a law requiring platforms to remind users they are interacting with a chatbot and not a human. The notification will pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation.

Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from help with homework to emotional support and personal advice.

California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives.

The two measures were among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.

The youth AI chatbot ban would have applied to generative AI systems that simulate “humanlike relationship” with users by retaining their personal information and asking unprompted emotional questions. It would have allowed the state attorney general to seek a civil penalty of $25,000 per violation.

James Steyer, founder and CEO of Common Sense Media, said Newsom’s veto of the bill was “deeply disappointing.” “This legislation is desperately needed to protect children and teens from dangerous — and even deadly — AI companion chatbots,” he said.

Read More

US approves new bank backed by billionaires with ties to Trump

Ft • October 15, 2025

Regulation•USA•OCC•BankCharter•EreborBank


What happened

U.S. banking regulators have granted conditional approval for a new federally chartered institution, Erebor Bank, backed by several billionaire tech investors with long-standing ties to Donald Trump. The Office of the Comptroller of the Currency (OCC) cleared Erebor to proceed toward launch, positioning it as a tech- and innovation-focused bank serving clients in artificial intelligence, crypto, defense, and advanced manufacturing. The bank plans a largely digital model with operations in Columbus, Ohio, and New York, and initial capitalization reported at about $275 million. Final steps include satisfying additional supervisory conditions and securing deposit insurance before opening to customers. (ft.com)

Who’s behind it

Erebor was co-founded by Palmer Luckey, the defense-tech entrepreneur behind Anduril, and venture capitalist Joe Lonsdale. The investor group includes Peter Thiel’s Founders Fund and other prominent Silicon Valley backers. Business Insider reports the company has been valued at more than $2 billion in private funding rounds, underscoring expectations that it will concentrate on complex, higher-margin niches underserved since Silicon Valley Bank’s collapse. The FT also reports a leadership bench that includes CEO Owen Rapaport, with governance and compliance hires drawn from established banking and fintech circles. (businessinsider.com)

Regulatory path and conditions

The OCC’s decision is “conditional,” meaning Erebor must meet detailed requirements on capital, liquidity, risk management, and IT/security before receiving full authorization and accepting insured deposits. According to reporting, the approval marks one of the earliest bank green lights under Comptroller Jonathan Gould, who has emphasized openness to innovation if paired with robust controls. Erebor still needs FDIC insurance; comparable cases suggest that process can take around nine to ten months, so operational launch may trail regulatory milestones by several quarters. (businessinsider.com)

Political context and scrutiny

Because key backers and founders are high-profile Trump allies and donors, the approval has drawn partisan fire. Senator Elizabeth Warren criticized the move as politically driven and potentially destabilizing, calling it a “fast tracked” decision and warning of a “risky venture that could set up another bailout.” The OCC has defended its approach as risk-based and consistent with safe-and-sound banking. The broader context includes an August 7, 2025 executive order directing regulators to guard against politicized “debanking,” signaling a friendlier posture toward controversial or novel financial businesses. (banking.senate.gov)

Business model and risk posture

Erebor positions itself as a conservative, specialty bank for the “innovation economy,” not a high-risk “crypto bank.” Reported priorities include transaction services and lending to venture-backed companies, with potential exposure to digital-asset–adjacent borrowers and AI hardware financing (e.g., GPU-backed loans) structured with tight collateral and risk limits. The bank’s Tolkien-inspired name—Erebor—symbolizes a vault-like “stronghold,” reflecting a brand narrative of stable financing for high-tech industry after the SVB shock. Board and adviser selections—with experience in digital asset compliance and federal enforcement—aim to reassure regulators and counterparties. (businessinsider.com)

Why it matters

  • Reopens the new-charter pipeline: After years of caution, a conditional OCC approval for a tech-focused entrant could encourage other specialized banks to apply, potentially widening credit access for startups. (thetimes.co.uk)

  • A test case for “innovation” supervision: Regulators will scrutinize Erebor’s digital-asset touchpoints, concentration risk to venture sectors, and cyber/operational resilience as a template for future approvals. (businessinsider.com)

  • Politics meets prudential policy: The approval will fuel debate over whether regulatory posture has shifted under the Trump administration—and whether safeguards are sufficient to prevent favoritism or excess risk-taking. (whitehouse.gov)

  • Post-SVB market gap: If executed prudently, Erebor could provide bespoke services to tech firms that have struggled with mainstream banks’ tighter risk appetites since 2023, potentially smoothing funding cycles in AI, defense tech, and manufacturing. (thetimes.co.uk)

Key takeaways

  • Conditional OCC approval advances Erebor toward launch; FDIC insurance and additional conditions remain outstanding. (businessinsider.com)

  • Founders and backers link the bank closely to Silicon Valley’s Trump-aligned billionaire set, driving both investor interest and political criticism. (ft.com)

  • Strategy focuses on the “innovation economy,” with strict risk controls to avoid the perception—and reality—of being a “crypto bank.” (ft.com)

  • The move may signal a broader regulatory opening for specialized banks amid an administration push against “politicized debanking.” (whitehouse.gov)

Read More

Interview of the Week

Should a College be a Museum or a Startup? Why Universities Need to Teach Failure

Keenon • Andrew Keen • October 15, 2025

Essay•Regulation•Neoliberalism•WallStreet•DynamicCapitalism•Interview of the Week


Keen On America
Should a College be a Museum or a Startup? Why Universities Need to Teach Failure

What’s the point of going to college? There used to be an obvious answer to this: to acquire the knowledge to get a better job. But in our AI age, when smart machines are already challenging many white-collar professions, the point of college is increasingly coming into question—especially given its time and financial commitment. According to Caroline Levander, author of the upcoming InventEd, the American ‘tradition of innovation’ can transform college today. Levander, who serves as Vice President for Global Strategy at Rice University, argues that colleges must transform themselves from museums into startups. Indeed, the ideal of failure, so celebrated in Silicon Valley, must become a pillar of reinvented universities. And students too, who, Levander suggests, have become increasingly conservative in their attitude to personal risk, must learn to embrace not just innovative technological tools but also the messiness of personal disruption. That, Levander says, should be the point of college: to learn how to fail productively.

1. Universities Must Choose: Museum or Startup? Levander argues universities exist on a continuum between museums (curating and preserving accumulated wisdom) and startups (messy, high-risk spaces for creating new knowledge). Most institutions haven’t intentionally decided where they belong on this spectrum, but they need to embrace a more dynamic, startup-oriented position to remain relevant.

2. Student Risk Aversion is the Real Crisis Today’s students are increasingly conservative, focused on maximizing GPAs and taking “safe” courses rather than exploring creatively. Universities must build a “growth mindset” that encourages failure and experimentation—treating creativity as a muscle to develop rather than a fixed trait like eye color.

3. Disciplinary Diversity is America’s Innovation Secret Just as biodiversity sustains ecosystems, disciplinary diversity fuels innovation. Breakthrough moments are unpredictable—Steve Jobs in calligraphy, investor Bill Miller in a philosophy seminar on John Searle. Closing departments and narrowing curricula amounts to “eating our seed corn” and threatens America’s competitive advantage.

4. The Dropout Myth Misses the Point While figures like Steve Jobs, Mark Zuckerberg, and Sam Altman dropped out successfully, Levander asks: “How do we create more Steve Jobses who find the university not a place to leave, but a place to continue building creative capability?” The goal is to institutionalize and scale what now happens by happenstance.

5. Attacking Universities Threatens National Innovation The current political assault on university funding—particularly research dollars—isn’t just bad for Harvard or Rice. It threatens America’s entire innovation economy, since universities remain the primary incubators for industry-creating discoveries that drive national prosperity and competitiveness.

Read More

Post of the Week

“Venture is not an asset class, asset classes scale…Venture doesn’t scale with more capital” Roelof Botha

X • MeghanKReynolds • October 15, 2025

X•Post of the Week


Read More


A reminder for new readers. Each week, That Was The Week, includes a collection of selected essays on critical issues in tech, startups, and venture capital.

I choose the articles based on their interest to me. The selections often include viewpoints I can't entirely agree with. I include them if they make me think or add to my knowledge. Click on the headline, the contents section link, or the ‘Read More’ link at the bottom of each piece to go to the original.

I express my point of view in the editorial and the weekly video.
