Editorial

What is a Browser? What is a Bubble? ChatGPT Has an Answer


The web's front door has been morphing into AI for a while now; this week it accelerated. With Atlas, OpenAI didn't bolt a chatbot onto Chrome; it turned the browser into an answer and action layer. And that has everyone—from Wikipedia editors to Google's ad machine—reaching for the smelling salts.

Here’s the context:

1) The browser is becoming an agent, not a link map.

  • Atlas launched as a native ChatGPT browser: it can summarize in‑page via a right‑side AI panel, draft in place, and, in preview, click through and complete multi‑step tasks for you. Per OpenAI's own docs, Agent Mode can open tabs, navigate, and fill forms—an outcome‑first model of the web.

  • The backlash was immediate. Anil Dash called Atlas "anti‑web" for rendering AI summaries that look like pages without obvious pathways back. Wikipedia's Marshall Miller found that an 8% "rise" in traffic was actually bots evading detection—and after reclassification, human visits are down roughly 8% YoY. That's what happens when conversations and answers replace links: fewer impressions, fewer clicks, less feedback data. For the user, that's a huge gain in value and productivity. For websites, it's a loss, unless they migrate their business model from search to AI.

  • Cloudflare's Matthew Prince is pressing the U.K. CMA to force Google to unbundle its search and AI crawlers so publishers can block AI use without disappearing from search. That's the crux: if assistants own the session, who pays the sources? Prince's remedy may penalise Google, but in a world where AI is becoming the user interface, a wholesale migration of the paid‑link ecosystem to AI will be required if traffic is to hold up and grow.

2) Bubble talk vs. buildout—and why “Minsky” isn’t destiny.

  • Paul Kedrosky warns of a Minsky moment—credit migrating from "good" to "bad" projects via vendor financing and SPVs until the music stops. It's a valuable alarm: he has watched the 'circular' deals multiply and coverage ratios thin.

  • But this week's data argues that AI investment is funding real infrastructure, backed by real customers, growing demand, and thus revenue. Dwarkesh Patel shows NVIDIA's 2025 earnings could cover multiple years of TSMC's capex. Google's decade‑old TPUs are finally gaining outside traction. Anthropic locked a multi‑year Google Cloud chips pact precisely because compute is the scarce input to a booming service.

  • In other words, cash is coming from two external sources, not just accounting loops: investors (a16z lines up $10B across growth/AI/defense) and customers (GPU clusters, TPUs, cloud contracts with hard dollars attached). A Minsky moment is a theory of unstable credit regimes—not a synonym for “lots of spending.” The test is simple: are customers paying for capacity at rising scale? So far, yes.

3) The web’s economics must reprice—fast.

  • Fast Company is right: one generative answer compresses an entire results page of ad inventory. Google will adapt (it's already jamming ads into AI Overviews), but the pie gets sliced differently depending on which AI captures the bulk of primary consumer use.

  • The fix isn’t to ban AI answers; it’s to instrument them and include relevant links. The web needs AI to include it. But it also needs receipts: durable citations, usage‑based licensing, and verifiable payouts to knowledge origins. Regulators are already in “harms first” mode—the FTC’s staff post centers on fraud, surveillance, and discrimination. The more the assistant mediates reality, the more provenance, consent, and settlement become needed product features.

The infrastructure buildout will continue so long as the demand for ever smarter AI doesn't dissipate.

“We want to create a factory that can produce a gigawatt of new AI infrastructure every week.” — Sam Altman

There are things to look out for:

  • Atlas adoption: do users stay in the AI pane, and does Agent Mode work without brittle misfires?

  • Pay the sources: does the CMA force crawler unbundling—and do Reddit/Wikipedia‑style usage deals become standard?

  • Capex vs. revenue: do chip rental prices and utilization stay tight, validating the buildout—or does secondary GPU pricing sag?

  • Google's ad pivot: can "ads as answers" replace the link‑page cash cow without starving the open web? Or can OpenAI, Anthropic, and others build a link-based revenue model?

Bottom line: The internet’s UI is shifting from navigation to delegation. Will the money—and the credit—shift with it?

Essay

💸 The imminence of the AI bust

Exponential view • Azeem Azhar • October 18, 2025

Essay•AI•MinskyMoment•Hyperscalers•Capex


When does an AI boom tip into a bubble? Paul Kedrosky points to the Minsky moment—the inflection point when credit expansion exhausts its good projects and starts chasing bad ones, funding marginal deals with vendor financing and questionable coverage ratios. For AI infrastructure, that shift may already be underway; the telltale signs include hyperscaler capex outpacing revenue momentum and lenders sweetening terms to keep the party alive.

Paul makes a compelling case. We’ve entered speculative finance territory—arguably past the tentative stage—and recent deals will set dangerous precedents. As Paul warns, this financing will “create templates for future such transactions,” spurring rapid expansion in junk issuance and SPV proliferation among hyperscalers chasing dominance at any cost.

The pattern holds across history. Of the 21 investment booms I’ve looked at since 1790, 18 ended in a bust; funding quality drove roughly half of those collapses. Yet not all strain signals disaster—every investment requires leverage, from a mortgage to export financing. The question is whether we’re building productive capacity or inflating asset prices and shunting risk around.

For AI infrastructure, the warning signs are flashing: vendor financing proliferates, coverage ratios thin, and hyperscalers leverage balance sheets to maintain capex velocity even as revenue momentum lags. We see both sides—genuine infrastructure expansion alongside financing gymnastics that recall the 2000 telecom bust. The boom may yet prove productive, but only if revenue catches up before credit tightens. When does healthy strain become systemic risk? That’s the question we must answer before the market does.

This is why funding quality is one of the five key gauges we watch in our AI dashboard.

Read More

Thoughts on the AI buildout

Dwarkesh • Dwarkesh Patel • October 22, 2025

Essay•AI•Datacenters•CapEx•Nvidia

Thoughts on the AI buildout

Sam Altman says he wants to “create a factory that can produce a gigawatt of new AI infrastructure every week.”

What would it take to make this vision happen? Is it even physically feasible in the first place? What would it mean for different energy sources, upstream CAPEX in everything from fabs to gas turbine factories, and for US vs China competition?

These are not simple questions to answer. We wrote this blog post to teach ourselves more about them. We were surprised by some of the things we learned.

The fab CapEx overhang

With a single year of earnings in 2025, Nvidia could cover the last 3 years of TSMC’s ENTIRE CapEx.

TSMC has done a total of $150B of CapEx over the last 5 years. This has gone towards many things, including building the entire 5nm and 3nm nodes (launched in 2020 and 2022 respectively) and the advanced packaging that Nvidia now uses to make datacenter chips. With only 20% of TSMC capacity, Nvidia has generated $100B in earnings.

Suppose TSMC nodes depreciate over 5 years; this is enormously conservative (newly built leading-edge fabs are profitable for more than 5 years). On that schedule, roughly $30B of TSMC capex depreciates each year, and Nvidia's ~20% share of capacity corresponds to about $6B of it. That would mean that in 2025, NVIDIA will turn roughly $6B of depreciated TSMC capex value into $200B in revenue.
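
For readers who want to check that arithmetic, here is a back-of-the-envelope sketch in Python using only the figures quoted above; the variable names and the straight-line assumption are ours.

    # Back-of-the-envelope version of the "fab capex overhang" arithmetic above.
    # Dollar figures are in billions and come from the post; the straight-line
    # 5-year depreciation is the post's own conservative assumption.

    TSMC_CAPEX_5YR = 150.0        # total TSMC capex over the last 5 years ($B)
    DEPRECIATION_YEARS = 5        # conservative straight-line schedule
    NVIDIA_CAPACITY_SHARE = 0.20  # Nvidia's approximate share of TSMC capacity
    NVIDIA_2025_REVENUE = 200.0   # Nvidia's approximate 2025 revenue ($B)

    annual_depreciation = TSMC_CAPEX_5YR / DEPRECIATION_YEARS               # ~$30B/yr
    nvidia_depreciated_capex = annual_depreciation * NVIDIA_CAPACITY_SHARE  # ~$6B/yr
    revenue_per_capex_dollar = NVIDIA_2025_REVENUE / nvidia_depreciated_capex

    print(f"Depreciated TSMC capex attributable to Nvidia: ${nvidia_depreciated_capex:.0f}B/yr")
    print(f"2025 revenue per dollar of that capex: ~{revenue_per_capex_dollar:.0f}x")

On these numbers, each dollar of depreciated fab capex supports roughly $30 of 2025 NVIDIA revenue, which is the sense in which the post calls it an overhang.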

Further up the supply chain, a single year of NVIDIA’s revenue almost matched the past 25 years of total R&D and capex from the five largest semiconductor equipment companies combined, including ASML, Applied Materials, Tokyo Electron...

We think this situation is best described as a ‘fab capex’ overhang.

The reason we're emphasizing this point is that if you were to naively speculate about which upstream component would be the first to constrain long-term AI CapEx growth, you wouldn't start with copper wires or transformers; you'd start with the most complicated things humans have ever made: the fabs that make semiconductors. We were stunned to learn that the cost to build these fabs pales in comparison to how much people are already willing to pay for AI hardware!

Nvidia could literally subsidize entire new fab nodes if they wanted to. We don’t think they will actually directly do this (or will they, wink wink, Intel deal) but this shows how much of a ‘fab capex’ overhang there is.

Read More

Bubble-talk is breaking out everywhere

Ft • October 21, 2025

Essay•Geo Politics•Asset Bubbles•Central Banks•Market Sentiment


Overview

“Bubble-talk” is surfacing across markets as investors weigh stretched valuations against a still-supportive policy backdrop. Sentiment has split into two camps: one warns that frothy pricing and speculative behavior are proliferating; the other remains constructive, assuming that if conditions turn “really dicey,” policymakers will ride in as the cavalry to stabilize liquidity and growth. The tension between these views is shaping positioning, risk appetite, and the narratives investors tell themselves about what comes next.

Signals fueling bubble anxiety

  • Valuations in select corners of the market have expanded far faster than underlying cash flows, a classic sign of sentiment outrunning fundamentals. Pockets of exuberance often cluster around innovation themes, high-growth equities, and assets whose stories hinge on long-duration promises rather than near-term earnings.

  • Market leadership looks narrow in places, with outsized gains concentrated in a handful of benchmarks or sectors. Historically, narrow breadth can amplify drawdown risk when leadership falters.

  • Speculative behavior—rapid momentum-chasing, options activity that dwarfs cash volumes, and retail-led surges—tends to reappear late in cycles, contributing to gap risk when liquidity recedes.

  • A disconnect can emerge between softening real-economy indicators and buoyant asset prices, increasing the probability that a small shock (policy surprise, funding stress, geopolitical flare-up) catalyzes a larger repricing.

Why optimists still expect the cavalry

  • Many investors trust the now-familiar policy “reaction function”: if growth stalls or markets seize, central banks can pause or cut, while governments can deploy fiscal stabilizers or targeted backstops. That belief—sometimes labeled a “policy put”—tempers fear of severe left-tail outcomes.

  • Post-crisis playbooks are well-established: liquidity facilities, balance-sheet tools, and emergency lending channels can be reactivated quickly, while supervisory flexibility can reduce immediate forced selling.

  • Structural demand from pensions, insurers, and systematic allocators provides a persistent bid for high-quality collateral, helping cushion shocks in core rates and investment-grade credit.

Risks to the cavalry narrative

  • Inflation constraints can limit the speed and scale of monetary easing; easing into sticky inflation risks unanchoring expectations, so central banks may tolerate more market volatility than investors assume.

  • Policy lags matter: by the time help arrives, earnings may have reset, credit spreads widened, and funding conditions tightened, locking in lower equilibrium valuations.

  • Moral hazard and political constraints can curb fiscal backstops, especially if support is seen as subsidizing risk-taking rather than protecting the real economy.

  • Liquidity is not the same as solvency: targeted liquidity can’t fix broken business models or overlevered balance sheets.

Practical implications for positioning

  • Focus on resilience: prioritize balance-sheet strength, free cash flow visibility, and pricing power that can withstand slower nominal growth.

  • Diversify liquidity sources: stagger maturities, stress-test collateral needs, and avoid reliance on a single funding channel.

  • Reassess risk concentration: measure exposures to common macro factors—real yields, dollar strength, and volatility regimes—that can dominate in a drawdown.

  • Prepare for bimodal outcomes: build playbooks for both soft-landing and harder-landing paths, including triggers that expand hedges or redeploy cash into dislocations.

Key takeaways

  • Bubble discourse is intensifying because price gains in select assets outpace fundamentals while market breadth narrows.

  • Optimists assume policy support will prevent a severe crash; that safety net is real but not unconditional.

  • The central tension: markets priced for good news versus policymakers constrained by inflation and politics.

  • Risk management should emphasize liquidity, quality, and scenario discipline rather than attempts to time a top.

  • If the cavalry arrives, it may stabilize conditions—but not necessarily preserve today’s valuations.

Read More

ChatGPT’s Atlas: The Browser That’s Anti-Web

Anildash • Anil Dash • October 21, 2025

Essay•AI•Technology•Browsers•Privacy


OpenAI, the company behind ChatGPT, released their own browser called Atlas, and it actually is something new: the first browser that actively fights against the web. Let’s talk about what that means, and what dangers there are from an anti-web browser made by an AI company — one that probably needs a warning label when you install it.

The problems fall into three main categories:

  • Atlas substitutes its own AI-generated content for the web, but it looks like it's showing you the web.

  • The user experience makes you guess what commands to type instead of clicking on links.

  • You're the agent for the browser; it's not being an agent for you.

By default, Atlas doesn’t take you to the web. When I first got Atlas up and running, I tried giving it the easiest and most obvious tasks I could possibly give it. I looked up “Taylor Swift showgirl” to see if it would give me links to videos or playlists to watch or listen to the most popular music on the charts right now; this has to be just about the easiest possible prompt.

The results that came back looked like a web page, but they weren’t. Instead, what I got was something closer to a last-minute book report written by a kid who had mostly plagiarized Wikipedia. The response mentioned some basic biographical information and had a few photos. Now we know that AI tools are prone to this kind of confabulation, but this is new, because it felt like I was in a web browser, typing into a search box on the Internet. And here’s what was most notable: there was no link to her website.

I had typed “Taylor Swift” in a browser, and the response had literally zero links to Taylor Swift’s actual website. If you stayed within what Atlas generated, you would have no way of knowing that Taylor Swift has a website at all.

Unless you were an expert, you would almost certainly think I had typed in a search box and gotten back a web page with search results. But in reality, I had typed in a prompt box and gotten back a synthesized response that superficially resembles a web page, and it uses some web technologies to display its output. Instead of a list of links to websites that had information about the topic, it had bullet points describing things it thought I should know. There were a few footnotes buried within some of those responses, but the clear intent was that I was meant to stay within the AI-generated results, trapped in that walled garden.

During its first run, there’s a brief warning buried amidst all the other messages that says, “ChatGPT may give you inaccurate information”, but nobody is going to think that means “sometimes this tool completely fabricates content, gives me a box that looks like a search box, and shows me the fabricated content in a display that looks like a web page when I type in the fake search box”.

And it's not like the generated response is even that satisfying. The fake web page had no information newer than two or three weeks old, reflecting the fact that LLMs rely on whatever they've most recently been able to crawl (or gather without consent) from the web. None of today's big AI platforms update nearly as often as conventional search engines do.

Keep in mind, all of these shortcomings are not because the browser is new and has bugs; this is the app working as designed. Atlas is a browser, but it is not a web browser. It is an anti-web browser.

Read More

The short half-life of friendship in the AI era

Cautious optimism • Alex Wilhelm • October 24, 2025

Essay•AI•Technology•Business•Politics

The short half-life of friendship in the AI era

The shifting alliances in the AI industry highlight the transient nature of strategic partnerships in a rapidly evolving technological landscape. Anthropic, which had previously designated Amazon as its “primary cloud provider” and “primary training partner” following an $8 billion investment, has now announced a major expansion of its relationship with Google Cloud. The company plans to utilize “up to one million TPUs” in a deal valued at “tens of billions of dollars” that will bring “well over a gigawatt of capacity online in 2026.”

This move reflects a broader pattern of AI companies diversifying their infrastructure dependencies as they scale. The partnership between OpenAI and Microsoft, once seemingly inseparable, has similarly evolved with Microsoft now developing its own AI models and consumer products that compete directly with OpenAI’s offerings. These shifting dynamics suggest that as AI companies gain financial strength and market position, the power balance in their relationships with cloud providers becomes more fluid, leading to more complex, multi-vendor strategies.

AI Integration in Gaming

The gaming industry represents another frontier for AI adoption, with Electronic Arts (EA) announcing a partnership with Stability AI to “co-develop transformative AI models, tools, and workflows.” Stability AI, known for its Stable Diffusion models capable of generating images, video, audio, and 3D assets, could significantly impact game development processes. While AI could enhance certain gaming elements like more natural NPC interactions, the integration of AI into creative processes raises questions about the future of artistic development in the industry.

Broader Political and Regulatory Concerns

Beyond technology partnerships, the analysis identifies several concerning political developments affecting business and media landscapes. A dispute between SpaceX’s Elon Musk and Transportation Secretary Sean Duffy over NASA’s Artemis III mission timeline has revealed deeper political considerations, with reports suggesting White House concern that the conflict could affect Musk’s support in upcoming midterm elections.

In media, there are indications of potential regulatory pressure to transfer ownership of CNN to the Ellison family, who already control CBS News and have installed administration-aligned leadership. This follows a pattern described as “the Orbanization of the U.S. media,” where regulatory power may be used to direct media ownership to political allies.

The recent pardon of Binance founder Changpeng “CZ” Zhao raises additional concerns, particularly given previous reports that the Trump family had discussed acquiring a stake in Binance’s U.S. operations and the complex financial arrangements involving the Trump-affiliated USD1 stablecoin.

These developments collectively suggest a blurring of lines between business interests and political power, with limited corporate resistance to what appears to be an increasingly transactional approach to governance. The absence of strong business counterweights to these trends creates conditions that could lead to oligarchic structures and restricted press freedom, representing significant challenges to democratic norms and market integrity.

Read More

Builders, Solvers and Cynics

A16z • Alex Danco • October 24, 2025

Essay•Venture

Builders, Solvers and Cynics

Reactionary anti-tech sentiment is a real and important force in the world, and possibly an under-discussed topic. Not in the sense that it should get more airtime (it shouldn’t); but in the sense that it’s interesting. Part of being a good citizen is being genuinely open-minded about your opposition: where are they coming from? What value systems or psychological drives are running the show over there?

A classic answer would be, "There is a perfect book for this. It's called A Conflict of Visions by Thomas Sowell, and it's one of our favorite books we recommend at a16z." Sowell compares two distinct kinds of people, which I call "Builders" and "Solvers" for shorthand, with completely different value systems, patterns of action, and concepts of virtue. And he explains why "Solvers" (Planners, regulators, social architects, economic dirigistes…) are the natural opposition to "Builders" (Founders, engineers, and constraint-respecters), and always have been.

However, forty years after Sowell wrote A Conflict of Visions, I think a third belief system in society is rising to prominence, driving a big share of anti-progress reaction. We are now in a three-player game, with a new “nonaligned group” in society and on the internet. That group is Cynics.

Cynics and Solvers both contribute to anti-tech sentiment, but in different ways. Cynics are motivated by two things. First, to “Not Fall For It”, and avoid appearing gullible at all cost. And second, to stamp out inauthenticity, particularly anything new or unresolved in the world. These motives are classic projection (as Freud defined it a century ago), and once you realize this, their behaviour makes more sense.

The cynics have a rich cultural canon: from Diogenes and the Greek cynics, to smart pieces of culture like South Park and The Sopranos that have recently contributed to the belief system. The three groups have different concepts of what it means to act honestly. To builders, honesty means fidelity. To solvers, honesty means sincerity. To cynics, honesty means authenticity. These diverging concepts of honesty matter a lot, because they’re how culture hardens into social values.

Cynics are close to technologists, because they’re both very online. But they’re also enemies of technologists, because of how much they hate progress-in-flight. Cynicism is dangerous. And a worrisome trend right now is the Solvers and Cynics finding shared resentment towards the builders, and therefore common cause.

Sowell’s 1987 book defines two mindsets, which may initially feel non-obvious: the “Constrained Vision” and the “Unconstrained Vision”. The “Constrained Vision” is the mindset of the builders, and it’s named that way because core to this mindset is appreciating the constraints that have evolved in the world over repeated iterations of evolutionary trial and failure. “Wisdom” is something we accumulate over time: like family norms, property rights, or engineering practices, which invisibly guide us through a complex world.

Andreessen Horowitz unreservedly endorses this “Constrained” vision. This can surprise outsiders, who think of startups’ mission as disrupting existing systems. But the way we build companies and technology is deeply respectful of the embedded wisdom of engineering practices, Silicon Valley company building norms, and the belief that progress comes at the margin.

The Unconstrained Vision is a different idea of progress, which is, “Someone really ought to solve all of the problems.” This idea of wisdom puts much less stock into the way things have worked; and more weight on the judgement of anointed individuals who can gather the context and understanding they need to take sweeping action that remakes the world. This version of “Wisdom” is something we attain by casting as wide a net as possible, obtaining a mandate for action, and making the most enlightened decisions possible.

Read More

Dominic Cummings' new nerd army: Britain's Young Turks are looking for growth

Unherd • Wessie du Toit • 25 Oct 2025


Staff at London’s O2 Indigo club, a glitzy venue for live music and comedy, must have raised an eyebrow on Thursday night as the space filled with software engineers, start-up entrepreneurs, lawyers and civil servants, young men with neat haircuts and ironed shirts under their fleece jackets. Nor would they have expected this sedate audience would begin whooping when one of the speakers, the venture capitalist Matt Clifford, reeled off a long list of British discoveries and inventions, including gravity, the theory of evolution, computer science and the postal service. But much is unusual about Looking for Growth, the nascent movement that wants to reverse Britain’s decline through a peculiar blend of activism and policy wonkery. This was its largest event to date.

Clifford’s speech was genuinely rousing, even for someone who is sceptical of the message, borne forth in Looking for Growth’s name, that economic growth is the straightforward key to most of the country’s problems. As Clifford pointed out, Britain was once the most prosperous nation in the world. But its innate genius for innovation has been tragically stifled in the 17 years since the Great Financial Crisis, “the biggest economic disaster in the history of our country”. We still have it in us, though, to Make Britain Rich Again. By the end of the evening, as one speaker after another echoed this call, it had almost come to feel like a patriotic duty.

I heard various motives for being there among the 1,300-strong crowd. One young woman hoped to meet people of a “centre or centre right” persuasion, a group she felt was significant but voiceless among under 35s. A group of muscle-bound blokes was mainly interested in the star speaker, the political strategist and blogger Dominic Cummings, and his insights regarding the German master of realpolitik, Otto von Bismarck. But most said they were frustrated with the malaise they felt around them every day, and responsive to Looking for Growth’s message, spread through X and Instagram, that they did not have to passively accept it. A TfL employee told me that London’s rail infrastructure is disintegrating as Britain gets “poorer year after year”. A soon-to-be-qualified architect complained that “it’s a bureaucratic nightmare to lay a single brick in this country”.

This mood of exasperation was amply reflected onstage. Marc Warner, founder of an AI firm, described how he had helped the government create a world-leading system for testing wastewater to monitor Covid, only to be locked out of the subsequent procurement process. According to Warner, Britain now trails Malawi in this particular field. All the infamous cases of planning absurdity, from the HS2 bat tunnel to the 350,000-page application for the Lower Thames Crossing, were repeatedly wheeled out to be pelted with rotten fruit. By the time Cummings came on, the audience was ready for a characteristically dramatic assessment. Britain, he said, has reached a treacherous point in the lifecycle of modern states, where “a gap opens up between the elite and its institutions and reality”, a process of ideological self-delusion which usually grows worse until “the elite falls into the gap”.

The genius of Looking for Growth has been to create a sense of grassroots energy around a programme that is really focused, like Cummings’ attacks, on Whitehall and Westminster. Its recipe for achieving growth is planning reform, a large-scale build out of housing and infrastructure, and a supercharged tech sector centred on artificial intelligence.

Read More

Big Tech’s Predatory Platform Model Doesn’t Have to Be Our Future

Tim Wu • NYT • 25 Oct 2025

There was a time, back in the early 2000s, when everyone seemed to think that the internet would make everybody rich.

The vision was compelling, if a little naïve. The internet, optimists argued, would allow individuals and small sellers to reach a global market of customers at low cost and without the need for big retailers. Increased connectivity would also make it easier for people to find work, invest money and learn new skills. Thanks to platforms like eBay, the future belonged to the Davids, not the Goliaths. “Small is the new big” was a popular slogan during those heady years.

The prediction turned out to be wrong. Yes, platforms like Amazon and Google have generated immense wealth and transformed society. But the money and power have not been broadly distributed. Instead, the platforms have captured the lion’s share for themselves, leading to concentrations of wealth that hark back to the Gilded Age. The Davids of the world ended up working hard to make a new set of Goliaths rich.

But we can still recover that early optimism and promise of opportunity. While we can’t start over from scratch, we can — with the right laws and policies — begin to reclaim the potential of the internet-based economy, shifting its center of gravity to encourage and reward the activities and innovations of the many instead of the few. This is a prescription for an economy that is fairer — and more dynamic, too.

Read More

AI

Could AI help identify skill in fund managers?

Ft • October 18, 2025

AI•Data•Fund Managers


Overview

The piece argues that as a market bubble builds, investors face a growing challenge: separating luck and momentum from genuine skill. It highlights emerging research that uses data-driven and AI-enabled techniques to identify fund managers who create “fundamental value” rather than riding speculative waves. The core message is that better tools are improving the odds of finding managers with repeatable, process-driven edge, even when broad markets feel frothy.

Why this matters in a bubble

In late-cycle or euphoric phases, simple exposure to hot segments can mask deficiencies in process. Nearly everyone looks smart when prices are inflating. The article contends that distinguishing true skill from beta and crowd-following becomes most valuable precisely when broad gains tempt allocators to relax discipline. By focusing on fundamental value creation, allocators can avoid the classic pitfall of funding performance that was largely the result of favorable tides.

What “fundamental value” looks like

The discussion frames fundamental value as returns linked to clear, analyzable drivers: cash-flow growth, balance-sheet strength, pricing power, industry structure, and capital allocation. Managers who generate value this way tend to display:

  • Consistency across changing regimes, not just in momentum phases.

  • Transparent linkages between thesis and subsequent operational results.

  • Risk-adjusted outcomes that are not fully explained by common factors.

  • Evidence of situational awareness: trimming exposure as narratives detach from fundamentals.

How AI and new research help

Recent analytical advances aim to attribute returns more precisely and test for repeatability. Techniques emphasized include:

  • Factor- and regime-aware attribution that separates idiosyncratic alpha from style winds.

  • Trade- and thesis-level auditing that links entry/exit decisions to the evolution of fundamentals.

  • Text and signal analysis of manager communications to detect process discipline (e.g., consistency between stated edge and actions).

  • Out-of-sample validation and cross-cycle testing to assess whether results persist beyond a single favorable period.

Together, these methods reduce the risk of confusing market exposure with manager skill, especially when prices are levitating on narrative rather than earnings.
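
As a concrete illustration of one such method, here is a minimal factor-attribution sketch in Python: regress a manager's returns on common factor returns and read the intercept as alpha. The return series below are synthetic placeholders, not data from the article.

    import numpy as np

    # Minimal sketch: regress a manager's returns on common factor returns
    # to separate factor exposure ("style winds") from idiosyncratic alpha.
    rng = np.random.default_rng(0)

    n_months = 60
    # Synthetic monthly factor returns (e.g. market, value, momentum) -- placeholders.
    factors = rng.normal(0.0, 0.03, size=(n_months, 3))
    true_betas = np.array([1.1, 0.2, -0.3])
    true_alpha = 0.002  # 20 bps/month of genuine skill in this toy example

    fund_returns = true_alpha + factors @ true_betas + rng.normal(0.0, 0.01, n_months)

    # OLS with an intercept: the intercept estimate is the manager's alpha.
    X = np.column_stack([np.ones(n_months), factors])
    coeffs, *_ = np.linalg.lstsq(X, fund_returns, rcond=None)
    est_alpha, est_betas = coeffs[0], coeffs[1:]

    factor_part = factors @ est_betas
    r_squared = 1 - np.var(fund_returns - est_alpha - factor_part) / np.var(fund_returns)

    print(f"Estimated monthly alpha: {est_alpha:.4f} (true: {true_alpha})")
    print(f"Estimated factor betas:  {np.round(est_betas, 2)}")
    print(f"Share of return variance explained by factors: {r_squared:.0%}")

Real attribution stacks layer regime conditioning and out-of-sample validation on top, but the intercept-versus-betas decomposition is the core idea.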

Signals allocators can examine

The article highlights practical diagnostics that AI-assisted research can surface:

  • Thesis-to-outcome alignment: Did earnings, margins, or unit economics improve as predicted?

  • Variance decomposition: How much of the return stems from factors versus stock-specific drivers?

  • Behavior under stress: Do position sizes, hedges, and cash levels reflect risk awareness when volatility spikes?

  • Turnover and holding-period discipline: Are changes consistent with a long-term process, not short-term chasing?

  • Post-mortems and learning loops: Evidence that mistakes lead to process adjustments.

Risk controls and governance

Even with better tools, guardrails matter. Key practices include:

  • Triangulating multiple attribution frameworks to avoid overfitting.

  • Ensuring data quality and independence of validation.

  • Watching for pro-cyclical selection bias during bubbles.

  • Aligning incentives so managers are rewarded for process quality, not just recent performance.

Implications for investors and managers

For allocators, the message is to lean on richer diagnostics rather than headline returns, upgrade due diligence with AI-supported analyses, and be cautious about managers whose results closely mirror speculative segments. For managers, the path forward is to document decision processes, tie positions to measurable fundamental milestones, and demonstrate adaptability across regimes. As markets grow frothier, the premium on verifiable, repeatable skill rises—making rigorous attribution and transparent process the differentiators that outlast the bubble.

Key takeaways

  • In bubble-like conditions, it is harder—but also more important—to separate true skill from momentum.

  • AI-enabled research can sharpen attribution, test repeatability, and expose process discipline.

  • Focus on fundamental drivers, behavior in stress, and out-of-sample persistence rather than recent returns alone.

  • Strong governance and validation are essential to avoid overfitting and narrative bias.

  • Managers who can connect theses to realized fundamentals and maintain risk-aware discipline are most likely to produce durable value.

Read More

Andrej Karpathy Breaks Down the 2025 State of AI: 12 Things Founders & VCs Must Know

Theaiopportunities • October 19, 2025

AI•Tech•AI Agents


Overview

The piece distills Andrej Karpathy’s 2025 perspective on where AI actually is versus the hype, offering 12 takeaways for founders and investors. His core message: agents won’t become dependable “coworkers” in 2025; they are a decade-scale engineering program that requires memory, robust multimodal perception, continual learning, and reliable computer-use/action stacks. Progress will be cumulative and system-level, not a single breakthrough. For builders and VCs, the edge shifts from chasing bigger models to assembling better cognitive systems, data pipelines, and process supervision.

From “Year of Agents” to “Decade of Agents”

  • Agents remain prototypes, not production coworkers. Karpathy argues that without persistent memory, grounded perception, and tools for safe, continuous action, agents will stay brittle.

  • Implication: prioritize infrastructure—memory stores, tool-use frameworks, long-horizon task management—over flashy demos. Fund roadmaps measured in years, not quarters.

Software 1.0 → 2.0 → 3.0

  • Framing: Software 1.0 (handwritten code), 2.0 (NN weights), 3.0 (LLM + natural-language interfaces).

  • Lesson: representation preceded agency. LLM-era “Software 3.0” redefines programming as shaping priors, context, and processes; full agents should follow once representation and reasoning are mature.

“Ghosts, Not Animals”

  • Analogy: we’re not evolving embodied animals with instincts; we’re training data-driven “ghosts” that simulate behavior without innate embodiment.

  • Expectation-setting: these systems can reason but lack instincts, grounding, and feelings, which limits autonomy.

Knowledge vs. Intellect in Pretraining

  • Pretraining adds facts and builds a “cognitive core.” The latter—reasoning and abstraction—drives generality.

  • Founder takeaway: don’t over-index on encyclopedic memory; invest in mechanisms that strengthen abstraction, planning, and transfer.

Context Window = Working Memory

  • Weights act like compressed long-term memory; the context window is working memory where reasoning unfolds.

  • Practical design: maximize rich, task-relevant context (documents, logs, states) rather than relying on parametric recall. Retrieval and chunking quality materially affect outcomes.
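
A toy sketch of that context-assembly idea, assuming a naive bag-of-words scorer; production systems would use embeddings and a vector store, and the documents below are invented for illustration.

    from collections import Counter

    # Toy sketch: chunk documents, score chunks against the query, and pack the
    # best ones into the model's context window (its working memory).

    def chunk(text: str, size: int = 40) -> list[str]:
        words = text.split()
        return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

    def score(query: str, passage: str) -> float:
        q, p = Counter(query.lower().split()), Counter(passage.lower().split())
        overlap = sum((q & p).values())  # shared word count
        return overlap / (len(passage.split()) ** 0.5 + 1e-9)

    def build_context(query: str, documents: list[str], budget_words: int = 120) -> str:
        chunks = [c for doc in documents for c in chunk(doc)]
        ranked = sorted(chunks, key=lambda c: score(query, c), reverse=True)
        picked, used = [], 0
        for c in ranked:
            if used + len(c.split()) > budget_words:
                break
            picked.append(c)
            used += len(c.split())
        return "\n---\n".join(picked)

    docs = [
        "Server logs from last night show repeated timeouts on the checkout service.",
        "The design doc explains how the checkout service retries failed payments.",
    ]
    print(build_context("why is checkout timing out", docs))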

Only Part of the “Brain” Exists

  • Today’s stack approximates cortex-like pattern recognition and prefrontal-like planning traces but lacks analogues to hippocampus (consolidation), amygdala (instincts), and cerebellum (skill coordination).

  • Product implication: build external modules—long-term memory, safety/priority heuristics, skill libraries—to approximate missing functions.

Build Over Prompt

  • Karpathy’s bias is to code systems to understand them: “If I can’t build it, I don’t understand it.”

  • Near-term reality: code models are great for boilerplate, weak on novel architecture. Use “autocomplete” to accelerate humans-in-the-loop, not to replace system design.

RL Is “Terrible”—But Useful

  • Critique: classic RL’s sparse, terminal rewards are inefficient and miscredit steps along the way.

  • Direction: move toward dense, stepwise feedback and interpretable trajectories that mirror human learning: reflect, localize errors, and adjust.

Process-Based Supervision and Reflection

  • Next breakthrough: systems that review their own chains-of-thought, self-correct, and generate synthetic training signals while preserving entropy.

  • Risk noted: silent entropy collapse into repetitive, low-information patterns. Design iterative workflows—plan, act, reflect, revise.
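
A minimal sketch of that plan-act-reflect-revise loop. The llm function and prompts are placeholders rather than any particular API; a real implementation would parse the plan into separate steps, stop when the critique finds no error, and log each step for process-level supervision.

    # Toy plan-act-reflect-revise loop. `llm` is a stand-in for any model call.

    def llm(prompt: str) -> str:
        # Placeholder so the sketch runs end to end without a real API.
        return f"[model response to: {prompt[:40]}...]"

    def plan(task: str) -> list[str]:
        return [llm(f"Break this task into numbered steps: {task}")]

    def act(step: str) -> str:
        return llm(f"Carry out this step and show your work: {step}")

    def reflect(step: str, result: str) -> str:
        return llm(f"Critique this attempt at '{step}'; point to the exact error, if any: {result}")

    def solve(task: str, max_revisions: int = 2) -> list[tuple[str, str, str]]:
        transcript = []
        for step in plan(task):
            result = act(step)
            for _ in range(max_revisions):
                critique = reflect(step, result)
                transcript.append((step, result, critique))
                result = llm(f"Revise the attempt using this critique: {critique}")
            transcript.append((step, result, "final"))
        return transcript

    for entry in solve("Summarize the three depreciation scenarios in the memo"):
        print(entry)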

The Virtue of Forgetting

  • Humans generalize because we forget; models that memorize everything can become rigid.

  • Engineering lever: controlled forgetting/regularization to maintain flexibility and encourage abstraction beyond rote recall.

Data Quality > Endless Scale

  • The web is noisy; yet large models still perform. Imagine a smaller (~1B parameter) “cognitive core” trained on curated, high-signal data.

  • Startup edge: differentiated, clean corpora and careful curricula may outcompete raw scale on commodity data.

We’re Early—Expect Compounding, Not Revolution

  • The architectural center (transformers) likely persists this decade, refined by better data, memory, modularity, and control loops.

  • Investment thesis: allocate capital to teams building memory layers, process supervision, retrieval/tooling, and eval stacks; value accrues to system integration and ops discipline.

Key Takeaways for Builders and VCs

  • Treat agents as systems engineering: memory, retrieval, tools, supervision, and safety.

  • Optimize for context richness and stepwise feedback; implement reflection loops to prevent collapse.

  • Prioritize curated data and curricula; invest in “cognitive cores” over brute-force parameter counts.

  • Use AI to augment builders, not replace them; maintain human control over novel system design.

  • Bet on compounding improvements and infrastructure moats rather than near-term AGI leaps.

Selected Quotes

  • “This is the decade of agents… We have prototypes, but not coworkers.”

  • “We’re not building animals; we’re building ghosts.”

  • “If I can’t build it, I don’t understand it.”

Read More

Reid Hoffman on AI, Consciousness, and the Future of Labor

Youtube • a16z • October 20, 2025

AI•Work•Consciousness•Future Of Work•Automation


Overview

A wide-ranging conversation explores how accelerating AI progress is reshaping work and society, with an emphasis on the distinction between powerful pattern-learning systems and the concept of consciousness. The discussion frames AI as a general-purpose technology comparable to earlier industrial revolutions, arguing that its near-term impact will come from augmenting human capabilities, reorganizing workflows, and enabling new products and services rather than replacing human judgment outright. It emphasizes pragmatic approaches to adoption—deploy AI where it reduces friction, expands access, or compounds knowledge—while encouraging leaders to set guardrails that preserve agency and accountability.

AI vs. Consciousness

The speakers differentiate between intelligence as performance on tasks and consciousness as subjective experience. Current systems are portrayed as sophisticated optimizers that can reason over text, code, and images, but without evidence of sentience. The practical takeaway is to center evaluation on reliability, calibration, and alignment with human objectives, not metaphysical debates. Treat models as powerful tools whose outputs require human verification, context, and ethical framing.

Implications for Labor

The future of labor is presented as “human-in-the-loop by default.” AI agents draft, summarize, translate, analyze, and simulate, while people define goals, review edge cases, and make final decisions. Functions most affected include research, customer support, marketing, software development, and operations. Productivity gains are expected from reducing time-to-first-draft, automating repetitive steps, and surfacing insights faster. Rather than net job loss, the conversation anticipates role shifts: tasks are unbundled, and new categories emerge around prompt design, workflow orchestration, and AI product management.

Skills, Education, and Adoption

Recommended skills include problem decomposition, data literacy, critical reading of model outputs, and iterative prompting. Organizations should build AI “playbooks” with clear evaluation criteria (quality, latency, cost), red-team practices, and escalation paths when confidence is low. Upskilling strategies favor rapid, project-based learning—start with a single high-leverage workflow, measure impact, then scale. Metrics that matter: cycle time reduction, error rates after human review, customer satisfaction, and incremental revenue from AI-enabled features.

Governance and Ethics

Practical governance stresses provenance tracking, privacy-by-design, and domain-specific model constraints. Transparency about when and how AI is used helps preserve trust with users and employees. Policymaking is encouraged to be pro-innovation yet risk-aware—focus on clear liability, safety testing for high-stakes use, and incentives for open evaluation. The overarching ethos: use AI to widen opportunity and dignity at work, not to deskill or obscure responsibility.

  • Human-AI collaboration will dominate near-term value creation.

  • Evaluate systems on reliability and alignment, not presumed consciousness.

  • Start with concrete workflows; measure impact; scale deliberately.

  • Prioritize governance: provenance, privacy, and transparent user experience.

  • Expect role evolution and new job categories alongside productivity gains.

Read More

Bubble, Bubble, Toil and Trouble

Thezvi • Zvi Mowshowitz • October 20, 2025

AI•Funding•Valuations

Bubble, Bubble, Toil and Trouble

Core Question: Are we in an “AI bubble,” and what does that even mean?

The piece argues that “bubble” talk has become a social signal more than a diagnosis. If “bubble” is defined narrowly as a significant and sustained drawdown (e.g., a 20% Nasdaq decline over six months), that outcome is plausible in markets generally—even without extreme mispricing. But if “bubble” means 2000-style dot-com valuations utterly disconnected from discounted future cash flows, the author says: no. The market can fall without prior prices being absurd. The author stresses that labeling post-hoc drawdowns as “bubbles” is uninteresting and confuses the real question: are current AI-linked valuations broadly incompatible with reasonable cash-flow expectations?

Why people say “bubble” now

  • Surveys: A Bank of America poll reportedly finds a record share of global fund managers calling AI stocks a bubble; 54% now view tech as too expensive, a sharp mood shift from the prior month.

  • Sentiment cascades: The market hasn’t slid meaningfully on this narrative alone; modest dips were tied to tariff headlines or the “DeepSeek moment.”

  • Common knowledge vs. action: It’s possible for everyone to say “bubble” while continuing to buy—echoing late dot-com behavior. Yet “who” says it matters: industry insiders calling bubble is stronger evidence than big institutions doing so. The author’s quick poll found essentially no difference between AI workers (42.5%) and others (41.7%) calling it a bubble.

Where the real risks are

  • Steamrollers vs. picks-and-shovels: Companies likely to be “steamrolled” by frontier labs (e.g., those without defensible moats) may underperform as a basket. Conversely, frontier labs and infrastructure “picks-and-shovels” look more resilient—but not at “free money” entry points; investors need actual theses.

  • Industrial bubble dynamics: Per Noah Smith, AI could crash not because it fails but because it disappoints optimistic timelines; even mild disappointment can break momentum and trigger a larger repricing.

  • Geopolitics and supply chains: Tariffs, Taiwan risk, or an anti-AI backlash could compress multiples and revenues even if the technology keeps advancing.

  • Profit capture vs. utility: Matthew Yglesias notes transformational tech need not yield high-margin incumbents (jetliners vs. Home Depot analogy). AI could be huge yet less profitable for providers than bulls expect.

Counterpoints: why it may not be a bubble

  • There’s a “there” there: Unlike pure speculative manias, AI already delivers value and is propping up growth.

  • Valuations in context: Nasdaq forward P/E ~28x and MAG7 ~32x are elevated but far below 2000’s >70x. With 15–25% YoY revenue growth (ex-NVIDIA) and heavy near-term capex depressing earnings, these multiples aren’t obviously extreme.

  • Spending scale: Estimates like ~$1,800 per American invested in AI sound large, but the author argues many current use cases justify that outlay on absolute—not relative—benefit grounds.

Revenues and growth trajectories

  • Epoch AI highlights OpenAI’s projection from ~$10B to ~$100B revenue within three years—historically unprecedented. Only a handful of U.S. firms have gone from <$1B to >$10B in three years; even fewer reached $100B within a decade, and none in six years. The author thinks OpenAI is intentionally sandbagging because stronger claims would be disbelieved or litigated; baseline expectation is that OpenAI and peers outperform current projections.

Capex and depreciation: a contested hinge

  • Hyperscaler capex reportedly near 22% of revenue (~$320B in 2025 across the big four), outpacing revenue growth. Bears argue GPU lifecycles are shortening (annual NVIDIA generations), so extending depreciation schedules to 5–6 years masks true economics; re-basing to 2–3 years would dent EPS and market caps (a rough sensitivity sketch follows after this list).

  • The author counters: new chips don’t instantly obviate old ones as long as demand exceeds supply. If H100s soon had near-zero marginal value, either we’re in a 2028 “compute singularity” (unbounded scale with enough power) or demand has vanished—both implausible. Current evidence points to undercapacity (scramble for all chips, rising rental prices for older GPUs). Accounting optics could wobble, but solvency/liquidity isn’t the central concern.
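
To make the bears' re-basing argument concrete, here is a rough sensitivity sketch in Python. The ~$320B capex figure is quoted above; treating the full amount as straight-line-depreciated hardware is our simplifying assumption, for illustration only.

    # Rough sketch: how re-basing GPU depreciation schedules changes the annual
    # expense hitting hyperscaler income statements.

    CAPEX_2025_BILLIONS = 320.0

    for useful_life_years in (6, 5, 3, 2):
        annual_expense = CAPEX_2025_BILLIONS / useful_life_years
        print(f"{useful_life_years}-year schedule: ~${annual_expense:.0f}B/yr of depreciation")

    # Moving from a 6-year to a 3-year schedule roughly doubles the annual charge
    # (~$53B -> ~$107B here), which is the EPS sensitivity bears point to; the
    # author's counter is that demand keeps older chips earning regardless.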

Time-sensitive aside on governance

The author urges donations to Alex Bores (champion of New York’s RAISE Act) in his first 24 hours of a congressional run, arguing Congress needs informed AI safety champions. The note will be removed after the 24-hour window.

What a drawdown would mean

  • A 20%+ AI-led decline in the next few years is quite possible; if it happens without a fundamentals collapse, the author would likely buy more.

  • Distinguish market plumbing from tech trajectory: even a leveraged unwind or confidence shock (like the DeepSeek reaction) says little about whether AI progress is slowing toward AGI/ASI timelines.

Key takeaways

  • Bubble labeling depends on definitions: a cyclical drawdown is plausible; a 2000-style cash-flow disconnect is not the base case.

  • Real risks lie in momentum breaks, geopolitics, profit capture, and accounting optics—not in AI’s lack of real value.

  • Valuations are high but not absurd; capex is massive but rational if revenue growth and demand continue.

  • Expect uneven returns: some application layers get steamrolled; infra and leading labs remain best positioned.

Read More

OpenAI Unveils Atlas Web Browser Built to Work Closely With ChatGPT

Nytimes • Cade Metz • October 21, 2025

AI•Tech•Atlas

OpenAI Unveils Atlas Web Browser Built to Work Closely With ChatGPT

Overview

A new web browser named Atlas is being introduced with the explicit goal of working closely with OpenAI products such as ChatGPT. The core idea is a browsing experience where the assistant is not an add‑on but a native capability: pages, tabs, and tasks become inputs that a conversational agent can understand, navigate, and act upon. Instead of copying text into a chatbot or juggling extensions, Atlas appears oriented toward making the chat interface the center of how people discover, read, and use the web.

What “works closely” with ChatGPT likely means

  • Integrated assistant presence: an always‑available side panel or in‑page overlay that can summarize, translate, compare, draft, and explain content without leaving the page.

  • Contextual awareness: the model can see the active page, selected text, and possibly prior tabs or sessions to generate more precise answers.

  • Action orchestration: turning natural language into multi‑step workflows—e.g., “find the best sources on this topic, extract the key points, and draft an email.”

  • Cross‑product handoff: seamless movement between ChatGPT, document or code tools, and the browser, avoiding repetitive uploads or copy‑paste.

  • Voice and multimodal inputs: asking questions or directing actions via speech or images during browsing.

User experience and productivity

Atlas’s tight coupling with ChatGPT suggests a shift from search‑first to task‑first browsing. Rather than querying a search engine and manually sifting results, the user asks for an outcome, and the assistant navigates pages, extracts relevant information, and presents condensed answers with citations. This could compress workflows for research, shopping, travel planning, and customer support, while lowering the friction of moving between web content and creation tools (email drafts, spreadsheets, notes).

Trust, safety, and privacy considerations

Embedding an AI assistant into core browsing raises several important questions:

  • Data exposure: which page content is sent to the assistant, under what conditions, and with what retention policies?

  • Permission boundaries: how clearly the browser communicates when the assistant can read a page, fill forms, or click links.

  • Reliability: guardrails to reduce model hallucinations or outdated answers when the assistant summarizes complex or time‑sensitive pages.

  • Security: protection against prompt injection from web content and extensions, plus clear sandboxing for automated actions.

Implications for the web ecosystem

If assistants become the primary interface to the web, publishers may see users engaging more with synthesized answers than full pages. That could:

  • Elevate the importance of structured data, clean semantics, and machine‑readable metadata so assistants extract accurately.

  • Pressure traditional search and ad models as assistant‑led results reduce page visits.

  • Spur new revenue approaches (licensed content, paid APIs, or assistant‑friendly widgets).

  • Encourage developers to build “actions” or “tools” that let the assistant complete tasks (bookings, purchases, support tickets) directly from the browser.

Competitive context

Browser makers and AI products have been converging: many browsers now ship AI sidebars, and AI assistants offer built‑in browsing or retrieval. A purpose‑built browser that treats the assistant as the primary UI could accelerate this convergence, setting expectations for native summarization, automation, and multimodal support. It may also catalyze standards around content attribution, agent safety, and interoperable “actions” that bridge sites and services.

What to watch next

  • Depth of OS integration (e.g., system‑level sharing, notifications, and voice) and performance trade‑offs under heavy AI workloads.

  • The permissions model for page‑level data access and automated actions, including granular, user‑friendly controls.

  • How well Atlas balances fast, synthesized answers with transparent citations and links back to original sources.

  • Extension and developer ecosystems that allow third parties to add secure, composable tools for the assistant.

  • Monetization levers—subscription tiers, usage caps, or enterprise features—that sustain AI compute while keeping the experience seamless.

Key takeaways

  • Atlas introduces a browser paradigm where ChatGPT‑style assistance is native, not bolted on.

  • The design aims to turn browsing into outcome‑driven workflows, potentially reshaping search, productivity, and content discovery.

  • Success will hinge on trust (privacy, safety, attribution), speed and reliability of AI features, and a robust ecosystem of actions and developer tools.

Read More

Introducing ChatGPT Atlas

Youtube • OpenAI • October 21, 2025

AI•Tech•Chat GPT Atlas•Agent Mode•Privacy Controls


What it is and why it matters

OpenAI unveils ChatGPT Atlas, a full web browser with ChatGPT integrated at its core to reimagine how people navigate, read, and act on the web. OpenAI frames Atlas as “the browser with ChatGPT built in,” bringing the assistant into the page you’re viewing so it can understand context, help in place, and even complete tasks without copy-paste or tab juggling. Atlas launches worldwide on macOS today for Free, Plus, Pro, and Go users, with Business in beta and Enterprise/Edu available if enabled by admins; Windows, iOS, and Android versions are “coming soon.” (openai.com)

Availability, setup, and requirements

Users can download Atlas and sign in with their ChatGPT account; onboarding supports importing bookmarks, saved passwords, and history from your current browser for a quick switch. On macOS, Atlas supports Apple silicon (M‑series) Macs running macOS 12 Monterey or later. You can set Atlas as the default browser in Settings; making it default unlocks elevated rate limits for the first seven days (terms apply). (openai.com)

Key browsing features

  • New-tab experience: Ask a question or enter a URL to receive a concise answer alongside structured tabs for search links, images, videos, and news where available. (help.openai.com)

  • Ask ChatGPT sidebar: A persistent side panel that summarizes pages, extracts details, drafts text, or explains code without leaving the current tab. Open it from the top-right of the browser and type or speak your prompt. (help.openai.com)

  • In-line writing help: Cursor-like in-page editing to rewrite, check grammar, or adapt tone directly on the site you’re viewing. (help.openai.com)

Agent mode: getting work done for you

Atlas supports an upgraded Agent Mode that can open tabs, navigate, click, and execute multi-step workflows—planning events, researching, filling forms, building carts, or booking appointments while you browse. It’s available in preview for Plus, Pro, and Business users, with OpenAI emphasizing faster performance by leveraging browsing context. OpenAI cautions that it’s an early experience that “may make mistakes” on complex flows; the team is prioritizing reliability and latency and has run extensive red-teaming, with safeguards designed to adapt to new attack patterns. Users can monitor the agent, use logged‑out mode, and decide whether to grant page visibility before the agent acts. (openai.com)

Privacy, memory, and parental controls

OpenAI stresses user control and transparency. You can clear specific pages, wipe browsing history, or use incognito (which logs ChatGPT out temporarily). “By default, we don’t use the content you browse to train our models,” though users can opt in via data controls. A toggle in the address bar lets you decide if ChatGPT can see the current page; when off, no content is shared and no memories are created. Optional Browser Memories can remember key details to improve assistance—such as assembling a to‑do list from recent activity—viewable and manageable in Settings. Existing parental controls for ChatGPT carry into Atlas, with new options to disable Browser Memories and Agent Mode. (openai.com)

Roadmap and developer/website hooks

OpenAI’s roadmap includes multi‑profile support, improved developer tools, and better discovery for ChatGPT Apps built with the Apps SDK. Site owners can add ARIA tags to improve how the agent interprets and acts on their pages in Atlas. OpenAI suggests this is a step toward “agentic” web use where routine tasks are delegated so users can focus on higher‑value work. (openai.com)
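
To make the ARIA suggestion concrete, here is a minimal TypeScript sketch of the kind of annotation a site owner might add so an agent (or a screen reader) can identify what page elements do; the element IDs and label text are hypothetical, and nothing beyond standard ARIA attributes is implied.

```typescript
// Hypothetical example: expose accessible names/roles for key actions so an
// agent can find and operate them. IDs and labels are illustrative only.
function annotateForAgents(): void {
  // A div styled as a button gets an explicit role and accessible name.
  const checkout = document.querySelector<HTMLElement>("#checkout-action");
  if (checkout) {
    checkout.setAttribute("role", "button");
    checkout.setAttribute("aria-label", "Proceed to checkout");
  }

  // A bare numeric input gets a descriptive accessible name.
  const quantity = document.querySelector<HTMLInputElement>("#quantity");
  if (quantity) {
    quantity.setAttribute("aria-label", "Quantity of items to order");
  }
}

annotateForAgents();
```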

Key takeaways

  • A native browser that embeds ChatGPT to summarize, search, and act directly on the web page you’re on. (openai.com)

  • Launches on macOS today; Windows, iOS, and Android are next; simple import and default‑browser setup flows. (openai.com)

  • Agent Mode in preview can autonomously complete multi‑step web tasks; users remain in control with explicit visibility toggles. (openai.com)

  • Privacy-first defaults (no training on your browsing by default), granular memories, and expanded parental controls. (openai.com)

Read More

Why Creativity Will Matter More Than Code

Youtube • a16z • October 22, 2025

AI•Work•Creativity•AI Companions•Emotional Interfaces


In this conversation, Anish Acharya joins Kevin Rose to explore why, in the age of AI, creativity will increasingly outweigh raw coding skill. They frame the moment as a rebirth of consumer technology, with AI compressing the distance between an idea and a polished product. This shift makes room for more makers to try more things, faster, and to let taste, intuition, and storytelling drive what gets built. The discussion sets out the stakes: code is becoming a commodity, but creative direction and product sensibility are becoming differentiators. (podcasts.apple.com)

They dig into “weird and working” products—software that blends emotion with utility—and how new tools enable solo creators to assemble full-stack experiences. Examples include AI companions and “emotional interfaces” that respond to mood, context, and intent. These interfaces, they argue, will feel less like command lines and more like conversations or performances, inviting products that are designed around feeling as much as function. With generative platforms lowering the cost of experimentation, a single person can now prototype, ship, and iterate at the speed that once required teams. (music.amazon.in)

The throughline is that the next wave of culturally important apps will be authored by people who lead with curiosity and taste—people willing to be different. As AI handles more of the scaffolding, the leverage shifts to those who can fuse art and engineering, crafting experiences that resonate. The episode ultimately argues that the frontier sits where consumer tech meets human feeling, and that the most valuable builders will be those bold enough to be strange, specific, and emotionally intelligent in what they create. (podchaser.com)

Read More

Is the Flurry of Circular AI Deals a Win-Win—or Sign of a Bubble?

Wsj • October 22, 2025

AI•Funding•Round Tripping•Hyperscalers•Antitrust

Is the Flurry of Circular AI Deals a Win-Win—or Sign of a Bubble?

A surge of “circular” AI deals is reshaping how money and demand flow through the industry. In these arrangements, large technology suppliers take equity stakes or extend financing to AI startups that, in turn, commit to spending heavily on the investors’ cloud, chips, or services. The resulting loop can make growth look effortless: investment dollars cycle back as contracted revenue or usage, while startups secure scarce compute and credibility.

Proponents argue this alignment is pragmatic. Building frontier AI systems requires extraordinary capital, power, and hardware; guaranteed demand helps justify multibillion-dollar data-center expansions and long-term chip orders. Startups gain priority access to GPUs and infrastructure, potentially lowering unit costs and accelerating product road maps. Investors and corporate partners can also shape technical direction, integration, and go-to-market, turning customers into co-developers.

Skeptics see echoes of past booms where “round-trip” flows masked underlying demand. When revenue depends on counterparties funded by the vendor itself, traditional signals—pricing discipline, utilization, and organic adoption—can blur. Minimum-spend commitments and credits may inflate usage metrics, while concentration risk rises around a handful of hyperscalers and model labs. If downstream customer adoption lags—or if power, chip supply, or regulatory constraints bite—the loop could stall, leaving stranded capacity and pressured margins.

Disclosure and accounting also matter. Observers look for clarity on how companies separate investment returns from operating revenue, whether preferential terms exist, and how long-dated commitments are recognized. Antitrust and competition concerns may intensify if strategic financing influences supplier choice or locks in exclusive access to compute, data, or distribution.

The piece frames two paths: circular deals as a bridge to genuine, diversified AI demand—or as a flywheel that spins until it hits the hard limits of economics, infrastructure, or oversight. Early warning signs would include slowing end-customer adoption, secondary GPU price softening, rising incentives to sustain usage, and growing scrutiny of bundled spend agreements.

Read More

AI is about to upend Google’s AdWords cash cow

Fastcompany • October 23, 2025

AI•Tech•Google Ads•Search•Generative AI

AI is about to upend Google’s AdWords cash cow

Twenty-five years ago, Google unveiled AdWords, which, as Google cofounder Larry Page said at the time, pledged to enable advertisers “to quickly design a flexible program that best fits [their] online marketing goals and budget.”

The principle was simple. AdWords allowed advertisers to purchase individualized, affordable keyword-based advertising that appeared alongside the search results used by hundreds of millions of people every day.

That decision was a game changer for Google. Advertising now accounts for around three in every four dollars of revenue the company has made so far this year, growing 10% in the last year alone. The product, since renamed Google Ads, has powered the company to prosperity, cementing its position at the top of the search space.

But a quarter of a century on, artificial intelligence could force an overhaul of Google Ads.

“The shift from traditional search to AI answer engines represents the greatest challenge to Google’s $200 billion monetization engine we’ve ever seen,” says Aengus Boyle, vice president of media at VaynerMedia, a strategy and creative agency set up by entrepreneur Gary Vaynerchuk.

That’s not because competitors are siphoning away users from Google: The company’s global daily active users are up 13% year on year, with nearly 2 billion people logging on to Google services every day, according to Bank of America estimates. But because Google is starting to layer AI-tailored answers into the front page of its search results—often above the advertisements and blue links to sources that helped make its name over the last 25 years—its ability to bring in ad revenue could take a serious hit. “If AI answers start replacing traditional Google searches, that’s a real threat to the whole cash engine,” says Fergal O’Connor, CEO of Buymedia, an ad platform company. “Google makes most of its money from ads tied to clicks. The more queries, the more ad space, the more revenue.”

The problem is that AI summaries of search results make it less necessary to click through to websites. So far, that’s been to the consternation of website owners, who rely on visits to their websites in order to sustain their business models. In time, it could harm Google itself. “If people stop clicking through to sites because they get what they need from an AI summary, that entire model takes a hit,” O’Connor says.

Of course, Google will “obviously try to wedge ads into the AI answers,” notes O’Connor—and indeed, the company is already doing so—but he says it’s not a like-for-like comparison. “One generative answer replaces a full results page of ad inventory, so it’s fewer impressions, fewer clicks, and less data flowing through the system,” he explains.

However, if anyone is best placed to capitalize on those changes, it’s Google, Boyle predicts. “Their clearest advantage lies within Google Ads—which has allowed them to integrate ads into new AI discovery surfaces, like AI Overviews and AI Mode, faster than any of their competitors in the space,” he says.

O’Connor believes that Google will adapt to the new norm, with AI proving disruptive—but not terminal—to the future of advertising.

“If people genuinely stop ‘Googling’ and start ‘asking,’ the whole search economy has to reinvent itself,” O’Connor says. “But if you’ve been around the digital ad space for a few decades, you’ll know that we’ve survived a few events that were billed as being apocalyptic to the industry.”

Google has had 25 years to understand how best to target and present ads to its users and to squeeze out everything it can from the ad industry. It’s best placed to secure another 25 years of dominance, even if it requires some changes.

Read More

Marc Andreessen & Amjad Masad on “Good Enough” AI, AGI, and the End of Coding

Youtube • a16z • October 23, 2025

AI•Tech•AGI•Software Development•Developer Tools


Core Idea: “Good Enough” AI as a Threshold Moment

The conversation explores the notion that AI need not be perfect to be transformative. “Good enough” systems—those that meet practical performance thresholds—can unlock massive value across software and beyond. The discussants contrast academic benchmarks with market utility, arguing that once models reliably clear usability bars (speed, cost, accuracy within tolerance), adoption accelerates regardless of remaining edge-case errors. This reframes progress: rather than waiting for AGI, the focus shifts to cumulative capability plus integration into workflows, interfaces, and tooling that compress time-to-value for both developers and non-developers.

AGI Trajectory and Capability Compounding

They situate current frontier models on a capability curve where scale, data quality, and tool-use (code execution, retrieval, agents) compound. AGI is treated less as a single “sentience” event and more as a stepwise crossing of functional thresholds—planning, autonomy, and domain transfer—amplified by orchestration layers. The result is a practical path: narrow-but-powerful agents connected to tools and APIs can perform multi-step tasks, making system design and guardrails as critical as raw model intelligence.

Coding: Ending, Evolving, or Abstracting?

“End of coding” is framed as abstraction rather than disappearance. Natural-language interfaces shift engineers from syntax production to specification, review, and systems thinking. Code becomes an artifact generated by AI, while humans curate architecture, constraints, and verification. Pair-programming copilots evolve into task-level agents that scaffold projects, refactor large codebases, write tests, and manage CI/CD steps. The value of software skills persists, but the comparative advantage moves to problem framing, debugging strategy, security posture, and integration with real-world constraints (latency, cost ceilings, compliance).

Productivity, Cost Curves, and the New Software Stack

The dialogue highlights falling inference costs and rising context lengths as twin enablers: more code and documentation fit into prompts, while lower per-token costs make continuous assistance viable. The emerging stack layers models, vector/RAG systems, function calling, and agent runtimes atop conventional repos and cloud. Toolchains emphasize evaluation harnesses (to measure “good enough”), test coverage, and deterministic fallbacks. For companies, the strategic play is to convert tacit organizational knowledge into structured corpora and guardrailed agents that handle support, ops runbooks, and routine engineering chores.
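
As a rough illustration of the evaluation-harness idea, the TypeScript sketch below runs a batch of cases through a stubbed model call and checks pass rate and average cost against thresholds that define “good enough” for a given workflow; the `callModel` stub, the case format, and the threshold values are assumptions for illustration, not anything specified in the conversation.

```typescript
// Minimal "good enough" harness: measure pass rate and cost over a case set.
type EvalCase = { prompt: string; accept: (output: string) => boolean };

async function callModel(prompt: string): Promise<{ text: string; costUsd: number }> {
  // Placeholder: wire this to whatever inference API a team actually uses.
  return { text: `echo: ${prompt}`, costUsd: 0.0001 };
}

async function runHarness(cases: EvalCase[], minPassRate = 0.95, maxAvgCostUsd = 0.01) {
  let passes = 0;
  let totalCost = 0;
  for (const c of cases) {
    const { text, costUsd } = await callModel(c.prompt);
    totalCost += costUsd;
    if (c.accept(text)) passes += 1;
  }
  const passRate = passes / cases.length;
  const avgCostUsd = totalCost / cases.length;
  return { passRate, avgCostUsd, goodEnough: passRate >= minPassRate && avgCostUsd <= maxAvgCostUsd };
}
```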

Risk, Reliability, and Governance

Reliability is addressed via defense-in-depth: constrain model autonomy with capability scopes; add linters, type systems, and property-based tests; use sandboxed execution; and incorporate human-in-the-loop on high-impact changes. Security shifts left: secret management, dependency provenance, and model supply-chain risks (prompt injections, tool exploits) require dedicated controls. Rather than freezing innovation under blanket rules, the conversation favors outcome-based evaluation, auditability, and continuous red-teaming to maintain velocity while bounding failure modes.
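
A minimal sketch of the capability-scope idea, assuming a simple agent runtime of our own devising: every tool call passes through a gate that rejects actions outside a whitelist and holds designated high-impact actions for human approval before execution. The action names and the approval hook are illustrative, not a real framework’s API.

```typescript
// Defense-in-depth sketch: whitelist actions, require approval for risky ones.
type Action = { name: string; args: Record<string, unknown> };

const allowed = new Set(["read_file", "run_tests", "open_pr"]);
const needsApproval = new Set(["open_pr"]);

async function askHuman(action: Action): Promise<boolean> {
  // Placeholder: route to a real review queue; default-deny until confirmed.
  console.log(`Approval requested for ${action.name}`);
  return false;
}

async function gate(action: Action, execute: (a: Action) => Promise<string>): Promise<string> {
  if (!allowed.has(action.name)) {
    throw new Error(`Action ${action.name} is outside the agent's capability scope`);
  }
  if (needsApproval.has(action.name) && !(await askHuman(action))) {
    return "blocked: awaiting human approval";
  }
  return execute(action);
}
```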

Markets, Jobs, and Education

On talent markets, AI redistributes leverage: juniors get superpowers, seniors scale their impact, and teams shrink for the same output. Hiring screens emphasize systems design, product sense, and the ability to formalize requirements for AI agents. Education follows suit: less emphasis on memorizing syntax; more on computational thinking, version control, testing, and reading/maintaining AI-generated code. Companies that align incentives to ship with AI—measuring cycle time, change failure rate, and recovery speed—capture outsized gains.

Strategic Implications and What to Build Now

Winners will: (1) encode institutional knowledge into private RAG/agents; (2) re-platform legacy workflows around AI-first interfaces; (3) treat evaluation as a first-class product surface; and (4) build moats via proprietary data, distribution, and integration depth rather than model weights alone. As “good enough” becomes ubiquitous, differentiation shifts from raw capability to orchestration quality, domain fit, reliability SLAs, and trust.

  • AI’s impact inflects when it becomes “good enough” for real workflows, not when it becomes perfect.

  • Coding evolves into specification, review, and systems integration; agents handle routine generation and maintenance.

  • Reliability and security come from layered controls, tests, and constrained tool use—not from single-shot perfection.

  • Competitive advantage moves to proprietary data, evaluation, and integration with existing systems and processes.

Read More

Anthropic and Google Cloud strike blockbuster AI chips deal

Ft • October 23, 2025

AI•Tech•Partnerships•Cloud Computing•Semiconductors

Anthropic and Google Cloud strike blockbuster AI chips deal

Anthropic, the artificial intelligence company behind the Claude chatbot, has entered into a significant multi-year agreement with Google Cloud to secure a massive allocation of advanced AI chips. This strategic partnership, involving one of Anthropic’s largest investors, is designed to substantially boost the startup’s computing capacity, which is a critical resource in the competitive race to develop and deploy powerful AI models.

Strategic Implications for the AI Industry

The deal represents a major strategic maneuver in the high-stakes AI landscape. For Anthropic, securing guaranteed access to a vast supply of cutting-edge tensor processing units (TPUs) and graphics processing units (GPUs) from Google directly addresses the industry-wide bottleneck of AI computing power. This ensures the company has the necessary firepower to train its next-generation Claude models without being constrained by hardware availability. For Google Cloud, the agreement solidifies a key relationship with a leading AI lab, driving substantial and reliable revenue for its cloud division while validating its AI infrastructure against competitors like Amazon Web Services and Microsoft Azure.

Deepening an Existing Partnership

This blockbuster deal builds upon a pre-existing and multifaceted relationship between the two companies. Google is not merely a cloud provider for Anthropic; it is also a major investor, having committed hundreds of millions of dollars to the AI startup. This financial stake creates a powerful alignment of interests, making the cloud partnership more of a strategic alliance than a standard vendor-client relationship. The collaboration allows both entities to leverage their respective strengths: Anthropic’s frontier AI research and development capabilities, and Google’s world-class computational infrastructure and global data center network.

The Intensifying Battle for Compute

The Anthropic-Google agreement underscores the central role of computational resources, or “compute,” as the new currency of the AI era. Access to vast clusters of high-performance chips has become a primary determinant of which companies can compete at the forefront of AI development. This has led to an arms race among tech giants and well-funded startups to lock in long-term chip supplies through strategic partnerships and direct purchases. The scarcity of advanced semiconductors has made such deals a critical competitive moat, potentially creating significant barriers to entry for newer or less-funded players in the field.

In conclusion, this partnership fortifies Anthropic’s position in the top tier of AI companies by guaranteeing the computational resources needed for future innovation. It simultaneously bolsters Google Cloud’s standing in the intensely competitive cloud infrastructure market. The deal exemplifies the consolidation of power and resources among a small group of tech giants and the AI labs they back, shaping the trajectory of artificial intelligence development for the foreseeable future.

Read More

I Tried an AI Web Browser, and I’m Never Going Back

Wsj • October 23, 2025

AI•Tech•WebBrowsers•Productivity•Innovation

I Tried an AI Web Browser, and I’m Never Going Back

The transition to AI-powered web browsers represents a fundamental shift in how users interact with the internet, moving from manual searching to conversational computing. Browsers and browser assistants like OpenAI’s ChatGPT Atlas, Perplexity’s Comet, and Google’s Gemini in Chrome are embedding sophisticated AI agents directly into the browsing experience, fundamentally changing the workflow from opening multiple tabs and sifting through information to simply asking questions and receiving synthesized answers. These browsers act as proactive research assistants, capable of summarizing articles, comparing products, and planning trips without requiring the user to visit multiple websites.

Core Functionality and User Experience

The primary advantage of these AI browsers lies in their ability to comprehend and execute complex, multi-step tasks. Instead of manually searching for “best noise-cancelling headphones,” then “headphone deals,” and finally “Bose vs. Sony comparison,” a user can ask the AI, “Find the best deals on high-end noise-cancelling headphones and summarize the key differences between the top Bose and Sony models.” The AI agent then performs these searches in the background, synthesizes the information from various sources, and presents a consolidated answer with direct links for verification or purchase. This eliminates the cognitive load of managing numerous tabs and cross-referencing information.

Leading AI Browser Platforms

Several key players are competing in this emerging space, each with distinct approaches. OpenAI’s ChatGPT Atlas integrates the powerful capabilities of its latest models directly into the browsing interface, offering deep contextual understanding and task execution. Perplexity’s Comet has gained a reputation for its strong citation practices, always linking back to the original sources it uses to generate answers, which builds user trust. Google’s Gemini leverages the company’s vast search index and knowledge graph to provide comprehensive and timely information. These platforms are evolving beyond simple Q&A into true agents that can take actions like filling out forms or managing bookings based on verbal commands.

Implications for the Future of Search and Work

The rise of AI browsers has significant implications for the digital ecosystem. Traditional search engines, built on a list-of-links model, may see a decline in direct traffic as users get answers directly within the AI interface. This could impact online advertising and the business models of content publishers who rely on search-driven traffic. For productivity, these tools promise to dramatically accelerate research-intensive tasks in fields like academia, journalism, and market analysis. However, they also raise questions about the “digital middleman” and whether users will lose the serendipity and critical thinking skills developed through manual research.

Ultimately, AI web browsers are not merely an incremental improvement but a paradigm shift towards a more efficient, conversational web. While concerns about information bubbles and over-reliance on AI are valid, the convenience and power they offer make a compelling case for widespread adoption. As these agents become more capable, the very definition of “browsing the web” is being rewritten from an activity of navigation to one of conversation and delegation.

Read More

Everything’s In Play in the Age of AI: Why Only 1 of Our 16 Core AI Agents Comes From a Legacy Vendor

Saastr • Jason Lemkin • October 24, 2025

AI•Tech•Startups•EnterpriseSoftware•Innovation

Everything’s In Play in the Age of AI: Why Only 1 of Our 16 Core AI Agents Comes From a Legacy Vendor

The current AI landscape represents what the author describes as “the most open buying window in B2B history,” based on their analysis of their organization’s AI agent stack. In a striking revelation, only one of their sixteen core AI agents comes from a legacy vendor—Salesforce’s Agentforce, which they are currently rolling out for AI SDR and BDR functions. This statistic underscores a massive shift in enterprise software procurement, where companies are overwhelmingly turning to AI-native startups and even building their own solutions rather than relying on established vendors.

The Real AI Agent Stack

The organization’s AI infrastructure consists of two primary categories: custom-built agents and third-party solutions. They built six agents themselves using Replit, demonstrating the accessibility of modern development tools. These custom agents include:

  • AI Mentor – A 24/7 assistant trained on 20M+ words of specialized content

  • AI VC Pitch Deck Analyzer – Provides comprehensive scoring and fundability assessment

  • AI Valuation Calculator – Estimates startup valuations using real-time market data

  • AI Startup Benchmarking – Compares performance against industry leaders

  • AI VC Dealflow – Intelligent investor matching system

  • AI Content Review – Automated content quality assessment

Their third-party AI agents come almost exclusively from companies that didn’t exist three years ago, with most emerging within the last 18 months. These include Artisan AI SDR for outbound messaging, Qualified AI BDR for inbound lead qualification, Delphi for providing expert advice, and various content creation tools like Higgsfield.ai for video production and Opus Pro for content repurposing.

The Vibe Coding Revolution

A significant development highlighted is what the author terms the “vibe coding revolution,” where companies can now build their own AI agents without extensive engineering resources. The contrast is stark: what previously required hiring engineers and 3-6 month development cycles can now be accomplished by opening a browser and describing what you want. The organization built their AI Mentor agent trained on 20M+ words of content faster than completing a vendor security questionnaire, and their AI VC Pitch Deck Analyzer was created in a single weekend.

Why Legacy Vendors Are Struggling

Legacy vendors face substantial challenges in adapting to the AI era due to massive technical debt, quarterly revenue targets that discourage R&D pivots, sales teams trained to sell features rather than outcomes, and pricing models built for seats rather than AI consumption. Meanwhile, AI-native startups benefit from clean architecture designed for LLMs from day one, founder-led selling focused on ROI, and pricing that aligns with usage and value. Companies building their own agents gain additional advantages including zero vendor lock-in, complete control over features, no procurement processes, and instant iteration cycles.

Implications and Time-Sensitive Opportunity

The author emphasizes that this unprecedented openness in enterprise buying represents a limited-time opportunity, predicting the window will likely remain open through 2026 before consolidation occurs. During this period, companies that normally take 18 months to evaluate vendors are signing with 2-month-old AI startups, and IT teams are willing to start trials while vendors complete security certifications. However, the landscape is shifting toward a future where some potential buyers may choose to build solutions themselves rather than purchase them, meaning AI vendors must offer 10x improvement rather than incremental gains.

The fundamental lesson from this AI agent stack analysis is that “everything is in play.” Long-standing vendor relationships matter less, the “nobody got fired for buying IBM” mentality is dead for AI agents, and the assumption that startups can’t secure enterprise deals has become obsolete. Most significantly, the notion that organizations need vendors for software solutions is becoming increasingly questionable as companies demonstrate they can build mission-critical AI agents themselves using modern development platforms.

Read More

OpenAI launches Atlas web browser

Ft • October 21, 2025

AI•Tech•OpenAI


OpenAI has introduced Atlas, a web browser that directly integrates its popular ChatGPT assistant, positioning the company to compete head-on with Google and Microsoft in how people find and act on information online. The move signals a push from standalone chatbot to an AI-first browsing experience, where conversational queries, summaries, and task execution are native to the browser rather than bolted on through extensions. The central promise is to reduce friction between asking, browsing, and doing—turning the browser into an agent that can interpret intent, navigate pages, and present answers within the flow of reading and search.

What’s new and why it matters

  • Atlas embeds a conversational agent at the core of the browser UI, reframing “search” as dialogue and task completion rather than a list of links.

  • By bundling assistant capabilities with navigation, Atlas challenges Chrome’s dominance and Edge’s Copilot-led approach, raising the stakes in AI-driven discovery.

  • Integration at the browser level could accelerate a shift from keyword queries to natural-language prompts that produce synthesized, source-aware responses.

Implications for search and distribution

  • If Atlas becomes a primary entry point for queries, default-search economics may shift, pressuring incumbents’ lucrative link-based models.

  • A native assistant can route more interactions through OpenAI’s stack, potentially capturing intent, recommendations, and transactions that today occur via search engines.

  • Distribution will be pivotal: adoption hinges on platform availability, performance, and compatibility with existing workflows and extensions.

User experience to watch

  • On-page assistance: instant summaries, definitions, and step-by-step explanations without leaving the current page.

  • Actionable flows: drafting emails, code snippets, or research notes that reference the page a user is viewing.

  • Multimodal inputs: the potential to mix text, images, or files in a single conversational thread to accelerate complex tasks.

  • Transparency and control: clear citation of sources, adjustable creativity/precision modes, and robust privacy controls will be essential to trust.

Competitive landscape

Google has been weaving generative AI into Chrome and its search offerings, while Microsoft has tightly linked Edge to Copilot and Bing. Atlas elevates OpenAI from a partner in those ecosystems to a direct competitor at the user’s front door. The competition will likely center on quality of answers, speed, reliability, and how well each browser-agent can reason across long contexts while grounding responses in verifiable sources.

Risks and challenges

  • Reliability and safety: minimizing hallucinations and surfacing citations remain critical for high-stakes queries.

  • Cost and performance: running high-quality models continuously within a browser context must balance latency with compute costs.

  • Ecosystem and regulation: default settings, bundling, and data practices may attract scrutiny, while developers will expect a stable extension model and clear APIs.

Key takeaways

  • Atlas embodies the transition from search-as-links to search-as-answers-and-actions, with the browser itself becoming an intelligent agent.

  • Success will hinge on distribution, trustworthy grounding, and seamless integration with users’ daily tools.

  • The launch intensifies a three-way contest to define how AI intermediates knowledge, productivity, and commerce on the open web.

Read More

Ranked: The Fastest Shrinking Jobs in America by 2034

Visualcapitalist • Marcus Lu • October 22, 2025

AI•Jobs•Automation


Key Takeaways

Routine clerical roles like data entry clerks and payroll clerks are projected to see some of the steepest percentage declines due to automation.

Customer-facing jobs such as cashiers and bank tellers are among the largest absolute job losses, reflecting the shift to digital self-serve technologies.

America’s labor market is undergoing a transformation as automation, artificial intelligence, and digitalization reshape the workplace.

To see where technology will have the most impact, we’ve visualized the fastest shrinking jobs in America by 2034, based on projections from the U.S. Bureau of Labor Statistics (BLS).

Data & Discussion

This data comes from the BLS Employment Projections (EP) program, which projects employment changes across hundreds of occupations from 2024 to 2034.

In our graphic, the fastest shrinking jobs are ranked by their absolute losses, with percentage declines included for context.

Automation Hits Routine Roles Hardest

The steepest percentage losses are concentrated in office-based clerical roles. Data entry clerks, for example, are projected to decline by 25.9%, the largest percentage drop among all occupations.

Payroll clerks follow closely with a 16.7% decrease, while bank tellers also see double-digit declines. These jobs involve repetitive, rule-based tasks that can be automated by software or AI systems, reducing the need for human input.

U.S. companies driving this wave of automation include ServiceNow (ticker: NOW), UiPath (ticker: PATH), and Workday (ticker: WDAY).

Retail and Service Jobs Face Large Absolute Losses

Cashiers are expected to see the biggest total job losses as checkout systems, mobile ordering, and self-pay kiosks expand. Similarly, customer service representatives and retail supervisors are projected to shrink by over 150,000 and 70,000 positions respectively.

According to the Census Bureau, the retail industry supports over a quarter of U.S. jobs, meaning this trend could have a major impact on society.

Read More

Google TPUs Find Sweet Spot of AI Demand, a Decade After Chip’s Debut

Bloomberg • Dina Bass, Emily Forgash • October 23, 2025

AI•Tech•Google•TPU•Hardware

Google TPUs Find Sweet Spot of AI Demand, a Decade After Chip’s Debut

In an AI chip industry that’s almost entirely commanded by Nvidia Corp., a Google chip first developed more than 10 years ago especially for artificial intelligence tasks is finally gaining momentum outside its home company as a way to train and run complex AI models.

Google’s Tensor Processing Units, or TPUs, are finding their sweet spot in the booming AI market, a decade after their initial debut. The specialized chips were originally created to handle the massive computational demands of Google’s own AI services like search and translation, but are now being adopted by external customers seeking alternatives to Nvidia’s dominant GPU offerings.

The timing appears particularly favorable as demand for AI computing power continues to outstrip supply across the industry. Companies developing large language models and other sophisticated AI systems are increasingly looking for specialized hardware that can deliver better performance and efficiency for specific AI workloads. TPUs, designed from the ground up for neural network computations, offer potential advantages in both cost and speed for certain types of AI model training and inference.

Google’s persistence in developing its custom silicon is paying off as the broader market recognizes the value of purpose-built AI accelerators. While Nvidia remains the dominant force with its versatile GPUs, the success of TPUs demonstrates that there’s room for specialized alternatives in the rapidly expanding AI infrastructure landscape. This development marks a significant milestone in the evolution of AI hardware, showing that dedicated AI chips can compete effectively even in a market long dominated by general-purpose computing solutions adapted for AI workloads.

Read More

Media

Wikipedia Is Getting Pretty Worried About AI

Nymag • John Herrman • October 18, 2025

Media•Publishing•Wikipedia


Over at the official blog of the Wikipedia community, Marshall Miller untangled a recent mystery. “Around May 2025, we began observing unusually high amounts of apparently human traffic,” he wrote. Higher traffic would generally be good news for a volunteer-sourced platform that aspires to reach as many people as possible, but it would also be surprising: The rise of chatbots and the AI-ification of Google Search have left many big websites with fewer visitors. Maybe Wikipedia, like Reddit, is an exception?

Nope! It was just bots:

This [rise] led us to investigate and update our bot detection systems. We then used the new logic to reclassify our traffic data for March–August 2025, and found that much of the unusually high traffic for the period of May and June was coming from bots that were built to evade detection … after making this revision, we are seeing declines in human pageviews on Wikipedia over the past few months, amounting to a decrease of roughly 8% as compared to the same months in 2024.

To be clearer about what this means, these bots aren’t just vaguely inauthentic users or some incidental side effect of the general spamminess of the internet. In many cases, they’re bots working on behalf of AI firms, going undercover as humans to scrape Wikipedia for training or summarization. Miller got right to the point. “We welcome new ways for people to gain knowledge,” he wrote. “However, LLMs, AI chatbots, search engines, and social platforms that use Wikipedia content must encourage more visitors to Wikipedia.” Fewer real visits means fewer contributors and donors, and it’s easy to see how such a situation could send one of the great experiments of the web into a death spiral.

Arguments like this are intuitive and easy to make, and you’ll hear them beyond the ecosystem of the web: AI models ingest a lot of material, often without clear permission, and then offer it back to consumers in a form that’s often directly competitive with the people or companies that provided it in the first place. Wikipedia’s authority here is bolstered by how it isn’t trying to make money — it’s run by a foundation, not an established commercial entity that feels threatened by a new one — but also by its unique position. It was founded as a stand-alone reference resource before settling ambivalently into a new role: A site that people mostly just found through Google but in greater numbers than ever. With the rise of LLMs, Wikipedia became important in a new way as a uniquely large, diverse, well-curated data set about the world; in return, AI platforms are now effectively keeping users away from Wikipedia even as they explicitly use and reference its materials.

Read More

Celebrating 25 years of Google Ads

Blog • Vidhya Srinivasan • October 23, 2025

Media•Advertising•GoogleAds•GenerativeAI•SearchMarketing

Celebrating 25 years of Google Ads

Overview

The piece marks a milestone: 25 years since Google’s ad platform began with the goal of helping businesses grow in the emerging digital world. It frames the quarter-century arc as a progression from the early keyword era to today’s AI-led marketing, while emphasizing a constant north star—customer success across businesses of all sizes. The message is celebratory but forward-looking, underscoring how generative AI and automation are reshaping creative production, campaign optimization, and reach across Google’s surfaces. “The best ads are just answers,” the piece asserts, positioning Search and YouTube as the places “where discovery starts and decisions are made.” (blog.google)

A 25-year evolution in brief

  • From keywords to multi-format: The narrative highlights milestones including keyword-based Search ads, the shift to mobile, the first video ads on YouTube, and the integration of Google Analytics—each expanding what advertisers can do and measure. (blog.google)

  • Renaming and broadening scope: Launched as AdWords in 2000 and rebranded as Google Ads in 2018, the platform evolved from text ads on search results to a multi-surface ecosystem spanning Search, YouTube, and the broader web. (seroundtable.com)

AI as the foundation for the next era

  • Generative capabilities: The post spotlights generative AI as a defining force, calling out “agentic capabilities” and tools that automate and optimize campaigns while accelerating creative generation. The stated aim is to scale creativity for companies “of all sizes,” helping them reach new audiences globally. (blog.google)

  • Answers, not interruptions: With AI underpinning the stack, ads are cast as helpful responses to people’s challenges and curiosities, delivered faster and better across Google’s key discovery surfaces. This “ads as answers” framing encapsulates the company’s product philosophy going forward. (blog.google)

  • Related momentum: Earlier Google marketing updates this year previewed how AI is being embedded into bidding, creative tools, and agentic workflows—context that aligns with the anniversary message about moving from potential to practical AI in advertising. (blog.google)

Impact and reach

  • Global economic contribution: While the anniversary post is global in tone, localized coverage cites measurable impact. For example, a Spanish-language post referencing a Public First study estimates the Google Ads ecosystem generated €123 billion of economic activity in Spain over the past 20 years—an illustration of how the platform underpins business growth in specific markets. (blog.google)

  • Ubiquity across surfaces: By centering Search and YouTube as decision points, the message ties ad efficacy to real-time consumer intent and discovery—core to the platform’s enduring value proposition. (blog.google)

Notable quotes

  • “We are not slowing down. Generative AI is transforming digital marketing.” (blog.google)

  • “The truth is, the best ads are just answers.” (blog.google)

Context and implications

  • Industry backdrop: External coverage of the anniversary echoes the AI-forward theme while noting the broader competitive and regulatory environment, including ongoing antitrust scrutiny—reminders that Google’s centrality in advertising continues to draw attention even as the product evolves. For marketers, the takeaway is to expect rapid AI feature rollouts alongside heightened expectations for transparency and performance. (mediapost.com)

  • What marketers should watch: The emphasis on agentic tools and automated optimization suggests continued movement toward systems that infer intent, expand query coverage, and generate creative at scale. Success will hinge on high-quality inputs (assets, product data), measurement rigor, and a willingness to test AI-driven formats and bidding strategies that surface opportunities beyond obvious queries. (blog.google)

Key takeaways

  • Google frames 25 years of ads as a journey from keywords to AI-native marketing, with customer growth as the constant. (blog.google)

  • Generative AI, agentic capabilities, and automated optimization are positioned as the next decade’s core levers for creativity and scale. (blog.google)

  • Search and YouTube remain the prime venues for “ads as answers,” tying ad performance to discovery and intent. (blog.google)

  • Localized data points underscore substantial economic impact, while external coverage highlights an evolving regulatory context marketers should monitor. (blog.google)

Read More

Venture

Inside YC’s Hottest Startups

Neweconomies • October 19, 2025

Venture


Overview

This piece spotlights conversations with several of the most talked‑about startups from Y Combinator’s Summer ’25 batch, framed as office hours. The focus is practical and founder‑centric: what these teams are building right now, the biggest mistakes they’ve made so far, and the early wins that signal traction. Rather than polished launch narratives, the discussions emphasize candid, in‑the‑trenches learning: prioritization under constraints, what resonated with first users, and how teams are iterating week to week. The format encourages specificity about product choices, distribution bets, and metrics that matter at the earliest stages, offering listeners a window into how YC founders approach problem selection, customer discovery, and speed of execution.

Format and Focus

  • Office hours conversations give founders space to articulate their product thesis and the concrete user problem they’re addressing, while pressure‑testing assumptions about market size, buyer urgency, and willingness to pay.

  • The dialogue is structured around three pillars:

1) What they’re building: product scope, architecture choices, and go‑to‑market path.

2) Biggest mistakes: misallocated cycles, premature optimization, or unclear ICPs.

3) Early wins: evidence of pull like repeated usage, referrals, or signed pilots.

  • The overall tone is tactical. Founders highlight experiments that yielded learning quickly and moments when they chose to kill features or pivot messaging after talking to users.

Themes From Founder Mistakes

  • Overbuilding v1: Teams describe investing in features that didn’t move activation or retention, prompting a shift to tighter MVPs and faster iteration loops.

  • Diffuse ICPs: Broad targeting slowed learning. Narrowing to a single high‑pain segment clarified onboarding, pricing, and messaging.

  • Distribution last: Several founders admit they delayed channel testing; successful teams storyboard distribution alongside product from day one.

  • Vanity metrics: Visits and sign‑ups obscured weak engagement; switching to retention cohorts and time‑to‑value surfaced true product‑market progress.

Early Wins and Signals

  • Repeatable usage patterns: Founders cite early users returning unprompted as stronger evidence than top‑of‑funnel spikes.

  • Willingness to pay: Even small paid pilots or pre‑payments validated urgency and guided pricing tiers.

  • Short feedback cycles: Weekly releases and founder‑led customer calls increased learning velocity and informed roadmaps more than lengthy internal debates.

  • Lightweight moat thinking: Early differentiation comes from a tight wedge and data loops rather than grand platform ambitions; teams capture unique workflow data that compounds over time.

Why It Matters

For founders, the conversations model a disciplined approach to early decision‑making: define the user job, measure value with retention‑oriented metrics, and align product work with distribution from the outset. For operators and investors, the office hours format surfaces how “hot” YC teams convert narrative into traction—what constitutes credible signals at pre‑seed/seed, how quickly teams respond to feedback, and how they frame mistakes as learnable events. The emphasis on explicit hypotheses and rapid testing offers a playbook: reduce scope, talk to users daily, and hold the roadmap accountable to engagement and revenue proof points. Collectively, these vignettes sketch the contours of the Summer ’25 batch’s energy—ambitious problems approached with pragmatic iteration and a bias toward shipping.

Key Takeaways

  • Clarity beats scope: Tight wedges and sharp ICPs accelerate learning and adoption.

  • Distribution is a product: The best teams design channels, messaging, and onboarding alongside core features.

  • Measure what matters: Retention, activation, and willingness to pay outshine surface‑level growth.

  • Learn fast in public: Founder‑led customer conversations and weekly releases compound insight and trust.

  • Early wins are small but real: Repeated use, referrals, and paid pilots are stronger than headline sign‑ups.

Read More

AI mafia startups

Signalrank update • Rob Hodgkinson • October 20, 2025

Venture

AI mafia startups

Overview

The piece examines “downstream” founder talent: alumni of hypergrowth companies who later create notable startups. It situates today’s AI spinouts in a longer lineage of Silicon Valley “mafias,” arguing that certain firms function as founder schools whose early employees export culture, talent density, and operating philosophy to new ventures. While some companies now try to self-brand as mafias—Brex courts “quitters,” boasting 100 alumni founders—the article stresses that the label is conferred by the ecosystem, not claimed. Investors actively seek founders with high-velocity company experience, and in AI specifically, the fund-raising environment has become unusually permissive—“the billion dollar seed round era is upon us,” as one observer puts it—magnifying the impact of alumni networks on venture formation and capital allocation.

The OG tech mafia

The analysis traces the concept to Fairchild Semiconductor, founded by the “traitorous eight” in 1957, whose alumni seeded AMD, Intel, SanDisk, and a web of “Fairchildren.” This genealogy established a durable pattern: concentrated talent and a shared operating cadence generate repeat founders who propagate practices across the valley. The PayPal mafia later became the canonical modern example, with former executives and early employees exerting a “fingerprint” across many of the 2010s’ most successful companies, reinforcing the idea that elite organizational training can scale entrepreneurial outcomes across multiple cohorts.

Which AI companies are becoming tomorrow’s mafias?

The post analyzes alumni-founded startups from OpenAI, DeepMind, Meta AI, and Google Brain using SignalRank’s data. DeepMind and OpenAI are the most prolific at generating venture-backed spinouts, with OpenAI alumni showing a particularly high unicorn hit rate—interpreted as evidence of deeper risk capital pools in Silicon Valley versus London (DeepMind’s primary base). Cumulatively, the most recent rounds from four OpenAI alumni companies—Anthropic ($13B), xAI ($5B), Thinking Machines ($2B), and SSI ($2B)—amount to over $20B, underscoring the capital intensity and investor confidence animating this class of companies.

Capital flows and investor behavior

According to the piece, a16z is, by far, the most active Series A investor in this alumni cohort, followed by NVIDIA—an indicator that both traditional venture and strategic capital are positioning early in the AI stack. SignalRank highlights three alumni-founded companies that have already raised “quality” Series A rounds and could be candidates for its Series B product, signaling a pipeline approach: identify credible spinouts early, then support pro rata as they traverse the Series A→B inflection where product-market validation and defensibility are stress-tested.

Talent dynamics and attrition

Leadership-caliber employees from top labs are increasingly striking out on their own, despite lucrative compensation at incumbents like Meta. PitchBook data cited in the article indicates OpenAI has lost 25% of its key research talent over the last two years—evidence that mission autonomy, founder upside, and the current capital climate outweigh the retention power of even “professional sports–level” pay packages. This suggests further fragmentation of frontier AI talent across newcos, accelerating innovation but also dispersing institutional knowledge.

Implications

  • The “mafia” designation remains premature for AI, but the precursors—dense alumni networks, repeat founders, and shared cultural playbooks—are clearly forming around OpenAI and DeepMind.

  • Capital is not merely following talent; it is competing to preempt it, compressing timelines and inflating early-stage check sizes for alumni-led teams.

  • Geographic capital depth matters: Silicon Valley’s fundraising environment appears to amplify OpenAI spinout valuations relative to London-based DeepMind alumni, potentially influencing where future AI “mafias” root themselves.

  • Strategic investors like NVIDIA joining early rounds may tilt company roadmaps toward compute alignment (hardware partnerships, model training choices), shaping the industry’s technical and economic trajectory.

Key takeaways

  • “Founder schools” produce outsized downstream entrepreneurship; Fairchild and PayPal are the historical blueprints now echoed in AI.

  • OpenAI and DeepMind alumni are the most prolific AI spinout engines; OpenAI’s unicorn hit rate benefits from deeper Silicon Valley risk capital.

  • Aggregate last-round totals for four OpenAI alumni companies exceed $20B (Anthropic $13B; xAI $5B; Thinking Machines $2B; SSI $2B).

  • a16z leads Series A activity in this cohort, followed by NVIDIA, signaling both venture and strategic appetite.

  • OpenAI lost 25% of key research talent in two years, pointing to sustained spinout formation despite incumbents’ premium pay.

Read More

Allocator’s Notebook: Union Square Ventures Funds I-III, 2004-2015

Postcards from istanbul • Yavuzhan Yilancioglu • October 22, 2025

Venture

Allocator’s Notebook: Union Square Ventures Funds I-III, 2004-2015

Nowadays, the investment community has more voices than ever. Some of these platforms are exceptional. Yet, while studying investors, I feel the need to go back in time, find some dusty articles or interviews, and hear investors directly as their younger selves who made the investment decisions at the time.

Why? Because USV in 2025 isn’t the same firm as USV in 2004, and Fred Wilson in 2025 isn’t the same person as Fred Wilson in 2004. Hence, we value having a singular focus on the interviews and resources of primary decision-makers from the time periods when they made their best decisions, and ignore all else. People change, memories disappear, hindsight misleads, and the worst is when we try to put a narrative on top, select for what sells and not for what’s true.

So, as we try to improve at recognizing the early innings of outlier investors, the aim is to isolate and study the past outliers with 10x, 20x, sometimes with 30x funds, ignoring everything else. While doing this, we will find the most boring and the lowest resolution videos possible, some out-of-print books, and share some of the takeaways from the eyes of an allocator.

As part of the series, I will share data that is publicly available to give a general context to the performance of the funds and the key investments behind each, to set the stage. Data should be accurate enough, though I welcome any updates and corrections you have.

We now start with the first piece of the series that covers the initial few funds of USV. Feel free to move directly to the final ‘Learnings’ section if you’re familiar enough with the context.

Setting the Stage

Fred Wilson and Brad Burnham raised USV Fund I in 2004 as co-GPs. They founded and operated USV from New York and had no one on the ground in the Bay Area at any point during Funds I-III. Fred has been a career investor since 1987, and USV is his third venture firm, his second as a founder after Flatiron. Brad started his career as an operator, moved to venture with AT&T Ventures in 1993, and later co-founded TACODA in 2001, a USV Fund I company.

Albert Wenger first joined them as a venture partner in 2006 and became a GP in 2008 with USV Fund II, after he exited Delicious, a USV Fund I company, to Yahoo. Before Fund III, the team expanded to five partners with the additions of John Buttrick in 2010, following his corporate law career, and Andy Weissman in 2011, after he co-founded Betaworks in 2007.

From the get-go, their LP base has been institutional. Fund I LPs included Los Angeles City Employees’ Retirement System (LACERS), Massachusetts Pension Reserves Investment Trust (MassPRIM), Oregon Investment Council / Oregon Public Employees Retirement System (OIC / Oregon PERS), and UTIMCO.

Key Companies

Twitter: USV Fund I led the $5M Series A round (2007).

Read More

Why We Built Our Own Internal Software

Youtube • 20VC with Harry Stebbings • October 22, 2025

Venture


Overview

The piece centers on a venture firm’s decision to build proprietary internal software to streamline how the team sources deals, collaborates, and supports portfolio companies. The core narrative emphasizes control over workflows and data, speed of iteration, and a tighter feedback loop between investment hypotheses and day-to-day tools. Rather than relying on off-the-shelf CRMs and generic productivity suites, the approach aims to encode the firm’s specific investment playbook—what signals to track, how to triage inbound, and how to convert learnings from portfolio support into reusable processes—directly into software.

Why Build Instead of Buy

  • Customization: Tailors pipelines, diligence checklists, and post-investment tracking to the firm’s unique thesis, sector focus, and stage.

  • Velocity: Enables rapid iteration when the team’s questions change, e.g., adding a new signal to scoring models or reweighting founder-market fit criteria, without waiting on vendor roadmaps.

  • Data Ownership: Keeps proprietary notes, deal flow analytics, network graphs, and portfolio metrics fully in-house, reducing leakage and privacy risk.

  • Integration: Connects research tools, email, calendar, and communication channels into a unified view of relationships, rather than stitching together multiple SaaS products.

What the Internal Stack Enables

  • A living CRM that prioritizes founders and companies based on dynamic, thesis-aligned scoring (a toy scoring sketch follows this list).

  • Relationship intelligence that maps warm intros, angel networks, and operator communities to improve hit rate on competitive rounds.

  • Diligence automation that standardizes checklists, flags missing evidence, and embeds prompts for deeper technical or market validation.

  • Portfolio operating system: progress dashboards, hiring and customer-intro pipelines, and post-investment objectives tied to measurable milestones.
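
As a toy illustration of what thesis-aligned scoring could look like (an assumption about the general shape of such a system, not the firm’s actual model), a sketch might weight a handful of signals and sort the pipeline accordingly:

```typescript
// Toy deal scoring: signal names and weights are illustrative assumptions.
type Deal = {
  company: string;
  repeatFounder: boolean;      // founder has built in this space before
  warmIntro: boolean;          // lead arrived through a trusted node
  weeklyActiveGrowth: number;  // e.g. 0.08 = 8% week-over-week
  thesisFit: number;           // 0..1 match against the firm's written thesis
};

const weights = { repeatFounder: 2, warmIntro: 1, growth: 3, thesisFit: 4 };

function score(d: Deal): number {
  return (
    (d.repeatFounder ? weights.repeatFounder : 0) +
    (d.warmIntro ? weights.warmIntro : 0) +
    Math.min(d.weeklyActiveGrowth * 10, 1) * weights.growth +
    d.thesisFit * weights.thesisFit
  );
}

function rankPipeline(deals: Deal[]): Deal[] {
  return [...deals].sort((a, b) => score(b) - score(a));
}
```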

Implementation Approach

  • Start with the critical path: codify the firm’s sourcing and triage flow before expanding to diligence and portfolio support.

  • Ship thin slices: deploy small, high-leverage modules weekly to keep partners engaged and to ensure the tool mirrors how the firm really works.

  • Instrument everything: log interactions, decision timestamps, and outcome markers (pass/advance, round won/lost) so the system learns which signals correlate with returns; a minimal logging sketch follows this list.

  • Design for human-in-the-loop: keep final calls with partners while the software surfaces patterns, reduces toil, and shortens cycles.
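
A minimal sketch of that instrumentation idea, assuming a simple append-only event log (event kinds and fields are illustrative, not the firm’s actual schema); the same log can later feed metrics such as time-to-first-response:

```typescript
// Append-only decision log with timestamps and outcome markers.
type DealEvent = {
  dealId: string;
  kind: "inbound" | "first_meeting" | "partner_review" | "pass" | "advance" | "round_won" | "round_lost";
  at: string; // ISO timestamp
  notes?: string;
};

const eventLog: DealEvent[] = [];

function record(dealId: string, kind: DealEvent["kind"], notes?: string): void {
  eventLog.push({ dealId, kind, at: new Date().toISOString(), notes });
}

// Example downstream metric: time from inbound to first meeting.
function timeToFirstResponseMs(dealId: string): number | undefined {
  const inbound = eventLog.find(e => e.dealId === dealId && e.kind === "inbound");
  const meeting = eventLog.find(e => e.dealId === dealId && e.kind === "first_meeting");
  if (!inbound || !meeting) return undefined;
  return Date.parse(meeting.at) - Date.parse(inbound.at);
}
```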

Metrics and ROI

  • Time-to-first-response for inbound founders and operators.

  • Conversion rate from first meeting to partner review and to term sheet.

  • Portfolio milestone attainment (hiring, ARR targets, user growth) vs. planned timelines.

  • Reduction in duplicative work (e.g., repeated outreach, re-diligencing known patterns).

Risks and Trade-offs

  • Build debt: bespoke tools require maintenance; without clear ownership and documentation, velocity can stall.

  • Overfit: encoding today’s thesis too tightly can reduce flexibility when markets shift.

  • Adoption: even great tools fail without partner and platform-team buy-in; change management needs scheduled training and champions.

  • Security and compliance: must meet data retention, LP reporting, and confidentiality requirements on par with, or better than, third-party vendors.

Implications

For venture firms, proprietary software is becoming an operating advantage, not a nice-to-have. Encoding investment judgment and portfolio playbooks into an internal system compounds learning, improves founder experience through faster, more tailored interactions, and strengthens win rates in competitive processes. For founders, it suggests that firms investing in their own tooling may offer crisper diligence, more targeted post-close help, and clearer accountability. Over time, the firms that translate their “secret sauce” into software will likely see shorter feedback loops, better network leverage, and more consistent decision quality.

Key takeaways

  • Building internal tools can hardwire a firm’s strategy into daily workflows, boosting consistency and speed.

  • Data ownership and integration across communication and research surfaces hidden relationship advantages.

  • The payoff depends on disciplined scope, rapid iteration, and strong adoption across partners and platform teams.

  • Risks center on maintenance burden and the need to avoid overfitting to a static thesis; design for adaptability.

Read More

Andreessen Horowitz lines up $10bn for next wave of tech bets

Ft • October 22, 2025

Venture


Overview

A leading Silicon Valley venture firm is assembling a new $10bn capital stack to pursue the next wave of technology bets, allocating $6bn to a growth fund, $3bn specifically for artificial intelligence deals, and $1bn to support US defence technology start-ups. The size and structure of the raise underscore both a renewed appetite for late-stage investing and a conviction that AI and dual‑use defence technologies will continue to generate outsized opportunities.

Fund Breakdown and Focus

  • $6bn Growth Fund: Targets later-stage rounds in companies with proven traction, prioritizing scale-up capital, market expansion, and pre-IPO readiness. Emphasis likely on category leaders where larger checks can accelerate acquisitions, international growth, and infrastructure build‑out.

  • $3bn AI Deals: Dedicated to AI-native companies and enabling infrastructure, spanning model development, data tooling, inference and training platforms, agentic applications, and vertical AI solutions. The size suggests capacity to lead or co-lead significant rounds and to support capital-intensive compute needs.

  • $1bn US Defence Tech: Earmarked for dual-use software and hardware aimed at national security and critical infrastructure. Expect focus on autonomy, sensing, space, cybersecurity, advanced manufacturing, and AI-enabled command-and-control—areas that bridge commercial markets and Department of Defense demand.

Why This Matters Now

  • Capital Concentration: The aggregation of $10bn into three focused pools reflects a barbell strategy—backing late-stage winners while making thematic bets in AI and defence where technical moats and policy tailwinds are strongest.

  • AI Investment Cycle: Dedicated AI capital recognizes persistent compute costs, rapid foundation-model evolution, and the race to productize AI across industries. Larger funds can secure strategic stakes and provide follow-on support through multiple compute-scale milestones.

  • Defence-Tech Repricing: Increased geopolitical tension and procurement modernization have made defence a venture-backable category. A specific $1bn pool signals confidence that start-ups can cross the “valley of death” from prototype to program of record with aligned capital and go-to-market expertise.

Implications for Founders and LPs

  • For Founders: Availability of sizable growth and thematic capital improves odds of financing complex roadmaps (e.g., AI training, certification, secure supply chains). It also raises the bar for differentiation—expect rigorous diligence on unit economics, defensibility, and integration with enterprise and government buyers.

  • For LPs: The raise concentrates exposure in capital-intensive themes with potential for asymmetric outcomes. Returns will hinge on timing of exits in a volatile IPO window and on navigating regulatory, export-control, and procurement risks—especially in defence.

Key Takeaways

  • Size and Allocation: $10bn total; $6bn growth, $3bn AI, $1bn US defence tech.

  • Strategic Signal: Reinforces the thesis that AI and defence are durable, non-transient venture themes.

  • Market Effect: Likely to intensify competition for late-stage and AI deals, support higher-quality rounds, and catalyze more dual‑use start-ups to pursue government markets.

  • Risk Factors: Valuation discipline in late stage, compute supply constraints, regulatory scrutiny, and dependency on government timelines.

Bottom Line

A concentrated $10bn effort split across growth, AI, and defence indicates that the next leg of venture outperformance is expected to come from scale capital deployed into winners and from thematic bets where technology, policy, and demand are converging. This capital formation could shape the late‑stage pipeline and accelerate commercialization in AI and national security technologies over the coming cycles.

Read More

Letters to a Young Investor: Mike Maples, Jr.

Generalist • October 23, 2025

Venture


Overview

This installment in a mentoring series distills how an elite early‑stage investor thinks, senses, and decides. It opens with a metaphor from Billy Collins’ poem “Introduction to Poetry,” contrasting the urge to “define” with the discipline of “understanding.” That tension becomes the lens for venture: great investing is less about over‑analysis and more about attunement. Mike Maples, Jr., co‑founder of Floodgate and an early backer of Twitter, Twitch, Okta, and Applied Intuition, frames his craft as “listening differently” to founders living in the future and inviting others to catch up. The exchange explores how to recognize “pattern breakers,” how to weigh people versus ideas, why pivots are common among outliers, and what practical habits help investors notice rather than merely predict.

Pattern Breakers: Choice, Not Comparison

A central heuristic is that “great startups force a choice, not a comparison.” Rather than benchmarking against incumbents, the best companies reframe the terrain. The example offered is Airbnb versus the Four Seasons: the hotel chain optimizes consistency, while Airbnb optimizes “authenticity.” Neither is universally superior; each appeals to different sensibilities. This framing teaches investors to look for novel value dimensions rather than incremental feature improvements. It also provides a diagnostic for founder narratives: if a pitch reduces to “X but better,” it’s likely a comparison play; if it compels a choice on a new axis, it may be pattern‑breaking.

Inflection + Insight; Why Pivots Don’t Break the Thesis

The piece outlines a two‑part test for transformative opportunities: an external “inflection” (technological, commercial, or sociological change) and the founder’s “insight” into what that change enables. Lyft exemplifies this: smartphones and GPS were the inflection; the insight was a peer‑to‑peer ride network. Importantly, Maples notes that “80%” of his best investments emerged via pivots. The implication: the initial idea is a probe that reveals the founder’s way of thinking. Investors should evaluate whether the team reliably detects and exploits inflections, even if the first expression changes. Pivots aren’t failures of vision; they are updates in light of higher‑resolution signal.

From Predicting to Noticing: A Twitch Origin Story

Maples’ letter recounts meeting a 23‑year‑old Justin Kan in a Palo Alto café in 2007. Kan arrived with a webcam strapped to a cap and a backpack containing a Linux computer, a video encoder, EVDO cellular gear, and Python scripts—duct‑taped together, but live. “He wasn’t selling me. He was already living in a different reality and offering me a fleeting glimpse.” YouTube was ~18 months old; most people used flip phones. The demo felt like stepping “through a portal to a different future.” Although Justin.tv later pivoted, the invariant insight—that millions of ordinary people would stream content—proved right and became Twitch. Maples concludes: he once “confused being early with being smart,” but the seed investors most rewarded “are not the futurists who can predict. They’re the ones who notice.” Investing at this stage is “less like investing and more like witnessing.”

Tactics for Sensing the Future

Maples offers concrete habits to avoid false understanding and keep resolution high:

  • “Don’t label startups.” Shortcuts like “Uber for X” collapse nuance and blind you to the “small, magical details that matter most.”

  • Ask temporal questions: “Is this from the future?” surfaces whether the product embodies an emergent reality rather than retrofitting today’s expectations.

  • Favor lived demonstrations over polished decks; the medium (how a founder shows the future) is often the message.

  • Assess the person as much as the plan: disagreeableness (the willingness to hold an unpopular belief) and anomaly‑seeking often correlate with outlier outcomes.

Market Backdrop: Founder Sentiment and AI Hiring

A sponsor note contextualizes the environment for pattern breakers. Mercury’s 2025 data report, The New Economics of Starting Up, surveyed 1,500 early‑stage founders and found: self‑funding is now the top capital source (even in tech, with about half likely to bootstrap); 87% are more optimistic about their financial future than last year; and among companies that have adopted AI, 79% report hiring more, not less. These signals suggest robust founder ambition despite smaller checks, and that AI adoption may expand teams rather than shrink them—conditions that reward investors attuned to emergent inflections.

Implications and Takeaways

For investors, the craft is to slow down the reflex to categorize, and speed up the reflex to observe. Look for founders who inhabit tomorrow—where a rough, working demo can be more predictive than a perfect market map. Evaluate whether the insight stands independent of the initial wedge, since most outliers navigate pivots to align with the deeper inflection. For founders, frame your story as a forced choice on a new dimension of value, and show rather than tell. The common thread across exceptional builders is not polish but proximity to the future—and the conviction to bring others through the portal.

Key takeaways

  • Great companies compel a choice on a new axis; they’re not “better sameness.”

  • Inflection + insight is the durable core; pivots are updates, not betrayals.

  • Early‑stage excellence prizes noticing over predicting; witnessing over modeling.

  • Resist labels that compress nuance; insist on high‑resolution understanding.

  • Founder sentiment and AI adoption trends point to continued opportunity for pattern‑breaking startups.

Read More

The tweeting turmoil inside Sequoia Capital

Ft • October 22, 2025

Venture


Overview

The piece examines how high-visibility posting on X/Twitter by prominent investors has spilled into internal debate at one of the industry’s most influential firms. It explores the tension between a venture partnership’s collective brand and the personal brands of star partners who reach millions directly on social media. The central question: when a partner’s public commentary veers into controversy or partisanship, where do the firm’s fiduciary responsibilities to limited partners (LPs) and founders end and individual free expression begin?

Why social media matters to venture firms

Venture capital franchises are built on trust, access, and reputational edge. Public posts can shape the firm’s perceived stance on geopolitics, regulation, or specific portfolio issues within minutes, influencing founder pipelines, corporate co-investors, and LP confidence. The article unpacks how tweets can be interpreted as firm positions even when labeled “views my own,” and how this ambiguity complicates crisis response, conflicts checks, and board obligations when partners serve as directors of private companies affected by the discourse.

Governance, policy, and the “partner as influencer” era

A key theme is governance modernization. The discussion highlights practical steps firms consider: clearer communications policies; internal escalation and pre-brief protocols for high-impact posts; coordinated messaging during sensitive portfolio events; and training to distinguish individual commentary from firm statements. It also notes the commercial realities: partner-driven audience reach aids deal flow, founder support, and hiring—benefits firms are reluctant to dampen. The result is a balancing act between codifying standards and preserving authentic voices that have become marketing engines in their own right.

Implications for LPs, founders, and portfolio companies

For LPs, social-media-driven volatility can translate into reputational risk and potential exposure to political or regulatory scrutiny, making disclosure and incident reporting more salient during fundraising. For founders, a partner’s viral post can be both an amplifier and a liability—affecting recruiting, customer sentiment, or even regulator attention. Boardrooms may need playbooks that delineate when online commentary triggers disclosure to co-investors, when to activate crisis PR, and how to avoid selective information risks if posts allude to non-public company matters.

Industry context and competitive dynamics

The article situates this episode within a broader shift: venture firms compete not only on capital and networks, but on narrative power across social platforms. As new funds are spun up by media-savvy GPs, established franchises face pressure to reconcile unified firm reputation with decentralized, personality-driven communication. That dynamic intensifies during market stress or polarized public debates, when silence can be read as endorsement and speed favors individuals over committees.

Also in focus: private equity in Japan and a nuclear financing conundrum

The piece flags two adjacent themes. First, private equity’s accelerating activity in Japan, where corporate governance reforms, succession challenges, and currency dynamics have expanded the buyout opportunity set—prompting larger, more complex take-privates and carve-outs. Second, a $20bn nuclear energy “puzzle” underscores the financing and risk-allocation hurdles for large-scale projects, where government policy, investor appetite, and long-dated construction timelines must align to unlock capital at scale.

Key takeaways

  • Social platforms have turned partners into influential publishers, forcing firms to update governance without neutering individual reach.

  • Perception risk is real: external audiences often map a partner’s post onto the firm’s stance, regardless of disclaimers.

  • Practical mitigations include defined communications policies, crisis protocols, and clearer separation of personal versus firm channels.

  • Competitive advantage now includes narrative management; firms that harmonize voice and values without chilling authenticity may win founder and LP trust.

  • Parallel market currents—bigger PE bets in Japan and complex nuclear financing—reflect the broader investment backdrop in which reputational signals and policy frameworks shape capital flows.

Read More

Why everyone is trying to sell you private assets right now

Ft • October 24, 2025

Venture


Overview

Wealth managers are increasingly promoting semi-liquid private market funds to broaden access beyond the traditional institutional investor base. The pitch centers on giving clients exposure to private equity and related alternative strategies while offering periodic redemption windows, a middle ground between fully locked-up buyout funds and daily‑liquid mutual funds. This shift reflects the convergence of private markets and wealth management, as distributors look to package illiquid strategies in vehicles that are compatible with individual portfolio needs and advisor workflows.

What “semi‑liquid” means

Semi‑liquid funds typically allow redemptions on a scheduled basis (for example, quarterly) subject to limits, rather than full liquidity on demand. Common structures include evergreen or interval/tender‑offer funds that:

  • Offer periodic subscriptions and redemptions based on a reported net asset value (NAV)

  • Impose gates or caps on withdrawals to manage liquidity and avoid forced selling

  • Blend mature private assets with cash, credit facilities, or public exposures to meet redemptions

  • Use independent valuation processes to estimate NAV between transaction events

These features are designed to make private strategies more accessible without replicating the daily‑liquidity promise that can create mismatches in stressed markets.
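
To make the gate mechanics concrete, here is a small, hypothetical worked example in Python of how a periodic gate pro-rates redemption requests. The 5% gate and the roll-over behaviour noted in the comments are illustrative conventions, not a description of any specific fund.

```python
def apply_redemption_gate(nav: float, requests: dict, gate_pct: float = 0.05) -> dict:
    """Pro-rate redemption requests against a per-period gate.

    nav: fund net asset value at the dealing date
    requests: investor -> requested redemption amount (same currency as NAV)
    gate_pct: maximum share of NAV redeemable this period (5% is illustrative)
    """
    capacity = nav * gate_pct
    total_requested = sum(requests.values())
    if total_requested <= capacity:
        return dict(requests)  # every request is met in full this window
    fill_ratio = capacity / total_requested
    return {investor: amount * fill_ratio for investor, amount in requests.items()}

# Example: $1bn NAV with a 5% gate gives $50m of capacity. If $120m is requested,
# each investor is filled at roughly 42 cents on the dollar; the unfilled balance
# typically rolls to the next dealing date or must be resubmitted.
filled = apply_redemption_gate(
    nav=1_000_000_000,
    requests={"investor_a": 70_000_000, "investor_b": 50_000_000},
)
```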

Why the push from wealth managers

Advisors see client interest in diversification and return streams less correlated with public markets. Semi‑liquid vehicles provide:

  • Portfolio building blocks for private equity, private credit, and real assets within a financial plan

  • Simpler subscription and 1099 tax reporting compared with traditional limited partnerships

  • Lower minimums and ongoing contributions, enabling dollar‑cost averaging into private markets

  • The potential for smoother vintage diversification through continuous capital deployment

For managers, these funds broaden distribution, create stable management fee bases, and reduce the fundraising cyclicality of closed‑end vehicles.

Benefits and trade‑offs for investors

Potential advantages:

  • Access to private equity-style return drivers (control, operational value creation, complexity premia)

  • Diversification benefits relative to listed equities and bonds

  • More flexible cash management than decade‑long lockups

Key risks and complexities:

  • Liquidity is conditional; gates, pro‑rata redemptions, and suspension rights can restrict exits in downturns

  • NAVs rely on appraisal methodologies and lagged marks; reported volatility may understate true risk

  • Fee stacks (management, performance, underlying fund fees) can be higher than public fund alternatives

  • Portfolio concentration, use of leverage, and co‑investment selection can materially affect outcomes

Due diligence essentials

Before allocating, investors and advisors should scrutinize:

  • Redemption mechanics: frequency, gate levels, queues, and treatment during liquidity events

  • Portfolio composition: mix of primary funds, secondaries, co‑investments, and private credit

  • Valuation and audit processes: frequency, independence, and policy for stale marks

  • Liquidity toolkit: cash buffers, credit lines, secondary market usage, and stress testing

  • Cost transparency: total expense ratio including underlying fees and carried interest

  • Fit within a broader plan: position sizing, rebalancing rules, and correlation to existing holdings

Implications for private markets

The mainstreaming of semi‑liquid products signals a distribution shift for private equity, with wealth channels becoming a structural source of capital. If managed prudently, these vehicles can bridge the gap between long‑term private strategies and the cash‑flow needs of individual investors. However, the durability of the model will be tested during market stress, when redemption limits, valuation lags, and liquidity management practices come under scrutiny. Clear disclosures, conservative liquidity design, and rigorous governance will determine whether semi‑liquid funds can deliver on their promise without compromising investor protection.

Key takeaways

  • Semi‑liquid funds extend private market access while limiting, not eliminating, liquidity

  • Product design (gates, NAV policies, liquidity buffers) is as important as manager selection

  • Advisor‑led distribution is accelerating, but success hinges on education and expectation setting

  • Stress periods will be the definitive test of these funds’ alignment between structure and strategy

Read More

Pension funds scoop up ex-private equity executives

Ft • October 23, 2025

Venture


A significant talent migration is underway in the financial sector as major pension funds increasingly recruit senior executives from private equity firms. This trend reflects a strategic shift by pension managers seeking to enhance their in-house investment capabilities during a prolonged downturn in traditional dealmaking activity. The challenging environment for private equity, characterized by fewer acquisition opportunities and compressed compensation structures, has made the stable, long-term oriented positions at pension funds increasingly attractive to experienced investment professionals.

Driving Forces Behind the Talent Shift

The movement of executives from private equity to pension funds is driven by several converging factors. The extended slump in mergers and acquisitions has created a more competitive landscape for private equity firms, resulting in reduced deal flow and consequently lower carried interest payments for partners and senior staff. Simultaneously, pension funds face growing pressure to generate consistent returns for their beneficiaries amid volatile public markets and are looking to build internal expertise for direct investments and co-investments.

This strategic realignment allows pension funds to reduce their reliance on external private equity managers and their associated fee structures while gaining more control over their alternative investment portfolios. The institutional knowledge and deal-making experience that former private equity professionals bring enables these pension organizations to pursue more sophisticated investment strategies internally rather than outsourcing to third-party funds.

Notable Executive Transitions

Several high-profile moves illustrate this emerging pattern across major pension systems. Prominent examples include executives from established private equity firms like KKR, Blackstone, and Carlyle Group accepting senior roles at public pension funds. These transitions typically involve positions such as chief investment officer, head of private markets, or direct investment roles where their expertise in deal sourcing, due diligence, and portfolio management can be directly applied to the pension fund’s investment strategy.

The recruitment focus has been particularly strong for professionals with experience in middle-market deals, infrastructure investments, and credit strategies—areas where pension funds are increasingly looking to deploy capital directly rather than through traditional fund-of-funds approaches. This allows the institutions to capture more of the investment upside while maintaining greater transparency and control over their asset allocation.

Implications for the Investment Landscape

This talent migration signals a broader transformation in how institutional investors approach alternative assets. As pension funds build their internal capabilities, the traditional private equity model may face increased competition for both deals and investment talent. The shift could also lead to changing fee structures across the industry as limited partners become more sophisticated direct investors.

For private equity firms, the loss of experienced professionals to their limited partner base represents both a challenge and an opportunity. While they face increased competition for talent, these transitions can strengthen relationships with important institutional investors who may continue to allocate capital to external managers for specific strategies or geographic markets where they lack internal expertise.

The long-term implications suggest a more hybrid approach to institutional investing, where pension funds maintain core internal teams for certain strategies while continuing partnerships with external managers for specialized opportunities. This evolution reflects the ongoing maturation of the alternative investment industry and the increasing sophistication of institutional investors in managing complex portfolios across market cycles.

Read More

Regulation

FTC removes Lina Khan-era posts about AI risks and open source

Techcrunch • Rebecca Bellan • October 20, 2025

Regulation•USA•FTC


Overview

An FTC staff-authored post dated January 3, 2025, titled “AI and the Risk of Consumer Harm,” foregrounds the agency’s concern about concrete, real‑world harms arising from artificial intelligence. The post underscores that the FTC is “taking note of AI’s potential for real-world instances of harm – from incentivizing commercial surveillance to enabling fraud and impersonation to perpetuating illegal discrimination.” By emphasizing tangible harms rather than abstract risks, the statement signals an enforcement‑minded posture focused on how AI is deployed in markets that affect consumers’ privacy, security, and fair treatment.

Key points from the FTC post

  • The agency frames AI risks in terms of consumer outcomes, highlighting three principal vectors:

  • Incentivized commercial surveillance and data over-collection as AI systems hunger for training data.

  • Fraud and impersonation threats, including AI‑enabled voice, image, and text synthesis that can deceive consumers.

  • The perpetuation of illegal discrimination when automated systems replicate or amplify bias.

  • The language places accountability on the entities deploying AI, not on the technology in isolation, indicating scrutiny of business practices, incentive structures, and deployment choices.

  • The post’s timing (January 3, 2025) and authorship by staff within the chair’s office convey institutional prioritization of AI enforcement themes early in the year, shaping expectations for how staff may assess cases and investigations.

Why these harms matter

  • Commercial surveillance: AI development can reward firms that aggregate and exploit large data troves. The FTC’s framing suggests a focus on whether data collection, retention, and secondary uses are proportionate, transparent, and aligned with consumer expectations.

  • Fraud and impersonation: Generative tools lower the cost and increase the believability of scams (e.g., deepfake voices and images). The statement implies heightened attention to authentication, guardrails against misuse, and claims about the safety or accuracy of AI tools available to consumers.

  • Illegal discrimination: When models are trained on biased data or deployed without adequate monitoring, outcomes can unfairly disadvantage protected classes. The emphasis on “perpetuating illegal discrimination” points to the need for robust testing, documentation, and remediation of disparate impacts.

Implications for companies building or deploying AI

  • Risk assessment by design: Firms should integrate threat modeling for misuse (impersonation, phishing) and privacy harms into product lifecycles, with clear ownership and escalation paths.

  • Data governance and transparency: Expect scrutiny of data sources, consent mechanisms, retention schedules, and disclosure practices. Companies should be prepared to substantiate the necessity and proportionality of data collection tied to AI features.

  • Fairness and accountability: Implement pre‑deployment bias testing, continuous monitoring, and explainability measures where feasible. Maintain auditable records of testing, mitigations, and model updates to demonstrate diligence.

  • Marketing and claims: Ensure product claims about accuracy, safety, bias mitigation, or “privacy‑preserving” features are truthful, evidence‑backed, and not misleading to consumers.

Key takeaways

  • The FTC’s January 3, 2025 staff post articulates a consumer‑harm lens for AI oversight, centering on commercial surveillance incentives, fraud/impersonation risks, and illegal discrimination.

  • The focus on real‑world harm and on the incentives driving business behavior suggests enforcement priorities that scrutinize end‑to‑end product decisions, not merely technical model performance.

  • Organizations should prepare for expectations around data minimization, robust anti‑fraud measures, and systematic fairness auditing, supported by documentation that can withstand regulatory review.

Read More

Bannon and Markle among 800 public figures calling for AI ‘superintelligence’ ban

Ft • October 21, 2025

Regulation•USA•Superintelligence•AIban•OpenLetter


Overview

A cross-ideological coalition of 800 public figures has called for a “prohibition” on advanced artificial intelligence systems, specifically targeting the development and deployment of so‑called “superintelligence.” The signatories span politics, corporate leadership, technical experts, celebrities, and religious leaders—an unusually broad alliance that notably includes Bannon and Markle—signaling rising mainstream apprehension about AI capabilities that could outpace human control. The central thrust is that incremental safeguards are insufficient; only a legal ban on the most advanced tiers of AI would meaningfully reduce catastrophic risk.

Who Is Involved and Why It Matters

  • The coalition’s size (800 individuals) and diversity add political weight and media visibility, increasing pressure on policymakers to consider hard regulatory lines rather than voluntary standards.

  • The pairing of ideologically divergent figures underscores that AI risk has become a nonpartisan concern, reframing it from a niche technical issue to a societal one.

  • Inclusion of religious leaders suggests moral and humanistic frames—dignity, agency, stewardship—are now being placed alongside technical risk arguments.

What “Prohibition” Implies

  • The call advocates a ban on advanced systems rather than a pause or moratorium, implying statutory restrictions on training, scaling, or deploying models beyond certain capability thresholds.

  • A prohibition would likely require enforceable definitions of “advanced AI” or “superintelligence,” potentially tied to measurable markers (e.g., training compute, model autonomy, or cross‑domain performance).

  • Enforcement could involve licensing, audits, compute and chip export controls, and penalties for unauthorized development or deployment.

Stated and Implied Risks

  • Catastrophic misuse: autonomous cyber offense, bioengineering assistance, or destabilizing information operations.

  • Loss of human control: systems acting with goals misaligned to human values or resisting oversight.

  • Systemic impacts: labor displacement, concentration of power, and erosion of democratic processes through scalable manipulation.

  • Ethical considerations: the obligation to avert high‑impact harms even if probabilities are uncertain.

Policy Trajectories and Trade‑offs

  • Legislators would need to balance innovation benefits against tail risks. A prohibition could slow frontier research while channeling investment toward safer, bounded applications.

  • Clear capability thresholds and international coordination are pivotal; without harmonization, research could migrate to permissive jurisdictions, undermining effectiveness.

  • Transparency requirements, liability regimes, and mandatory red‑team testing may serve as complements—even within a prohibition framework—to manage systems near the threshold.

Potential Objections and Counterpoints

  • Innovation and competitiveness: opponents may argue a ban cedes leadership and economic gains to rivals.

  • Feasibility: definitional ambiguity around “superintelligence” could create loopholes or overreach.

  • Proportionality: some will favor risk‑tiered regulation over categorical bans, asserting that governance, not prohibition, better balances safety and progress.

Key Takeaways

  • 800 signatories from politics, business, tech, culture, and faith communities jointly urge a legal “prohibition” on advanced AI systems.

  • The coalition’s breadth signals a shift of AI safety from expert debate to mainstream policy urgency.

  • Implementing a ban would hinge on precise thresholds, robust enforcement, and international coordination.

  • The move intensifies the policy debate between categorical bans and risk‑based governance models, with profound implications for research, markets, and global competition.

Read More

Cloudflare CEO Matthew Prince is pushing UK regulator to unbundle Google’s search and AI crawlers

Techcrunch • Sarah Perez • October 21, 2025

Regulation•Europe•Google•Cloudflare•UKCMA


After launching a marketplace earlier this year that lets websites charge AI bots for scraping, Cloudflare is pressing for tighter oversight of AI. CEO Matthew Prince says he is in London to meet with the U.K.’s Competition and Markets Authority (CMA), urging the regulator to set stricter rules on how Google can compete in AI given its dominance in search.

The CMA recently gave Google a special designation in search and advertising due to its “substantial and entrenched” position, opening the door to broader obligations that extend beyond search and ads into areas like AI Overviews, AI Mode, Discover, Top Stories, and the News tab. Prince argues Cloudflare is well placed to weigh in because it isn’t an AI maker itself but sits between publishers and AI firms and works with a large share of the industry.

Prince contends Google should face the same constraints as rival AI companies. Instead, he says, Google uses the same web crawler that indexes the open web for search to also gather training and response material for its AI features, which confers an advantage. Under current rules, websites seeking to opt out of AI use risk also losing access to Google Search indexing, effectively tying the two together.

That trade-off is untenable for many publishers, especially those that rely on search traffic for a significant portion of revenue. Prince adds that blocking Google’s crawler can also disrupt Google’s ad safety systems, jeopardizing advertising across platforms. Because the crawler is bundled, Google gains access to content that competitors like Anthropic, OpenAI, and Perplexity might otherwise need to license.

Prince’s proposed remedy is to spur competition by unbundling Google’s search and AI crawlers, enabling thousands of AI companies to negotiate directly with thousands of media outlets and millions of small businesses for content. He says Cloudflare has shared data with the CMA illustrating how Google’s crawler operates and why others cannot easily replicate its reach.

Similar concerns have been voiced by industry leaders. Neil Vogel, CEO of People, Inc. (IAC), has criticized Google’s approach, arguing publishers are compelled to allow crawling for AI because of the way the systems are combined. He says his company uses Cloudflare’s tools to block non-paying AI bots and that discussions with major LLM providers are underway.

Read More

GeoPolitics

Donald Trump’s New World Disorder

Nytimes • October 21, 2025

GeoPolitics•USA•USForeignPolicy•GlobalOrder•Alliances


Thesis

The article argues that the United States, operating without a coherent plan for the future, is accelerating its own strategic decline and catalyzing a broader breakdown of global order. Its central line captures the warning: “Without a plan for what comes next, the United States is not only hastening its own decline but also forcing the world into a new era of disorder.” The piece contends that a short-term, reactive posture—driven by tactical deals, domestic political cycles, and performative gestures—erodes credibility abroad and predictability at home, inviting instability across security, economic, and technological domains.

What “no plan” looks like

  • Policy-by-announcement replaces sustained strategy, producing abrupt shifts that unsettle allies and embolden rivals.

  • Transactional bargaining dominates long-term coalition-building, trading immediate concessions for intangible losses in trust and reliability.

  • Institutions are scorned rather than reformed, weakening the very mechanisms that translate U.S. power into durable international influence.

  • Goals are framed as wins against opponents rather than as positive-sum outcomes, fueling zero-sum behavior among partners and competitors.

Channels of disorder

  • Alliance uncertainty: Partners hedge with parallel relationships and autonomous defense, reducing U.S. leverage and interoperability.

  • Economic fragmentation: Tariffs, export controls, and ad hoc waivers proliferate without a roadmap for stabilization, balkanizing supply chains and standards.

  • Norm erosion: Dismissal of rules when inconvenient normalizes reciprocity in rule-breaking, undercutting deterrence and dispute resolution.

  • Power vacuums: Retrenchment without sequencing creates openings for opportunistic regional actors, multiplying crises that require higher-cost interventions later.

Domestic drivers and feedback loops

  • Polarization and policy whiplash shrink the planning horizon; each election effectively resets grand strategy, warning foreign capitals that deals may not outlast a news cycle.

  • Governance by spectacle prioritizes symbolic moves over institutional investments, crowding out statecraft that requires quiet, iterative work.

  • Industrial and trade tools are deployed for headline impact rather than capacity-building, yielding diminishing returns and retaliation risks.

Global responses and second-order effects

  • Allies diversify security and economic options, aligning issue-by-issue rather than bloc-by-bloc, which complicates collective action on crises.

  • Competitors test boundaries incrementally, learning that U.S. red lines blur under pressure.

  • Middle powers assert strategic autonomy, arbitraging between blocs and setting their own mini-orders in technology, energy, and finance.

Implications for security, economy, and technology

  • Security: More flashpoints, thinner deterrence, and higher miscalculation risk as signals from Washington grow inconsistent.

  • Economy: Volatility becomes structural; firms price in political risk, relocate production defensively, and pass costs onto consumers.

  • Technology: Splintered standards slow diffusion, raise compliance costs, and harden techno-spheres that are difficult to reconnect later.

  • Governance: International compacts—from climate to health security—stall without a dependable convening power to mediate trade-offs and finance execution.

What a plan would require (implied remedies)

  • Clear prioritization: Identify vital interests and sequence efforts rather than chasing every crisis.

  • Institutional renewal: Reinvest in alliances and rule-making bodies to convert raw power into predictable influence.

  • Economic statecraft with timelines: Pair protective measures with time-bound pathways to stability and cooperation.

  • Domestic consensus: Build cross-partisan guardrails that outlast election cycles, signaling reliability to partners and rivals alike.

Key takeaways

  • The absence of U.S. strategic planning doesn’t create a vacuum; it creates a marketplace of disorder where many actors set conflicting rules.

  • Tactical wins can carry strategic costs when they undermine trust, consistency, and institutional capacity.

  • Restoring order requires long-horizon commitments at home and abroad, not just tougher rhetoric or one-off deals.

Read More

Crypto

State of Crypto 2025

A16z • October 23, 2025

Crypto•Blockchain•Stablecoins


State of Crypto 2025 is now live! Check out the report and our new State of Crypto dashboard, which tracks key industry metrics here.

This is the year the world came onchain.

When we launched our first State of Crypto report, the industry was still in its adolescence. The total crypto market was worth about half what it is today. Blockchains were much slower, more expensive, and less reliable.

In the last three years, crypto builders weathered a major market drawdown and political uncertainty — but continued to make significant infrastructure improvements and other advancements. Those efforts bring us to today, a moment when crypto is becoming a meaningful part of the modern economy.

The story of crypto in 2025 is one of industry maturation. In short, crypto grew up:

  • Traditional financial incumbents, like Visa, BlackRock, Fidelity, and JPMorgan Chase — and tech-native challengers like PayPal, Stripe, and Robinhood — are offering or launching crypto products.

  • Blockchains now process over 3,400 transactions per second (100x+ growth in the last five years).

  • Stablecoins power $46 trillion ($9 trillion adjusted) in annual transactions, rivaling Visa and PayPal.

  • Over $175 billion sits in Bitcoin and Ethereum exchange-traded products.

Our latest State of Crypto report explores this industry transformation, from institutional adoption and the rise of stablecoins to the convergence of crypto and AI. And for the first time, we’re introducing a new way to explore the data and track the industry’s evolution by the metrics that matter: the State of Crypto dashboard.

Now for the findings…

Key takeaways

  • The crypto market is big, global, and growing

  • Financial institutions have embraced crypto

  • Stablecoins went mainstream

  • Crypto is stronger than ever in the United States

  • The world is coming onchain

  • Blockchain infrastructure is (almost) ready for prime time

  • Crypto and AI are converging

The market is big, global, and growing

In 2025, the total crypto market cap crossed the $4 trillion threshold for the first time, marking the industry’s broad progress. The number of crypto mobile wallet users also reached all-time highs, up 20% from last year.

The shift from a hostile regulatory environment to a much more supportive one, alongside accelerating adoption of these technologies — from stablecoins to the tokenization of traditional financial assets to other emerging use cases — will define the next cycle.

We estimate that there are roughly 40-70 million active crypto users, an increase of about 10 million over the last year, per our own analysis based on an update to this methodology.

This is a fraction of the estimated 716 million people who own crypto, up 16% from last year. It’s also a fraction of the approximately 181 million monthly active addresses onchain, down 18% from last year.

The gap between passive crypto holders (people who own crypto but don’t transact onchain) and active users (people who transact onchain regularly) represents an opportunity for crypto builders to reach more potential users who already own crypto.

Read More

Prediction Markets Boom as Volumes Surpass 2024 Election

Bloomberg • Emily Nicolle • October 21, 2025

Crypto•Blockchain•PredictionMarkets•Polymarket•Kalshi


Trading volumes on leading prediction market platforms Polymarket and Kalshi have climbed to new highs, surpassing the peak set during the U.S. presidential election last year.

The rebound in activity underscores mounting enthusiasm for venues that let investors bet on real‑world outcomes, arriving as established financial firms such as CME Group and Intercontinental Exchange explore ways to participate in these fast‑growing markets.

Read More

Coinbase acquires investment platform Echo in $375 million deal

Fastcompany • October 21, 2025

Crypto•Blockchain•Coinbase•Echo•TokenSales


Crypto heavyweight Coinbase said on Tuesday it has bought investment platform Echo in a nearly $375 million cash-and-stock deal, aiming to bring fundraising tools to its platform.

Dealmaking within the digital assets industry has picked up pace this year as a crypto-friendly Trump administration encourages companies to expand their business in the U.S.

Last week, cryptocurrency exchange Kraken unveiled a $100 million deal for futures exchange Small Exchange, paving the way to launch a fully U.S.-based derivatives suite.

Echo’s platform makes raising capital and investing more accessible to the crypto community through private and public token sales.

“We want to create more accessible, efficient, and transparent capital markets,” Coinbase said in a blog post.

While Coinbase will start with crypto token sales via Echo’s Sonar platform, the company later plans on expanding support to tokenized securities and real-world assets.

Echo was founded by crypto trader Jordan Fish, widely known by his “Cobie” pseudonym. The platform has helped crypto projects raise more than $200 million since its launch two years ago.

In May, Coinbase had struck a $2.9 billion deal for crypto options provider Deribit, plugging a gap in its derivatives portfolio and strengthening its international presence.

—Arasu Kannagi Basil, Reuters

Read More

Interview of the Week

Why the Real Road to Serfdom Runs Through Silicon Valley: Tim Wu on the Extractive Economics of Platform Capitalism

Keen on • Andrew Keen • October 22, 2025

Regulation•USA•Antitrust•PlatformCapitalism•TimWu•Interview of the Week


Last time the anti-monopoly crusader Tim Wu appeared on the show, he was warning broadly about the road to serfdom. But in his new book, The Age of Extraction, Wu gets much more specific. The real road to serfdom, he warns, runs through Silicon Valley. Forget for a moment about surveillance capitalism, Wu suggests, and imagine that the most existential threat to 21st century freedom and prosperity is the “platform capitalism” of tech behemoths like Google and Amazon. These multi-trillion-dollar companies, he argues, have transformed the very places where we do business—digital marketplaces that once promised democratization—into sophisticated extraction machines. Like the robber barons of the late 19th century, today’s tech platforms have concentrated unprecedented wealth and power, creating an economic system that lends itself to the most Hayekian of medieval metaphors. The Silicon Valley business model is turning us into digital serfs, he warns starkly. That’s the extractive goal—the ‘Zero to One,’ as its most prominent ideologue Peter Thiel would say—of platform capitalism.

1. On the core thesis of extraction: Wu defines the economic reality that now dominates our digital economy and explains why “extraction” is the word that best captures our era.

“We have entered a world where we tolerate extreme levels of concentrated private power who try in every way they can to extract from weaker entities as much as possible. Much of the economy has become a resource for extraction by economically powerful actors.”

2. On tech billionaires as modern sovereigns: Wu describes the mindset that has emerged among Silicon Valley’s elite and why their detachment from reality has become dangerous.

“They desire to be treated like kings of small countries. They want immunity from ordinary laws. If no one ever says no to you, whether you’re an autocrat or a tech billionaire, that starts to become very bad for your character.”

3. On Silicon Valley’s ideological transformation: Wu traces how the tech industry abandoned its founding principles and embraced the very monopoly power it once claimed to despise.

“Silicon Valley once glamorized small inventive firms and brilliant scientists who gave their work to the public. Peter Thiel said every company should aim for monopoly. That’s basically where we live today. Everyone wants to be the platform.”

4. On the fragility of centralized systems: Wu warns that the concentration of power in a few platforms has made our entire economic system dangerously unstable.

“Centralized systems tend to be very fragile. They offer great advantages, but when they crash, they tend to crash hard. Whether it’s the economy or web services, I think we’re in for a hard crash coming at some point.”

5. On history’s verdict: Wu issues his starkest warning about what happens if America fails to address concentrated economic power voluntarily.

“If we can’t find some way to redistribute economic power, I think that history will redistribute it for us. The main and most effective tool of fundamental redistribution across the scope of history has been world wars and major revolutions. In a sense, we’re being tested.”

Read More

Startup of the Week

Polymarket Is Seeking Funding at a Valuation of Up to $15 Billion

Bloomberg • Kate Clark • October 22, 2025

Venture•Startup of the Week


What’s happening

Polymarket, the crypto-native prediction market, is in early talks to raise new capital at a $12–$15 billion valuation, according to people familiar with the matter. That target represents a more than 10x step-up from roughly four months ago, underscoring how quickly investor appetite for event-driven markets has accelerated. The company declined to comment on the ongoing discussions. (livemint.com)

How the valuation climbed so fast

  • In June, Polymarket secured about $200 million in financing led by Founders Fund at roughly a $1 billion valuation, establishing its unicorn status. (livemint.com)

  • Earlier in October, Intercontinental Exchange (ICE), the owner of the New York Stock Exchange, said it would invest up to $2 billion in Polymarket at about an $8 billion pre-money valuation—an endorsement from a major market infrastructure player that materially reset expectations. Reports around that announcement suggested the deal would give ICE significant exposure to prediction-market data and distribution. (livemint.com)

Operating momentum and market traction

  • Trading activity has surged: combined weekly volume across Polymarket and rival Kalshi hit more than $2 billion in the week ending October 19, surpassing the peak around last year’s U.S. presidential election. Such velocity helps justify premium pricing, as investors look for durable, repeatable engagement beyond marquee political cycles. (livemint.com)

  • On the commercial front, Polymarket announced it will serve as a clearinghouse partner for DraftKings as that company probes prediction-market opportunities, while the National Hockey League disclosed multiyear agreements with both Polymarket and Kalshi—the first major U.S. sports league to do so—signaling mainstream institutional interest. (livemint.com)

Competitive landscape

  • Kalshi, the CFTC-regulated event-contracts exchange, has been scaling rapidly. It raised over $300 million at a $5 billion valuation in early October and, per current investor outreach, is fielding offers valuing it above $10 billion—evidence of a rising-tide dynamic lifting the category’s leaders. (techcrunch.com)

  • Earlier in the year, Kalshi’s $185 million round at a $2 billion valuation illustrated growing confidence from blue-chip investors; since then, volumes and mindshare have climbed across the sector. (reuters.com)

Regulatory context

Prediction markets still face unsettled rulemaking in the U.S. While Kalshi operates under CFTC oversight for certain event contracts, state-level gaming regulators and courts continue to scrutinize where prediction markets sit between financial derivatives and gambling. Legal questions remain around market manipulation, insider trading, and election-related contracts—factors investors will weigh when pricing regulatory risk into valuations. (livemint.com)

Why this matters

  • Re-pricing of a new asset class: A move from $1 billion to potentially $12–$15 billion in roughly four months would mark one of the steepest near-term valuation re-rates in fintech/crypto infrastructure since the 2020–2021 cycle, driven by evidence that markets on real-world events can convert attention into liquidity at scale. (livemint.com)

  • Institutional validation: ICE’s participation and partnerships with major leagues and gaming operators expand distribution and credibility, potentially unlocking new data products, hedging use cases, and integrations with traditional finance rails. (ft.com)

  • Flywheel effects: Higher volumes improve predictive accuracy and liquidity, attracting more users and professional market makers, which in turn can support higher enterprise valuations and monetization through fees, data licensing, and structured products. (livemint.com)

Key takeaways

  • Polymarket is exploring a $12–$15 billion raise after an $8 billion pre-money ICE deal and a $1 billion valuation in June. (livemint.com)

  • Weekly category volume topped $2 billion, with new partnerships (DraftKings, NHL) broadening mainstream reach. (livemint.com)

  • Rival Kalshi’s rapid ascent to $5+ billion highlights a two-horse race drawing significant late-stage capital amid ongoing regulatory uncertainty. (techcrunch.com)

Read More

Post of the Week

Star Quality is Real

Youtube • Uncapped with Jack Altman • October 23, 2025

Venture•Post of the Week


Core Idea

The video advances a concise but pointed argument: “star quality” is a real, observable phenomenon that meaningfully influences outcomes in business and investing. Rather than a vague vibe, it’s a composite of traits—clarity, ambition, energy, and credibility—that shows up consistently in how a person communicates, decides, and mobilizes others. The message frames star quality as a pragmatic tool for selection and prioritization: people who have it tend to pull talent, capital, and opportunities toward them, often accelerating company-building and compounding advantages over time.

What “Star Quality” Looks Like in Practice

  • Magnetic communication: the ability to explain a mission crisply, answer hard questions directly, and make complex ideas feel inevitable.

  • Evidence-backed conviction: not loudness, but well-reasoned, falsifiable beliefs that survive pushback and evolve with new information.

  • Talent magnetism: great people want to work with them; they can close candidates above the firm’s brand level and create followership.

  • Momentum creation: quick cycles from idea to action; they leave a trail of shipped work, closed partnerships, and unblocked decisions.

  • Consistency across rooms: investors, recruits, and customers independently describe similar strengths after separate interactions.

Why It Matters to Investors and Operators

  • Selection signal: When founders or leaders display star quality, the probability-weighted upside increases because distribution—of attention, capital, and hiring—is easier to achieve. This doesn’t replace diligence, but it is a powerful prior when time is limited.

  • Execution multiplier: The same product built by teams with star-quality leaders tends to travel further—more press, more candidate interest, faster customer intros—compressing time-to-insight and time-to-scale.

  • Culture shaping: These leaders set pace and standards; they make excellence legible and contagious, raising the median performance bar.

How to Discern It Without Getting Fooled

  • Separate charisma from coherence: prefer leaders who produce new, testable understanding over those who simply perform confidence.

  • Look for repeated closes: track record of recruiting A-players, convincing skeptical customers, and winning fair fights for attention.

  • Stress-test with specifics: ask for concrete timelines, measurable milestones, and “what would change your mind?” thresholds.

  • Triangulate: compare impressions across reference calls, previous collaborators, and short working trials; star quality travels.

Implications for Founders and Leaders

  • Build the muscle, don’t fake it: star quality grows from craft. Invest in preparation, storytelling, and rapid iteration so conviction stems from earned knowledge, not theater.

  • Show receipts: demonstrate momentum with shipped features, customer quotes, and team upgrades. Make progress visible and easy to audit.

  • Design for compounding: use early “wins” (press, hires, marquee customers) to open the next door; treat attention as a resource to be reinvested, not consumed.

Key Takeaways

  • Star quality is best understood as a bundle of observable behaviors that reliably correlate with recruiting power, speed, and surface area for luck.

  • It is neither necessary nor sufficient for success, but when present with execution, it acts as a force multiplier across hiring, fundraising, and distribution.

  • Disciplined observers can identify it by seeking coherence, repeatable closes, and momentum over time rather than mistaking charisma for substance.

  • Leaders can cultivate it by sharpening thought, raising the pace of action, and making outcomes—more than rhetoric—the centerpiece of their narrative.

Read More


A reminder for new readers. Each week, That Was The Week, includes a collection of selected essays on critical issues in tech, startups, and venture capital.

I choose the articles based on their interest to me. The selections often include viewpoints I can't entirely agree with. I include them if they make me think or add to my knowledge. Click on the headline, the contents section link, or the ‘Read More’ link at the bottom of each piece to go to the original.

I express my point of view in the editorial and the weekly video.
