
AI, Agents & the New Age of Progress: Work, Poverty & the Future

Contents

Editorial:

AI, Agents & the New Age of Progress: Work, Poverty & the Future.


If you read the headlines this week, you might believe the internet is collapsing. Between Fast Company’s warning that AI browsers are “trying to kill” the open web and Niall Ferguson’s dark diagnosis of an “OpenAI House of Cards,” the prevailing mood is one of defensive panic. The narrative is simple: AI is a parasite eating its host, stealing the clicks that fund human creativity, and inflating a financial bubble that will inevitably burst.

It is a compelling story. It is also wrong.

We are not witnessing the destruction of the web, or the collapse of a house of cards, but the beginning of a new age of progress. The link between the AI browser and the end of poverty may seem tenuous, but there is a straight line from the automation of tasks to the end of work, money and poverty, as Elon Musk predicted this week.

The panic over “stolen clicks” misses the much larger structural shift underway: the merger of the internet, the browser, the brain and the real world via robotics. We are moving from a world of passive consumption to one of active delegation and action, a shift that, if managed correctly, doesn’t just save the web’s economics but potentially solves the problem of labor and poverty.

Baby Steps: The Browser Wakes Up

For thirty years, the “browser” has been a dumb window—a pane of glass we look through to find information. But as Tanay Jaipuria notes in “The Rise of Background Agents,” that era is ending. We are transitioning from “chatting with” AI to “assigning tasks to” AI. The future isn’t a smarter search bar; it is a background process that books your travel, refactors your code, and manages your life while you sleep.

This is the “Intelligent action based UI.” In this model, the browser ceases to be a window and becomes an agent. Fast Company worries this will “kill the open web” by removing the need to visit websites. “If an agent can read every review... and buy the product without you ever visiting a website,” they argue, the ad model collapses.

They are right about the ad model, but wrong about the consequence.

This is a micro problem. But it does have a solution. The solution isn’t to force AI to send us back to ad-cluttered pages we hate; it is to integrate an authoritative “links database” into the AI itself.

We need a business model transition where “paid links” and attribution move from the webpage to the interface.

If an AI agent uses a publisher’s review to make a purchase decision, the value transfer should happen between the AI platform and the seller. The web doesn’t die; it just gets a new, more efficient front door.
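To make the idea concrete, here is a hypothetical sketch of interface-level attribution (every name and field is illustrative; no such standard exists today): the agent records which sources informed a purchase and emits an event that the AI platform can settle with the seller and share with publishers.

```python
# Hypothetical sketch of interface-level attribution: when an agent completes a
# purchase, it records which publisher content informed the decision so value
# can be settled between the AI platform, the seller, and the publishers.
# All names and fields here are illustrative, not an existing standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SourceCitation:
    publisher: str        # e.g. a review site whose content the agent relied on
    url: str
    weight: float         # the agent's estimate of how much this source influenced the decision

@dataclass
class AttributionEvent:
    seller: str
    order_value: float
    sources: list[SourceCitation]
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def payouts(self, referral_rate: float = 0.03) -> dict[str, float]:
        """Split a referral fee across sources in proportion to their influence."""
        pool = self.order_value * referral_rate
        total = sum(s.weight for s in self.sources) or 1.0
        return {s.publisher: round(pool * s.weight / total, 2) for s in self.sources}

event = AttributionEvent(
    seller="example-seller",
    order_value=120.0,
    sources=[SourceCitation("review-site-a", "https://example.com/review", 0.7),
             SourceCitation("review-site-b", "https://example.com/roundup", 0.3)],
)
print(event.payouts())  # {'review-site-a': 2.52, 'review-site-b': 1.08}
```

Under this kind of scheme the publisher is paid at the interface, where the decision is actually made, rather than via an ad impression on a page nobody visits.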

But the implications of AI agents carrying out tasks are much larger, especially once they become embedded in robotics.

Bigger Steps: The Economics of Optional Work

If we solve the interface problem, we unlock the economic promise that Elon Musk hinted at this week. In a viral clip, Musk argued that in a future of abundant intelligence and robotics, “work will be optional” and “currency becomes irrelevant.”

It is easy to dismiss this as sci-fi hyperbole, but the economic logic is sound. If the marginal cost of intelligence (via Gemini 3.0) and the marginal cost of labor (via humanoid robotics) trend toward zero, the cost of goods and services must follow. We are entering an era of deflationary abundance.

However, the “end of work” shouldn’t mean the end of purpose. As David Friedberg argued in a sharp rebuttal to Rep. Ro Khanna, trying to protect existing jobs by stalling technology is a trap. “If you had done this with the emergence of the tractor to protect loss of jobs,” Friedberg notes, we would still be subsistence farmers. The goal isn’t to preserve the drudgery of the present, but to automate it away so that human labor becomes a choice, not a survival mechanism.

Problem to Solve: The Policy Gap

This utopian outcome—a world where the internet supports billions of intelligent agents and work is a hobby—is all but inevitable. But who benefits from it is a policy choice.

Without a new framework, the “Background Agents” that own our attention will be owned by a tiny oligopoly—a fear reinforced by Microsoft and Nvidia’s massive $15 billion investment in Anthropic. If we allow the “rich to get richer” without structural reform, the abundance of AI will be hoarded, not shared.

This is where Peter Leyden’s argument for “A New Progressive Era” becomes the most critical piece of the puzzle. Leyden compares our current moment to the Gilded Age—a time of terrifying inequality and rapid technological change that eventually birthed the middle class through aggressive political reform. He wants a new kind of state and government to realize this potential, whereas I am a sceptic on trusting Government (with a big G) to deliver it. But we agree on the opportunity.

We are at that same crossroads. The technology to liberate us from toil is arriving. The browser is evolving into a tool of immense power, but that is a baby step. The question for 2026 is not “Will the bubble burst?” but “Will we build the rules to ensure abundance is distributed?”

The web can be saved. Work can be optional. Poverty can be a thing of the past. But only if we stop panicking about preserving the past and start legislating for the future. For that we need a bottom-up desire for the things Elon Musk articulated this week (see the post of the week for that).

Essay

A New Progressive Era Is Emerging

Peterleyden • Peter Leyden • November 19, 2025

Essay•GeoPolitics•Progressive Era•Artificial Intelligence•US Politics


A wave of new general-purpose technologies transforming America. The entrepreneurs and investors behind the technologies capturing vast amounts of wealth. Tech titans exerting unprecedented power in politics amid the increasing corruption of government. The nation experiencing mounting income inequality and the rise of populists on the right and left.

Many might say that’s a good description of America today, but it could also describe the country in the late-19th century, sometime around 1895. That was the high point of what Americans still call “The Gilded Age of the Robber Barons,” industrialists who amassed spectacular fortunes around the general-purpose technologies of their time, including electricity. The most powerful of them, such as those who controlled the vast networks of railroads, also controlled many elected officials at all levels of government. This helped ensure they got what they wanted, even if it was against the interests of the masses, leading to the rise of angry movements on the right and left in the forms of rural prairie populists and urban socialists.

But America at the turn of the 20th century did not devolve into an authoritarian plutocracy as many feared. Nor did the populists on the right or left ever amass the power needed to transform America along the lines of their more extreme visions.

What actually happened? Intellectuals and educated professionals, upper-middle-class elites from both the Republican and Democratic parties, thinkers and doers from left-of-center and right-of-center on the political spectrum, ignited a reform movement that, over the next 25 years, transformed how America’s economy and society worked.

What started as an elite endeavor of big ideas quickly attracted broad-based support from the mainstream middle and working classes. With a roughly 60/40 majority, the coalition was able to drive many structural changes in America, including fundamental amendments to the U.S. Constitution.

We now call this period of great reform from 1895 to 1920 — a time that remade America in general and its urban areas in particular — The Progressive Era.

I think a strong case can be made that America today could be entering a similar era of structural reform and great progress — one that may be eventually seen as The 21st-century Progressive Era.

Despite widespread fears from left-of-center, America will not devolve into a right-wing autocracy controlled by a plutocracy of billionaires with a nod from the tech titans — even given MAGA and President Donald Trump’s authoritarian tendencies.

Despite widespread fears from the right-of-center, America won’t fall under the control of the far left, with old-school socialism and big government bureaucrats at the helm. That’s even more of a distant dream.

I think the most probable outcome of our current juncture will be the emergence of a new majority of smart, practical, common-sense Americans. They will embrace the realities of powerful new general-purpose technologies like artificial intelligence, while recognizing the need to restructure the economy and reform society to ensure the techs’ benefits are shared by all.

We’re still in the early stages, but a realignment within politics and the acceleration of great progress may well be in our near future.

Read More

New York Is An Industry Town

Digitalnative • Rex Woodbury • November 19, 2025

Essay•AI•New York City•Applied AI•Startup Ecosystems

New York Is An Industry Town


San Francisco and New York are the two biggest startup hubs in the world.

The Bay Area ranks first, with ~25% of U.S. venture deals (measured as a percent of the total number of deals). New York follows with ~14% of deals last year. Here’s a breakdown by sector:

This is U.S. data, but we see the same trend globally, with SF and NYC meaningfully outpacing 3rd-place Beijing. For many years, we’ve had a clear 1 and a clear 2.

This gets at a familiar debate in tech: NYC vs. SF.

I lived in the Bay for five years before moving back to New York a few years ago. The right answer is: both cities are great places to build startups! There’s a reason they rank 1 and 2, and they probably will for years to come.

AI has been a deus ex machina for San Francisco, which saw a pandemic exodus that spurred lots of hand-wringing and “SF is dead” proclamations. Those proclamations were always overblown; SF never lost its seat as the nucleus of technology.

But New York has boomed in recent years, and I think its star is only rising. The Bay dominates for foundation models and infrastructure companies. But New York is perfectly suited to applied AI. Why? Well, because New York is an industry town.

You could make the argument that New York is a (the?) global epicenter for a dozen major industries. To take 10 examples:

Let’s tick through New York industries and look at opportunities for reinvention.

Read More

Presentations — Benedict Evans

LinkedIn • Keith Teare • November 19, 2025

LinkedIn•Essay

Presentations — Benedict Evans

Source: LinkedIn


Twice a year, I produce a big presentation exploring macro and strategic trends in the tech industry.

2025+autumn+ai
18.7MB ∙ PDF file

Read more on LinkedIn

“You don’t need to learn to code” = BAD ADVICE

Youtube • 20VC with Harry Stebbings • November 17, 2025

Essay•AI•Coding•Startups•Founders


Core Argument

The content presents a concise argument that the common advice “you don’t need to learn to code” is misleading, especially for ambitious people working in technology, startups, or product-building environments. The central thesis is that while not everyone needs to become a professional software engineer, understanding how to code confers a significant strategic edge in terms of creativity, execution speed, and credibility with technical teams. Coding is framed less as a narrow technical specialty and more as a foundational literacy for modern builders and operators, akin to being able to read a balance sheet in business or use spreadsheets in finance. The message targets non-technical founders, operators, and aspiring entrepreneurs who might otherwise rationalize away the effort required to develop technical fluency.

Why Coding Matters Beyond Engineering Roles

  • Coding is portrayed as a force multiplier for anyone who wants to build products, automate workflows, or experiment quickly.

  • Even a basic ability to write scripts, prototype interfaces, or manipulate data can eliminate friction, reduce dependency on others, and accelerate iteration cycles.

  • Technical fluency enables better communication with engineers: asking for realistic timelines, understanding tradeoffs, and scoping work in ways that are implementable.

  • Rather than being treated as optional, coding is positioned as a skill that expands the surface area of what an individual can do on their own, especially in early-stage environments where resources are limited.

Reframing “You Don’t Need to Learn to Code”

  • The statement “you don’t need to learn to code” is critiqued as comforting but ultimately disempowering advice for people who want to be builders.

  • The underlying implication is that this phrase often becomes a convenient excuse to stay in a comfort zone of purely non-technical tasks, relying on others for core product execution.

  • The content suggests that while no one is literally forced to learn to code, those who do will be able to see more opportunities, validate ideas faster, and make more informed decisions.

  • In a world increasingly shaped by software, not learning to code is framed as voluntarily giving up leverage.

Impact on Founders and Operators

  • For founders, knowing how to code—even modestly—can:

      • Help them build the first version of a product themselves.

      • Make them more credible to high-caliber engineers, who respect leaders that understand the work.

      • Reduce early hiring pressure and extend runway by delaying the need for a large technical team.

  • For operators (in roles like growth, ops, or product), coding skills unlock:

      • Automation of repetitive workflows.

      • Custom internal tools tailored to the team’s actual needs.

      • Data-driven decision-making through simple scripts or dashboards rather than waiting on engineering resources.

  • The overarching impact is that individuals with coding skills can move from “asking others to do things” to “directly changing the product or system” themselves.

Broader Implications

  • Coding is implicitly framed as a modern career hedge: as automation, AI, and software increasingly shape industries, being able to interact with these systems at a technical level makes one more resilient and adaptable.

  • The message also hints at a cultural shift: in tech-centric ecosystems, the distinction between “technical” and “non-technical” is eroding, and those who bridge the gap gain outsized influence.

  • By challenging the idea that coding is optional, the content encourages a mindset of ownership—if you want to build, you should embrace the skills that let you build directly, rather than outsourcing the core of your creative power.

Key Takeaways

  • Coding is not only for engineers; it is a leverage skill for anyone serious about building products or companies.

  • The comforting narrative that you can succeed in tech without any coding knowledge is challenged as bad advice for ambitious builders.

  • Even basic technical proficiency dramatically increases speed, autonomy, and credibility in early-stage and high-growth environments.

  • In a software-driven world, choosing not to learn to code is effectively choosing to limit your scope of impact and control over what you create.

Read More

Student Debt as Modern American Serfdom: A Mother Stole $200,000 in Her Daughter’s Name

Keenon • November 18, 2025

Essay•Education•Student Debt•Bankruptcy•Debt Collective


It’s the ultimate financial nightmare. Kristin Collier, a young student in Minnesota, woke up one morning to discover that her mother had taken out $200,000 in Kristin’s name. Collier tells this story in What Debt Demands, a book about America’s student debt crisis that is both personal and political. Collier, who proudly defines herself as a “democratic socialist”, believes that student debt is a form of modern American serfdom. So what to do? She argues for massive debt cancellation, free public higher education funded by taxes on stock trades, and restoring bankruptcy protections that existed before 2005. But with the average American now carrying $105,000 in debt and one in four households living paycheck to paycheck, can any political initiative—a Mamdani democratic socialist style or otherwise—actually address this crisis before it triggers a nightmarish financial crisis in the broader economy?

  1. Student Debt Has Become Inescapable Serfdom. Since 2005, student loans—both federal and private—are nearly impossible to discharge through bankruptcy. Borrowers must meet an “undue hardship” standard so stringent that people are literally having their Social Security payments garnished in retirement to pay off loans taken out at age 20. Unlike mortgages or credit card debt, education debt follows you for life.

  2. Private Student Lenders Operate Like Subprime Mortgage Predators. During the mid-2000s, banks offered “direct consumer private loans” up to $30,000 with no school certification required, transferred straight to bank accounts, with interest rates of 10-12%. A $30,000 loan could balloon to $100,000 (see the quick compound-interest check after this list). Collier’s mother was able to take out eight separate loans totaling $200,000 using only a Social Security number and forged signature—the system had no safeguards because lenders prioritized profit over verification.

  3. Biden’s Big Moves Failed, But Smaller Wins Succeeded. Biden’s signature executive action to cancel $10,000-$20,000 in federal student debt (which would have freed 20 million borrowers) was blocked by courts, as was his generous SAVE income-driven repayment plan. However, his reforms to Public Service Loan Forgiveness, existing income-driven repayment programs, and borrower defense protections have canceled billions in debt—demonstrating that incremental administrative changes work better than bold executive action in our current legal landscape.

  4. The Debt Crisis Extends Far Beyond Students. With average American consumer debt at $105,000 and one in four households living paycheck to paycheck, we’re potentially heading toward systemic economic collapse. The issue isn’t just student loans—it’s medical debt, rental debt, and a broader affordability crisis. Collier’s organization, the Debt Collective (born from Occupy Wall Street), treats this as a collective action problem requiring a union of debtors across all categories.

  5. Debt Creates Psychological Haunting, Not Just Financial Burden. Collier describes debt as both “presence and absence”—a constant bodily heaviness and dread. She feared her credit card would be rejected at grocery stores, dreaded checking her bank account, assumed every unknown phone number was a debt collector. This shame is culturally reinforced: Americans are taught that unpayable debt reflects personal moral failure, even when the system itself is predatory. One borrower told her he avoided dating entirely because he was too ashamed to reveal his debt burden.
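The $30,000-to-$100,000 claim in point 2 is easy to sanity-check. A minimal sketch, assuming interest capitalizes annually at 12% with no payments made (illustrative assumptions only, not the terms of any specific loan):

```python
# Minimal sketch: how long an unpaid $30,000 loan at 12% annual interest,
# with interest capitalizing and no payments made, takes to pass $100,000.
# Illustrative assumptions only; real private-loan terms and fees vary.
balance = 30_000.0
rate = 0.12
years = 0
while balance < 100_000:
    balance *= 1 + rate   # unpaid interest capitalizes onto the balance
    years += 1
print(f"~${balance:,.0f} after {years} years")  # ≈ $104,000 after 11 years
```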

Read More

Prediction Markets to Rival Stocks Within Years, Kalshi CEO Says

Bloomberg • November 18, 2025

Essay•Geo Politics•Prediction Markets•Financial Innovation•Market Structure

Prediction Markets to Rival Stocks Within Years, Kalshi CEO Says

Overview of Rapid Growth in Prediction Markets

Prediction markets, platforms where people trade contracts based on the outcomes of future events, are expanding much faster than earlier industry expectations. The key claim is that these markets could grow to rival traditional stock exchanges within a few years. This reflects a broader shift in how individuals and institutions seek to express views on real-world events, manage risk, and access financial instruments tied to politics, economics, and other measurable outcomes.
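To make the mechanics concrete, here is a minimal sketch of binary event-contract math (a generic illustration of $1-payout contracts, ignoring fees; not any specific platform's contract rules):

```python
# Generic sketch of binary event-contract pricing (not any platform's actual rules).
# A contract pays $1 if the event occurs and $0 otherwise; its price, ignoring fees,
# can be read as the market's implied probability of the event.

def implied_probability(price: float) -> float:
    """A $1-payout contract trading at `price` implies roughly that probability."""
    return price

def expected_profit(price: float, believed_probability: float, contracts: int = 100) -> float:
    """Expected profit of buying YES at `price` if your own probability estimate differs."""
    gain_if_yes = 1.0 - price   # profit per contract when the event occurs
    loss_if_no = price          # loss per contract when it does not
    return contracts * (believed_probability * gain_if_yes - (1 - believed_probability) * loss_if_no)

print(implied_probability(0.62))    # 0.62 -> the market implies about a 62% chance
print(expected_profit(0.62, 0.75))  # 13.0 -> positive expected edge if you believe 75%
```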

Key Drivers of Accelerated Expansion

  • The rapid pace of growth suggests surging demand from both retail users and more sophisticated traders who see prediction markets as an alternative or complement to conventional financial assets.

  • Market participants are increasingly attracted by the ability to trade on clearly defined outcomes—such as election results, macroeconomic indicators, or policy decisions—rather than only on the performance of companies or indexes.

  • A faster-than-expected adoption curve implies improving market infrastructure, easier user interfaces, and greater regulatory clarity, all of which help reduce friction for new users and capital inflows.

Comparison to Traditional Stock Exchanges

  • The central argument is that prediction markets could reach a scale comparable to stock exchanges in a relatively short period of time. That means not just niche speculation, but significant volumes of capital and liquidity.

  • While stock exchanges focus on ownership stakes in companies, prediction markets provide direct exposure to “event risk.” As these markets deepen, they may begin to serve similar functions to stock markets in terms of price discovery and hedging, but for a wider array of real-world outcomes.

  • If prediction markets do approach the size of stock exchanges, they could become a primary venue for expressing views on macro events, much as equities are for corporate prospects today.

Implications for Finance, Risk Management, and Information

  • For investors and traders, prediction markets offer a new toolset to hedge or speculate on specific events—such as elections, regulatory decisions, or economic releases—that currently must be managed indirectly via equities, bonds, or derivatives.

  • Broader and deeper prediction markets could improve the quality of public information. As more participants trade on their beliefs and information, market prices may become a widely referenced “probability signal” for key events, similar to how stock prices signal expectations about corporate performance.

  • The convergence in scale between prediction markets and stock exchanges would blur traditional boundaries between speculative trading, risk management, and information aggregation, potentially reshaping parts of the financial ecosystem.

Strategic and Regulatory Considerations

  • Rapid growth raises strategic questions for existing financial institutions. Banks, hedge funds, and asset managers may need to integrate prediction markets into their analytics, risk frameworks, or even product offerings.

  • Regulators will likely face pressure to clarify how these markets are classified and supervised—whether as derivatives, gaming, or a new financial category—given their potential impact and the size envisioned in the near future.

  • If prediction markets do rival stock exchanges in a few years, policymakers will need to consider how these platforms influence public expectations and decision-making, particularly for politically sensitive or systemically important events.

Critical Takeaways

  • Prediction markets are expanding at a pace that significantly exceeds prior expectations.

  • Their proponents foresee them operating at a scale comparable to traditional stock exchanges within a relatively short time frame.

  • This trajectory suggests large implications for how markets aggregate information, price event risk, and complement or compete with existing financial infrastructure.

Read More

🔮 How Europe outsourced its future to fear

Exponentialview • November 19, 2025

Essay•Regulation•PrecautionaryPrinciple•Europe•AI


Hi, it’s Azeem, here with a special guest essay.

Europe once stood alongside the United States as a central force shaping global technology and industry. Its relative decline in the digital era is often pinned on regulation and bureaucracy.

But our guest, Brian Williamson – Director at Communications Chambers and a long-time observer of the intersection of technology, economics and policy – argues the deeper issue is a precautionary reflex that treats inaction as the safest choice, even as the costs of standing still rise sharply.

Over to Brian.

If you’re an EV member, jump into the comments and share your perspective.

It’s time to jettison the precautionary principle

“Progress, as was realized early on, inevitably entails risks and costs. But the alternative, then as now, is always worse.” — Joel Mokyr in Progress Isn’t Natural

Europe’s defining instinct today is precaution. On AI, climate, and biotech, the prevailing stance is ‘better safe than sorry’ – enshrined in EU law as the precautionary principle. In a century of rapid technological change, excess precaution can cause more harm than it prevents.

The 2025 Nobel laureates in Economic Sciences, Joel Mokyr, Philippe Aghion, and Peter Howitt, showed that sustained growth depends on societies that welcome technological change and bind science to production; Europe’s precautionary reflex pulls us the other way.

In today’s essay, I’ll trace the principle’s origins, its rise into EU law, the costs of its asymmetric application across energy and innovation, and the case for changing course.

How caution became doctrine

The precautionary principle originated in Germany’s 1970s environmental movement as Vorsorgeprinzip (literally, ‘foresight principle’). It reflected the belief that society should act to prevent environmental harm before scientific certainty existed. Errors are to be avoided altogether.

The German Greens later elevated Vorsorgeprinzip into a political creed, portraying nuclear energy as an intolerable, irreversible risk.

The principle did not remain confined to Germany. It was incorporated at the EU level through the environmental chapter of the 1992 Maastricht Treaty, albeit as a non‑binding provision. By 2000, the European Commission had issued its Communication on the Precautionary Principle, formalizing it as a general doctrine that guides EU risk regulation across environmental, food and health policy.

Caution can cut both ways

Caution may be justified when uncertainty is coupled with the risk of irreversible harm. But harm doesn’t only come from what’s new and uncertain; the status quo can be dangerous too.

In the late 1950s, thalidomide was marketed as a harmless sedative, widely prescribed to pregnant women for nausea and sleep. Early warnings from a few clinicians were dismissed, and the drug’s rapid adoption outpaced proper scrutiny. As a result of thalidomide use, thousands of babies were born with limb malformations and other severe defects across Europe, Canada, Australia, New Zealand and parts of Asia. This forced a reckoning with lax standards and fragmented oversight.

In the US, a single FDA reviewer’s insistence on more data kept the drug off the market – an act of caution that became a model for evidence‑led regulation. In this instance, demanding better evidence was justified.

Irreversible harm can also arise where innovations that have the potential to reduce risk are delayed or prohibited. Germany’s nuclear shutdown is the clearest example. Following the Chernobyl and Fukushima accidents — each involving different reactor designs and, in the latter case, a tsunami — an evidence‑based reassessment of risk would have been reasonable. Instead, these events were used to advance a political drive for nuclear phase‑out which was undertaken without a rigorous evaluation of trade‑offs.

Germany’s zero‑emission share of electricity generation was about 61% in 2024; one industry analysis found that, had nuclear remained, it could have approached 94%. The missing third was largely replaced by coal and gas, which raises CO₂ emissions and has been linked to higher air‑pollution mortality (about 17 life‑years lost per 100,000 people).

In Japan, all nuclear plants were initially shut after Fukushima. Regulators overhauled the rules and approved restarts on a case-by-case basis under new, stringent safety standards. Japan never codified a legalistic ‘precautionary principle’ and has been better able to adapt. Europe often seeks to eliminate uncertainty; Japan manages it.

A deeper problem emerges when caution is applied in a way that systematically favours the status quo, even when doing so delays innovations that could prevent harm.

A Swedish company, I‑Tech AB, developed a marine paint that prevents barnacle formation, which could improve ships’ fuel efficiency and cut emissions. Sixteen years after its initial application for approval, the paint has not been cleared for use in the EU, though it is widely used elsewhere. The EU’s biocides approval timelines are among the longest globally. Evaluations are carried out in isolation rather than comparatively, so new substances are not judged against the risks of existing alternatives. Inaction is rewarded over improvement.

This attitude of precaution has contributed to Europe’s economic lag. Tight ex‑ante rules, low risk tolerance and burdensome approvals are ill‑suited to an economy that must rapidly expand clean energy infrastructure and invest in frontier technologies where China and the United States are racing ahead. The 2024 Draghi Report on European competitiveness recognized that the EU’s regulatory culture is designed for “stability” rather than transformation:

“[W]e claim to favour innovation, but we continue to add regulatory burdens onto European companies, which are especially costly for SMEs and self-defeating for those in the digital sectors. ”

Yet nothing about Europe’s present circumstances is stable. Energy systems are being remade, supply chains redrawn and the technological frontier is advancing at a pace unseen since the Industrial Revolution.

AI and the costs of stagnation

Like nuclear energy, AI may carry risks, but it also holds the potential to dramatically reduce others – and the greater harm may lie in not deploying AI applications rapidly and widely.

This summer, 38 million Indian farmers received AI‑powered rainfall forecasts predicting the onset of the monsoon up to 30 days in advance. For the first time, forecasts were tailored to local conditions and crop plans, helping farmers decide what, when, and how much to plant – and avoid damage and loss.

Read More

AI browsers need the open web. So why are they trying to kill it?

Fastcompany • November 19, 2025

AI•Browsers•OpenWeb•Regulation•DataPrivacy•Essay

AI browsers need the open web. So why are they trying to kill it?

For those of us who earn a living publishing content on the open internet, Amazon’s lawsuit against AI startup Perplexity can seem darkly amusing. Perplexity is among the many AI companies that have spent years extracting value from the internet in exchange for little. Its crawlers have synthesized endless amounts of content from publishers, even working around publishers’ attempts to block this behavior, all so Perplexity can summarize content without having to send traffic to the websites themselves.

Now Perplexity and its rivals are going a step further, with a new wave of AI browsers that can navigate pages automatically. Perplexity has Comet, OpenAI has ChatGPT Atlas, Opera has Neon, and others are on the way. The pitch is that AI “agents” will soon be able to trudge through the web on your behalf, booking your flights, buying your groceries, and shopping on sites like Amazon. Both Perplexity and OpenAI view these browsers as imperative in their goals to build AI “operating systems” that can manage your life.

Amazon, which has a lot to lose if people stop accessing its website directly, is suing to stop that from happening. It’s been trying to block Perplexity, but so far to no avail.

Therein lies the irony: These AI browsers promise a future where you’ll never have to visit a website again, yet that promise depends on having viable websites to crawl through in the first place. Amazon’s lawsuit is a sign that these two goals may be incompatible.

For companies like Perplexity and OpenAI, web browsers are suddenly important because they open the door to content and data that would otherwise be inaccessible. Consider Amazon. If you’re just using ChatGPT’s website, you might ask it to recommend a few Amazon items or summarize a product’s user reviews, but its answers wouldn’t include any personal data from Amazon’s site. By contrast, ChatGPT Atlas and Perplexity Comet can access Amazon exactly as it appears in your own browser window. That means they can crawl through your order history or weigh in on Amazon’s personalized product recommendations.
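To see the mechanism, here is a generic browser-automation sketch using Playwright (the profile path and URL are placeholders): an agent running inside a user's own browser profile inherits their logged-in sessions and can read personalized pages. This is an illustration of the capability described above, not Perplexity's or OpenAI's actual implementation.

```python
# Generic sketch: an agent driving the user's own browser profile can read
# authenticated pages (order history, recommendations) exactly as the user sees them.
# Illustrative only; the profile path and URL are placeholders.
from playwright.sync_api import sync_playwright

PROFILE_DIR = "/path/to/your/browser/profile"   # placeholder: an existing, logged-in profile
ORDERS_URL = "https://www.example.com/orders"   # placeholder for a retailer's order-history page

with sync_playwright() as p:
    # Reusing a persistent profile means existing login cookies come along for the ride.
    context = p.chromium.launch_persistent_context(PROFILE_DIR, headless=True)
    page = context.new_page()
    page.goto(ORDERS_URL)
    text = page.inner_text("body")              # the agent now "sees" the personalized page
    context.close()

print(text[:500])  # from here, a model could summarize or act on the user's order history
```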

Perplexity says these “agentic” browsers make for a better shopping experience, which is why Amazon should embrace them—but Perplexity also stands to benefit in other ways. By understanding things like your order history, personalized recommendations, and all the questions you asked Perplexity’s AI to arrive at a particular product, the company can build a much richer user profile for things like targeted advertising.

“You’ve gone from behavior tracking to psychological modeling,” says Eamonn Maguire, who leads the machine learning team at Proton. “Where you have traditional browsers tracking what you do, AI browsers infer why you do it.”

This isn’t speculation. Perplexity CEO Aravind Srinivas said on the TBPN podcast earlier this year that its browser will enable “hyper-personalized” ads by understanding more about users’ personal lives. “What are the things you’re buying, which hotels are you going to, which restaurants are you going to, what are you spending time browsing, tells us so much more about you,” Srinivas said.

Read More

The hot new investment trend is the ‘Total Portfolio Approach’. Does it work?

Ft • Nangle • November 16, 2025

Essay•Venture


Overview of the “Total Portfolio Approach”

The article introduces the “Total Portfolio Approach” (TPA) as a fashionable but still loosely defined trend in institutional asset allocation. It is presented as an attempt to rethink how large investors — particularly pension funds and sovereign wealth funds — construct their portfolios in a world of lower expected returns, more frequent macro shocks, and increasingly complex alternative assets. Rather than treating each asset class in isolation, the approach aspires to manage all holdings as a single, integrated pool, aligned tightly to an institution’s objectives and risk tolerance, and more responsive to changing market conditions.

Key Features and Ambitions of TPA

  • TPA seeks to move beyond traditional siloed asset allocation (equities vs bonds vs alternatives) toward a holistic risk-and-return view of the entire balance sheet.

  • It places strong emphasis on understanding the underlying drivers of risk (such as equity beta, interest-rate exposure, inflation sensitivity and illiquidity) across all asset classes, rather than just their labels; a minimal sketch of this aggregation follows this list.

  • Proponents argue it enables more dynamic allocation, faster rebalancing, and clearer trade-offs between, for example, liquidity needs, long-term return targets, and tolerance for drawdowns.

  • The approach is often associated with large, sophisticated asset owners that can build internal teams, analytics and governance structures to support it.
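Here is a minimal sketch of that risk-driver aggregation, under simplifying assumptions (linear factor loadings, illustrative weights and numbers, and no claim about any real fund's method):

```python
# Minimal sketch: aggregating factor exposures across asset-class sleeves so the
# whole fund is viewed through common risk drivers rather than asset-class labels.
# Weights and loadings are illustrative, not estimates for any real portfolio.
weights = {"public_equity": 0.40, "bonds": 0.30, "private_equity": 0.20, "real_assets": 0.10}

# Assumed factor loadings per sleeve (equity beta, rate duration, inflation sensitivity, illiquidity).
loadings = {
    "public_equity":  {"equity_beta": 1.00, "rates": 0.10, "inflation": 0.20, "illiquidity": 0.00},
    "bonds":          {"equity_beta": 0.05, "rates": 0.90, "inflation": -0.30, "illiquidity": 0.00},
    "private_equity": {"equity_beta": 1.20, "rates": 0.10, "inflation": 0.20, "illiquidity": 0.80},
    "real_assets":    {"equity_beta": 0.40, "rates": 0.30, "inflation": 0.70, "illiquidity": 0.60},
}

# Total-fund exposure to each factor is the weighted sum across sleeves.
total_exposure = {factor: sum(weights[s] * loadings[s][factor] for s in weights)
                  for factor in next(iter(loadings.values()))}
print(total_exposure)
# e.g. equity_beta ≈ 0.70 — the fund is more equity-like than its 40% public-equity label suggests.
```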

Why It Has Become Popular

The concept has gained traction in the wake of several market and macro developments:

  • A decade-plus of low interest rates and high asset prices has eroded expected returns from traditional 60/40 portfolios, pushing institutions toward alternatives and more complex strategies.

  • Episodes such as the global financial crisis, the Covid shock and inflation spikes have exposed weaknesses in static allocation frameworks and stress-tested liquidity assumptions.

  • Regulators and stakeholders increasingly scrutinize funding gaps, drawdown risks and liquidity profiles, incentivizing asset owners to demonstrate more integrated, risk-based decision-making.

  • Consulting firms and asset managers have promoted TPA as a modern, more “scientific” framework, contributing to its buzz and branding.

Conceptual Strengths but Practical Vagueness

While the article acknowledges the intuitive appeal of looking at the portfolio as a whole, it underscores how fuzzy the concept can be in practice:

  • There is no single, agreed definition of TPA; different institutions and consultants use the label for quite varied practices.

  • In many cases, what is marketed as TPA can be little more than enhanced risk reporting or factor-based decomposition of existing portfolios, without fundamentally changing governance or decision rights.

  • Implementing a true total portfolio framework requires substantial organizational change: clear objectives, risk-budgeting at the total fund level, and centralized decision-making that can override asset-class silos.

  • Without this deep integration, the label risks becoming a buzzword that overpromises and under-delivers.

Governance, Incentives and Human Factors

The article stresses that the main constraints on TPA are less about mathematics and more about institutions and people:

  • Asset owners often have long-established committee structures with separate equity, fixed income and alternatives teams, each defending their domain.

  • Incentive systems, benchmarks and performance evaluations tend to be asset-class specific, which can conflict with total-portfolio optimization.

  • Moving to TPA implies reconfiguring roles, consolidating authority, changing performance metrics and, in some cases, reducing autonomy for individual teams — steps that can be politically difficult.

  • The article suggests that any genuine adoption of TPA must be accompanied by explicit changes in governance and accountability, not just new risk dashboards.

Does It Actually Work?

Evidence for TPA’s superiority remains mixed and somewhat anecdotal:

  • Some high-profile funds that describe themselves as using a TPA-like framework point to more coherent risk management, better alignment with liabilities and improved liquidity planning.

  • However, performance differences are hard to disentangle from other factors such as strategic asset allocation decisions, internal skill and risk appetite.

  • The approach does not free investors from the need to make difficult calls on long-term returns, inflation and correlations; it merely frames those calls differently.

  • The article implies that, far from being a magic bullet, TPA is best viewed as a governance and process upgrade that may or may not lead to better returns, depending on how rigorously it is implemented.

Implications and Takeaways

The central message is that total portfolio thinking is directionally sound — especially for large, complex asset owners — but its effectiveness depends on depth of implementation rather than the label itself. If institutions are willing to overhaul governance, align incentives to total-fund outcomes and invest in robust risk analytics, TPA can help clarify trade-offs, prevent hidden risks and improve resilience. If not, it risks becoming another “buzzy but fuzzy” concept in asset management marketing.

Read More

Bubble?

Nvidia reports strong growth from bumper AI chip sales

Ft • November 19, 2025

AI•Tech•Nvidia•Semiconductors•AI Chips•Bubble?

Nvidia reports strong growth from bumper AI chip sales

Overview

The article focuses on Nvidia’s latest earnings update, highlighting that the company is experiencing powerful growth driven by surging demand for its artificial intelligence (AI) chips. As a key supplier of graphics processing units (GPUs) used to train and run large AI models, Nvidia’s results are presented as a critical indicator of the health, momentum and sustainability of the broader AI boom. The piece frames the company’s financial performance not just as a corporate success story, but as a bellwether for enterprise and cloud investment in AI infrastructure worldwide.

Nvidia as an AI bellwether

  • Nvidia’s earnings are portrayed as a proxy for overall AI infrastructure spending, because its chips underpin data-centre buildouts at major hyperscalers, cloud providers and leading AI labs.

  • Strong sales growth in Nvidia’s data centre segment is interpreted as evidence that companies are accelerating AI deployments rather than pulling back.

  • The article underscores that investors and industry observers closely track Nvidia’s quarterly figures to gauge whether AI spending is broadening beyond early experiments into large-scale, revenue-generating production workloads.

Growth drivers and demand signals

  • The central demand driver is the “bumper” volume of AI chips ordered by cloud platforms, large enterprises and AI start-ups seeking the compute capacity to train frontier models and run inference at scale.

  • The article notes that AI workloads—from generative AI to recommendation engines and analytics—are increasingly concentrated on Nvidia’s GPU platforms, reinforcing its market dominance.

  • Strong earnings are linked to multi-year capital expenditure plans by major tech platforms that are building or expanding AI-optimized data centres, suggesting that the demand pipeline is not merely cyclical but part of a structural shift.

Implications for the AI ecosystem

  • Because Nvidia sits at the core of the AI hardware stack, robust results imply continued funding and confidence across the AI value chain, including model developers, enterprise software vendors and cloud infrastructure providers.

  • The article suggests that Nvidia’s performance can signal whether the AI sector is overheating or entering a more mature, durable growth phase. Strong current numbers hint that customers still see tangible value in AI applications despite concerns about hype and high compute costs.

  • At the same time, the reliance of so many AI players on a single key supplier raises strategic questions about concentration risk, pricing power and potential bottlenecks in chip availability.

Market and strategic considerations

  • The piece implies that Nvidia’s success strengthens its bargaining power with cloud and enterprise customers, potentially affecting pricing, allocation of supply and the pace at which competitors can gain share.

  • It highlights that investors will interpret these earnings as a signal for broader equity markets, particularly tech and semiconductor stocks that are exposed to AI spending cycles.

  • Nvidia’s trajectory may shape how aggressively rivals—including alternative chipmakers and custom in-house accelerators from big tech firms—invest to challenge its dominance in key AI workloads.

Broader economic and technological impact

  • Continued strong AI-chip sales point to AI becoming a foundational layer of digital infrastructure, similar to previous waves of cloud and mobile computing.

  • The article suggests that as organizations race to integrate AI into products and workflows, Nvidia’s chips and accompanying software ecosystem will remain crucial enablers, influencing the speed and scope of AI adoption across industries.

  • Nvidia’s earnings thus resonate beyond the semiconductor sector, offering a snapshot of how quickly AI is moving from experimentation to scaled deployment, and how much capital global companies are willing to commit to this transition.

Key takeaways

  • Nvidia’s robust earnings underscore intense and ongoing demand for AI compute, positioning the company as a central beneficiary of the AI boom.

  • Because Nvidia is deeply embedded in the AI infrastructure stack, its performance serves as a leading indicator for the overall health and direction of the AI sector.

  • The results reinforce the view that AI is in a major investment phase, with significant implications for technology markets, corporate strategy and the pace of AI adoption across the global economy.

Read More

Nvidia’s Strong Results Show AI Fears Are Premature

Wsj • November 20, 2025

AI•Tech•Nvidia•AIChips•MarketSentiment•Bubble?

Nvidia’s Strong Results Show AI Fears Are Premature

Overall Argument and Market Context

The article argues that concerns about a slowdown in artificial-intelligence spending are premature in light of the chip maker’s latest earnings and guidance. Despite a sharp selloff that has dragged down its share price and overall valuation, the company reports that demand for its AI chips remains exceptionally strong and is likely to stay elevated through at least next year. This disconnect between investor fears and the company’s operational performance has left a business worth around $4.5 trillion looking comparatively “cheap” relative to its growth outlook and market position.

Evidence of Strong, Durable AI Demand

  • The company reports that orders for its flagship AI accelerators and related data-center products remain robust, with customers signaling multi-quarter, and in some cases multi-year, deployment plans.

  • Management indicates that hyperscale cloud providers, large enterprises, and emerging AI-native startups all continue to expand their infrastructure buildouts rather than pausing or cutting back.

  • Forward-looking commentary points to a strong demand pipeline “through next year,” suggesting that AI infrastructure spending is not just a short-lived boom but an ongoing investment cycle.

  • The article frames this as evidence that current market pessimism about a looming plateau in AI spending is not yet visible in the company’s actual order book or guidance.

Valuation Reset and “Cheapness” Argument

  • Following recent market volatility and a sector-wide selloff in high-growth technology names, the chip maker’s stock has declined enough to compress its valuation multiples.

  • On metrics such as price-to-earnings or price-to-sales (relative to its growth rate and margin profile), the company is depicted as inexpensive for a business of its scale, profitability, and strategic centrality to the AI ecosystem.

  • The article suggests that investors have priced in a meaningful deceleration in AI spending that is not supported by the company’s reported fundamentals.

  • This gap between perception and reality is presented as an opportunity: a $4.5 trillion business, deeply embedded in one of the most important technology shifts, trading at levels that imply far weaker prospects than its current demand trends suggest.

Implications for AI Cycle and Investor Sentiment

  • The strength of demand through next year undercuts the narrative that AI is a short-term hype cycle already heading toward saturation. Instead, AI infrastructure buildout is characterized as a multi-year, possibly decade-long transformation of data centers and enterprise computing.

  • If the company’s guidance proves accurate, it will likely force investors to reassess assumptions about the longevity and magnitude of the AI spending wave, with potential knock-on effects for other chip makers, cloud providers, and AI software firms.

  • Persistent demand at scale reinforces the company’s position as a central “arms supplier” to the AI revolution, making it difficult for rivals to materially erode its lead in the near term.

  • The article implies that sentiment may be more cyclical than fundamentals: fear-driven selling can temporarily obscure the structural nature of AI investment, but sustained earnings strength will eventually re-anchor valuations.

Key Takeaways and Outlook

  • The core message is that current anxiety about an imminent AI slowdown is not corroborated by this chip maker’s results or outlook.

  • The company’s indication that demand remains strong into next year supports the view that AI infrastructure spending is still in an early to middle phase, not at its peak.

  • Given its market dominance and scale, the company’s experience is a bellwether for the broader AI hardware ecosystem; its strong results suggest that AI adoption and monetization across industries are continuing to advance.

  • For investors, the combination of robust demand and a lower valuation after a selloff is framed as an unusual alignment of long-term opportunity with near-term price weakness.

Read More

Wall Street’s Worried About an AI Bubble. Nvidia Just Delivered an Answer

Bloomberg • November 20, 2025

AI•Tech•Nvidia•AI Bubble•Earnings•Bubble?

Wall Street’s Worried About an AI Bubble. Nvidia Just Delivered an Answer

Market Reaction and AI Bubble Fears

Nvidia’s latest earnings report has temporarily eased concerns that the rapid growth in artificial intelligence represents a speculative bubble rather than a durable technological shift. Strong financial results, driven by intense demand for Nvidia’s AI-focused chips, suggest that revenue is still catching up to the hype rather than collapsing under it. The company’s performance signals that enterprise and cloud customers continue to invest heavily in AI infrastructure, validating expectations that AI will remain a central driver of tech spending in the near term. At the same time, the relief is cautious: investors view this report as a reprieve rather than definitive proof that AI valuations are fully justified over the long run.

Drivers of Demand and Growth Pressures

  • Nvidia’s leadership in graphics processing units (GPUs) places it at the core of the current AI boom, with its hardware powering data centers, training large models, and running inference at scale.

  • Global demand comes from hyperscale cloud providers, big tech platforms, and a growing mix of enterprises seeking to deploy generative AI and advanced analytics.

  • This demand has created enormous expectations for Nvidia’s ability to continuously scale production and innovate newer, more efficient chips to stay ahead of competitors.

The company’s ability to keep pace with worldwide orders has become both its biggest strength and most immediate challenge. Maintaining high growth requires securing manufacturing capacity, managing complex supply chains, and ensuring that software and ecosystem support remain compelling enough to keep customers locked in.

Strategic and Operational Challenges

  • Nvidia must navigate an environment where rivals and alternative architectures (including custom AI chips from large cloud providers) are rapidly emerging.

  • Geopolitical and regulatory factors, including export controls and national security concerns, can constrain where and how Nvidia sells its most advanced products.

  • The pressure to deliver next-generation chips on tight timelines raises risks around execution, costs, and potential product delays.

These factors mean that even with strong current earnings, Nvidia is “not out of the woods.” The company has to balance short-term performance with long-term investments in R&D, software frameworks, and partnerships to maintain its central role in AI infrastructure.

Implications for the Broader AI Narrative

Nvidia’s results are a key barometer for broader sentiment about AI. Solid earnings and sustained demand weaken the argument that the sector is purely speculative, instead indicating that real spending and real workloads are following the hype. However, the article suggests that questions remain about how evenly AI benefits will be distributed across the tech ecosystem, and whether all high-flying valuations tied to AI will ultimately be supported by fundamentals.

For investors and policymakers, Nvidia’s position highlights how concentrated the AI hardware layer currently is, underscoring systemic risks if one company’s supply, technology roadmap, or regulatory environment falters. For companies adopting AI, Nvidia’s trajectory serves as a signal that while the AI buildout is real and ongoing, it is also subject to constraints—capacity, competition, and policy—that could shape the pace and geography of AI deployment.

Key Takeaways

  • Nvidia’s earnings have temporarily soothed Wall Street’s anxiety about an imminent AI bubble burst.

  • Global demand for AI chips remains intense, validating near-term expectations for continued AI infrastructure buildout.

  • The company still faces substantial challenges: supply constraints, competition, and geopolitical and regulatory pressures.

  • Nvidia’s performance is a proxy for the broader health and durability of the AI investment cycle, and its ability to navigate these challenges will heavily influence whether today’s AI boom proves sustainable.

Read More

OpenAI’s House of Cards

Niallferguson • November 18, 2025

Essay•AI•AI Bubble•OpenAI•Financial History


Fans of Dr. Seuss will know by heart the key stanzas of Green Eggs and Ham.

Do you like

green eggs and ham?

I do not like them,

Sam-I-Am.

I do not like

green eggs and ham.

For those who have never had to read a bedtime story, allow me to explain. An irrepressible little creature, Sam-I-Am, spends the entirety of the book pitching green eggs and ham—on the face of it, an unappetizing dish—to a skeptical and increasingly irascible larger creature. With every page, the pitch grows more elaborate. Would you like them on a boat? With a goat? In the rain? On a train? Surely, there must be some context in which green eggs would be appealing fare. By the time Sam prevails, his hapless victim inhabits a scene of chaos.

When you come to think of it, there is often someone called Sam trying to sell you something you don’t initially want. In the 1920s, as I learned from Andrew Ross Sorkin’s 1929: Inside the Greatest Crash in Wall Street History—and How It Shattered a Nation, it was Sam Crowther’s article, “Everybody Ought to Be Rich”—exhorting housewives to buy stocks with margin credit. A few years ago, it was Sam Bankman-Fried with his crypto exchange, FTX. At the height of his fame, Bankman-Fried declared, “I want FTX to be a place where you can do anything you want with your next dollar. You can buy bitcoin. . . . You can buy a banana.” And you could also have bought green eggs and ham—until FTX blew up and Sam landed in prison.

A lot of the applications of generative artificial intelligence remind me of green eggs and ham. Take OpenAI’s Sora 2.0. With a few prompts, you can generate soft-porn videos of scantily clad girl manga elves. This is also one of the ways Elon Musk tries to sell xAI’s Grok. But why would I want to watch such videos, any more than I want to eat green eggs and ham?

Financial history can help us here. If you’re unsure whether there’s an AI bubble, refer to the historian Charles Kindleberger’s five-stage model:

Displacement: Some change in economic circumstances creates new and profitable opportunities for certain companies.

Euphoria or overtrading: A feedback process sets in whereby rising expected profits lead to rapid growth in share prices.

Mania or bubble: The prospect of easy capital gains attracts first-time investors and swindlers eager to defraud them.

Distress: The insiders discern that expected profits cannot possibly justify the now-exorbitant price of the shares and begin to take profits by selling.

Revulsion or discredit: As share prices fall, the outsiders stampede for the exits, causing the bubble to burst altogether.

We are currently at stage 3.

Read More

AI

Google releases Gemini 3.0 model, closes gap on ChatGPT

Youtube • CNBC Television • November 18, 2025

AI•Tech•Gemini


Overview

The segment discusses Google’s release of its Gemini 3.0 AI model and how it alters the competitive dynamics with OpenAI’s ChatGPT. The focus is on whether Gemini 3.0 meaningfully closes the performance and product gap, what new technical capabilities it brings, and how it fits into Google’s broader AI and business strategy. Commentary emphasizes user scale, multimodal and agentic features, and the importance of integration across Google’s ecosystem in challenging ChatGPT’s lead.

Key Features and Technical Advances of Gemini 3.0

  • Gemini 3.0 is positioned as Google’s next major flagship model upgrade over the Gemini 2.x series, aimed squarely at competing with GPT‑5–class systems.

  • The model focuses heavily on:

      • More capable multimodal reasoning across text, code, images, audio, and video.

      • Larger context windows for handling long documents and complex multi‑step tasks.

      • Improved “agentic” behavior: tool use, function calling, and orchestrating workflows rather than just answering prompts (see the sketch after this list).

  • Commentators note that Google has iterated quickly from Gemini 1.5 to 2.0, 2.5, and now 3.0, suggesting a maturing release cadence and a more stable platform for developers.

  • While not framed as a “disruptive” breakthrough, Gemini 3.0 is described as a substantial quality and usability lift that makes the experience feel closer to—if not on par with—top OpenAI models in many everyday tasks.
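To illustrate what the “agentic” bullet above means in practice, here is a schematic tool-use loop (the `model_propose_action` stand-in and the tool registry are hypothetical; this is not the Gemini API itself):

```python
# Schematic sketch of an agentic tool-use loop: the model proposes a function call,
# the host executes it, and the result is fed back for the next step.
# `model_propose_action` stands in for a real model/API call and is hypothetical.
import json

def get_weather(city: str) -> str:
    """Example tool the model is allowed to call."""
    return json.dumps({"city": city, "forecast": "sunny", "high_c": 21})

TOOLS = {"get_weather": get_weather}

def model_propose_action(conversation: list[dict]) -> dict:
    """Placeholder for a model call that returns either a tool call or a final answer."""
    if not any(m["role"] == "tool" for m in conversation):
        return {"type": "tool_call", "name": "get_weather", "args": {"city": "Berlin"}}
    return {"type": "final", "text": "It should be sunny in Berlin, around 21°C."}

conversation = [{"role": "user", "content": "What's the weather in Berlin?"}]
while True:
    action = model_propose_action(conversation)
    if action["type"] == "tool_call":
        result = TOOLS[action["name"]](**action["args"])   # host executes the requested tool
        conversation.append({"role": "tool", "content": result})
    else:
        print(action["text"])
        break
```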

User Scale, Ecosystem, and Business Context

  • Google’s Gemini ecosystem reportedly serves hundreds of millions of users, with management highlighting momentum toward ChatGPT’s scale.

  • Gemini is deeply embedded across:

      • Search and Chrome

      • Android phones and the Gemini app

      • Workspace tools such as Gmail, Docs, Sheets, and Meet

      • Cloud and developer products via the Gemini API and AI Studio

  • This integration allows Google to:

      • Offer a powerful free tier to a massive installed base.

      • Monetize indirectly through search and advertising, as well as cloud usage, rather than relying solely on paid subscriptions.

  • The segment underscores that this business model gives Google financial and distribution advantages, enabling sustained AI investment without the same cash‑burn concerns facing some rivals.

Comparison with ChatGPT and Competitive Dynamics

  • Analysts frame Gemini 3.0 as “closing the gap” with ChatGPT rather than clearly surpassing it across the board.

  • Areas where Gemini 3.0 is seen as particularly competitive or superior include:

  • Tasks that depend on up‑to‑date web search or deep integration with Google services.

  • Workflow and productivity use cases inside Workspace, where it can automate email replies, summarization, and document drafting.

  • ChatGPT still appears ahead in:

  • Brand power and developer mindshare.

  • Certain reasoning and coding benchmarks, depending on the specific OpenAI model used for comparison.

  • The narrative suggests a shift from a one‑horse race to a more balanced duopoly, with Gemini now considered a credible first‑tier option for many enterprise and consumer use cases.

Implications and Outlook

  • For investors, Gemini 3.0 is framed as a strategic response that helps protect Google’s core search and advertising franchise from disruption by independent AI assistants.

  • For developers and enterprises, the launch signals:

  • A more unified, long‑term model family they can build against.

  • Stronger multimodal, long‑context, and agentic capabilities suited for complex applications.

  • For consumers, competition between Gemini 3.0 and ChatGPT is expected to drive:

  • Faster product improvements.

  • More generous free tiers and bundled capabilities, as providers vie for attention and usage.

  • The segment concludes that while OpenAI remains a powerful frontrunner, Gemini 3.0 marks a turning point where Google is no longer seen as a laggard, but as a serious co‑leader in large‑scale generative AI.

Read More

As consumers ditch Google for ChatGPT, Peec AI raises $21M to help brands adapt

Techcrunch • November 17, 2025

AI•Funding•Peec AI•Generative Engine Optimization•AIsearch


With consumers increasingly asking questions of ChatGPT — not Google — product discovery is changing. And the promise to give brands visibility and control over this fast-growing search channel has made Peec AI one of Europe’s hottest startups.

Just four months after its Seed round led by 20VC, the Berlin-based startup has raised a $21 million Series A led by European VC firm Singular. CEO Marius Meiners declined to disclose the valuation, but said it had tripled and was now above $100 million.

This comes after Peec AI grew its annual recurring revenue to more than $4 million in only ten months since its launch, attracting 1,300 companies and agencies to its platform.

These customers use Peec AI to monitor how their brands appear in AI-powered searches. But beyond analytics on visibility and ranking, Peec AI also tracks sentiment — and which sources shape these answers.

These insights are what make Generative Engine Optimization (GEO) possible — a way for marketing teams to optimize their brand’s presence in AI search results, similar to how SEO works for traditional search engines. With this promise, the startup says it is now adding some 300 customers a month, and its new funding will accelerate this growth while also supporting expansion plans.
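
As a rough illustration of what “monitoring how a brand appears in AI-powered searches” involves, the sketch below runs a set of tracked prompts against an assistant and tallies how often, and how early, the brand is mentioned. The `ask_assistant` stub and the metrics are hypothetical stand-ins, not Peec AI’s actual product or API.

```python
# Minimal sketch of a GEO-style visibility check (hypothetical; not Peec AI's API).
def ask_assistant(prompt: str) -> str:
    """Hypothetical stand-in; a real tracker would query ChatGPT, Gemini, etc."""
    return "For running shoes, many people recommend Acme Runners and Brand X."


def visibility_report(brand: str, prompts: list) -> dict:
    """Share of tracked prompts whose answers mention the brand, plus average mention offset."""
    hits, positions = 0, []
    for prompt in prompts:
        answer = ask_assistant(prompt).lower()
        idx = answer.find(brand.lower())
        if idx >= 0:
            hits += 1
            positions.append(idx)
    return {
        "prompts_tracked": len(prompts),
        "visibility_rate": hits / len(prompts) if prompts else 0.0,
        "avg_mention_offset": sum(positions) / len(positions) if positions else None,
    }


print(visibility_report("Acme Runners", ["best running shoes for beginners?"]))
```

A production tool would also classify sentiment and record which sources the assistant cited, per the description above, but the core loop is the same: prompts in, brand mentions counted.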

Thanks to its new round, which was also backed by Antler, Combination VC, identity.vc, and S20, the startup plans to hire some 40 people in the next six months. These roles are mostly based in Berlin, where Meiners met his two cofounders in Antler’s Winter 2024 cohort: Tobias Siwonia is now Peec AI’s CTO, and Daniel Drabo is its CRO.

Expanding fast and being visible may be key to winning in an emerging category that could soon become crowded, with competitors already including New York-based Profound and Austrian startup OtterlyAI.

To help attract more talent, the 20-person startup is currently advertising itself on large outdoor ads throughout Germany’s capital city. But beyond its Berlin plans, Meiners told TechCrunch that Peec AI also plans to open a sales-focused office in New York City in the second quarter of next year.

As more GEO-focused tools become available, and SEO dashboards add AI tracking capabilities, Peec AI hopes to differentiate itself by offering marketing teams a dashboard that expands in scope while remaining simple to use, despite the fast-changing nature of AI searches.

Instead of revolving around keywords like SEO tools, Peec AI’s dashboard centers on prompts for which brands would like to show up well in search results. Customers can track up to 25 prompts for €75 per month ($87), increasing to 100 prompts for €169 per month ($196). Both plans offer free trials, unlike its enterprise offering, which starts from €424 per month ($493).

Read More

TurboTax gets an AI upgrade as Intuit inks major deal with OpenAI

Fastcompany • Taylor Hatmaker • November 18, 2025

AI•Tech•Intuit•OpenAI•ChatGPT

TurboTax gets an AI upgrade as Intuit inks major deal with OpenAI

AI can do your taxes now—sort of.

The tax software giant Intuit just struck a new deal with OpenAI that will weave AI deeply into its portfolio of financial apps, including the ones many Americans use to file their taxes.

In the multiyear deal, Intuit will pay ChatGPT maker OpenAI more than $100 million annually to implement its artificial intelligence models across products like TurboTax, personal finance manager Credit Karma, email marketing platform Mailchimp, and the accounting tool QuickBooks. Through the partnership, Intuit’s products will also become accessible directly through ChatGPT—the latest lucrative business integration for OpenAI.

“We are taking a massive step forward to fuel financial success for consumers and businesses, unlocking growth for both companies,” Intuit CEO Sasan Goodarzi said. “Our partnership combines the power of Intuit’s proprietary financial data, credit models, and AI platform capabilities with OpenAI’s scale and frontier models to give users the financial advantage they need to prosper.”

Intuit owns a big swath of the financial software market, and all of those apps will be popping up in ChatGPT soon to steer users toward personalized recommendations for credit cards and loans and to answer their tax and personal finance questions.

Intuit has been gravitating toward AI for a while now. Late last year, the company introduced AI-powered features into QuickBooks, inviting its users to automate rote, time-consuming tasks like sending invoices. Intuit insisted that it was being intentional about its implementation of AI, particularly given the rush for every business to boast about its AI capabilities.

“The idea is not to just have random sprinkles of AI across the product,” Dave Talach, Intuit senior vice president of the QuickBooks platform, told Fast Company at the time. “We’ve been thoughtful about approaching AI, not just for the sake of AI, but we want it to show up in a cohesive way in the product that is coherent to the customer.”


In June, Intuit released a set of AI agents for QuickBooks designed to get familiar with a company’s business and operations, taking over tasks to speed up bookkeeping and accounting. At the time, Intuit CEO Goodarzi emphasized that the company moved deliberately in building out its AI because missteps and inaccuracies are high stakes for the financial tools its customers rely on. “If it screws up, it’s a big problem,” he told Fast Company.

CHATGPT IS A PLATFORM NOW

OpenAI’s new partnership with Intuit is just the latest third-party integration for ChatGPT. In late September, OpenAI took what it called “first steps toward agentic commerce” with integrations for Shopify and Etsy, and went on to ink a deal with PayPal last month.

OpenAI also recently introduced a developer kit that would open its hit chatbot platform to third-party apps—a major shift for the chatbot that stands to remake the way that its 700 million-plus weekly users find and do things online. ChatGPT’s first wave of apps included Zillow, Spotify, Canva, and Expedia, with apps from DoorDash, Peloton, Uber, and Target in the works.

OpenAI’s recent moves point to the company’s vision of ChatGPT as an all-encompassing hub of utility that gives internet users little reason to go elsewhere. Those decisions coincide with OpenAI’s seismic shift away from its complex nonprofit roots into a more traditional for-profit company, although it technically will remain under the wing of a nonprofit.

Read More

Microsoft and Nvidia to invest up to $15bn in OpenAI rival Anthropic

Ft • November 18, 2025

AI•Funding•Anthropic•Microsoft•Nvidia


Overview

Anthropic has entered into a far-reaching strategic alliance with Microsoft and Nvidia that could see total investment reach up to $15bn. As part of the deal structure, the AI company has committed to purchase $30bn worth of computing capacity from Microsoft, which will be delivered via data centres heavily powered by Nvidia’s advanced chips. The arrangement underscores how access to cutting-edge compute – especially Nvidia GPUs delivered through hyperscale cloud platforms – has become the central economic bottleneck and competitive moat in the generative AI race.

Structure of the Investment

  • The headline figure of “up to $15bn” reflects a mix of direct equity investment, cloud credits, and long-term infrastructure commitments rather than a single cash infusion.

  • Microsoft’s role is twofold:

  • Capital provider and strategic partner in AI model development and commercialization.

  • Primary cloud and compute supplier, locking in a huge, multi‑year customer for its Azure platform.

  • Nvidia’s role is primarily on the infrastructure and hardware side, supplying the GPUs and systems that power Microsoft’s data centres, which in turn provide the compute capacity contracted by the AI start-up.

This triangular structure tightly couples three layers of the AI stack: foundational model builder, cloud platform, and semiconductor provider, concentrating power among a small group of already dominant players.

Compute Commitments and Technical Implications

  • The AI start-up’s commitment to buy $30bn in computing capacity signals confidence that demand for its models and AI services will justify enormous infrastructure usage over time.

  • The compute will run in Microsoft data centres that are specifically optimized around Nvidia GPUs and networking, reflecting Nvidia’s continued dominance in training and serving large models.

  • Such a long-dated compute contract effectively functions like a capital-expenditure proxy for the start-up: instead of building its own data centres, it rents hyperscale capacity at massive scale, shifting costs to an operational model but committing to very large volumes.

This confirms a broader industry trend: leading AI labs are becoming anchor tenants of a handful of hyperscale clouds, rather than building standalone infrastructure from scratch.

Strategic and Competitive Impact

  • For Microsoft:

  • Deepens its portfolio of leading AI partners, complementing its existing high-profile alliances.

  • Locks in billions of dollars in future cloud revenue and strengthens Azure’s position as the preferred platform for frontier AI workloads.

  • For Nvidia:

  • Reinforces demand visibility for its most advanced chips, justifying ongoing heavy investment in GPU manufacturing and networking technologies.

  • Solidifies its status as the default choice for large AI compute, despite emerging competition from custom accelerators and rival chipmakers.

  • For the AI start-up:

  • Secures privileged access to scarce, top-tier compute resources, which are a prerequisite for training and deploying cutting-edge models.

  • Gains strategic backing from two of the most important companies in the AI value chain, which can accelerate productization, go‑to‑market efforts, and enterprise adoption.

At the ecosystem level, the deal intensifies concerns about concentration of power in AI, as both capital and compute continue to cluster around a small set of technology giants and a single dominant chip vendor.

Broader Market and Policy Implications

  • Such large, exclusive partnerships may attract regulatory attention, particularly around competition in cloud computing, semiconductors, and AI services.

  • Smaller AI start-ups may find it increasingly difficult to secure sufficient compute at competitive prices, potentially limiting innovation to those with access to hyperscale cloud partnerships or vast capital.

  • The size of the compute commitment ($30bn) illustrates how AI development has shifted from a software‑centric activity to one dominated by infrastructure economics, energy usage, and semiconductor supply chains.

Overall, this alliance highlights that in frontier AI, the core scarce asset is not just talent or algorithms but industrial-scale compute, tightly bound to a few cloud and chip incumbents whose strategic partnerships will shape the trajectory of the entire sector.

Read More

Gemini 3 may be the moment Google pulls away in the AI arms race

Fastcompany • Mark Sullivan • November 19, 2025

AI•Tech•Gemini

Gemini 3 may be the moment Google pulls away in the AI arms race

Google announced its widely anticipated Gemini 3 model Tuesday. By many key metrics, it appears to be more capable than the other big generative AI models on the market.

In a show of confidence in the performance (and safety) of the new model, Google is making one variant of Gemini—Gemini 3 Pro—available to everyone via the Gemini app starting now. It’s also making the same model a part of its core search service for subscribers.

The new model topped the much-cited LMArena leaderboard, a crowdsourced ranking of top models based on head-to-head responses to identical prompts. On the super-difficult Humanity’s Last Exam benchmark, which measures reasoning and knowledge, Gemini 3 Pro scored 37.4% compared with GPT-5 Pro’s 31.6%. Gemini 3 also topped a range of other benchmarks measuring everything from reasoning to academic knowledge to math to tool use and agent functions.

Gemini has been a multimodal model from the start, meaning that it can understand and reason about not just language, but images, audio, video, and code—all at the same time. This capability has been steadily improving since the first Gemini, and Gemini 3 reached state-of-the-art performance on the MMMU-Pro benchmark, which measures how well a model handles college-level and professional-level reasoning across text and images. It also topped the Video-MMMU benchmark, which measures the ability to reason over details of video footage. For example, the Gemini model might ingest a number of YouTube videos, then create a set of flashcards based on what it learned.

Gemini also scored high on its ability to create computer code. That’s why it was a good time for the company to launch a new Cursor-like coding agent called Antigravity. Software development has proven to be among the first business functions in which generative AI has had a measurably positive impact.

Benchmarks are telling, but as the response to OpenAI’s GPT-5.1 showed, the “feel” or “personality” of a model matters to users (many users thought GPT-5 was a dramatic personality downgrade from GPT-4o). Google DeepMind CEO Demis Hassabis seemed to acknowledge this in a tweet Tuesday. “[B]eyond the benchmarks it’s been by far my favorite model to use for its style and depth, and what it can do to help with everyday tasks.” Of course users will have their own say about Gemini 3’s communication style, and how well it adapts to user preferences and work habits.

With the release of Google’s third-generation generative AI model, it’s a good time to look at the wider context of the race to build the dominant AI models of the 21st century. The contest, remember, is only a few years old. So far, OpenAI’s models have spent the most time atop the benchmark rankings, and, on the strength of ChatGPT, have garnered most of the attention of all the players in the emerging AI industry.

From the start, Google has enjoyed some distinct advantages. It has been investing in AI talent and research for decades, starting long before OpenAI was founded in 2015. It began developing machine learning techniques for understanding search intent, ranking pages, and placing ads as far back as 2001. It bought the London-based AI research lab DeepMind in 2014, and DeepMind has been responsible for some of Google’s biggest AI accomplishments (AlphaGo, AlphaFold, the Gemini models).

Read More

Gemini 3.0 and Google’s custom AI chip edge

Youtube • CNBC Television • November 19, 2025

AI•Tech•Gemini


Overview

The content centers on a YouTube video discussing Gemini 3.0 and Google’s advantage from developing custom AI chips. The central theme is how Google’s vertically integrated approach—owning both the large language model (Gemini) and the underlying hardware—could provide efficiency, performance, and cost benefits in the rapidly intensifying AI race. The discussion highlights the strategic importance of custom silicon for AI workloads and frames Gemini 3.0 as a key part of Google’s broader product and infrastructure ecosystem rather than a standalone model release.

Gemini 3.0 in Google’s AI Strategy

  • Gemini 3.0 is presented as the latest generation of Google’s flagship AI model family, aimed at competing with leading frontier models on reasoning, multimodality, and coding.

  • The video emphasizes how Google is pushing Gemini deeper into its consumer and enterprise products—search, workspace tools, cloud offerings, and Android—making Gemini a foundational layer across the company.

  • A key thread is that model capability alone is no longer differentiating; integration into products and the ability to run at scale and low latency are increasingly important.

Custom AI Chips and Vertical Integration

  • The segment focuses on Google’s in‑house AI accelerators (such as TPUs and other custom chips) as a major competitive lever.

  • By designing chips specifically optimized for Gemini and related workloads, Google can:

  • Improve inference efficiency (more tokens per second per watt; a back-of-the-envelope cost sketch follows this list).

  • Reduce serving costs for high‑traffic AI features.

  • Fine‑tune the hardware–software stack for latency‑sensitive applications like search and assistant experiences.

  • The video contrasts this with companies that must rely primarily on third‑party GPUs, arguing that owning the chip stack can smooth supply constraints and give more predictable capacity planning.
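
To ground the “tokens per second per watt” point (see the inference-efficiency bullet above), here is a back-of-the-envelope sketch of how accelerator efficiency translates into an energy cost per million tokens served. All figures are assumed for illustration; they are not Google, TPU, or GPU numbers, and the calculation ignores hardware amortization, cooling, and utilization.

```python
# Back-of-the-envelope sketch: how inference efficiency affects serving cost.
# Every number below is an assumed, illustrative value, not a published chip figure.
TOKENS_PER_SECOND_PER_WATT = 5.0   # assumed accelerator efficiency
ELECTRICITY_USD_PER_KWH = 0.08     # assumed data-center electricity price


def energy_cost_per_million_tokens(tok_per_s_per_w: float, usd_per_kwh: float) -> float:
    """Energy cost (USD) to serve one million tokens at the given efficiency."""
    joules_per_token = 1.0 / tok_per_s_per_w            # 1 W = 1 J/s
    kwh_per_million = joules_per_token * 1_000_000 / 3_600_000
    return kwh_per_million * usd_per_kwh


baseline = energy_cost_per_million_tokens(TOKENS_PER_SECOND_PER_WATT, ELECTRICITY_USD_PER_KWH)
doubled = energy_cost_per_million_tokens(2 * TOKENS_PER_SECOND_PER_WATT, ELECTRICITY_USD_PER_KWH)
print(f"Baseline: ${baseline:.4f}/M tokens; 2x efficiency: ${doubled:.4f}/M tokens")
```

The takeaway is directional rather than numerical: doubling tokens per second per watt halves the energy component of serving cost, which is why owning and tuning the chip stack matters at Google’s traffic volumes.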

Competitive Positioning in the AI Race

  • The discussion situates Google among AI leaders that combine cloud platforms, advanced models, and specialized hardware.

  • Custom chips are framed as a response to exploding demand for AI compute, giving Google more control over unit economics as usage scales.

  • The host notes that AI infrastructure has become a strategic battleground: the ability to secure, design, and operate compute at massive scale can decide which players can profitably deploy increasingly large models.

  • Gemini 3.0 is portrayed as both a technological and a signaling milestone—demonstrating that Google is still moving quickly in foundational models while leveraging its long‑running hardware investments.

Implications for Cloud, Developers, and Investors

  • For cloud customers and developers, Google’s chip edge could translate into:

  • More competitive AI pricing or higher‑performance tiers for model serving.

  • Access to models that are tightly optimized for Google’s infrastructure, possibly improving reliability and throughput for enterprise applications.

  • For Google’s broader business, custom silicon plus Gemini 3.0 strengthens:

  • Differentiation of Google Cloud versus other hyperscalers.

  • The defensibility of Google’s search and productivity tools as they become more AI‑centric.

  • The video implies that, for investors, the key questions are whether this integration will:

  • Sustain margins in the face of massive AI capex.

  • Help Google keep pace with or surpass rival ecosystems that also pair models with specialized hardware.

Key Takeaways

  • Gemini 3.0 is depicted as a major step in Google’s AI roadmap, but its significance is amplified by Google’s ownership of the full stack—from data centers and custom chips to end‑user products.

  • Custom AI silicon is highlighted as a structural advantage, potentially lowering costs and enabling aggressive deployment of AI features at scale.

  • The broader message is that future AI leadership may hinge less on any single model release and more on who best aligns models, hardware, and products into a cohesive, efficient platform.

Read More

Robinhood’s Vlad Tenev on AI, Prediction Markets, and the Future of Trading

Youtube • Uncapped with Jack Altman • November 20, 2025

AI•Tech•Retail Investing•Prediction Markets•Fintech


Overview

The content centers on a long-form conversation about how artificial intelligence is reshaping retail investing, trading infrastructure, and the broader financial ecosystem. The discussion links the rapid progress of AI tools with changing investor behavior, the emergence of new trading products like prediction markets, and evolving business models in brokerage platforms. It also examines how technology-driven user experiences have lowered barriers to market participation, while raising fresh questions about risk, regulation, and market structure. Throughout, the focus is on how advances in software and data can make markets more accessible, more efficient, and potentially more accurate in aggregating information about the future.

AI and the Transformation of Retail Investing

  • AI is portrayed as a foundational technology that can dramatically improve how individuals interact with markets—everything from idea discovery and research to execution and risk management.

  • Modern AI assistants and recommendation engines can help investors parse vast quantities of financial data, corporate filings, and news flows that were previously accessible only to professionals with specialized tools.

  • The conversation emphasizes the potential for AI to personalize investment guidance—surfacing strategies and products that match a user’s risk tolerance, time horizon, and interests—while keeping humans in control of final decisions.

  • At the same time, AI introduces challenges around explainability, bias, and overreliance on models, which could amplify behavioral errors if users treat outputs as infallible.

Prediction Markets and the Future of Price Discovery

  • Prediction markets are highlighted as a powerful extension of trading technology, enabling people to trade directly on real-world events (elections, policy decisions, macroeconomic indicators, product launches, etc.).

  • These markets can serve as “crowdsourced probability engines,” offering continuously updated odds that reflect the collective beliefs of many participants.

  • Such instruments may complement traditional securities by providing clearer signals about expected outcomes, potentially informing hedging strategies and corporate or policy decisions.

  • However, they raise complex regulatory and ethical questions: what counts as gambling versus financial hedging, how to prevent manipulation, and how to protect retail participants from extreme volatility or information asymmetry.

Democratization of Markets and User Experience

  • A central theme is the democratization of access to trading: zero-commission models, intuitive mobile apps, and streamlined onboarding have brought tens of millions of new participants into financial markets.

  • Design choices—simple interfaces, real-time notifications, and social components—have made investing feel more approachable, especially for younger users who grew up with consumer internet products.

  • This shift has rebalanced power between traditional financial institutions and retail participants, with platforms acting as technology layers that abstract away old frictions like high fees and complex account setups.

  • Nonetheless, the same design choices can also heighten risks of overtrading and speculation, underscoring the importance of education and tools that nudge users toward better long-term behavior.

Regulation, Responsibility, and Long-Term Vision

  • The conversation acknowledges that as AI, prediction markets, and new trading products emerge, regulators will need to adapt frameworks designed for an earlier era of finance.

  • There is an implicit call for collaboration between platforms and regulators to design rules that protect users without stifling innovation in market access and information efficiency.

  • Long-term, the vision is of markets that are more inclusive and data-driven, where sophisticated tools once reserved for hedge funds become available to everyday investors, and where prices better encode collective expectations about the future.

  • The ultimate impact could be a financial system that is both more participatory and more informative—but only if technology, incentives, and rules are aligned to promote transparency, resilience, and user welfare.

Read More

ChatGPT launches group chats globally

Techcrunch • Aisha Malik • November 20, 2025

AI•Tech•ChatGPT•GroupChats•ProductUpdate


ChatGPT is launching group chats globally to all users on Free, Go, Plus, and Pro plans, OpenAI announced on Thursday. The move comes a week after the company began piloting the feature in select regions, including Japan and New Zealand.

The feature allows users to collaborate with each other and ChatGPT in one shared conversation. OpenAI says the launch turns ChatGPT from a one-on-one assistant into a space where friends, family, or coworkers can work together to plan, create, and make decisions.

The company sees group chats in ChatGPT as a way for people to coordinate trips, co-write documents, settle debates, or work through research together, while ChatGPT helps search, summarize, and compare options.

Up to 20 people can participate in a group chat as long as they’ve accepted an invite. Personal settings and memory stay private to each user, the company says.

To start a group chat, users need to tap the people icon and add participants, either directly or by sharing a link. Everyone will be asked to set up a short profile with their name, username, and photo.

Read More

Can business schools really prepare students for a world of AI? Stanford thinks so

Fastcompany • November 19, 2025

Education•Universities•AI•Business Schools•Leadership•Ethics

Can business schools really prepare students for a world of AI? Stanford thinks so

[Photo: SGSB]

As artificial intelligence rapidly transforms business functions across all industries, educational institutions face the critical challenge of preparing future leaders for an AI-dominated landscape. Stanford University’s Graduate School of Business has launched a comprehensive initiative to address this challenge, positioning AI not as an add-on but as a core component of business education that integrates with essential human leadership skills.

Stanford’s AI Integration Strategy

The school’s approach centers on AI@GSB, a student-led program that includes hands-on workshops with new AI tools and a speaker series featuring industry experts. This initiative is complemented by new courses specifically focused on AI, including “AI for Human Flourishing,” which shifts the emphasis from technical capabilities to ethical considerations of what AI should accomplish.

Dean Sarah Soule emphasized the difficulty of this transition, noting that AI is changing “every function of every organization” at a rapid pace. Rather than mandating top-down changes, the school is leveraging its Silicon Valley location and extensive alumni network to create organic integration. Soule explained, “It would not be easy for me as the new dean to just come in and mandate that everybody begin teaching AI in whatever their subject matter is,” acknowledging that such an approach would likely fail.

Ethical Considerations and Leadership Development

A central theme in Stanford’s approach is the parallel development of AI competency and ethical leadership. Soule identified several pressing concerns, including privacy issues in HR processes where algorithms screen résumés, and the broader societal impact of disappearing entry-level jobs. “What does the world look like if a lot of entry-level jobs begin to disappear? How do we think responsibly about reskilling individuals for work that will enable AI?” she questioned, acknowledging that while answers remain elusive, the business school must lead in asking these critical questions.

The leadership model being developed emphasizes five key facets that become increasingly crucial in an AI-powered world: self-awareness, perspective-taking, communication skills, critical and analytical decision-making, and contextual awareness. Soule stressed that as AI handles more rote tasks and analysis, human leaders will need to focus on “leading others well, and leading them in a principled and purposeful fashion.”

Faculty Development and Classroom Innovation

Faculty training occurs through the school’s teaching and learning hub, where pedagogical experts conduct sessions on AI integration. More importantly, organic collaboration among faculty members has created excitement around developing new approaches. Many professors are already using AI in their research, which naturally translates to classroom innovation.

One notable example involves a faculty member who created a custom GPT to search management journals and provide evidence-based answers to common managerial questions. Students can ask, “What’s the optimal way to set up a high-functioning team?” and receive responses grounded in academic research, demonstrating practical AI application in business education.

The Enduring Value of Human Skills

Despite the focus on technological integration, Stanford continues to emphasize the importance of traditional interpersonal skills. The school’s famous “Touchy Feely” course (Interpersonal Dynamics) remains extremely popular, with nearly every student taking this elective. Soule confirmed that emotional intelligence and self-awareness become even more valuable in an AI-dominated environment, where human connection and leadership differentiate human capabilities from automated processes.

This balanced approach suggests that successful business education in the AI era requires neither wholesale adoption of technology nor resistance to change, but rather thoughtful integration that enhances human leadership qualities while leveraging AI’s analytical capabilities.

Read More

ChatGPT Group Chats, Meta and the Encryption Trade-off, Network Effects and Ad Models

Stratechery • Ben Thompson • November 17, 2025

AI•Tech•ChatGPT•Meta•Encryption


Overview of ChatGPT Group Chats and Strategic Context

The article argues that the introduction of group chats in ChatGPT is both a long-awaited product improvement and a strategic move aimed directly at Meta’s social and messaging dominance. Group chats turn ChatGPT from a primarily one‑to‑one assistant into a collaborative environment where multiple people can interact with an AI in real time. This shift is framed as a way to embed AI more deeply into everyday communication and coordination, positioning OpenAI as a competitor not just to search engines like Google, but to social platforms like WhatsApp, Messenger, and Instagram as well.

Group Chats as a Product Evolution

  • Group chats fulfill a long-standing request from power users who wanted AI integrated into multi-person conversations, not just individual prompts.

  • In practice, this means teams, families, or friends can share a space where the AI can summarize discussion, propose ideas, track decisions, and provide information on demand.

  • The feature is positioned as a natural extension of how people already use messaging apps: coordinating plans, debating ideas, and sharing links, now with an AI that can parse and act on the entire conversation context.

  • This moves ChatGPT closer to being a “workspace” or “social space” rather than a pure tool, increasing user stickiness and potential network effects as more people join shared chats.

Strategic Challenge to Meta and the Encryption Trade-off

  • The article highlights that Meta, particularly through WhatsApp and Messenger, sits at the center of global digital communication but is constrained by a strong commitment to end-to-end encryption.

  • End-to-end encryption ensures Meta cannot read user messages, which is valuable for privacy and trust but makes it much harder to embed powerful, server-side AI that relies on full access to conversation data.

  • ChatGPT group chats, by contrast, assume that OpenAI can process and analyze conversation content in the cloud, enabling richer AI behavior but raising different privacy expectations.

  • This creates an “encryption trade-off”: Meta’s privacy stance limits its ability to compete directly with ChatGPT-like AI in group contexts, whereas OpenAI’s model favors utility and intelligence over strong content opacity from the provider.

Network Effects, Ad Models, and Platform Positioning

  • Meta’s core strength has long been network effects: everyone is on its platforms, and advertisers follow attention. However, the article suggests that AI-centric products like ChatGPT group chats could shift where high-value interactions occur.

  • If people begin to collaborate, plan, and even work more inside AI-augmented group chats, these spaces could become new loci of attention and decision-making, potentially supporting new monetization models.

  • Traditional ad models rely on large-scale behavioral data and content feeds. AI-centric environments might instead monetize via subscriptions, usage-based pricing, or highly contextual, AI-mediated recommendations.

  • The article implies that Meta’s encryption constraints could limit its ability to build similarly rich, server-powered AI experiences in messaging, weakening its long-term leverage in ad targeting and network control.

  • Google “looms” as another player capable of combining search, productivity tools, and AI agents in shared environments; the competitive question is whether Google can translate its search dominance into compelling collaborative AI spaces before OpenAI or others entrench themselves.

Implications for AI, Communication, and Competition

  • ChatGPT group chats are seen as a step toward communication spaces where AI is a persistent participant, not a separate tool users visit occasionally.

  • This may change user expectations: instead of static, private messaging threads, people might increasingly accept AI-readable contexts in exchange for summarization, organization, and intelligent assistance.

  • For Meta, the tension between privacy guarantees and AI richness becomes a strategic dilemma: loosening encryption would damage trust and brand positioning, but keeping it limits its ability to compete directly in AI-enhanced conversations.

  • For the broader ecosystem, the move raises questions about where the next major “platform” will emerge—inside legacy social networks, productivity suites, or AI-native environments like ChatGPT—and how monetization and regulation will adapt to AI deeply embedded in everyday group communication.

Read More

The Next Billion-Dollar Opportunity in Voice AI Just Unlocked: Licensed Voice/Image Rights

Theaiopportunities • November 16, 2025

AI•Tech•VoiceAI•DigitalRights•SyntheticMedia


The emerging market for licensed voice and image rights

The article argues that a quiet but transformative shift is underway in Voice AI: human voices (and, by extension, likenesses) are turning into formal, licensable digital assets. Rather than being one-off performances or raw data for training models, voices can now be packaged, contracted, and scaled across formats—podcasts, audiobooks, ads, games, virtual influencers, and more—under clear legal and commercial frameworks. This change is framed as the foundation of a new “Voice Rights economy,” which the author believes represents a major, rapidly growing market that most people outside the AI ecosystem have not yet understood or priced in.

From performance to scalable digital asset

  • Historically, voice work has been transactional: a talent records lines, gets paid, and control largely ends there.

  • Voice cloning and generative AI now make a voice infinitely reusable—but that only becomes a legitimate business when rights, ownership, and compensation are clearly defined.

  • The article highlights that for “the first time,” talent, creators, and brands can legally license and monetize their voices at scale, rather than fear being replaced or misused.

  • Voices and images turn into portable, programmable assets that can be deployed globally across platforms with contractual guardrails, enabling recurring, royalty-like revenue instead of one-time payments.

Why celebrity adoption is a turning point

  • The newsletter teaser emphasizes that high-profile figures like Matthew McConaughey and Michael Caine have already participated in licensing deals for their voices, which the author positions as an industry inflection point.

  • Celebrity participation serves as both social proof and a legal/contractual template: once a few well-known actors have structured voice rights deals, everyone else—from mid-tier creators to brands—can follow similar patterns.

  • This legitimizes the space in the eyes of agencies, studios, and enterprises, encouraging them to explore Voice AI not as a grey-area experiment but as a standard rights product.

A rapidly growing market opportunity

  • The author claims this is “a completely new market” and notes that it is growing faster than people realize, suggesting that revenue and deal flow are already expanding beneath the surface.

  • The article promises concrete market sizing (“with real numbers”), implying that voice rights, image rights, and synthetic media licensing could reach billion‑dollar scale relatively quickly as more content formats and use cases adopt licensed AI voices.

  • This growth is driven by the explosion of content channels (short‑form video, podcasts, audiobooks, localized versions, interactive media), all of which can benefit from scalable, on‑brand, and legally licensed voices.

The four-layer Voice AI stack

  • The author previews a framework for understanding the Voice Rights economy as a stack of “4 layers of the new Voice AI economy.”

  • While the full breakdown sits behind the paywall, the teaser suggests a layered ecosystem including:

  • Rights and IP infrastructure (contracts, identity verification, compliance).

  • Core voice tech and platforms (cloning, generation, editing).

  • Vertical applications (ads, entertainment, education, localization, customer service).

  • Distribution and marketplaces (where voices, likenesses, and rights are discovered and licensed).

  • The article also hints at mapping “who’s already building in each category,” indicating that the stack is already populated by startups and incumbents vying for position.

Where the next billion‑dollar companies will emerge

  • The piece positions the Voice Rights economy as fertile ground for new category-defining companies, likely in:

  • Rights management platforms that standardize contracts and payouts for voice and image licensing.

  • AI infrastructure companies that provide safe, high‑quality cloning and generation tech.

  • Marketplaces and aggregators that connect talent, brands, and production pipelines at scale.

  • The author frames the newsletter as a playbook for investors, founders, and operators: helping them see “what’s happening now,” “where the money’s headed,” and “who’s already building it” so readers can invest, partner, or emulate leading players.

Implications for creators, brands, and the AI ecosystem

  • For creators and voice talent, this shift means a transition from fear of being replaced by AI to opportunities to productize and license their voice as a scalable asset with recurring income.

  • For brands and media companies, it unlocks consistent, on‑brand voices across global content footprints while staying within legal and ethical bounds.

  • For the broader AI ecosystem, it signals a maturing phase where IP, consent, and monetization mechanisms catch up with technical capabilities, enabling more sustainable growth.

Read More

Thiel’s Hedge Fund Sells Entire Nvidia Stake | Bloomberg Tech

Youtube • Bloomberg Podcasts • November 17, 2025

AI•Tech•Nvidia•HedgeFunds•AIChips


Overview

  • The content centers on a hedge fund associated with Peter Thiel exiting its entire position in Nvidia, one of the most closely watched and valuable semiconductor and AI-chip companies in the world.

  • The central theme is the strategic significance of a complete Nvidia stake sale by a sophisticated technology-focused investor, and what this might signal about valuations, market sentiment toward AI, and positioning within the broader technology and chip sector.

  • The decision is framed in the context of Nvidia’s meteoric rise on the back of AI enthusiasm, hyperscaler demand for GPUs, and its role as a bellwether for the “AI trade” across global equity markets.

Nvidia’s Role in the AI and Chip Ecosystem

  • Nvidia is depicted as the leading provider of GPUs critical for training large AI models and powering data-center AI workloads.

  • Its stock has become a proxy for investor belief in the scale and durability of the AI boom, with demand driven by major cloud providers, large enterprises, and AI startups.

  • The company’s rapid revenue and profit expansion, driven by data-center segments and AI accelerators, has pushed its valuation to historic highs, making it a top-weighted name in major equity indices and a core holding for many hedge funds and mutual funds.

Rationale Behind Selling the Entire Stake

  • The hedge fund’s complete exit is presented as a deliberate strategic move rather than a minor portfolio rebalance, suggesting a strong view on risk, valuation, or the phase of the AI cycle.

  • Potential motivations highlighted include:

  • Concern that Nvidia’s valuation has run ahead of fundamentals after a sharp, extended rally.

  • Desire to lock in substantial gains following significant appreciation.

  • Reallocation of capital into other technology, AI, or semiconductor names that might offer better risk‑reward or are earlier in their growth trajectories.

  • Hedge against the possibility of cyclical slowdowns in data-center spending or normalization of GPU demand.

  • The sale is interpreted as a signal that even highly tech‑savvy investors may be seeking more balanced exposure to AI rather than concentrated bets on a single champion.

Market and Sector Implications

  • The move raises questions about whether the AI trade—centered heavily on Nvidia—may be entering a more mature, volatile, or selective phase.

  • For other investors, this exit can:

  • Prompt reassessment of concentration risk in their portfolios, especially for funds heavily exposed to Nvidia and a small group of mega-cap tech names.

  • Encourage diversification within the semiconductor value chain (e.g., alternative chip designers, foundries, EDA tools, or infrastructure providers).

  • Influence broader sentiment if market participants interpret the decision as a “top signal” for AI exuberance or, conversely, as simple prudent risk management after outsized gains.

  • Short‑term market reactions may include increased volatility in Nvidia shares and related AI beneficiaries as traders speculate about whether other funds will follow.

Broader Themes and Takeaways

  • The event underscores how pivotal Nvidia has become to the narrative of AI-driven growth across markets: decisions around the stock are now macro‑relevant rather than purely stock‑specific.

  • It illustrates the dynamic between long‑term belief in AI as a transformative technology and near‑term caution around valuations and cycle timing.

  • For individual and institutional investors, the key lessons include:

  • Recognizing that even the strongest fundamental stories carry valuation and concentration risks.

  • Understanding that high-profile exits by sophisticated funds can reflect portfolio mechanics, risk constraints, or shifting opportunity sets, not necessarily a rejection of the underlying technology trend.

  • Appreciating that the AI investment landscape is broad and evolving, and leadership within it may rotate over time.

Read More

OpenAI: Piloting Group Chats in ChatGPT

Openai • John Gruber • November 17, 2025

AI•Tech•ChatGPT•GroupChats•Collaboration


Today, we’re beginning to pilot a new experience in a few regions that makes it easy for people to collaborate with each other—and with ChatGPT—in the same conversation. With group chats, you can bring friends, family, or coworkers into a shared space to plan, make decisions, or work through ideas together.

Whether you’re organizing a group dinner or drafting an outline with coworkers, ChatGPT can help. Group chats are separate from your private conversations, and your personal ChatGPT memory is never shared with anyone in the chat.

Group chats are starting to roll out on mobile and web for logged-in ChatGPT users on ChatGPT Free, Go, Plus and Pro plans in Japan, New Zealand, South Korea and Taiwan.

To start a group chat, tap the people icon in the top right corner of any new or existing chat. When you add someone to an existing chat, ChatGPT creates a copy of your conversation as a new group chat so your original conversation stays separate. You can invite others directly by sharing a link with one to twenty people, and anyone in the group can share that link to bring others in. When you join or create your first group chat, you’ll be asked to set up a short profile with your name, username, and photo so everyone knows who’s in the conversation. Group chats can be found in a new, clearly labeled section of the sidebar for easy access.

Group chats are separate from your private conversations. Your personal ChatGPT memory is not used in group chats, and ChatGPT does not create new memories from these conversations. We’re exploring offering more granular controls in the future so you can choose if and how ChatGPT uses memory with group chats.

Group chats work much like your usual ChatGPT conversations — only now, others can join in. Responses are powered by GPT‑5.1 Auto, which chooses the best model for each reply based on the prompt and on the models available to the person ChatGPT is responding to, according to their Free, Go, Plus, or Pro plan. Search, image and file upload, image generation, and dictation are enabled.
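
OpenAI has not published how GPT‑5.1 Auto routes requests, but the idea of “choosing among the models a user’s plan allows” can be sketched abstractly. The tiers, model names, and difficulty heuristic below are invented placeholders, purely to illustrate the concept, not OpenAI’s actual logic.

```python
# Purely illustrative sketch of per-user model routing (not OpenAI's router).
# Plan tiers and model names are invented placeholders.
PLAN_MODELS = {
    "free": ["small-model"],
    "plus": ["small-model", "large-model"],
    "pro":  ["small-model", "large-model", "reasoning-model"],
}


def route(prompt: str, plan: str) -> str:
    """Pick a model the user's plan allows, escalating for harder-looking prompts."""
    available = PLAN_MODELS[plan]
    looks_hard = len(prompt) > 400 or "step by step" in prompt.lower()
    return available[-1] if looks_hard else available[0]


print(route("Plan a three-country trip, step by step, within a $3k budget.", "plus"))
```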

We’ve also taught ChatGPT new social behaviors for group chats. It follows the flow of the conversation and decides when to respond and when to stay quiet based on the context of the group conversation. You can always mention “ChatGPT” in a message when you want it to respond. We’ve also given ChatGPT the ability to react to messages with emojis, and reference profile photos — so it can, for example, use group members’ photos when asked to create fun personalized images within that group conversation.

Read More

The Rise of Background Agents

Tanayj • Tanay Jaipuria • November 17, 2025

AI•Work•BackgroundAgents•Automation•Productivity

The Rise of Background Agents

Most of us today use AI in a chat box. You type, it replies, everything happens in that one thread. That was the right starting point. It is the wrong place for a lot of the work we are now trying to hand off to agents.

Refactoring a codebase, watching the web for changes, or building a briefing for tomorrow are not five-second tasks. If you force those into chat and let them run for tens of minutes, people inevitably get bored and leave.

For these tasks, the model should not be a slightly faster colleague who types back at you. It should feel more like a background process that you brief, that disappears for a while, and that hands you something useful where you actually work.

That is the shift I want to talk about: the rise of background agents, how they run, and where we already see them showing up.

What are background agents

A background agent is an agent you do not have to sit and watch. You might talk to it in chat, you might tag it in Slack, you might forward it an email, but once it understands the task, it runs elsewhere and comes back only when it has something worth your attention.

That already makes it different from a classic chatbot. In the chatbot world, every interaction is “you ask, it answers.” In the background world, the unit of interaction is a task. You brief the agent once, it may run for minutes or hours, and the result shows up as a pull request, a digest, a comment, or a notification.

On top of that, background agents can be:

Ambient: They are always running in some sense, responding to changes in their inputs. They wake up because the world changed, not just because you typed a prompt: a new email arrives, a web page changes, a metric moves, a calendar event appears.

Proactive: They are allowed to tap you on the shoulder when something important happens, rather than waiting for you to ask. Think alerts, daily briefings, and suggested decisions or actions (or, eventually, actions the agent has already taken on your behalf).

Not all background agents have to be proactive or ambient. You can have a very simple background agent that only acts when you tell it to. But once you accept that the work happens away from the chat window, it becomes natural to let agents respond to time and events as well as direct commands.

Chat is still useful. It is a great place to negotiate scope, explain edge cases, and set guardrails. It just does not have to be the only interface or the place where the heavy lifting happens.
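
A minimal sketch of the pattern described above: brief the agent once, let it wait for a trigger, do its work off to the side, and deliver the result where you work rather than in the chat thread. The trigger, task, and `notify` function are hypothetical stand-ins, assuming a simple polling loop rather than any particular agent framework.

```python
# Minimal sketch of a background agent: brief once, run elsewhere, deliver a result.
# The task, trigger, and notify() are hypothetical stand-ins for real integrations
# (e.g. a codebase refactor, a web-change watcher, a morning briefing).
import threading
import time


def notify(message: str) -> None:
    """Stand-in for delivering results where you work (Slack, email, a PR comment)."""
    print(f"[agent] {message}")


def run_task(brief: str) -> str:
    """Stand-in for the long-running work the agent does after being briefed."""
    time.sleep(2)  # pretend this takes minutes or hours
    return f"Done: {brief} (summary attached)"


def background_agent(brief: str, trigger=lambda: True, poll_seconds: float = 1.0) -> threading.Thread:
    """Brief the agent once; it waits for the trigger, works in the background, then reports."""
    def loop() -> None:
        while not trigger():          # ambient: wake up when the world changes
            time.sleep(poll_seconds)
        notify(run_task(brief))       # proactive: tap you on the shoulder when done

    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t


agent = background_agent("build tomorrow's briefing")
agent.join()  # in real use you would keep working instead of waiting here
```

The unit of interaction here is the task, not the message: the chat (or email, or Slack mention) is only where the brief and the final hand-off happen.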

Read More

AI Isn’t a Bubble But a Long-Term Opportunity, JPMorgan’s Erdoes Says

Medium • ODSC - Open Data Science • November 18, 2025

AI•Funding•AIMarketValuations•MaryCallahanErdoes•LongTermGrowth


Concerns about soaring valuations in the AI sector continue to drive volatility across equity markets, but JPMorgan Asset and Wealth Management CEO Mary Callahan Erdoes argues that investors may be focusing on the wrong question.

Speaking at CNBC’s Delivering Alpha conference, she emphasized that AI is far from a speculative bubble and instead represents a structural shift that markets have yet to fully grasp.

AI is an Opportunity

During the panel, Erdoes said many investors misunderstand the gap between current AI valuations and the underlying business transformation still underway. “I feel like we’re just on the precipice of a lot of this stuff,” she said, explaining that markets are attempting to price future AI multiples before companies fully capture productivity gains.

She compared the transition to a slow-moving shift that becomes obvious all at once, noting, “We’re in this disconnect of the world is pricing where AI multiples should be. The companies haven’t gotten it through the usage.”

It’s not a Bubble

Recent market pullbacks reflect ongoing anxiety around the rapid rise of Nvidia, AMD, and other AI-linked companies. Despite these fluctuations, major indexes remain near record highs. Erdoes pushed back against the notion that the sector resembles a speculative bubble.

“AI itself is not a bubble. That’s a crazy concept. We are on the precipice of a major, major revolution in the way that companies operate,” she said. She expects substantial growth ahead, both in revenue potential and cost efficiencies, as organizations integrate AI into their operations.

Ares Management CEO Michael Arougheti echoed this outlook, stating that current investment levels fall far short of AI’s long-term economic influence. “We have a long way to go in terms of the economic investment relative to the size of the economy,” he said.

He pointed to a persistent imbalance between accelerating demand for AI capabilities and the slower pace of infrastructure expansion.

Confidence in AI Investment Moving Forward

Both executives also expressed confidence in the broader macroenvironment. Despite repeated predictions of an economic downturn, Arougheti noted, “People have been calling for a recession now for five years, and it just hasn’t come.”

Erdoes added that the credit market reflects an attractive environment for investors. “If there’s not a recession on the horizon, it’s a great buying opportunity, and you should be leaning in and buying.”

Their aligned assessments suggest that AI-driven growth remains at an early stage. While market volatility may persist, both leaders believe the long-term trajectory points toward transformative economic change rather than speculative excess.

Read More

Build to Last

Oreilly • Jeremy Howard and Chris Lattner • November 19, 2025

AI•Tech•Software Craftsmanship•LLMs•Developer Tools

Build to Last

I’ve spent decades teaching people to code, building tools that help developers work more effectively, and championing the idea that programming should be accessible to everyone. Through fast.ai, I’ve helped millions learn not just to use AI but to understand it deeply enough to build things that matter.

But lately, I’ve been deeply concerned. The AI agent revolution promises to make everyone more productive, yet what I’m seeing is something different: developers abandoning the very practices that lead to understanding, mastery, and software that lasts. When CEOs brag about their teams generating 10,000 lines of AI-written code per day, when junior engineers tell me they’re “vibe-coding” their way through problems without understanding the solutions, are we racing toward a future where no one understands how anything works, and competence craters?

I needed to talk to someone who embodies the opposite approach: someone whose code continues to run the world decades after he created it. That’s why I called Chris Lattner, cofounder and CEO of Modular AI and creator of LLVM, the Clang compiler, the Swift programming language, and the MLIR compiler infrastructure.

Chris and I chatted on Oct 5, 2025, and he kindly let me record the conversation. I’m glad I did, because it turned out to be thoughtful and inspiring. Check out the video for the full interview, or read on for my summary of what I learned.

Talking with Chris Lattner

Chris Lattner builds infrastructure that becomes invisible through ubiquity.

Twenty-five years ago, as a PhD student, he created LLVM: the most fundamental system for translating human-written code into instructions computers can execute. In 2025, LLVM sits at the foundation of most major programming languages: the Rust that powers Firefox, the Swift running on your iPhone, and even Clang, a C++ compiler created by Chris that Google and Apple now use to create their most critical software. He describes the Swift programming language he created as “Syntax sugar for LLVM”. Today it powers the entire iPhone/iPad ecosystem.

When you need something to last not just years but decades, to be flexible enough that people you’ll never meet can build things you never imagined on top of it, you build it the way Chris built LLVM, Clang, and Swift.

I first met Chris when he arrived at Google in 2017 to help them with TensorFlow. Instead of just tweaking it, he did what he always does: he rebuilt from first principles. He created MLIR (think of it as LLVM for modern hardware and AI), and then left Google to create Mojo: a programming language designed to finally give AI developers the kind of foundation that could last.

Chris architects systems that become the bedrock others build on for decades, by being a true craftsman. He cares deeply about the craft of software development.

I told Chris about my concerns, and the pressures I was feeling as both a coder and a CEO:

“Everybody else around the world is doing this, ‘AGI is around the corner. If you’re not doing everything with AI, you’re an idiot.’ And honestly, Chris, it does get to me. I question myself… I’m feeling this pressure to say, ‘Screw craftsmanship, screw caring.’ We hear VCs say, ‘My founders are telling me they’re getting out 10,000 lines of code a day.’ Are we crazy, Chris? Are we old men yelling at the clouds, being like, ‘Back in my day, we cared about craftsmanship’? Or what’s going on?”

Chris told me he shares my concerns:

“A lot of people are saying, ‘My gosh, tomorrow all programmers are going to be replaced by AGI, and therefore we might as well give up and go home. Why are we doing any of this anymore? If you’re learning how to code or taking pride in what you’re building, then you’re not doing it right.’ This is something I’m pretty concerned about…

But the question of the day is: how do you build a system that can actually last more than six months?”

He showed me that the answer to that question is timeless, and actually has very little to do with AI.

Design from First Principles

Chris’s approach has always been to ask fundamental questions. “For me, my journey has always been about trying to understand the fundamentals of what makes something work,” he told me. “And when you do that, you start to realize that a lot of the existing systems are actually not that great.”

Read More

Venture

What Does a ‘Healthy’ Venture Market Actually Look Like?

LinkedIn • Keith Teare • November 20, 2025

LinkedIn•Venture

“The highest order way I think about it is: ‘Is there opportunity?’ Is there opportunity for high impact, high growth, high scale businesses being born right now? And the answer to that is unequivocally yes.” - Kirsten Green, Founder & Partner, Forerunner

Back in August, I wrote a post called “Is Venture Broken?” – a data-driven look at whether the growing spread in fund sizes at the early stage was pushing venture to a breaking point. Our analysis found that while mega-funds have moved down-market with bigger checks and faster pace, boutique early-stage dedicated firms continue to anchor Seed, and to a lesser extent Series A.

Since then, AI bubble conversations have intensified, with some folks ringing alarms, while others seem to have grown more comfortable with bubbles as a recurring feature of tech cycles, especially during major inflection points. On the optimistic front, history reminds us that periods labeled as “bubbles” have also given rise to generational companies like Google, Amazon, and Meta. With this as the backdrop, we revisited (and updated) our data and spoke with Forerunner’s Founding Partner Kirsten Green to explore what a “healthy” venture market really looks like.

A quick update on our early-stage analysis, and a huge thank you to my colleague Evan Tarzian, CFA (🙏), who put in the real work behind the numbers in PitchBook. Looking at 2,000+ priced Seed and Series A rounds across the Bay Area and New York (chosen specifically as key innovation hubs where the majority of deals are getting done), the patterns remain striking. (*Caveat: We know there is a lot of activity on SAFEs, but unfortunately they are incredibly hard to track, thus not included in our dataset.)

Definitions:

  • Boutique Seed fund: $20M - $500M in individual fund size

  • Boutique Series A fund: $75M - $750M in individual fund size

  • Mega-fund: $501M+ in individual fund AUM (for Seed), $751M+ in individual fund AUM (for Series A), or multi-stage venture capital fund with $10B+ firmwide AUM.

Seed

  • Even with mega-funds stepping in, boutique, dedicated Seed funds continue to lead <60% of priced seed rounds in the sub-$10M range, which accounts for the vast majority of deals done.

  • ~90% of priced Seed rounds happen below $20M - where dedicated funds maintain a clear advantage.

  • Once rounds cross $20M, the dynamic flips. Nearly 70% of deals done at $20M+ include a mega-fund as a lead or co-lead.

  • These $20M+ rounds represent only ~5% of total Seed activity, meaning the fiercest competition and most aggressive pricing is concentrated in a very small subset of the market that everyone is fighting to enter.

Series A

  • The majority of Series A activity now clusters around $20M, but this isn’t where the power dynamics are shaping the market.

  • At sub-$20M, dedicated A funds lead most rounds, and mega-fund involvement stays below half.

  • Once rounds cross $20M, the megas come flooding in, leading or co-leading more than half of all rounds. By $50M, they own 3/4 of the market.

  • While only ~40% of total A rounds fall into the $20M+ buckets, this is where competition is fiercest. Valuations jump, mega-funds show up first, and boutiques must compete head-to-head with far larger balance sheets.

Top 10 Most Active Investors

Below are the top 10 most active mega and boutique funds that have led/co-led the most Seed and Series A priced rounds.


Out of the 90 mega-funds we looked at, interestingly Andreessen Horowitz, General Catalyst, Lightspeed and Sequoia Capital all remain in the top 5 across both Seed and A - with a16z far outrunning the pack at Seed.


“I’ve lost track of the difference between a Seed and an A”.

As Kirsten noted in our conversation, the traditional boundaries between stages are blurring. Round labels have become less about stage and more about traction, founder experience and ambition, and how fast markets are moving. That framing helps explain what we’re seeing in the data.

Read more on LinkedIn

Calpers adopts new approach to assess risk and returns

Ft • November 17, 2025

Venture


The article explains that the large US public pension fund is adopting an overhauled framework for assessing risk and return, with the most notable change being an increase in its allocation to equities. The central goal is to improve long‑term returns for beneficiaries while better aligning the portfolio with the fund’s risk tolerance and obligations. This marks a strategic shift in how the fund balances growth assets such as stocks against more defensive holdings like bonds and cash.

Under the new approach, the fund plans to raise its exposure to the equity markets, reflecting a belief that higher-risk assets are necessary to meet ambitious return targets in a low-yield environment. The move is framed as a response to prolonged periods of low interest rates and modest fixed-income returns, which make it difficult for large institutional investors to hit their required return assumptions without taking more risk. By increasing equities, the fund is explicitly accepting greater short‑term volatility in exchange for the potential for higher long‑term gains.

The revamped framework also implies a more nuanced approach to risk assessment. Rather than relying solely on traditional asset-class buckets, the fund is focusing on how different investments contribute to overall portfolio risk and return. This may involve segmenting the portfolio by risk factors, economic scenarios, or liability-matching characteristics, enabling more sophisticated stress testing and scenario analysis. The objective is not only to pursue higher returns but to understand how those returns might behave under market stress and over different economic cycles.

There is an important governance and policy dimension to this shift. Adjusting the equity allocation in a fund of this scale typically requires board-level approval and reflects a consensus that the previous asset mix and risk model were no longer optimal. The new framework may also bring greater transparency to how risk is defined and communicated to stakeholders, including public employees and retirees who depend on the fund’s stability. Clarifying the trade-offs between risk and return can help manage expectations during periods of market volatility.

The implications of this change extend beyond the fund itself. As one of the largest institutional investors in the world, its move toward higher equity exposure can influence market sentiment, especially in sectors or regions where it is a major shareholder. Other pension funds and institutional investors may view this as a signal that taking on more equity risk is becoming the norm in order to meet long-term obligations. At the same time, the decision underscores the ongoing challenge for public pensions: balancing the political and social pressure for safety with the financial necessity of generating sufficient returns.

In conclusion, the article portrays the new risk-and-return framework as a significant strategic evolution, centered on increasing equities to enhance expected returns while refining how risk is measured and managed. The shift reflects broader structural issues facing large pension funds—low yields, rising liabilities, and the need for more sophisticated portfolio construction—and suggests that more aggressive, analytically grounded approaches to risk may become standard practice across the institutional investment landscape.

Read More

‘Our funds are 20 years old’: limited partners confront VCs’ liquidity crisis

Techcrunch • Connie Loizos • November 18, 2025

Venture


These days, it’s not easy to be a limited partner who invests in venture capital firms. The “LPs” who fund VCs are confronting an asset class in flux: Funds have nearly twice the lifespan they used to, emerging managers face life-or-death fundraising challenges, and billions of dollars sit trapped in startups that may never justify their 2021 valuations.

Indeed, at a recent StrictlyVC panel in San Francisco, above the din of the boisterous crowd gathered to watch it, five prominent LPs, representing endowments, fund-of-funds, and secondaries firms managing over $100 billion combined, painted a surprising picture of venture capital’s current state, even as they see areas of opportunity emerging from the upheaval.

Perhaps the most striking revelation was that venture funds are living far longer than anyone planned for, creating a raft of problems for institutional investors.

“Conventional wisdom may have suggested 13-year-old funds,” said Adam Grosher, a director at the J. Paul Getty Trust, which manages $9.5 billion. “In our own portfolio, we have funds that are 15, 18, even 20 years old that still hold marquee assets, blue-chip assets that we would be happy to hold.” Still, the “asset class is just a lot more illiquid” than most might imagine based on the history of the industry, he said.

This extended timeline is forcing LPs to rip up and rebuild their allocation models. Lara Banks of Makena Capital, which manages $6 billion in private equity and venture capital, noted her firm now models an 18-year fund life, with the majority of capital actually returning in years 16 through 18. Meanwhile, the J. Paul Getty Trust is actively revisiting how much capital to deploy, leaning toward more conservative allocations to avoid overexposure.

The alternative is active portfolio management through secondaries, a market that has become essential infrastructure. “I think every LP and every GP should be actively engaging with the secondary market,” said Matt Hodan of Lexington Partners, one of the largest secondaries firms with $80 billion under management. “If you’re not, you’re self-selecting out of what has become a core component of the liquidity paradigm.”

The valuation disconnect is worse than you think

The panel didn’t sugarcoat one of the harsh truths about venture valuations, which is that there’s often a huge gap between what VCs think their portfolios are worth and what buyers will actually pay.

TechCrunch’s Marina Temkin, who moderated the panel, shared a jarring example from a recent conversation with a general partner at a venture firm: A portfolio company last valued at 20 times revenue was recently offered just 2 times revenue in the secondary market — a 90% discount.

Michael Kim, founder of Cendana Capital, which has nearly $3 billion under management focused on seed and pre-seed funds, put this into context: “When someone like Lexington comes in and puts a real look on valuations, they may be actually facing 80% markdowns on what they perceive that their winners or semi-winners were going to be,” he said, referring to the “messy middle” of venture-backed companies.

Kim described this “messy middle” as businesses that are growing at 10% to 15% with $10 million to $100 million in annual recurring revenue that had billion-dollar-plus valuations during the 2021 boom. Meanwhile, private equity buyers and public markets are pricing similar enterprise software companies at just four to six times revenue.
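
The arithmetic behind that gap is simple but brutal. As a rough illustration (the 20x-to-2x example is the one cited on the panel; the 15x mark below is purely hypothetical), here is a minimal sketch of how a last-round revenue multiple translates into a secondary-market markdown:

```python
def secondary_markdown(last_round_multiple: float, secondary_bid_multiple: float) -> float:
    """Implied markdown when a stake last marked at one revenue multiple is
    bid at a lower multiple on the secondary market (same revenue base)."""
    return 1 - secondary_bid_multiple / last_round_multiple

# The example cited on the panel: last valued at 20x revenue, offered 2x.
print(f"{secondary_markdown(20, 2):.0%} discount")  # -> 90% discount

# A hypothetical 'messy middle' company last marked at 15x revenue,
# repriced against today's 4-6x enterprise software multiples.
for bid in (4, 5, 6):
    print(f"Bid at {bid}x vs. a 15x mark: {secondary_markdown(15, bid):.0%} markdown")
```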

Read More

Unicorns Pick Up For The Second Month In A Row, Adding Close To $45B To The Board

Crunchbase • Gené Teare • November 19, 2025

Venture

A total of 20 companies joined The Crunchbase Unicorn Board in October, adding $44.5 billion in value. This was the highest valuation amount added to the unicorn board for a new cohort in the past three years.

The number of new monthly entrants has picked up in recent months. The top 20 companies on the board have also been reshuffled and we’ve seen a marked increase in new decacorn-valued companies.

Of the 20 companies that joined in October, 11 came from the U.S. China added three new unicorns and Sweden contributed two. Elsewhere in Europe, the U.K., Germany and Ukraine each minted one new unicorn, as did India.

Among the new entrants, New York-based open model developer Reflection and Austin-based residential battery operator Base Power each raised billion-dollar rounds that valued them as unicorns for the first time.

The highest valued among the new unicorns were Reflection, which was valued at $8 billion, and San Francisco-based payments blockchain Tempo, valued at $5 billion.

Exits

A pair of companies from the unicorn board were acquired in October: Passwordless authentication company Stytch was acquired by Twilio, and Nexthink, an IT employee experience platform, was acquired by Vista Equity Partners. In another October exit, data management tooling company dbt Labs merged with Fivetran in an all-stock deal.

Three companies also went public: Silicon Valley-based travel and expense management company Navan, Shanghai-based e-commerce software platform Jushuitan Network Technology, and Beijing-based silicon wafer production company Eswin Materials.

New unicorns

Here are October’s 20 newly minted unicorns across multiple sectors. AI led with four companies, transportation with three, and healthcare and financial services followed, each with two companies.

AI

  • Open source model developer Reflection.AI, founded by DeepMind engineers to compete against DeepSeek, raised a $2 billion Series B from Nvidia among other investors. The 1-year-old New York-based company was valued at $8 billion.

  • Fireworks AI, which helps customers build AI applications, raised a $230 million Series C led by Lightspeed Venture Partners, Index Ventures and Evantic Capital. The 3-year-old Redwood City, California-based company was valued at $4 billion. It says it has 10,000 customers, up 10x from July 2024.

  • AI agent automation platform n8n raised a $180 million Series C led by Accel. The 6-year-old Berlin-based company was valued at $2.5 billion.

  • LangChain, a platform for deploying AI agents, raised a $125 million Series B led by IVP. The 3-year-old San Francisco-based company was valued at $1.25 billion.

Transportation

  • Zelos, a builder of autonomous robovans for B2B delivery, raised a $100 million Series B4 extension led by Ant Group. The 4-year-old Beijing-based company was valued at $1.6 billion.

  • Contemporary Amperex Intelligence Technology raised its first external financing, a $281 million funding round. The 4-year-old company is a Shanghai-based subsidiary of car battery provider CATL and was valued at $1.4 billion in the deal. It develops an integrated chassis that combines battery and electric-vehicle driving functions.

  • Self-driving trucking company Einride raised $100 million in funding led by existing investor EQT Ventures and quantum computing company IonQ. Einride builds electric big rigs, automated smaller delivery trucks for fixed routes, and a logistics platform. The 9-year-old Stockholm-based company was valued at $1 billion.

Healthcare and biotech

  • HistoSonics, provider of a noninvasive therapy for tumors, raised a $250 million private equity round led by its new owners, which include K5 Global, Bezos Expeditions and Wellington Management, as well as additional investors Founders Fund and Thiel Bio. The 16-year-old Minnesota-based company was valued at $3 billion.

  • In women’s health, weight loss treatment provider SheMed raised a $50 million Series A. Investors were not disclosed. The 1-year-old London-based company was valued at $1 billion.

Financial services

Web3

  • Blockchain payments provider Tempo, incubated by Stripe and Paradigm, raised a $500 million Series A led by Greenoaks and Thrive Capital. The less than 1-year-old San Francisco-based company was valued at $5 billion.

Energy

  • Battery-powered home energy company Base Power raised a $1 billion Series C led by Addition. The 2-year-old Austin-based company was valued at $4 billion.

Aerospace

  • Reusable rocket manufacturer Stoke Space raised a $510 million Series D led by US Innovative Technology Fund to scale manufacturing. The 6-year-old Kent, Washington-based company was valued at $2 billion.

Professional services

  • Legora’s legal platform supports lawyers with research and legal drafting. The 2-year-old Stockholm-based legal tech company raised a $150 million Series C led by Bessemer Venture Partners. It was valued at $1.8 billion.

E-commerce

  • ShopMy, which connects brands with creators for e-commerce, raised $70 million in funding led by Avenir. The 5-year-old Holden, Massachusetts-based company was valued at $1.5 billion. ShopMy says it has enabled $1 billion in sales across its platform.

Sales and marketing

  • Vantaca, which provides a platform for community management for homeowners associations, raised a $300 million private equity round led by Cove Hill Partners. Vantaca says it serves more than 500 management companies. The 9-year-old Wilmington, North Carolina-based company was valued at $1.3 billion.

Defense tech

  • Defense acquisition software company Govini raised a $150 million private equity round led by Bain Capital. The 14-year-old Arlington, Virginia-based company was valued at $1 billion.

Beauty

  • Chinese skincare brand Chando raised a $104 million funding led by Harvest Capital and L’Oreal. The 24-year-old Shanghai-based company was valued at $1 billion.

Semiconductor

  • Substrate, a company planning to build a compact lithography machine to support the manufacturing of chips in the U.S. market, raised a $100 million Series A from Founders Fund, General Catalyst and In-Q-Tel, among others. The 4-year-old San Francisco-based company was valued at $1 billion.

Read More

How to determine the right valuation for your startup | Peter Walker posted on the topic | LinkedIn

LinkedIn • Keith Teare • November 19, 2025

LinkedIn•Venture Capital•Venture

Source: LinkedIn | Peter Walker


So you’re raising between $500K and $1M for your startup. What’s the right valuation?

There isn’t one right answer, but the range depends on where you live.

Benchmarks taken from 3,370 SAFEs signed from June 2024-Sept 2025.

The ranges in the chart reflect the valuation caps on each of the signed SAFEs. The cap range runs from the 10th percentile on the left to the 90th percentile on the far right.

Findings

  1. Where you raise money matters. The top 25% or so of Bay Area valuation caps for this amount of money basically don’t exist anywhere else.

  2. There is no national market for startups. The deal you get will be based on the available investor pool and the perceived demand for your business.

  3. Founders in non-major markets do tend to get diluted more heavily at the beginning than founders in the core VC ecosystems.

  4. Valuation caps are NOT valuations. Many startups will convert at their first priced round below the cap (or right at it), which is not what investors are hoping for; see the sketch after this list.

  5. You should care about dilution, for sure. But you should not refuse to take in capital you need just because the dilution isn’t what you hoped for. Stay alive first!
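
To make point 4 concrete, here is a minimal sketch of how a valuation-cap-only SAFE converts at the first priced round. It ignores discounts, option pools, pro-rata, and the pre-/post-money cap mechanics, so the numbers are illustrative rather than a cap-table calculation; the $750K raise and $10M cap are hypothetical.

```python
def safe_ownership(investment: float, valuation_cap: float, priced_round_valuation: float) -> float:
    """Approximate ownership a cap-only SAFE converts into at the first priced round.

    Simplification: the SAFE converts at the LOWER of the cap and the priced-round
    valuation, which is why a cap is a ceiling on the conversion price, not a valuation.
    Discounts, option pools, pro-rata, and pre-/post-money cap mechanics are ignored.
    """
    conversion_valuation = min(valuation_cap, priced_round_valuation)
    return investment / conversion_valuation

investment = 750_000   # hypothetical raise in the $500K-$1M band discussed above
cap = 10_000_000       # hypothetical valuation cap

for round_valuation in (6_000_000, 10_000_000, 15_000_000):
    pct = safe_ownership(investment, cap, round_valuation) * 100
    print(f"Priced round at ${round_valuation:,}: SAFE converts to ~{pct:.1f}%")
```

When the priced round lands below the cap (the $6M case), the SAFE converts at the round price and the founders give up more ownership, which is the point: the cap is a ceiling, not the valuation.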

Read more on LinkedIn

Portfolio valuations need to stand the scrutiny of auditors #valuations #fundaudit

Youtube • Carta • November 14, 2025

Venture


Core message

  • Portfolio companies’ valuations must be defensible under formal audit review. The standard is not just a point estimate but a transparent, well-documented process that a third party can replicate and challenge. This means aligning valuation methods with recognized accounting guidance, keeping evidence organized, and showing how judgments were reached and tested.

What auditor scrutiny entails

  • Clear valuation policy: Define frequency, methods permitted, materiality thresholds, review/approval steps, and escalation paths for complex cases.

  • Documentation: Maintain contemporaneous memos explaining method selection, inputs, assumptions, and calibration; archive supporting artifacts (term sheets, cap tables, financial statements, bank statements, board decks, KPI reports).

  • Replicability: Provide workbooks or models with version control and labeled inputs so auditors can rerun calculations.

  • Governance: Evidence of preparer and reviewer sign-offs, conflict checks for any third-party providers, and board/Audit Committee oversight.

Method selection and calibration

  • Use accepted approaches—market (comparable company and transactions), income (DCF), and cost approaches—selected based on data availability and company stage.

  • Calibrate to observable transactions, especially the price of a recent financing, then adjust for timing, market conditions, and company-specific performance since the round (a minimal calibration sketch follows this list).

  • For complex cap structures, employ option-pricing methods or probability‑weighted expected returns, allocating value across preferred and common with explicit assumptions about liquidation preferences, participation, and conversion.
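
As a rough illustration of the calibration step above, here is a minimal sketch (hypothetical numbers; the function name is mine, not Carta’s) that backs out the implied revenue multiple from the last financing, adjusts it for the move in observable comparable multiples, and applies it to current revenue. A real mark would layer rights and preferences, marketability discounts, and scenario weighting on top.

```python
def calibrated_fair_value(last_round_ev: float, revenue_at_round: float,
                          revenue_now: float,
                          comp_multiple_at_round: float, comp_multiple_now: float) -> float:
    """Roll a last-round calibration forward to the measurement date.

    Backs out the implied EV/revenue multiple at the last financing, scales it by
    the change in observable comparable-company multiples since the round, and
    applies it to current revenue. Illustrative only: rights and preferences,
    marketability discounts, and scenario weighting are deliberately omitted.
    """
    implied_multiple = last_round_ev / revenue_at_round
    adjusted_multiple = implied_multiple * (comp_multiple_now / comp_multiple_at_round)
    return adjusted_multiple * revenue_now

# Hypothetical inputs: priced at 20x revenue in the last round, revenue up 60%
# since, while public comps compressed from 12x to 7x.
value = calibrated_fair_value(last_round_ev=500e6, revenue_at_round=25e6,
                              revenue_now=40e6,
                              comp_multiple_at_round=12.0, comp_multiple_now=7.0)
print(f"Calibrated fair value: ${value / 1e6:.0f}M")  # below the $500M last-round mark
```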

Evidence auditors expect

  • Tie-outs: Model inputs reconciled to source systems; cap table reconciled to legal docs; revenue and key metrics tied to ledgers.

  • Reasonableness checks: Sensitivity analyses around key drivers (revenue growth, margins, discount rates, multiples) demonstrating the range of fair value.

  • Subsequent events: Documentation of material events after the valuation date (new term sheets, covenant breaches, customer churn/wins) and whether they inform measurement or disclosure.

  • Market context: A dated set of comparable multiples with rationale for inclusion/exclusion and adjustments for size, growth, and profitability.

Common pitfalls

  • Relying on stale comps or last-round prices without calibration to current performance and market shifts.

  • Ignoring rights and preferences that materially reallocate value from common to preferred.

  • Using generic discount rates or “rules of thumb” without empirical support.

  • Insufficient write-downs in adverse scenarios, or asymmetric treatment of upsides versus downsides.

  • Poor audit trail: missing memos, unlabeled models, and undocumented judgments that prolong audits and invite challenges.

Process and timing

  • Plan on a recurring cadence (quarterly or at least annually) with a documented timetable that starts weeks before audit fieldwork.

  • Engage independent valuation specialists for higher-risk or Level 3 measurements; ensure independence disclosures and scope letters are on file.

  • Maintain a centralized data room with immutable snapshots at the valuation date, plus a log of changes and approvals to streamline auditor requests.

Implications for stakeholders

  • For GPs and CFOs: Robust, audit-ready valuations reduce the risk of adjustments, qualified opinions, and fundraising delays; they also improve consistency across funds and vintages.

  • For LPs: Transparent methodologies and governance enhance trust and comparability, enabling better portfolio risk assessment.

  • For companies: Accurate fair value signals capital needs earlier, informs secondary transactions, and helps boards set expectations around performance and dilution.

Key takeaways

  • Treat valuation as a controlled process, not a one-off number.

  • Choose methods fit for stage and data, then calibrate rigorously to observable evidence.

  • Build a defensible file: clear memos, reconciliations, sensitivities, and governance records.

  • Anticipate auditor questions by making models replicable and assumptions explicit.

  • Consistency and transparency are the fastest paths to a smooth audit and credible marks.

Read More

Seedcamp’s investment in Function Health’s $298M Series B via our Select Fund

Seedcamp • November 20, 2025

Venture


We are delighted to be investing in Function Health to define the future of personalised care.

As a Day One investor in Ezra, we were an early believer in founder Emi Gal’s mission to bring AI cancer screening to everyone. Fast forward to today, Ezra is an integral part of Function Health and we are excited to “triple down” on our support as investors in Function Health’s newly announced $298M Series B via our Select fund. Redpoint Ventures led the round with participation from Andreessen Horowitz, Aglaé Ventures, Battery Ventures, QuantumLight, Wisdom VC and others. Function is now valued at $2.5 billion.

The current medical system operates reactively, hindering individuals from proactively taking charge of their health and well-being. Recent innovations in consumer health, such as comprehensive lab testing and full-body MRI, are driving change.

Ezra and Function Health, two companies pioneering the consumer healthcare space, joined forces earlier this year to create a truly personalized healthcare platform.

On a mission to empower everyone to live 100 healthy years, Function is the first platform to make lab testing, advanced MRI and CT scans, and longitudinal health data accessible and understandable.

Function’s platform enables individuals to uncover trends and watch health transformations over time. Members can understand their whole body from heart and hormones to thyroid, nutrients, toxins, autoimmunity, immunity, and beyond.

Mark Hyman, MD, Chief Medical Officer and Co-Founder at Function and former Cleveland Clinic physician:

“Function is the most powerful approach I’ve seen in my career as a doctor. It delivers uncompromising depth with no shortcuts. This is the new standard for health.”

Alongside Dr Mark Hyman, Function Health’s world-class founding team includes CEO Jonathan Swerdlin, Chief Business Officer Pranitha Patil, and Chief Design Officer Seth Weisfeld.

New additions to the leadership team, and further proof of Function Health’s ability to attract strong talent, include:

  • Neil Shah, COO. He brings extensive operational experience from his previous roles at Bumble, Slack, and Twitter

  • Daniel K. Sodickson, MD, PhD, Chief Medical Scientist and MI Lab Co-Director

  • Tiffany Lester, MD, Women’s Health Medical Director

Function Health also announced the launch of the Medical Intelligence Lab (MI Lab), co-directed by Chief Medical Scientist Dan Sodickson, MD, PhD. Bringing together a team of top clinicians, researchers and technologists, the MI Lab focuses on leveraging AI to develop Medical Intelligence — a system designed to achieve the deepest view of each person’s unique biology by unifying data from lab testing, imaging, wearables, IoT devices, and medical records, integrating it with global medical research and the expertise of leading clinicians.

Function’s Medical Intelligence has three new AI capabilities:

  • Private AI Chat allows members to ask questions and receive responses informed by their health data, providing context-aware explanations and actionable insights.

  • Protocols translate complex health data into easy-to-understand steps members can put into practice immediately.

  • Upload Health Records allows members to upload past lab test results, visit notes, etc., into a secure vault that informs Private AI Chat and Protocols.

On Function Health’s mission and the newly launched platform, Jonathan Swerdlin, Co-founder and CEO of Function Health, emphasizes:

“This is bigger than any company or trend. Function’s MI Lab and Medical Intelligence introduces a new chapter in human health. This is the most important application of AI—helping people avoid suffering and preventable death.”

Function Health in numbers

Function offers access to over 160 comprehensive lab tests to detect 1,000+ diseases, along with detailed and actionable insights from the world’s top doctors for just $365 per year or $1/day.

2,000+ lab test locations across the U.S.

132 MRI scan locations

50+ million results delivered to Function members

Read More

Cursor Hit $1B ARR in 24 Months: The Fastest B2B To Scale Ever?

Saastr • Jason Lemkin • November 18, 2025

Venture

Cursor just crossed $1B in ARR less than 24 months from launch. And just closed a $2.3 billion Series D at a stunning $29.3 billion valuation.

For a company that launched its product … 17 months ago.

This isn’t just another funding announcement. This is the fastest value creation in B2B history, so far. Faster than OpenAI. And the data behind it rewrites everything we thought we knew about how fast B2B companies can scale.

The Numbers Are Jaw Dropping

Let’s start with the headline metrics:

Valuation Journey:

  • April 2022: $400K pre-seed

  • October 2023: Seed at ~$50M implied (based on $8M raise)

  • August 2024: $400M post-money (Series A)

  • December 2024: $2.6B post-money (Series B)

  • June 2025: $9.9B post-money (Series C)

  • November 2025: $29.3B post-money (Series D)

That’s a 73,250x increase in valuation in 43 months.

Revenue Trajectory:

  • December 2023: $1M ARR

  • April 2024: $4M annualized run-rate

  • October 2024: $48M ARR

  • January 2025: $100M ARR (20 months from launch)

  • June 2025: $500M ARR

  • November 2025: $1B+ ARR

From $1M to $1B ARR in 24 months. We’ve never seen anything like it.

The Series B to Series D Run Is Unprecedented

Here’s what really gets me:

December 2024 → November 2025 (11 months):

  • Valuation: $2.6B → $29.3B (11.3x increase)

  • ARR: ~$100M → $1B+ (10x increase)

  • Capital raised: $2.3B across two rounds

They raised their Series B at $2.6B in December 2024. Four months later, they raised their Series C at $9.9B. Five months after that, they raised their Series D at $29.3B.

The valuation jumped 3.8x between Series B and C in 4 months.

Then it jumped another 3.0x between Series C and D in 5 months.
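
For anyone who wants to sanity-check those multiples, a quick back-of-the-envelope using only the figures quoted above:

```python
# Figures as reported above, in dollars.
pre_seed = 0.4e6      # April 2022
series_b = 2.6e9      # December 2024
series_c = 9.9e9      # June 2025
series_d = 29.3e9     # November 2025

print(f"Pre-seed to Series D: {series_d / pre_seed:,.0f}x")   # ~73,250x in 43 months
print(f"Series B to Series C: {series_c / series_b:.1f}x")    # ~3.8x in 4 months
print(f"Series C to Series D: {series_d / series_c:.1f}x")    # ~3.0x in 5 months
print(f"Series B to Series D: {series_d / series_b:.1f}x")    # ~11.3x in 11 months
```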

I’ve been investing in SaaS for 15+ years. I’ve never seen anything close to this velocity.

Read More

Crunchbase: We’re At The Highest Level of Unicorn Production in 3+ Years

Saastr • November 20, 2025

Venture


The data is in. And it’s remarkable. And it’s what we’ve all been feeling and seeing in venture. The Unicorns Really Are Back.

Back to The Highest Level In 3+ Years.

October 2025 just became the biggest month for unicorn creation in over three years. Not just by count. By aggregate valuation. By momentum. By every metric that matters.

Twenty companies joined The Crunchbase Unicorn Board in October. Together, they added $44.5 billion in value. In one month.

This is the highest valuation amount added to the unicorn board for a new cohort in the past three years.

But capital is concentrated in the winners more than ever in the Age of AI, and more startups are going unfunded.

The deal count is also up, but nowhere near the pace of 2021 yet.

Why This Matters For B2B

If you’re building a B2B SaaS company right now, here’s what this data actually means for you:

The funding environment is normalizing. Not back to 2021 craziness. But back to rational growth funding for companies with real metrics. If you’re doing $2M-$5M ARR with strong net retention and reasonable CAC payback, there’s capital available. More capital than there was 12-18 months ago.

AI-native companies are getting disproportionate attention. If your product has meaningful AI capabilities baked in from day one (not bolted on, not a chatbot feature, but real AI that changes how the product works), you’re in a different category. VCs are specifically hunting for these companies. The valuations reflect it.

The bar is higher but the rewards are bigger. Twenty unicorns in a month sounds like a lot. Until you realize thousands of companies raised Series A, B, and C rounds in October. The conversion rate to unicorn is still low. You need exceptional execution. But if you get there, the valuations are back to healthy levels.

The Bottom Line

We’re at an inflection point. The data from Crunchbase confirms what I’ve been seeing in deal flow and portfolio company performance: the market is healing. Not overnight. Not back to excess. But healing toward sustainable, rational funding for excellent companies.

Read More

Regulation

Meta wins US case that threatened split with WhatsApp and Instagram

Ft • the Federal Trade Commission • November 18, 2025

Regulation•USA•Antitrust•Big Tech•Meta


Overview of the Ruling

A US federal court has rejected an antitrust case seeking to force Meta to divest Instagram and WhatsApp, removing what had been described as an existential threat to the company’s current structure. The Federal Trade Commission (FTC) had argued that Meta maintained an illegal monopoly in social networking through a “buy‑or‑bury” strategy aimed at neutralising emerging rivals, but the judge concluded the agency failed to prove that Meta presently holds monopoly power in the relevant market. (ft.com)

The decision turns back one of the most high‑profile efforts by US regulators to unwind past Big Tech acquisitions and dramatically reshape a dominant platform’s business model. It follows a multi‑year legal battle initiated in 2020, reflecting the broader policy push to test aggressive antitrust enforcement against large technology companies. (en.wikipedia.org)

Court’s Reasoning and Market Definition

  • The court held that the FTC had not shown Meta currently wields monopoly power in social media or “personal social networking,” stressing that the competitive landscape has changed significantly since the lawsuit was first filed. (apnews.com)

  • Judge James Boasberg emphasised that platforms such as TikTok and YouTube must be treated as meaningful competitors, criticising the FTC’s narrow market definition that largely excluded these services. (ft.com)

  • The opinion stressed that market power must be evaluated in the present, not simply inferred from past dominance or historic acquisitions, and that apps like Facebook, Instagram, YouTube and TikTok now offer “reasonably interchangeable” features. (theverge.com)

In a notable passage, the judge compared the fast‑moving dynamics of social media to a river that cannot be stepped in twice, underlining that the environment in which the FTC originally brought the suit has “changed markedly,” particularly with TikTok now “center stage” as Meta’s fiercest rival. (wusf.org)

FTC’s Case and Meta’s Response

  • The FTC had accused Meta of pursuing “killer acquisitions” when it bought Instagram in 2012 and WhatsApp in 2014, alleging the company paid high prices specifically to neutralise nascent competitors and protect a social‑networking monopoly. (livemint.com)

  • As a remedy, the agency sought structural separation, pushing for the spin‑off of Instagram and WhatsApp into independent entities, arguing that only a break‑up could restore competition and user choice. (wusf.org)

  • Meta defended the deals as legitimate acquisitions of promising products in an intensely competitive market, contending that regulators approved both transactions at the time and that the company now faces robust competition for users’ attention and advertising budgets. (apnews.com)

Meta welcomed the ruling as recognition that it operates in a highly competitive environment and as validation that its products benefit consumers and businesses. The FTC expressed disappointment and said it is reviewing its options, leaving open the possibility of appeal or alternative enforcement strategies. (ft.com)

Broader Implications for Tech Antitrust

This judgment is widely seen as another setback for US efforts to use traditional antitrust law to break up or structurally reshape Big Tech firms. It follows other government losses, including in a recent case against Google, and underscores the difficulty of convincing courts that established platforms remain unlawful monopolies amid rapid entry and innovation by rivals. (ft.com)

Key implications include:

  • Regulators may need more up‑to‑date economic evidence and broader theories of competition that account for cross‑platform substitution in attention and advertising markets.

  • Courts appear reluctant to unwind decade‑old, previously approved mergers absent clear proof of sustained, present‑day monopoly power and consumer harm.

  • The decision may embolden other large platforms facing antitrust scrutiny, while pushing enforcers toward alternative tools such as conduct remedies, sector‑specific regulation, or new legislation targeting digital platforms.

Overall, the ruling preserves Meta’s integrated structure—keeping Facebook, Instagram and WhatsApp under one corporate roof—and signals that the bar for break‑up remedies against Big Tech remains very high under current US antitrust doctrine.

Read More

Opinion | The FTC’s Meta Antitrust Implosion

Wsj • The Editorial Board • November 19, 2025

Regulation•USA•Antitrust•Big Tech•Meta

Overview of the Case and Its Outcome

  • The piece argues that a major antitrust case brought by federal regulators against a large social media company has effectively collapsed after years of litigation.

  • Regulators had claimed the company maintained an illegal monopoly in social networking, largely through its acquisitions of two key platforms, a photo-sharing app and a messaging service.

  • The article notes that after roughly five years, the courts have not embraced this theory, especially in light of intense competition from newer platforms and shifting user behavior online.

  • The central theme is that antitrust enforcement built on static views of “monopoly” struggles to survive in fast-moving digital markets.

Regulators’ Legal Theory vs. Market Reality

  • Regulators framed the company as a dominant “social networking” platform that allegedly bought rivals to neutralize competitive threats.

  • The acquisitions of Instagram (photo-sharing) and WhatsApp (messaging) were characterized as “killer acquisitions” meant to preserve a monopoly.

  • The article stresses that these deals were reviewed by regulators at the time and allowed to proceed, which weakens the credibility of trying to unwind them many years later.

  • Since the case was filed, the online landscape has shifted: short‑video apps, messaging competitors, and niche communities have fragmented user attention.

  • This evolution undercuts the claim that one company can be treated as a durable, unchallenged monopolist in social networking.

Judicial Skepticism and Case Weaknesses

  • The piece highlights that the presiding judge repeatedly pushed back on the government’s market definitions and evidence.

  • A core weakness is said to be the attempt to carve out “personal social networking” as a distinct market in which only one or two firms allegedly compete, while ignoring broader online attention markets and adjacent services.

  • The judge’s skepticism suggests that courts demand concrete, quantifiable proof of monopoly power, not just narratives about “big tech” dominance.

  • The article implies that, as competition surged from newer entrants, regulators’ original market theory looked increasingly outdated, contributing to the implosion.

Implications for Antitrust Strategy and Big Tech

  • The outcome is framed as a major setback for current antitrust leadership that has pursued aggressive cases against large technology platforms.

  • It suggests that political enthusiasm for “reining in” big tech cannot substitute for rigorous economic analysis and credible legal theories.

  • The piece warns that retroactive attacks on previously cleared mergers create uncertainty for businesses: companies cannot reliably plan acquisitions if approvals can be reversed years later on shifting political tides.

  • It argues that overbroad or weak antitrust actions risk wasting public resources and may deter pro‑competitive investment and innovation in the technology sector.

Broader Lessons for Regulation in Digital Markets

  • The article contends that dynamic markets—especially online platforms—change too fast for regulation built on static snapshots of market share.

  • It suggests antitrust should focus on clear consumer harm, such as sustained higher prices or reduced output, rather than punishing size or success alone.

  • The collapse of this case is presented as evidence that general concerns about “bigness” and speculative theories about future harm are insufficient in court.

  • In conclusion, the piece argues that regulators should recalibrate their approach: concentrating on well‑grounded cases supported by robust economic evidence instead of high‑profile but fragile lawsuits against technology firms.

Read More

Ro Khanna on limiting AI via a Tax Code

X • friedberg • November 15, 2025

X•Regulation


Friedberg to Rep. Ro Khanna: Don’t Stall AI’s Organizational Evolution—It Can Unlock Higher-Paying Work

Key takeaway: David Friedberg argues that policies aimed at preventing AI-driven organizational change risk suppressing the very technological progress that can create higher-paying jobs and new entrepreneurial pathways for workers.

Context

In a reply to Rep. Ro Khanna, Friedberg challenges the premise of protecting existing jobs by constraining how organizations adopt AI. He contends that such constraints limit technology’s capacity to raise worker value and generate better opportunities, drawing a historical parallel to the adoption of tractors in agriculture.

Core Points from the Thread

  • AI as a job creator, not just a disruptor: Friedberg asks, “what if the AI creates/enables/unlocks new higher-paying jobs?” suggesting that policy should account for AI’s potential to expand opportunity rather than only protect the status quo.

  • Organizational evolution matters: He argues that trying to prevent organizational change due to technology “limits technology’s ability to create more value for workers.” In other words, value creation often requires companies to restructure workflows and roles around new capabilities.

  • Historical analogy—tractors: Friedberg suggests that if society had tried to protect jobs by blocking the tractor’s adoption, it would have curtailed productivity gains and the downstream economic benefits that later supported new kinds of work.

  • A concrete worker-to-founder pathway: He offers an example of a person earning $15/hour at a San Jose bicycle shop who aspires to build a custom bike company but lacks capital, a labor pool, and capabilities. Friedberg implies that “tomorrow” (with advancing technology like AI), those barriers could be lowered—enabling new forms of small-scale entrepreneurship. The thread hints at this future-state but does not complete the scenario in the excerpt provided.

Selected Quotes

“what if the AI creates/enables/unlocks new higher-paying jobs? by trying to prevent organizational evolution due to technology, you are limiting technology’s ability to create more value for workers.”

“if you had done this with the emergence of the tractor to protect loss of jobs”

“imagine: someone is working for $15/hr in a bicycle shop in san jose. they aspire to one day create their own bicycle company, selling custom bikes to customers. but how? today they don’t have the labor pool, startup capital, or capabilities. tomorrow,”

Why this matters

  • Policy timing and design: The argument underscores a classic innovation-policy tension: guard against harms without freezing the organizational changes that make productivity gains and new job categories possible.

  • Equity and opportunity: The bike-shop example frames AI as an equalizer—potentially reducing capital and capability barriers for workers to become founders.

Discussion Starters

  • What kinds of policy guardrails protect workers during transitions without stifling organizational adaptation to AI?

  • Which real-world examples show AI enabling new, higher-paying roles or solo entrepreneurship today?

  • How can training, tooling, and access to capital be aligned so more workers can leverage AI to move up the value chain?

Note: The second tweet excerpt ends mid-thought (“tomorrow,”). Follow the full thread for any additional details or examples from Friedberg.

Read More

GeoPolitics

Ben Horowitz & Marc Andreessen: Why Silicon Valley Turned Against Defense (And How We’re Fixing It)

Youtube • a16z • November 19, 2025

Venture•GeoPolitics


Overview

This content directs viewers to a conversation featuring Ben Horowitz and Marc Andreessen discussing why Silicon Valley historically distanced itself from the defense sector and what is changing now. The central themes include the cultural and political forces that pushed technologists away from working with the military, the strategic risks this created for the United States and its allies, and the emerging movement to rebuild a robust relationship between cutting‑edge startups, venture capital, and defense needs. The framing implies that the speakers see both a moral and strategic imperative for top talent and capital in Silicon Valley to re-engage with national security and defense innovation.

Historical Split Between Silicon Valley and Defense

  • The discussion explores how Silicon Valley, which originally had deep roots in defense and aerospace, gradually adopted the view that “defense = bad, consumer tech = good,” especially after the Cold War and the dot‑com era.

  • Cultural shifts in the tech industry, combined with political debates about war, surveillance, and civil liberties, led many founders and engineers to avoid defense work altogether.

  • This distance is presented as a break from earlier generations of technologists who saw national defense as a central mission and believed that advanced technology could be a stabilizing force in global security.

Consequences of Turning Away from Defense

  • The speakers emphasize that when Silicon Valley’s best minds and capital avoid defense, adversarial nations can gain relative advantages by investing heavily in their own military technologies.

  • They argue that the United States risks ceding leadership in key areas like AI, autonomy, cyber, and space if the highest-performing startup and VC ecosystem in the world remains focused solely on ad tech, entertainment, and consumer apps.

  • The conversation suggests that this gap has already contributed to slower adoption of new technologies inside Western defense institutions, creating vulnerabilities and “capability mismatches” on future battlefields.

Why the Relationship Is Changing Now

  • The video positions the present moment as an inflection point where geopolitical tensions, visible conflicts, and rapid advances in AI and robotics are forcing a reassessment of the tech industry’s role in defense.

  • Founders and investors are increasingly seeing defense as both a crucial mission and a massive, under-served market that requires modern software, agile hardware development, and new business models.

  • The speakers highlight a shift in mindset: instead of viewing defense contracts as slow, bureaucratic, and ethically fraught, more entrepreneurs are starting to see them as high-leverage opportunities to protect democratic societies and deter aggression.

How Silicon Valley Can Help Fix Defense

  • The conversation outlines how startups can bring speed, iteration, and software-first thinking to defense procurement and capability development.

  • Venture capital can fund dual-use technologies—AI, autonomy, sensors, cyber tools—that have both commercial and defense applications, allowing innovation to move faster than traditional government R&D pipelines.

  • The speakers advocate for rebuilding trust and collaboration between the Pentagon, Congress, and the tech ecosystem, including better procurement processes, clearer ethical frameworks, and more direct engagement between founders and defense leaders.

Implications and Broader Impact

  • Re-engagement between Silicon Valley and defense is framed as essential not just for U.S. security, but for preserving an open, rules-based international order in the face of authoritarian rivals.

  • The discussion implies that moral responsibility for technologists includes considering the consequences of inaction—i.e., what happens if democratic nations lack the technological capabilities to defend themselves.

  • If successful, this renewed alignment could reshape venture capital priorities, spur a new generation of “defense-first” startups, and redefine what it means to build consequential technology companies in the 21st century.

Read More

AI Fund’s GP, Andrew Ng: LLMs as the Next Geopolitical Weapon & Do Margins Still Matter in AI?

Youtube • 20VC with Harry Stebbings • November 17, 2025

AI•Tech•LLMs•Geopolitics•AI Startups•GeoPolitics


The content presents a YouTube conversation focused on large language models (LLMs), their geopolitical significance, and the changing economics of AI businesses. The central theme is that LLMs are becoming strategic assets for nations and corporations, comparable to previous “general purpose” technologies, and that this shift is forcing a re‑examination of what margins, defensibility, and value capture look like in AI-first companies. Alongside this, there is an exploration of how AI-native products differ from traditional SaaS and why old valuation frameworks may not fully apply.

LLMs as Geopolitical Infrastructure

  • LLMs are framed as a new layer of digital infrastructure with national security implications, similar to energy or semiconductors.

  • Control over compute, data, and cutting-edge models is discussed as a potential “geopolitical weapon,” influencing soft power, economic competitiveness, and cyber capabilities.

  • There is an implicit argument that countries able to develop and deploy advanced LLMs at scale will gain advantages in intelligence analysis, information operations, and automation of knowledge work.

  • The conversation highlights the risk of concentration: if only a few nations or firms control frontier models, others become dependent on them, which has strategic consequences.

Economics of AI and the Question of Margins

  • A major question posed is whether traditional software expectations of 70–80% gross margins remain realistic in an AI world dominated by high inference and training costs.

  • AI companies are described as sitting on a cost stack heavily influenced by GPU pricing, cloud contracts, and continuous retraining, making unit economics more volatile than classic SaaS.

  • The discussion emphasizes:

  • The tradeoff between model quality and cost per token or per API call.

  • The importance of optimizing infrastructure, model size, and architecture for specific use cases.

  • The role of custom models and fine‑tuning in improving both performance and cost efficiency.

  • There is a suggestion that investors and founders may need to accept lower gross margins in exchange for much larger addressable markets and deeper product integration into workflows (a minimal margin sketch follows this list).
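
A minimal sketch of that margin math, with every number below hypothetical (the price per million tokens, serving cost, and tokens per request are illustrative, not figures from the conversation):

```python
def ai_gross_margin(price_per_m_tokens: float, cost_per_m_tokens: float,
                    tokens_per_request: int, other_cogs_per_request: float = 0.0) -> float:
    """Gross margin for an AI product priced and costed per request.

    Illustrative only: real COGS also include retraining, fine-tuning,
    retrieval infrastructure, retries, and committed GPU capacity.
    """
    revenue = price_per_m_tokens * tokens_per_request / 1_000_000
    cogs = cost_per_m_tokens * tokens_per_request / 1_000_000 + other_cogs_per_request
    return (revenue - cogs) / revenue

# Hypothetical: charge $15 per million tokens, pay $6 per million to serve,
# 8K tokens per request, plus $0.01 of orchestration/storage per request.
margin = ai_gross_margin(price_per_m_tokens=15.0, cost_per_m_tokens=6.0,
                         tokens_per_request=8_000, other_cogs_per_request=0.01)
print(f"Gross margin: {margin:.0%}")  # ~52%, well below the classic 70-80% SaaS benchmark
```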

Defensibility, Data, and Verticalization

  • Defensibility in AI is presented as moving away from pure algorithmic advantage toward a bundle of:

  • Proprietary or hard-to-replicate data.

  • Deep integration into customer workflows and systems.

  • Domain-specific models that outperform general-purpose LLMs on targeted tasks.

  • Vertical AI products (e.g., for healthcare, legal, or financial services) are highlighted as promising because they can combine specialized data, regulatory know‑how, and tailored models to create stickiness.

  • Network effects may emerge through feedback loops: more usage generates better fine-tuning data, which improves model performance, which in turn attracts more users.

Implications for Startups, Investors, and Policy

  • For startups, the key implication is that pure “wrapper” plays around foundation models are fragile; long-term value will accrue to those who own differentiated data, distribution, or domain expertise.

  • For investors, standard SaaS metrics must be adapted to evaluate:

  • Long-term trajectories of compute costs.

  • The defensibility of data pipelines and model IP.

  • How effectively AI is embedded in mission‑critical processes rather than offered as a superficial feature.

  • On the policy side, the framing of LLMs as geopolitical assets implies:

  • Growing pressure for export controls on advanced chips and models.

  • Potential requirements around data localization and model safety.

  • An arms-race dynamic where states fund and protect domestic AI ecosystems.

Overall Takeaways

  • LLMs are not just a technical breakthrough but a strategic resource reshaping economic competition and geopolitics.

  • The traditional software playbook—high margins, light infrastructure, and simple distribution—does not map neatly onto AI-native companies, which must manage heavy compute, data pipelines, and regulatory risk.

  • Long-term winners will likely combine strong technical capabilities with unique data, deep vertical focus, and thoughtful navigation of geopolitical and policy constraints.

Read More

The State of AI: the new rules of war

Ft • November 17, 2025

GeoPolitics•Defence•Autonomous Weapons•Military AI•Ethics In Warfare


Military planners are rapidly integrating artificial intelligence into warfare, hoping to create forces that are faster, more precise and less dependent on human frailty. The piece contrasts this ambition with mounting fears that AI-driven systems could escalate conflicts beyond human control, undermining ethical and legal safeguards and reshaping the global balance of power.

Imagined Conflict and Escalation Risks

  • The article opens with a near‑future scenario: in 2027, China moves to invade Taiwan using autonomous attack drones, AI‑enabled cyber operations that sever power and communications, and large‑scale AI‑driven disinformation campaigns to mute global outrage.

  • This vignette is used to illustrate the “dystopian horror” that AI brings to modern war: speed, scale and ambiguity can outstrip human decision‑making, increasing the risk of rapid, unintended escalation.

  • Military leaders want a “digitally enhanced” force, yet as AI takes a central role, they risk losing meaningful control over how conflicts unfold.

Ethical Red Lines and Regulatory Debates

  • The article frames AI in war as an “Oppenheimer moment,” arguing that understanding and mitigating the risks of AI is a defining strategic task of this era.

  • There is emerging consensus in Western policy circles that nuclear‑weapons launch decisions must never be delegated to AI systems.

  • UN secretary‑general António Guterres has called for an outright ban on fully autonomous lethal weapon systems, highlighting concerns over accountability, proportionality and civilian protection.

  • Researchers at Harvard’s Belfer Center caution that the public and some policymakers overestimate what current AI can do in combat, warning that sci‑fi narratives obscure technical limits and operational fragility.

Reality Check: Current Military Uses of AI

  • The article argues that “complete automation of war is an illusion.” Quoting Professor Anthony King, it notes that AI is more likely to augment than replace humans by sharpening analysis and improving situational awareness.

  • Three main military use cases are identified:

  • Planning and logistics: optimizing supply chains, maintenance and troop movements.

  • Cyber operations: supporting sabotage, espionage, hacking and information operations.

  • Targeting: the most controversial domain, where AI tools help select and prioritize targets.

  • In Ukraine, AI‑enabled software guides drones that can evade electronic jamming and continue toward pre‑planned targets.

  • In Gaza, Israel’s “Lavender” system reportedly identified tens of thousands of potential human targets. The article flags bias concerns but notes that some Israeli intelligence officers say they trust a “statistical mechanism” more than emotionally affected soldiers.

Legal, Moral and Technical Accountability

  • Developers and some military advocates contend that existing laws of armed conflict provide sufficient regulation, arguing that AI simply changes tools, not legal principles.

  • Critics respond that opacity, data bias, unpredictable failure modes and the difficulty of tracing responsibility in complex AI systems demand new oversight mechanisms.

  • The piece underscores the tension between the perceived promise of “cleaner,” more accurate warfare and the reality that AI errors or mis‑training can scale harm, not reduce it.

Economic Incentives and Industry Shifts

  • A powerful driver of militarized AI is money. Companies spending vast sums to train and run frontier models are increasingly looking to defense contracts as a lucrative revenue stream.

  • The Pentagon and European defense ministries are portrayed as deep‑pocketed buyers, eager to modernize and less hesitant than in the past to work with start‑ups.

  • Venture capital investment in defense tech has surged, with this year’s funding already outstripping last year’s total, reflecting investors’ expectations of long-term demand.

  • The article notes a cultural shift in parts of the tech industry: where some firms once rejected military work on ethical grounds, many now collaborate openly with defense contractors, influenced by both security arguments and financial incentives.

Implications for Future Warfare

  • The piece suggests that AI could lower the perceived political and human cost of using force if leaders believe strikes will be more precise, potentially making war more frequent.

  • It emphasizes that neither blanket optimism nor total prohibition is adequate; instead, societies must grapple with how to keep humans “in the loop” in meaningful ways, define red lines and impose transparency and accountability on developers and militaries.

  • Ultimately, AI is presented not as an inevitable path to autonomous killing machines, but as a powerful, error‑prone set of tools whose integration into war will reflect political choices, regulatory strength and public scrutiny.

Read More

Interview of the Week

What Yogi Berra can teach Silicon Valley: From Tulip and Railway Manias to Dotcom and AI Bubbles

Keenon • Andrew Keen • November 15, 2025

Venture•Interview of the Week


“Predictions are hard,” Yogi Berra once quipped, “especially about the future.” Yes, they are. But in today’s AI boom/bubble, how exactly can we predict the future? According to Silicon Valley venture capitalist Aman Verjee, access to the future lies in the past. In his new book, A Brief History of Financial Bubbles, Verjee looks at history - particularly the 17th-century Dutch tulip mania and the railway mania of 19th-century England - to make sense of today’s tech economics. So what does history teach us about the current AI exuberance: boom or bubble? The Stanford and Harvard-educated Verjee, a member of the PayPal Mafia who wrote the company’s first business plan with Peter Thiel and who now runs his own venture fund, brings both historical perspective and insider experience to this multi-trillion-dollar question. Today’s market is overheated, the VC warns, but it’s more nuanced than 1999. The MAG-7 companies are genuinely profitable, unlike the dotcom darlings. Nvidia isn’t Cisco. Yet “lazy circularity” in AI deal-making and pre-seed valuations hitting $50 million suggest that traditional symptoms of irrational exuberance are returning. Even Yogi Berra might predict that.

Every bubble has believers who insist “this time is different” - and sometimes they’re right. Verjee argues that the 1999 dotcom bubble actually created lasting value through companies like Amazon, PayPal, and the infrastructure that powered the next two decades of growth. But the concurrent telecom bubble destroyed far more wealth through outright fraud at companies like Enron and WorldCom.

Bubbles always occur in the world’s richest country during periods of unchallenged hegemony. Britain dominated globally during its 1840s railway mania. America was the sole superpower during the dotcom boom. Today’s AI frenzy coincides with American technological dominance - but also with a genuine rival in China, making this bubble fundamentally different from its predecessors.

The current market shows dangerous signs but isn’t 1999. Unlike the dotcom era when 99% of fiber optic cable laid was “dark” (unused), Nvidia could double GPU production and still sell every chip. The MAG-7 trade at 27-29 times earnings versus the S&P 500’s 70x multiple in 2000. Real profitability matters - but $50 million pre-seed valuations and circular revenue deals between AI companies echo familiar patterns of excess.
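
To see why that gap matters, it helps to convert the multiples into earnings yields. The quick calculation below simply reuses the 27-29x and 70x figures quoted above; it is a back-of-the-envelope illustration, not market data.

```python
# Back-of-the-envelope comparison of the valuation multiples quoted above.
# The inputs are the figures cited in the interview summary (27-29x for
# today's MAG-7, ~70x for the 2000-era benchmark); illustrative only.

def earnings_yield(pe_multiple: float) -> float:
    """Earnings yield is the inverse of the price/earnings multiple."""
    return 1.0 / pe_multiple

mag7_pe = 28.0      # midpoint of the 27-29x range cited for the MAG-7
dotcom_pe = 70.0    # multiple cited for the 2000 comparison

print(f"MAG-7 earnings yield:    {earnings_yield(mag7_pe):.1%}")    # ~3.6%
print(f"2000-era earnings yield: {earnings_yield(dotcom_pe):.1%}")  # ~1.4%
print(f"Relative stretch:        {dotcom_pe / mag7_pe:.1f}x")       # ~2.5x
```

At the cited multiples, a dollar invested in today’s leaders buys roughly two and a half times more current earnings than a dollar invested at the 2000 benchmark did, which is the nuance behind “overheated, but not 1999.”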

Government intervention in markets rarely ends well. Verjee warns against America adopting an industrial policy of “picking winners” - pointing to Japan’s 1980s bubble as a cautionary tale. Thirty-five years after its collapse, Japan’s GDP per capita remains unchanged. OpenAI is not too big to fail, and shouldn’t be treated as such.

Immigration fuels American innovation - full stop. When anti-H1B voices argue for restricting skilled immigration, Verjee points to the counter-evidence: Elon Musk, Sergey Brin, Sundar Pichai, Satya Nadella, Max Levchin, and himself - all H1B visa holders who created millions of American jobs and trillions in shareholder value. Closing that pipeline would be economically suicidal.

Read More

Startup of the Week

Function Health raises $298M Series B at $2.5B valuation

Techcrunch • Kate Park • November 19, 2025

Venture•Startup of the Week


From electronic health records and blood tests to the stream of data from wearable devices, the amount of health information people generate is accelerating rapidly. Yet, many users struggle to connect this trove of data in a meaningful way and actually use it to improve their health.

Function Health, which offers a regular lab testing service to help people track their health, wants to change that by consolidating health data and making it usable for its customers by connecting that data to an AI model. To further that effort, the company recently raised $298 million in a Series B round led by Redpoint Ventures at a valuation of $2.5 billion.

The funding round also saw participation from a16z, Aglaé Ventures, Alumni Ventures, NBA athletes Allen Crabbe, Blake Griffin and Taylor Griffin, Battery Ventures, Nat Friedman and Daniel Gross’ investment firm, NFDG, and Roku founder Anthony Wood. The round brings the company’s total capital raised to $350 million.

Alongside the funding, Function unveiled Medical Intelligence Lab, an effort to build a “medical intelligence” generative AI model that provides personalized health insights drawing on users’ data as well as medical content and research. The company said the model is trained by doctors. For its customers, Function is offering an AI chatbot that answers questions based on their health data, tapping previous lab results, doctor’s notes and scans to provide tailored guidance.
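
TechCrunch describes the chatbot only at a high level. As a purely illustrative aside, the sketch below shows how a retrieval-grounded health Q&A flow of that general shape is often wired: pull the most relevant records, assemble them into context, then ask the model. Every name, data structure and scoring heuristic in it is hypothetical and does not reflect Function’s actual implementation.

```python
# Hypothetical sketch of a retrieval-grounded health Q&A flow, in the spirit of
# the chatbot described above. Every name, structure and heuristic here is
# illustrative; none of it reflects Function Health's actual implementation.

import re
from dataclasses import dataclass
from typing import List, Set

@dataclass
class HealthRecord:
    kind: str   # e.g. "lab_result", "doctor_note", "scan_report"
    date: str   # ISO date of the record
    text: str   # plain-text content of the record

def words(text: str) -> Set[str]:
    """Lowercase word set, a crude stand-in for an embedding of the text."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(records: List[HealthRecord], question: str, k: int = 3) -> List[HealthRecord]:
    """Return the k records sharing the most words with the question."""
    q = words(question)
    return sorted(records, key=lambda r: len(q & words(r.text)), reverse=True)[:k]

def build_prompt(question: str, context: List[HealthRecord]) -> str:
    """Assemble the retrieved records and the question into a single prompt
    that a 'medical intelligence' model could answer from."""
    lines = ["Answer using only the records below.", ""]
    lines += [f"[{r.kind} | {r.date}] {r.text}" for r in context]
    lines += ["", f"Question: {question}"]
    return "\n".join(lines)

if __name__ == "__main__":
    records = [
        HealthRecord("lab_result", "2025-09-01", "Ferritin 18 ng/mL, below reference range."),
        HealthRecord("doctor_note", "2025-09-10", "Patient reports fatigue; discussed iron intake."),
        HealthRecord("scan_report", "2024-05-02", "Chest X-ray unremarkable."),
    ]
    question = "Why might I be feeling fatigue lately?"
    prompt = build_prompt(question, retrieve(records, question, k=2))
    print(prompt)  # in a real system this prompt would go to the hosted model
```

In a production system the keyword overlap would typically be replaced by embedding search over encrypted records and the assembled prompt handed to the hosted model, but the retrieve-then-answer shape is the common pattern.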

“It is not good enough to be in a world where AI exists and not be applying it to your health,” Jonathan Swerdlin, CEO and co-founder of Function, told TechCrunch. “You should be able to manage your biology. The objective of Function Health is to apply the best available technology to human health.”

Swerdlin noted the platform meets HIPAA standards, fully encrypts user data, and never sells personal information. “Your data and your identity are never for sale. Every bit of your information is fully encrypted and protected. We are committed to keeping you, and your data, safe.”

Function’s chief medical scientist, Dr. Dan Sodickson, and its co-founder and chief medical officer, Dr. Mark Hyman, are jointly leading development of MI Lab and its team of doctors, researchers and engineers. The doctors who train the MI model stay involved throughout the process, Swerdlin said.

While the space has many players, Function sets itself apart from competitors like Superpower, Neko Health and InsideTracker thanks to its device-agnostic approach, Swerdlin said, adding that the platform integrates lab testing, diagnostics and clinical insights to offer more than a typical AI coach or wellness app.

Function has 75 locations in the U.S., and plans to have almost 200 by the end of this year, he added. Function says it has completed more than 50 million lab tests since 2023.

Read More

Post of the Week

Elon Musk: In the future, ‘work will be optional’ and ‘currency becomes irrelevant’

Youtube • CNBC Television • November 19, 2025

AI•Work•Automation•PostScarcity•ElonMusk•Post of the Week


Overview

The content presents a very short video statement in which Elon Musk sketches a future shaped by advanced artificial intelligence and automation. He argues that, as AI and robots become capable of doing most economically valuable tasks, human labor will no longer be required for survival. In that world, he suggests, traditional notions of “work” and “money” are transformed: work becomes a choice rather than a necessity, and currency itself may lose its central role in organizing economic life.

Work Becomes Optional

  • Musk’s core claim is that in the long run “work will be optional,” implying that people will not need jobs to secure basic needs such as food, housing, and healthcare.

  • The idea rests on an assumption of near-universal automation, where machines produce goods and deliver services at such low marginal cost that scarcity is dramatically reduced (a toy calculation after this list illustrates the arithmetic).

  • In this vision, humans would be free to pursue activities for meaning, creativity, status, or personal fulfillment rather than to earn a living wage.

  • Work, in this scenario, resembles a hobby or vocation: something one does because one wants to, not because one must.
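
The clip doesn’t spell out the economics, but the assumption can be made concrete with a toy calculation: if competition keeps prices near marginal cost, then automating away the labor component of marginal cost drags prices down with it. Every number below is invented purely for illustration.

```python
# Toy illustration of the "deflationary abundance" assumption: under competitive
# pricing, price tracks marginal cost, so if automation collapses the labor share
# of marginal cost, the price falls with it. All numbers are invented.

def competitive_price(labor_cost: float, materials_cost: float, energy_cost: float,
                      markup: float = 0.10) -> float:
    """Price approximates marginal cost plus a thin competitive markup."""
    marginal_cost = labor_cost + materials_cost + energy_cost
    return marginal_cost * (1 + markup)

# A hypothetical service today: mostly human labor.
today = competitive_price(labor_cost=40.0, materials_cost=5.0, energy_cost=1.0)

# The same service with labor automated: amortized robot time plus extra energy.
automated = competitive_price(labor_cost=0.50, materials_cost=5.0, energy_cost=1.5)

print(f"Price today:     ${today:.2f}")                  # ~$50.60
print(f"Price automated: ${automated:.2f}")              # ~$7.70
print(f"Deflation:       {1 - automated / today:.0%}")   # ~85% cheaper
```

On this logic, “currency becomes irrelevant” is the limiting case in which the remaining marginal cost, and therefore the price, approaches zero; whether competition actually passes those savings through is exactly the governance question the later bullets raise.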

Currency Becomes Irrelevant

  • Musk also suggests that “currency becomes irrelevant,” signaling a future where money is no longer the primary medium of exchange or store of value.

  • If AI and automated systems can provide goods and services abundantly, the need to price and ration them via markets and money diminishes.

  • This implies either:

  • An economy of near-zero-cost provision, where access rather than payment becomes the main constraint; or

  • A post-scarcity or quasi–post-scarcity environment, in which basic goods are so plentiful that charging money for them loses meaning.

  • The statement hints at a radical restructuring of economic institutions—banking, wages, savings, and investment—because their logic depends on scarcity and the need to trade time and labor for income.

Social and Economic Implications

  • If work is optional, many existing social structures—career ladders, labor markets, education geared around employment—would need to be rethought.

  • Concepts like unemployment and job insecurity might fade, replaced by questions around purpose, identity, and how people choose to spend their time.

  • A future without central reliance on currency raises questions about:

  • How society allocates resources fairly.

  • What replaces monetary incentives for innovation and entrepreneurship.

  • How power and control over automated systems are governed.

  • Such a shift could reduce material inequality if access to automated production is widely shared, but could increase power concentration if a small group controls the AI and infrastructure.

Ethical and Governance Questions

  • Musk’s remarks implicitly raise ethical issues around who owns and manages the AI systems that make work and currency obsolete.

  • There would be debates over:

  • Ensuring universal access to the abundance created by automation.

  • Preventing new forms of digital or algorithmic inequality.

  • Designing governance frameworks that prevent misuse of extremely capable AI systems.

  • Society would need to redefine success and well-being, shifting emphasis from income and employment status to measures like happiness, creativity, relationships, and contribution to community.

Big-Picture Takeaways

  • The central message is a provocative, optimistic vision: advanced AI and automation could free humanity from the necessity of work for survival.

  • At the same time, it hints at profound disruptions to economic systems based on currency and wage labor.

  • The future Musk describes would require new models of distribution, new cultural norms about purpose and identity, and robust governance to ensure that the benefits of AI-driven abundance are widely shared rather than concentrated.

Read More


A reminder for new readers. Each week, That Was The Week includes a collection of selected essays on critical issues in tech, startups, and venture capital.

I choose the articles based on their interest to me. The selections often include viewpoints I can't entirely agree with. I include them if they make me think or add to my knowledge. Click on the headline, the contents section link, or the ‘Read More’ link at the bottom of each piece to go to the original.

I express my point of view in the editorial and the weekly video.
