Contents
AI Native Software and Hardware
Essays
Venture Capital
AI
The Next Trillion Dollar Marketplace Will Put SKUs on Services
OpenAI to Open-Source Some of the A.I. Systems Behind ChatGPT
ElevenLabs launches an AI music generator, which it claims is cleared for commercial use
GPT-5's Router: how it works and why Frontier Labs are now targeting the Pareto Frontier
OpenAI in talks for share sale valuing ChatGPT maker at $500bn
Box CEO on OpenAI's GPT-5 launch, AI use in the workplace and the future of the tech
The Cloud Wars Update: Who’s Winning the AI-Driven Growth Battle
AI and Publishers
AI and Jobs
Substack
Geopolitics
Defense Tech
China Tech
Stablecoins
Interview of the Week
Editorial
AI Native is Here
This Week AI Broke Our Software—and Hardware—Assumptions
This advancement establishes a new dimension beyond raw knowledge, allowing AI to move from advice-giving to direct action-taking within complex workflows.
Thus spoke Tomasz Tunguz in his essay From Knowledge to Action. The same theme is echoed in GPT-5 Hands-On: Welcome to the Stone Age on Latent Space:
The Stone Age marked the dawn of human intelligence, but what exactly made it so significant? What marked the beginning? Did humans win a critical chess battle? Perhaps we proved a very fundamental theorem, that made our intelligence clear to an otherwise quiet universe? Recited more digits of pi?
No. The beginning of the stone age is clearly demarcated by one thing, and one thing only: humans learned how to use tools.
What if the defining assumption of modern tech just became wrong: that software is something humans write and other humans use, and that hardware is something we buy and use?
This week’s stories say exactly that. OpenAI’s GPT‑5 “just does stuff,” routes work to tools, and slashes latency and energy via a new router; Anthropic keeps shipping pragmatic upgrades that turn models into working colleagues. The net: software is becoming an actor, not an app, and hardware will have that agency embedded. Imagine a child talking to a toy and getting answers back (I can already do that in my Tesla Model 3 using Grok), or telling the lawn mower to mow the lawn and having it not only do the job but report back when it is done.
1) As Tomasz Tunguz states: software has moved from knowledge to action, and budgets will follow
GPT‑5 is not a smarter autocomplete; it’s a workflow engine. Its router policy picks specialized modules, calls tools, and self‑verifies work, yielding roughly 4x lower latency and half the energy per token. Pair that with hands‑on reports of GPT‑5 proactively spinning up entire apps and collateral (“it just does stuff”), and Tunguz’s framing is borne out: from advice to execution.
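OpenAI has not published the router’s internals, so treat the following as a back-of-the-napkin sketch of the pattern rather than the real thing: score each request, send easy work to a small fast model and hard work to a deliberate, tool-using one. The model names and the keyword heuristic are invented for illustration; a production router would use a learned classifier trained on past outcomes.

```python
# Hypothetical sketch of a GPT-5-style router policy (not OpenAI's code).
# The economic point: spend expensive reasoning tokens only when needed.
from dataclasses import dataclass

@dataclass
class Route:
    model: str       # which backend serves the request
    reasoning: bool  # whether to enable slow, deliberate decoding

def estimate_difficulty(prompt: str) -> float:
    """Toy difficulty score; a real router would learn this from outcomes."""
    hard_markers = ("prove", "refactor", "multi-step", "plan", "debug")
    hits = sum(marker in prompt.lower() for marker in hard_markers)
    return min(1.0, 0.2 + 0.2 * hits)

def route(prompt: str) -> Route:
    if estimate_difficulty(prompt) < 0.4:
        return Route(model="fast-small", reasoning=False)   # low latency, low energy
    return Route(model="deliberate-large", reasoning=True)  # slower, self-verifying

if __name__ == "__main__":
    print(route("What's the capital of France?"))
    print(route("Refactor this module and plan a multi-step migration."))
```

Seen this way, the router is as much a cost and energy policy as a quality policy, which is exactly what the latency and power numbers suggest.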
Anthropic’s tack is quieter but consequential. Claude Opus 4.1 isn’t splashy; it’s better at code refactors, reasoning, and “agentic” tasks—exactly where enterprises live. As Dario Amodei told John Collison (via Om Malik), code is the early indicator of what’s coming everywhere else.
The result hits pricing and procurement first. Jason Lemkin put it bluntly:
“Every developer… is going to get $10,000 a month of AI credits… Shopify is already there for some of its top developers.”
McKinsey’s 12,000 agents and Box’s “enhance, don’t replace” posture show where Fortune 500s are headed. Meanwhile, the ground truth: non‑coders are building internal tools in hours, ditching pricey SaaS (Every’s $50k‑in‑three‑hours story) as “vibe analysis” lets teams talk to their data and compress cycles 2x to 100x.
2) Hardware is the constraint—and it’s reorganizing the stack
Azure captured ~43% of net new cloud run‑rate, but all three hyperscalers say demand exceeds capacity. Power and chips—not sales—are the gating factor. GPT‑5’s router is thus not just clever; it’s a power policy.
The hardware response is national and local. Apple’s additional $100B U.S. investment includes a Houston facility to build AI servers—a reshoring bet that compute sovereignty becomes strategy.
Edge is back. OpenAI’s open‑weight “gpt‑oss” models run on a single 80GB GPU—or even a laptop (20B on 16GB). If your agents can act locally with near‑o4‑mini capability, you don’t just save cost; you change privacy, latency, and vendor risk. ElevenLabs’ licensed music generator hints at the parallel content supply chain: on‑device generation backed by explicit rights.
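For the curious, “acting locally” is now a few lines of code. Here is a minimal sketch using the Hugging Face transformers library, assuming the weights are published under an identifier like openai/gpt-oss-20b (check the actual model card for the exact name and hardware requirements):

```python
# Minimal local-inference sketch for an open-weight model.
# Assumes a gpt-oss release on Hugging Face; the identifier and memory
# needs should be verified against the official model card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed identifier
    device_map="auto",           # spread weights across available GPU/CPU memory
)

result = generator(
    "Three reasons local agents change privacy, latency, and vendor risk:",
    max_new_tokens=120,
)
print(result[0]["generated_text"])
```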
3) How can we help AI and also enable revenue streams?
Cloudflare’s charge that Perplexity used stealth, undeclared crawlers to evade robots.txt (Perplexity disputes it) is more than drama; it’s the fault line for an AI‑era web. ChatGPT, Cloudflare notes, respected robots.txt. The market will punish those who stall AI and reward those who help it, but it will also reward the AI companies that figure out how to channel money to content producers. My 2c: this needs more than training fees policed by robots.txt.
Capital is concentrating around those who can shoulder the capex. Carta shows valuations up 15–25% even as deals fall; Crunchbase highlights $70B flowing to just 11 companies. OpenAI’s prospective $500B secondary, 700M weekly users, and even my $10T hot‑take underscore the winner‑take‑most stakes.
Meanwhile, Meta’s “copy button” culture and Substack vs. Ghost reveal distribution power shifting from brands to platforms and—importantly—back to open infrastructure. China’s push for a global AI governance plan says the rules of this game won’t be written in one capital.
What others are missing: The software story isn’t “apps get AI.” That would simply plug AI use into existing user interfaces.
The real story is that agents plus tool‑calling invert enterprise design.
What to watch next
Will enterprises appoint “agent managers” and codify routing, spend caps, and human‑in‑the‑loop by function? (Compliance, claims, tax are already drawing legal lines.)
Do open‑weight models at the edge erode proprietary moats—or expand them via proprietary data and deep workflow integration?
Does the Cloudflare‑Perplexity fight catalyze a paid, auditable standard for AI access to the web—and who enforces it?
Can clouds expand power faster than AI demand? If not, expect more Apple‑style reshoring and product designs that privilege energy‑aware routing.
If last year was about demos, this week made it operational. Software does the work. Hardware rations the power. Our assumptions should update accordingly.
AI Native Software and Hardware
I Found 12 People Who Ditched Their Expensive Software for AI-built Tools
Every • August 4, 2025
Technology•AI•Software•Automation•NoCode•AI Native Software and Hardware
During Every's recently completed Think Week, the team addressed internal pain points by building tools with AI—often with no coding required. This approach, while not unique to Every, highlights a growing trend. Lewis Kallow found 12 examples of people saving six figures and launching products faster by prompting AI instead of hiring developers. These individuals built what they needed in hours, not months.
I recently heard a founder explain how he saved $50,000 in three hours. He didn’t achieve it by budget-cutting or layoffs but by prompting AI to build his own custom software tool, writing zero lines of code himself.
These stories show people creating powerful internal tools by prompting AI, replacing expensive software, automating workflows, and shipping products faster without writing code. These are internal tools for teams and organizations—the unglamorous software that actually runs businesses. AI has now made these accessible to anyone who can write prompts.
One standout story is Joshua Wöhle, a six-time founder and CEO of Mindstone, who nearly signed a $50,000 SaaS contract for a tool to connect community members but instead built the entire tool himself in three hours using AI, without manually writing or editing code. This saved him $50,000 with no compromise on quality.
Brian Christner replaced a costly online course platform, Kajabi, by building his own on Replit, cutting his costs to one-tenth and tailoring features exactly to his students’ needs.
Manny Bernabe built an enterprise-grade vendor portal that manages vendors and contracts—a tool that traditionally costs five figures and weeks to develop—mostly by directing AI to generate the code, saving substantial time and money.
Michael Luo, a product manager at Stripe, built a free Docusign alternative compliant with electronic signature laws in a weekend for less than $50, showcasing the potential for quick, affordable solutions.
Matt Palmer automated his tedious UTM tracking workflow in under an hour with zero manual coding, creating an app that manages perfectly formatted tracking parameters automatically.
Other stories include founders and teams building prototypes quickly, automating workflows that once took hours weekly, and transforming ideas into investor-backed startups using AI-powered no-code or low-code platforms. For example, Gustav Linder vibe-coded an AI-powered fashion website with e-commerce functionalities, which attracted investors and full-time attention.
Zinus, a mattress company, automated customer service quality assurance using AI in half the time and cost of traditional developer-built solutions, saving $140,000.
These case studies demonstrate that even those without technical backgrounds can build fully functioning applications by prompting AI, drastically reducing timeframes from months to hours or days, lowering costs, and gaining competitive advantages.
An internal tool doesn’t need to be complex or polished to be valuable. AI agents are proving effective by creating simple tools that save hours and reduce costs, enabling ideas to come to life faster than ever.
The State of AI-First Services Today
Medium • Florian Seemann • July 31, 2025
Technology•AI•AI Native Software and Hardware
Over the past months, we’ve explored the rise of AI-first service businesses from several angles: Louis started by laying out why we believe foundation models are now performant enough to support full-stack services and that these businesses could become meaningful in ways traditional software can’t. We then mapped the early landscape and looked at how M&A might accelerate their path to scale.
In these earlier explorations, we argued that their operational playbook(s) and value proposition(s) often diverge meaningfully from traditional software businesses and that their addressable markets could often be multiple times larger.
What wasn’t clear to us was how these businesses could balance automation and service quality, how they could scale operations without linear cost growth, how they could build strong (data) moats, or how they could articulate value beyond cost savings.
After dozens of founder conversations, deep dives into emerging sub-verticals, and some early commercial signals from the market, we now have more evidence to revisit those questions. As part of this, we’re also sharing an updated version of the market map to reflect where we’re seeing the most activity and momentum.
We began by mapping verticals to understand where AI-first service businesses are most likely to succeed. The initial pattern suggested that the most promising opportunities lie in under-digitized industries with entrenched, low-NPS incumbents like property management. But what’s proven even more important is the shared structure of the workflows these businesses aim to replace.
Whether it’s insurance claims, tax filings, property management, or immigration law, the underlying work is often highly structured and repeatable, driven by documents, rules, and checklists rather than creative problem-solving. Most of it still runs on PDFs, spreadsheets, and legacy systems.
That shared anatomy makes these categories particularly well-suited to being rebuilt from the ground up:
Structured intake and triage: In insurance or property management, each case starts with a flood of semi-structured inputs, e.g., KYC packs, maintenance tickets, scanned forms. AI-native firms use LLM + OCR pipelines to classify and validate this data automatically, routing only edge cases to humans.
Document-heavy review and cross-referencing: Tax firms or claims processing specialists often rely on junior/lower-qualified staff to search through dense statutes, policy docs, or precedent letters. RAG and vector search now surface the relevant clause in seconds.
Repeatable, rules-based decisioning: Whether it’s approving a claim, closing an alert, or filing a compliance opinion, outcomes often follow fixed logic. AI-first teams train policy agents to apply those rules, explain the outcome, and log it immutably.
Low-value manual tasks at scale: Across verticals, teams still spend hours copying data across systems, reconciling ledgers, or filling out forms. End-to-end automation reduces marginal cost per case to near zero and scales throughput without linear hiring.
Massive untapped historical data: Legacy firms sit on decades of returns, claims, filings, and alerts, mostly untouched. AI-native companies structure this data to train vertical-specific models that improve accuracy and create defensibility.
What emerges is a new class of service business: data-in, judgment-out factories. They run on documents, structured data, and rules-based logic. The output is a decision, classification, or filing, increasingly handled by agents instead of human analysts.
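To make that concrete, here is a minimal sketch of the intake-and-triage loop described above, with a hypothetical classify() standing in for an LLM+OCR pipeline call; the labels and the confidence threshold are illustrative assumptions, not any company’s actual numbers:

```python
# Hypothetical intake-and-triage loop for an AI-first services firm.
# High-confidence cases flow straight through; edge cases go to a human.
from dataclasses import dataclass

@dataclass
class Triage:
    label: str        # e.g. "insurance_claim", "kyc_update"
    confidence: float  # model's confidence in the label, 0..1

CONFIDENCE_FLOOR = 0.9  # below this, a human reviews the case

def classify(document_text: str) -> Triage:
    """Stand-in for an LLM+OCR pipeline call. In practice this would send
    extracted text to a model constrained to a fixed label set."""
    if "claim" in document_text.lower():
        return Triage(label="insurance_claim", confidence=0.95)
    return Triage(label="unknown", confidence=0.30)

def handle_case(document_text: str) -> str:
    triage = classify(document_text)
    if triage.confidence >= CONFIDENCE_FLOOR:
        return f"auto-routed: {triage.label}"     # near-zero marginal cost
    return f"human review queue: {triage.label}"  # the escape hatch that protects quality
```

The human-in-the-loop threshold is where the automation-versus-quality tradeoff discussed below actually lives.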
Since our original market map, we’ve seen a noticeable uptick in activity across insurance-related services, e.g., Inca or Elysian in claims processing and Flow or Meshed in brokerage. Financial services remain a strong category, with steady expansion across tax, accounting, and compliance. And we’re beginning to see more unique and complex use cases emerge, like Convexia in drug discovery or Operand in management consulting.
Some things weren’t clear early on: how far automation could go without hurting quality, what scalable delivery would look like, or how these businesses would differentiate beyond price. We’re starting to see more answers take shape.
Across most verticals we’ve looked at, companies are not aiming for full automation from day one. Instead, they’re building workflows where AI handles the bulk of routine tasks and humans remain involved at key points. Specifically in:
Regulatory compliance: In many verticals, there are legal ceilings on automation. German customs brokers are legally required to manually review filings. Property managers must conduct in-person meetings annually. Tax advisors often need certified oversight. In these contexts, human accountability isn’t optional, and will not become so in the near future.
Accuracy assurance: In high-stakes workflows (claims, filings, tax, security), automation errors are costly. Some of the companies we’ve talked to invest heavily in custom verification layers, reviewer training, and tightly controlled workflows. Control sheets, QA loops, and task-specific overrides ensure that speed doesn’t come at the expense of accuracy.
Trust and relationship management: Some industries are still fundamentally human, e.g., brokers, real estate agents, wealth advisors. These customers often care more about trust and service than technical elegance. Integral, an AI-native tax advisory for German SMBs, doesn’t mention AI once on its homepage. Their customers aren’t looking for sophistication; they’re looking for confidence.
We’re particularly excited about companies that manage to productize parts of their service early, without rushing into full automation too soon, or defaulting to stitching together off-the-shelf tools without real leverage. Getting automation right is less about maximizing coverage and more about sequencing it properly, starting with the aspects of the business that provide the biggest operational leverage.
In AI-first services, growth without automation just means more people. And more people likely lead to margin compression, coordination risk, and brittle operations. We’ve been excited to see some companies starting out by building the systems that allow margins to expand with volume by encoding expertise into infrastructure.
Vibe Analysis
Danhock • Dan Hockenmaier • July 31, 2025
Technology•AI•Data Analysis•Analytics•Automation•AI Native Software and Hardware
Vibe coding is for creating software. Vibe analysis is for creating insights.
Vibe analysis could be an even bigger deal: there are about 2 million software engineers in the US, but at least 5 million people who use data to answer questions every day. At roughly 2,000 working hours per person per year, that means that in the US alone we’re spending 10 billion hours a year reporting on business performance, assessing new products and features, and deciding which experiments to ship and which growth opportunities to pursue.
I worked with some of the team at Faire who are at the edge of applying AI to analytical work — Alexa, Ali, Blake, EB, Jolie, Max, Sam, Tim, and Zach — to shed light on the change that is coming.
We’ll look at both the bull case (how AI could massively increase the efficiency and quality of data analysis) and the bear case (why it will be harder than many people think).
What becomes clear is that no matter how conservative your assumptions, within a few years the way analysis is done and who does it will be unrecognizable from today.
The bull case
Data analysis is full of the kinds of things that humans are bad at and machines are great at. There are basically four components.
The hardest part is often simply knowing what data to use: understanding the schema, how different tables and fields interact, and what is up to date. If you hook up ChatGPT to a data warehouse today, you get a tool that is pretty dumb out of the gate but gets smart quickly as it develops a semantic model of the dataset. Instead of asking each team member to learn this for themselves, you can ask a model to learn it once.
Another component is writing the SQL queries themselves. This is such an obvious use case that off-the-shelf LLMs are already very helpful at quickly cleaning up and generating queries. Cursor is pretty good at it too. Products that are purpose-built for data analysis will be excellent at it.
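As a concrete illustration of the text-to-SQL pattern, here is a minimal sketch using the OpenAI Python client; the schema, question, and model name are placeholders, and the key move is grounding the model in the schema:

```python
# Minimal text-to-SQL sketch. Schema, question, and model name are
# placeholders; any capable LLM and client would work similarly.
from openai import OpenAI

SCHEMA = """
orders(order_id, customer_id, created_at, total_usd)
customers(customer_id, region, signup_date)
"""

def question_to_sql(question: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": f"Write one SQL query for this schema. Reply with SQL only.\n{SCHEMA}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(question_to_sql("Monthly revenue by region for the last six months"))
```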
The third component is manipulating data into a useful format. Some of this can be done through the query, but many forms of analysis require a secondary tool like spreadsheets. New solutions for this are exploding, such as the viral launch of Shortcut (a “superhuman Excel agent”) just a few weeks ago.
Finally, there is visualizing and dashboarding data. This is probably where the tools are weakest today, but there are sparks of genius. All of the charts in the original post were one-shotted by Claude based on some data and a quick description of the format.
As incumbents and startups race to build solutions to these problems, two distinct UIs are emerging:
“Cursor for analytics” where the core workflow is autocomplete, editing, or refactoring of existing code. Startups like NAO and Galaxy are building for this use case and incumbents like Mode are incorporating it into their products.
“Data chatbots” where the core workflow is natural language conversations that output basic datasets and charts. Many incumbents are building this, including Snowflake and Looker.
Today the former is accessible only to more sophisticated users, and the latter just have very limited capabilities. There are strong incentives for a tool that does both, because this would allow the work of power users to tune the semantic model for the benefit of everyone else.
This tool will also need built-in visualizations in order to avoid analysts constantly jumping between workflows, and to enable static dashboards and charts for sharing.
The space is begging for a full stack solution which:
Is native to your data warehouse and holds as much of it in context as possible
Constantly updates its semantic model of the data schema as it is used
Provides data and visualizations by default, but exposes SQL to those who want it
Has a built-in visualization tool that allows rapid iteration on charts
Essays
The risk of letting AI do your thinking
Ft • July 31, 2025
Technology•AI•CognitiveOffloading•CriticalThinking•Education•Essays
Artificial intelligence (AI) has become deeply integrated into daily life, with tools like ChatGPT boasting 700 million weekly users worldwide. These technologies offer significant advantages, such as time savings, accelerated research, and enhanced productivity. However, this widespread adoption raises concerns about "cognitive offloading"—the tendency to rely on AI for tasks like writing and problem-solving—which may lead to diminished memory and critical thinking skills, similar to the “Google effect.”
Early studies indicate that frequent AI use can negatively impact users' intellectual engagement and performance. For instance, research from MIT found that students using AI underperformed at cognitive and linguistic levels. Another study linked heavy AI reliance to weaker critical thinking abilities.
To mitigate these risks, experts recommend reinforcing critical thinking skills in education, encouraging users to treat AI as a tool rather than an infallible authority, and designing AI responses that foster human deliberation. The conclusion urges users to actively engage with AI, using the technology as a collaborator rather than a crutch.
Meta’s Favorite Product Isn’t AI. It’s the Copy Button.
Om • July 31, 2025
Technology•Software•Innovation•BusinessStrategy•IntellectualProperty•Essays
An hour after the initial commentary on Meta's Super Intelligence memo, attention was drawn to a recurring theme in Mark Zuckerberg’s approach: the idea of "copying" or subsuming others’ ideas and phrases as his own. This pattern aligns closely with the company’s internal culture and branding, where replication and iteration seem more core to their strategy than original invention. The article highlights how Zuckerberg himself has embraced concepts like "personal super intelligence" by assimilating terminology introduced by others, emphasizing the broader culture at Meta of integrating external innovations into their product and narrative framework.
The focus then shifts intriguingly to Meta’s actual favorite product, which, contrary to public assumptions around AI advancements, is revealed to be the fundamentally simple “copy button.” This metaphorical highlight points to Meta’s underlying operational philosophy: to replicate, adapt, and incorporate innovations—whether in technology, features, or ideas—from others rapidly and position them centrally within their ecosystem. Such a stance underlines a strategic choice to prioritize incremental improvement and widespread adoption rather than risky original invention. This approach allows Meta to maintain dominance in product functionality and user experience by embedding tested and popular functionalities into their platforms.
This emphasis on copying as a strategic asset carries significant implications for the tech industry and innovation debates. It challenges the conventional emphasis on groundbreaking originality as the sole path to industry leadership. Instead, Meta’s strategy shows that systematically copying and refining can be just as powerful. It raises complex questions about ethics, intellectual property, and the balance between inspiration and outright replication. Critics argue this may stifle true innovation by discouraging smaller creators and startups, while supporters might claim it accelerates technological progress by disseminating successful ideas widely and efficiently.
Furthermore, this logic reflects a broader trend in modern tech giants who leverage vast resources to incorporate emerging technologies, like AI, only after they become proven elsewhere. Meta’s cautious yet assertive replication strategy could be seen as a pragmatic method to manage risks associated with unproven technologies while still leading in market relevance and user engagement.
In summary, Meta’s favorite product being the "copy button" is more than just a symbolic statement—it is a revealing insight into how the company operates and innovates. By centering their strategy around rapidly adopting, adapting, and scaling existing ideas, Meta ensures its platforms remain competitive and feature-rich. While this approach stirs debates over originality and ethics, it undeniably shapes the competitive landscape of technology development, reflecting a nuanced reality where copying can be an effective tool for technological and business success.
Tech Insider Claims OpenAI Will Be Worth $10 Trillion: Has Silicon Valley Finally Gone Totally Bonkers?
Keenon • Andrew Keen • August 1, 2025
Technology•AI•Valuation•Innovation•MarketTrends•Essays
The article explores a strikingly ambitious valuation forecast for OpenAI and its rival Anthropic, as suggested by a tech insider, predicting that OpenAI could soon be worth $10 trillion, with Anthropic valued at $5 trillion. These figures are extraordinary, surpassing the GDP of every country globally except the United States and China, signaling extraordinary optimism or perhaps delusion in Silicon Valley's current valuations. The article questions whether these predictions represent visionary insight or if the tech world has "gone totally bonkers," likening this speculation to the excesses seen during the dot-com bubble.
Key Insights and Market Context
AI Valuations in Fantasy Territory: The valuation estimates place these two AI companies collectively at $15 trillion—a mind-boggling number that far exceeds traditional benchmarks in tech valuation and global economy comparisons. Such figures emphasize the enormous financial expectations being placed on artificial intelligence as a transformative technology.
Tipping Point in AI-Driven Search: The rise of AI-powered search alternatives like Perplexity's Comet browser marks a fundamental shift from traditional search engines dominated by Google. Approximately a quarter of internet users reportedly now prefer AI for search functions, posing a direct threat to Google's advertising-driven business model. This shift could disrupt how search and information retrieval operate across the internet.
San Francisco’s Tech Boom Revived: The AI revolution has reignited San Francisco’s status as the epicenter of tech innovation. Real estate prices and rental demand have surged, paralleling the frenetic tech hiring environment reminiscent of the late 1990s. This surge reflects intense competition among AI firms for the best engineering talent and office spaces, underscoring the AI sector’s rapid growth and influence.
AI Race Is Not Winner-Take-All: Unlike previous tech battlegrounds where dominance belonged to a single company (e.g., Google in search, Amazon in e-commerce), the AI market appears expansive enough to support multiple major players. Anthropic is noted as a formidable competitor to OpenAI, and Chinese AI models are emerging as serious global contenders, suggesting a multipolar competitive landscape rather than a monopoly.
Big Tech’s AI Industry Anxiety: Established tech giants exhibit varied strategies and concerns regarding AI. Facebook is investing billions in retaining AI talent, motivated by recent model shortcomings. Apple, less publicly aggressive, opts to integrate external AI into its products rather than building expensive infrastructure itself. Meanwhile, the U.S. government's conscious choice to avoid regulating AI development reflects a laissez-faire approach that could have broad implications for industry dynamics and societal impact.
Implications and Analysis
The proposed valuations and market dynamics highlight both the transformative potential and the speculative risks surrounding AI today. Investors and companies are betting heavily on AI’s future economic impact, driving valuations that may or may not be sustainable. The shift in how people search for information could revolutionize digital advertising and reshape internet commerce, with far-reaching consequences.
Moreover, the reinvigoration of San Francisco as a tech hub signals renewed economic and social pressures but also opportunities tied directly to AI’s growth trajectory. The competitive landscape’s diversity suggests innovation could accelerate, but it also raises questions about geopolitical technology races, especially with Chinese AI advancements gaining prominence.
Big Tech’s varied responses—ranging from heavy investment to strategic caution—reflect the uncertainty and high stakes in AI development. The absence of government regulation might expedite innovation but could also raise ethical, security, and economic concerns as AI proliferates.
In summary, the article presents a provocative snapshot of Silicon Valley’s current state in AI valuation and competition, capturing the mix of optimism, hype, and strategic positioning that defines this critical moment in technology history.
The Peculiar Persistence of the AI Denialists
Persuasion • Yascha Mounk • August 6, 2025
Technology•AI•ArtificialIntelligence•EconomicImpact•AIdenialism•Essays
A prosthetic hand playing the piano during the World Artificial Intelligence Conference 2025 in Shanghai, China, on July 27, 2025. (Photo by Ying Tang/NurPhoto via Getty Images.)
Some momentous historical events, like the French Revolution or the demise of communism, come with little warning. Few contemporaries were able to predict that they were about to happen, or to foresee how fundamentally they would transform the world.
Other momentous historical events, like the fall of the Roman Empire or the Industrial Revolution, loudly announce their imminent arrival. Once those first factories in the north of England started to appear, the productive capacities of the spinning jenny and the steam engine were so evident that they augured disruption on a mass scale. Any contemporary observer who treated these technological developments as but one among many interesting social, cultural and political developments taking place in early 19th century Europe was, in a manner of speaking, so busy studying molehills that he failed to notice the sudden appearance of a towering mountain.
What we are going through at the moment is, at a conservative estimate, analogous to the Industrial Revolution. The rapid emergence of sophisticated models of artificial intelligence has enormous implications for the future of the human race. If they are harnessed for good, they could liberate humans from hard toil, end material scarcity, and facilitate enormous breakthroughs in areas from medicine to the arts. If they are harnessed for ill, they could lead to mass immiseration, cause war or pestilence on an unprecedented scale, or even make obsolete the human race.
But while all of this is as obvious as the significance of the Industrial Revolution should have been in the Manchester of the early 19th century, an astonishing number of people are choosing to keep studying their little molehills. Yes, every fashionable conference has some panel on AI. Yes, social media is overrun with hypemen trying to alert their readers to the latest “mind-blowing” improvements of Grok or ChatGPT. But even as the maturation of AI technologies provides the inescapable background hum of our cultural moment, the mainstream outlets that pride themselves on their wisdom and erudition—even, in moments of particular self-regard, on their meaning-making mission—are lamentably failing to grapple with its epochal significance.
A recent viral essay in The New Yorker provides an extreme, but not an altogether atypical, illustration of the problem. “A.I. is frankly gross to me,” its author, Jia Tolentino, avows. “It launders bias into neutrality; it hallucinates; it can become ‘poisoned with its own projection of reality.’ The more frequently people use ChatGPT, the lonelier, and the more dependent on it, they become.” At least Tolentino has the honesty to acknowledge the astonishing fact that “I have never used ChatGPT.” Though the author considers herself a progressive, her basic attitude to new technologies resembles that of a reactionary 19th century priest who denounces the railways as the devil’s work—before proudly mentioning that he himself has, of course, never engaged in the sin of riding one.
Mainstream outlets from The New York Times to NPR do have some smart assessments of the state, the stakes, and the likely future of artificial intelligence. But a depressingly large share of the AI coverage you are likely to encounter in those storied publications comes in three graduated forms of what I’ve come to think of as “AI denialism.”
There are the articles which dismiss AI as incompetent, portraying chatbots as perennially prone to hallucinations and incapable of delivering on basic tasks like fact-checking. Then, there are the articles which claim that, far from being truly intelligent, AI is merely a pattern-matching machine, a sort of “stochastic parrot.” And finally, there are the articles which argue that the impact of AI on the economy has been vastly overstated, since its promised productivity gains have not yet materialized.
Hear no progress, speak no progress, see no progress.
“AI is incompetent.”
The first of these three genres constitutes the purest form of denialism, in that, at this stage, it has to stipulate things which are plainly wrong (as anyone who has actually bothered to use ChatGPT or Claude or Grok or Gemini or DeepSeek would well know). It just about remains true that there are certain specific tasks at which AI chatbots remain surprisingly inept. If you are searching for a particular quote you half remembered (as I often do), it is usually a mistake to ask them for help. For if they are unable to locate the true quotation, they somehow cannot resist the temptation to please you by making up a perfect—albeit fake—little soundbite.
But in most fields of endeavor, AI engines now rival all but the most gifted humans. They are astonishingly good at translating texts and at playing chess, at writing poetry and at teaching you new skills, at coding and at making illustrations, at diagnosing a medical condition and at summarizing a technical research paper in the form of a podcast. To dismiss this astonishing box of varied wonders on the basis of a few tasks the technology has not yet cracked is reminiscent of the well-worn joke about two old Jews who go to the circus. An acrobat crosses a high wire on a unicycle while juggling seven flaming torches and playing a virtuoso piece on the violin. Dismissively, one Jew turns to the other and laments: “Paganini, he isn’t.”
“AI is just a stochastic parrot.”
The second genre of denialism is at once more sophisticated and more hollow. It invokes a supposedly profound technical insight about the nature of AI—but ultimately amounts to little more than dismissive sloganeering, shrewdly disguised behind the cover of a half-understood incantation.
According to an influential 2021 paper, the problem with large language models is that they don’t truly understand the world; rather, they are merely parroting back human language based on a stochastic model of which words are usually associated with which other words in the large data sets on which they are trained. Far from being “intelligent,” AI chatbots turn out, upon further inspection, to be mere “stochastic parrots.”
The idea that AI chatbots are merely “stochastic parrots” is rooted in an uncontested truth about the nature of these technologies: the algorithms really do draw on vast data sets to predict what the next word in a text, or pixel in a painting, or sound in a piece of music might be. But evocative though the invocation of this fact may sound, it does not magically make the prodigious abilities of artificial intelligence disappear. If chatbots fulfill tasks in the blink of an eye over which skilled humans used to labor for weeks, this advance will transform the world—whether for good or ill—irrespective of how the bots are able to do so.
Nor is the observation that chatbots use stochastic reasoning as disqualifying as it first appears. We are about as far from understanding how the human mind works as we are from understanding what exactly makes ChatGPT tick. But there is good reason to believe that our own astonishing ability to comprehend and manipulate the world is itself rooted in our pattern-matching abilities. Indeed, the pattern-matching that supposedly makes artificial intelligence a mere “stochastic parrot” might actually make it more similar to humans than its high-minded critics want to admit.
In May 1997, Garry Kasparov, then the best chess player in human history, lost to Deep Blue, a vast IBM machine spanning several refrigerator-sized cabinets. As he later recounted, he was particularly shaken by one move made by the machine. Kasparov believed that Deep Blue would make a move which offered a big tactical advantage even though he could sense, based on his vast experience, that doing so would ultimately weaken its position. But Deep Blue, which was but a giant calculating machine playing out as many scenarios as far out as possible, did not fall for the trap. Its move was shocking to Kasparov because he realized that a machine was able to come up with the intuitively best option—something that felt quintessentially human—by mere calculation.
Now, what’s fascinating about today’s chatbots, which vastly outperform Deep Blue, is that they work in a completely different manner. Deep Blue “knew” the rules of chess, which allowed the machine to consider millions of possible scenarios through brute-force calculation, and arrive at the right conclusion through a sheer act of calculative might. Today’s large language models, by contrast, draw on their vast database of past chess games to predict which move feels right. In other words, the fact that, unlike Deep Blue, ChatGPT operates like a “stochastic parrot” makes it more, not less, similar to the way in which astonishingly accomplished humans like Garry Kasparov play the game.
“AI won’t have that much impact, anyway.”
The final form of denialism is about the economic impact of this technology. When OpenAI released ChatGPT, then powered by GPT-3.5, in November 2022, some observers predicted an immediate and devastating effect on white-collar jobs. A few industries have already been hard hit. While economists over the last decade urged career-minded students to learn coding in order to future-proof their careers, computer programmers have rapidly gone from commanding astonishing wages to being more likely than recent graduates of far less “safe” fields like art history or philosophy to be out of a job. But on the whole, the wholescale disruption of white-collar workplaces is so far conspicuous by its absence—as are the promised gains in productivity.
This makes it tempting to predict that the invention of artificial intelligence will, at least in economic terms, turn out to be much less important than meets the eye. Some distinguished economists argue that the job market will for the foreseeable future hardly be impacted by AI. Others argue that the sky-high valuations of companies like OpenAI will prove to be a giant mistake, with the ever-growing costs of training ever-more sophisticated AI models not sufficiently offset by future revenues. In the end, they argue, this moment will be remembered for the irrationality of its collective hubris, just as the DotCom Bubble of 2000 was.
The obvious way to rebut this argument is to point out that the DotCom bubble turned out to be but a temporary downturn. Yes, plenty of useless companies were vastly overvalued before the bubble burst in March 2000. But the hype about the internet has since turned out to be fully justified. A quarter century on from the “bubble,” the NASDAQ is four times higher than it was before it burst, and tech companies make up a huge share of the world’s stock market capitalization. It has become undeniable that the world economy has been fundamentally transformed by digital technology.
The deeper way to rebut skepticism about the economic impact of AI is to point out that technology-induced improvements in productivity require a combination of two things: new technologies which can augment or substitute for human labor; and the organizational changes which allow firms to harness them. Technologies which produce incremental increases in productivity in particular industries are often easy to implement, in part because they tend to be the result of concerted efforts by incumbent firms to expedite existing production processes. Technologies which produce large increases in productivity across industries are often hard to implement, in part because they—as in the case of artificial intelligence—usually come from outside the existing industrial structure and require much more fundamental organizational changes before they can be implemented.
Take one example: Studies suggest that AI bots are now as effective as the most skilled doctors at many key medical tasks, such as interpreting sophisticated test results or diagnosing a patient’s condition based on a diffuse set of symptoms. But because of the extremely strict regulations which govern the health care system—and the power of medical professionals, who have every incentive to avoid being replaced—the actual practice of medicine has so far changed little. This tells us less about the long-term potential of new technologies than it does about how slow complex systems are to adapt to them, especially when the salaries of well-connected professionals in highly protected industries are on the line. As in many previous instances of technological disruption, these forces are proving capable of containing the rising tide for a surprisingly long period of time; but it would be foolish to predict that the dam can hold forever.
Ten years ago, the conventional wisdom held that technological advances would imperil many blue-collar jobs, like those of truck drivers. Now, the astonishing advances in text-based AI have convinced many commentators that white-collar professionals, from paralegals to HR professionals, will be the first to lose their job. But it is worth noting that there is another very large hammer which has not yet fallen. While it has turned out to be more difficult to build robots which can maneuver around the physical world with dexterity than to build chatbots that can perform high-level cognitive tasks, there will come a time in the relatively near future in which machines capable of doing both tasks simultaneously will be produced in large numbers. At that point, both white-collar and blue-collar jobs will be imperilled en masse.
This makes me skeptical of the argument that even sophisticated economists now seem to fall back on to downplay the likely impact of artificial intelligence. They like to point out that, despite dire predictions by contemporaries, past technological transformations from the invention of the printing press to the automatization of factory work did not lead to mass unemployment. While certain categories of workers were indeed decimated by these developments, they also gave rise to the need for wholly new categories. There may no longer be scribes who copy books by hand; but (as the state of my inbox can attest) there are now plenty of marketing professionals who earn their living by pitching authors to podcast hosts. Similarly, the number of coal miners may have plummeted over the last decades; but there is now a significantly greater number of professional yoga teachers in the United States.
That argument has so far proven correct at every historical juncture. But that is because we have never before in the history of humanity been faced with an embodied form of general intelligence that outshines the vast majority of humans at the vast majority of tasks. Whether the principle of historical replacement of job categories which has held for past technological innovations can persist in the face of this unprecedented innovation remains at best an open question. Personally, I suspect that the people now claiming that the impact of AI on the job market will resemble that of the steam engine will suffer the same fate as Malthus, whose theory about the dangers of unchecked population growth proved astonishingly informative in describing every historical moment up until the very juncture at which he wrote—but turned out to be badly wrong about everything that happened after.
I have an admission to make.
Intellectually, I have become deeply convinced that the importance of AI is, if anything, underhyped. The sorry attempts to pretend we don’t stand at the precipice of a technological, economic, social and cultural revolution are little more than cope. In theory, I have little patience for the denialism about the impact of artificial intelligence which now pervades much of the public discourse.
But in practice, I too find it hard to act on that knowledge. I am not a computer programmer, so I don’t have all that many useful things to say about the technology. I am not deeply enmeshed in tech circles, so I struggle to identify the best people with whom to talk about these topics. Most articles we publish in Persuasion don’t touch on AI, and the ones that do often get surprisingly little pickup.
But if there is one thing I have learned in my writing career so far, it is that it eventually becomes untenable to bury your head in the sand. For an astonishingly long period of time, you can pretend that democracy in countries like the United States is safe from far-right demagogues or that wokeness is a coherent political philosophy or that financial bubbles are just a figment of pessimists’ imagination; but at some point the edifice comes crashing down. And the sooner we all muster the courage to grapple with the inevitable, the higher our chances of being prepared when the clock strikes midnight.
AI disagreements
Bloodinthemachine • Brian Merchant • August 7, 2025
Technology•AI•ArtificialGeneralIntelligence•AIAlignment•AIDoomer•Essays
Hello all,
Well, here’s to another relentless week of (mostly bad) AI news. Between the AI bubble discourse—my contribution, a short blog on the implications of an economy propped up by AI, is doing numbers, as they say—and the AI-generated mass shooting victim discourse, I’ve barely had time to get into OpenAI. The ballooning startup has released its highly anticipated GPT-5 model, as well as its first actually “open” model in years, and is considering a share sale that would value it at $500 billion. And then there’s the New York Times’ whole package of stories on Silicon Valley’s new AI-fueled ‘Hard Tech’ era.
That package includes a Mike Isaac piece on the vibe shift in the Bay Area, from the playful-presenting vibes of the Googles and Facebooks of yesteryear, to the survival-of-the-fittest, increasingly right-wing-coded vibes of the AI era, and a Kate Conger report on what that shift has meant for tech workers. A third, by Cade Metz, about “the Rise of Silicon Valley’s Techno-Religion,” was focused largely on the rationalist, effective altruist, and AI doomer movement rising in the Bay, and whose base is a compound in Berkeley called Lighthaven. The piece’s money quote is from Greg M. Epstein, a Harvard chaplain and author of a book about the rise of tech as a new religion. “What do cultish and fundamentalist religions often do?” he said. “They get people to ignore their common sense about problems in the here and now in order to focus their attention on some fantastical future.”
All this reminded me that not only had I been to the apparently secret grounds of Lighthaven late last year (the Times was denied entry), where I was invited to attend a closed-door meeting of AI researchers, rationalists, doomers, and accelerationists, but I had written an account of the whole affair and left it unpublished. It was during the holidays, I’d never satisfactorily polished the piece, and I wasn’t doing the newsletter regularly yet, so I just kind of forgot about it. I regret this! I reread the blog and think there’s some worthwhile, even illuminating stuff about this influential scene at the heart of the AI industry, and how it works. So, I figure better late than never, and might as well publish now.
The event was called “The Curve” and it took place November 22-24th, 2024, so all commentary should be placed in the context of that timeline. I’ve given the thing a light edit, but mostly left it as I wrote it late last year, so some things will surely be dated.
A couple weeks ago, I traveled to Berkeley, CA, to attend the Curve, an invite-only “AI disagreements” conference, per its billing. The event was held at Lighthaven, a meeting place for rationalists and effective altruists (EAs), and, according to a report in the Guardian, allegedly purchased with the help of a seven-figure gift from Sam Bankman-Fried. As I stood in the lobby, waiting to check in, I eyed a stack of books on a table by the door, whose title read Harry Potter and the Methods of Rationality. These are the 660,000-word, multi-volume works of fan fiction written by rationalist Eliezer Yudkowsky, who is famous for his assertion that tech companies are on the cusp of building an AI that will exterminate all human life on this planet.
The AI disagreements encountered at the Curve were largely over that very issue—when, exactly, not if, a super-powerful artificial intelligence was going to arise, and how quickly it would wipe out humanity when it did so. I’ve been to my share of AI conferences by now, and I attended this one because I thought it might be useful to hear this widely influential perspective articulated directly by those who believe it, and because there were top AI researchers and executives from leading companies like Anthropic in attendance, and I’d be able to speak with them one on one.
I told myself I’d go in with an open mind, do my best to check my priors at the door, right next to the Harry Potter fan fiction. I mingled with the EA philosophers and the AI researchers and doomers and tech executives. Told there would be accommodations onsite, I arrived to discover that having failed to make a reservation in advance meant either sleeping in a pod or in shared dorm-style bedding. Not quite sure I could handle the claustrophobia of a pod, I opted for the dorms.
I bunked next to a quiet AI developer who I barely saw the entire weekend and a serious but polite employee of the RAND corporation. The grounds were surprisingly expansive; there were couches and fire pits and winding walkways and decks, all full of people excitedly talking in low voices about artificial general intelligence (AGI) or artificial superintelligence (ASI) and their waning hopes for alignment: that such powerful computer systems would act in concert with the interests of humanity.
I did learn a great deal, and there was much that was eye-opening. For one thing, I saw the extent to which some people really, truly, and deeply believe that the AI models like those being developed by OpenAI and Anthropic are just years away from destroying the human race. I had often wondered how much of this concern was performative, a useful narrative for generating meaning at work or spinning up hype about a commercial product—and there are clearly many operators in Silicon Valley, even attendees at this very conference, who are sharply aware of this particular utility, and able to harness it for that end. But there was ample evidence of true belief, even mania, that is not easily feigned. There was one session where people sat in a circle, mourning the coming loss of humanity, in which tears were shed.
The first panel I attended was headed up by Yudkowsky, perhaps the movement’s leading AI doomer, to use the popular shorthand, which some rationalists onsite seemed to embrace and others rejected. In a packed, standing-room-only talk, the man outlined the coming AI apocalypse, and his proposed plan to stop it: basically, a multilateral treaty enforced by the US, China, and other world powers to prevent any nation from developing more advanced AI than what is more or less currently commercially available. If nations were to violate this treaty, then military force could be used to destroy their data centers.
The conference talks were held under Chatham House Rule, so I won’t quote Yudkowsky directly, but suffice to say his viewpoint boils down to what he articulated in a TIME op-ed last year: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” At one point in his talk, at the prompting of a question I had sent into the queue, the speaker asked everyone in the room to raise their hand to indicate whether or not they believed AI was on the brink of destroying humanity—about half the room believed that, on our current path, destruction was imminent.
This was no fluke. In the next three talks I attended, some variation of “well by then we’re already dead” or “then everyone dies” was uttered by at least one of the speakers. In one panel, a debate between a former OpenAI employee, Daniel Kokotajlo, and Sayash Kapoor, a computer scientist who’d written a book casting doubt on some of these claims, the audience, and the former OpenAI employee, seemed outright incredulous that Kapoor did not think AGI posed an immediate threat to society. When the talk was over, the crowd flocked around Kokotajlo, to pepper him with questions, while just a few stragglers approached Kapoor.
I admittedly had a hard time with all this, and just a couple hours in, I began to feel pretty uncomfortable—not because I was concerned with what the rationalists were saying about AGI, but because my apparent inability to occupy the same plane of reality was so profound. In none of these talks did I hear any concrete mechanism described through which an AI might become capable of usurping power and enacting mass destruction, or a particularly plausible process through which a system might develop to “decide” to orchestrate mass destruction, or the ways it would navigate and/or commandeer the necessary physical hardware to wreak its carnage via a worldwide hodgepodge of different interfaces and coding languages of varying degrees of obsolescence and systems that already frequently break down while communicating with each other.
I saw a deep fear that large language models were improving quickly, that the improvements in natural language processing had been so rapid in the last few years that if the lines on the graphs held, we’d be in uncharted territory before long, and maybe already were. But much of the apocalyptic theorizing, as far as I could tell, was premised on AI systems learning how to emulate the work of an AI researcher, becoming more proficient in that field until it is automated entirely. Then these automated AI researchers continue automating that increasingly advanced work, until a threshold is crossed, at which point an AGI emerges. More and more automated systems, and more and more sophisticated prediction software, to me, do not guarantee the emergence of a sentient one. And the notion that this AGI will then be deadly appeared to come from a shared assumption that hyper-intelligent software programs will behave according to tenets of evolutionary psychology, conquering perceived threats to survive, or desirous of converting all materials around it (including humans) into something more useful to its ends. That also seems like a large and at best shaky assumption.
There was little credence or attention paid to recent reports that have shown the pace of progress in the frontier models has slowed—many I spoke to felt this was a momentary setback, or that those papers were simply overstated—and there seemed to be a widespread propensity for mapping assumptions that may serve in engineering or in the tech industry onto much broader social phenomena.
When extrapolating into the future, many AI safety researchers seemed comfortable making guesses about the historical rate of task replacement in the workplace begot by automation, or how quickly remote workers would be replaced by AI systems (another key road-to-AGI metric for the rationalists). One AI safety expert said, let’s just assume in the past that automation has replaced 30% of workplace tasks every generation, as if this were an unknowable thing, as if there were not data about historical automation that could be obtained with research, or as if that data could be so neatly quantified into such a catchy truism. I could not help but think that sociologists and labor historians would have had a coronary on the spot; fortunately, none seem to have been invited.
A lot of these conversations seemed to be animated displays of mutual bias confirmation, in other words, between folks who are surely quite good at computational mathematics, or understanding LLM training benchmarks, but who all share similar backgrounds and preoccupations, and who seem to spend more time examining AI output than how it’s translating into material reality. It often seemed like folks were excitedly participating in a dire, high-stakes game, trying to win it with the best-argued case for alignment, especially when they were quite literally excitedly participating in a game; Sunday morning was dedicated to a 3-hour tabletop role-playing game meant to realistically simulate the next few years of AI development, to help determine what the AI-dominated future of geopolitics held, and whether humanity would survive.
(In the game, which was played by 20 or so attendees divided into two teams, AGI is realized around 2027, the US government nationalizes OpenAI, Elon Musk is put in charge of the new organization, a sort of new Manhattan Project for AI, and competition heats up with China; fortunately, the AI was aligned properly, so in the end, humanity is not extinguished. Some of the players were almost disappointed. “We won on a technicality,” one said.)
The tech press was there, too—Platformer’s Casey Newton, myself, the New York Times’ Kevin Roose, and Vox’s Kelsey Piper, Garrison Lovely, and others. At one point, some of us were sitting on a couch surrounded by Anthropic guys, including co-founder Jack Clark. They were talking about why the public remained skeptical of AI, and someone suggested it was due to the fact that people felt burned by crypto and the metaverse, and just assumed AI was vaporware too. They discussed keeping journals to record what it was like working on AI right now, given the historical magnitude of the moment, and one of the Anthropic staff mentioned that the Manhattan Project physicists kept journals at Los Alamos, too.
It was pretty easy to see why so much of the national press coverage has been taken with the “doomer” camps like the one gathered at Lighthaven—it is an intensely dramatic story, intensely believed by many rich and intelligent people. Who doesn’t want to get the story of the scientists behind the next Manhattan Project—or be a scientist wrestling with the complicated ethics of the next world-shattering Manhattan Project-scale breakthrough? Or making that breakthrough?
Not possessing a degree in computer science, nor having studied natural language processing for years myself, if even a third of my AI sources were so sure that an all-powerful AI was on the horizon, that would likely inform my coverage, too. No one is immune to biases; my partner is a professor of media studies, and perhaps that leads me to be more critical of the press, or to be overly pedantic in considering the role of biases in overly long articles like this one. It’s even possible I am simply too cynical to see a real and present threat to humanity, though I don’t think that’s the case. Of course I wouldn’t.
So many of the AI safety folks I met were nice, earnest, and smart people, but I couldn’t shake the sense that the pervasive AI worry wasn’t adding up. As I walked the grounds, I’d hear snippets of animated chatter; “I don’t want to over-index on regulation” or “imagine 3x remote worker replacement” or “the day you get ASI you’re dead though.” But I heard little to no organizing. There was a panel with an AI policy worker who talked about how to lobby DC politicians to care about AI risk, and a screening of a documentary in progress about SB 1047, the AI safety bill that Gavin Newsom vetoed, but apart from that, there was little sense that anyone had much interest in, you know, fighting for humanity. And there were plenty of employees, senior researchers, even executives from OpenAI, Anthropic, and Google’s Deepmind right there in the building!
If you are seriously, legitimately concerned that an emergent technology is about to exterminate humanity within the next three years, wouldn’t you find yourself compelled to do more than argue with the converted about the particular elements of your end times scenario? Some folks were involved in pushing for SB 1047, but that stalled out; now what? Aren’t you starting an all-out effort to pressure those companies to shut down their operations ASAP? That all these folks are under the same roof for three days, and no one’s being confronted, or being made uncomfortable, or being protested—not even a little bit—is some of the best evidence I’ve seen that all the handwringing over AI Safety and x-risk really is just the sort of amped-up cosplaying its critics accuse it of being.
And that would be fine, if it weren’t taking oxygen from other pressing issues with AI, like AI systems’ penchant for perpetuating discrimination and surveillance, degrading labor conditions, running roughshod over intellectual property, plagiarizing artists’ work, and so on. Some attendees openly weren’t interested in any of this. The politics in the space appeared to skew rightward, and some relished the way AI promises to break open new markets, free of regulations and constrictions. A former Uber executive, who admitted openly that what his last company did “was basically regulatory arbitrage,” now says he plans on launching fully automated AI-run businesses, and doesn’t want to see any regulation at all.
Late Saturday night, I was talking with a policy director, a local freelance journalist, and a senior AI researcher for one of the big AI companies. I asked the AI developer if it bothered him that if everything said at the conference thus far was to be believed, his company was on the cusp of putting millions of people out of work. He said yeah, but what should we do about it? I mentioned an idea or two, and said, you know, his company doesn’t have to sell enterprise automation software. A lot of artists and writers were already seeing their wages fall right now. The researcher looked a little pained, and laughed bleakly. It was around that point that the journalist shared that he had made $12,000 that year. The AI researcher easily might have made 30 times that.
It echoed a conversation I had with Jack Clark, of Anthropic. It was a bit strange to see him here, in this context; years ago, he’d been a tech journalist, too, and we’d run in some of the same circles. We’d met for coffee some years ago, around when he’d left journalism to start a comms gig at OpenAI, where he’d do a stint before leaving to co-found Anthropic. At first I wonder if it’s awkward because I’m coming off my second mass layoff event in as many media jobs, and he’s now an executive of a $40 billion company, but then I recall that I’m a member of the press, and he probably just doesn’t want to talk to me.
He said that what AI is doing to labor might finally get government to spark a conversation about AI’s power, and to take it seriously. I wondered—wasn’t his company profiting from selling the very automation services that were threatening labor in the first place? Anthropic does not, after all, have to partner with Amazon and sell task-automating software. Clark says that’s a good point, a good question, and that they’re gathering data to better understand exactly how job automation is unfolding, which he hopes to be able to make public. “I want to release some of that data, to spark a conversation,” he said.
I press him about the AGI business, too. Given that he is a former journalist, I can’t help but wonder if on some level he doesn’t fully buy the imminent super-intelligence narrative either. But he doesn’t bite. I ask him if he thinks that AGI, as a construct, is useful in helping executives and managers absolve themselves and their companies of actions that might adversely affect people. “I don’t think they think about it,” Clark said, excusing himself.
The contradictions were overwhelming, and omnipresent. Yet relatively few people here were disagreeing. AGI was an inexorable force, to be debated, even wept over, as it risked destroying us all. I do not intend to demean these concerns, just question them, and what’s really going on here. It was all thrown into even sharper relief for me, when, just two weeks after the Curve, I attended a conference in DC on nuclear security, and listened to a former Commander of Stratcom discuss plainly how close we are to the brink of nuclear war, no AI required, at any given time. A phone call would do the trick.
I checked out of the Curve confident that there is no conspiracy afoot in Silicon Valley to convince everyone AI is apocalyptically powerful. I left with the sense that there are some smart people in AI—albeit often with apparently limited knowledge of real-world politics, sociology, or industrial history—who see systems improving, have genuine and deep concerns, and other people in AI who find that deep concern very useful for material purposes. Together, they have cultivated a unique and emotionally charged hyper-capitalist value system with its own singular texture, one that is deeply alienating to anyone who has trouble accepting certain premises. I don’t know if I have ever been more relieved to leave a conference.
The net result, it seems to me, is that the AGI/ASI story imbues the work of building automation software with elevated importance. Framing the rise of AGI as inexorable helps executives, investors, and researchers, even the doom-saying ones, to effectively minimize the qualms of workers and critics worried about more immediate implications of AI software.
You have to build a case, an AI executive said at a widely attended talk at the conference, comparing raising concerns over AGI to the way that the US built its case for the invasion of Iraq.
But that case was built on faulty evidence, an audience member objected.
It was a hell of a demo, though, the AI executive said.
Why AI Disrupts Software First
Om • Om Malik • August 6, 2025
Technology•AI•Software•Startups•Innovation•Essays
A few days ago, when decoding Microsoft CEO Satya Nadella’s memo to his company about layoffs and artificial intelligence, I said that “this is not just about Microsoft, but pretty much every software company will be hit hard by this wave of transformation.” The point I was making in the piece was that AI is coming, and the first domino to fall will be software.
When watching this excellent conversation between Stripe co-founder John Collison and Anthropic CEO Dario Amodei, the latter eloquently outlines non-financial reasons as to why the software sector is getting (and will keep getting) disrupted by AI. Anthropic is one of the fastest-growing businesses — with over $4 billion in annual recurring revenue — and one of the key cornerstones of its growth is Claude Code. “We’ve actually managed to make Claude good in a way that’s relevant to what people actually use,” Amodei said in the conversation.
“The people who write code are very socially and technically adjacent to the folks who develop AI models, and so the diffusion is very fast,” Amodei told Collison. “They’re also the kind of people who are early adopters, who are used to new technology.”
“The big growth in code, you know, I would say the biggest cause of that is just that the people doing it and the startups devoted to it are fast adopters who understand the technology super well,” Amodei said.
“I think code is maybe an early indicator,” he said, viewing coding/software as “an early indicator, like a premonition of what’s going to happen everywhere else.” Amodei argued that traditional enterprises are slower to change. Large entities such as banks, insurance companies, and pharmaceutical firms have organizational inertia, but in time they will adapt. This vision of widespread AI adoption fuels Amodei’s ambition. He dreams of making Anthropic into a “one-stop shop for AI,” adding, “We think of ourselves as a platform company first.”
“One of the fundamental experiences and uncertainties of working at or running something like Anthropic is you kind of don’t know. You make this exponential projection. It sounds crazy. It might be crazy. But also, it might not be crazy because that trend line has followed before.”
Of all Amodei’s recent interviews, this conversation clearly outlines where we stand with AI and what the long-term vision is for Anthropic, the company behind Claude and OpenAI’s main competitor. It’s worth watching!
From Knowledge to Action
Tomtunguz • August 7, 2025
Technology•AI•Machine Learning•Workflow Automation•Tool Integration•Essays
GPT-5 represents a significant evolution in artificial intelligence, marking a shift from purely knowledge-based capabilities toward actionable intelligence. While previous models excelled at retrieving and reorganizing information, GPT-5's revolutionary strength lies in its ability to execute tasks through tool-calling and strategic model selection. This advancement establishes a new dimension beyond raw knowledge, allowing AI to move from advice-giving to direct action-taking within complex workflows.
The benchmarks GPT-5 has achieved—94.6% on AIME 2025 and 74.9% on SWE-bench—highlight its exceptional knowledge prowess, but these benchmarks also signal an approaching saturation point where future incremental improvements in knowledge alone yield diminishing returns. The real differentiator for GPT-5 and future models is their capacity to orchestrate workflows and integrate with external systems through tool-calling. This feature mitigates two fundamental limitations of pure language models. First, workflow orchestration: unlike single-shot responses, GPT-5 can manage multi-step, stateful processes, maintain context over multiple operations, handle errors, and keep track of progress. Second, system integration: by calling external tools like databases, APIs, and enterprise software, it can translate natural language commands into executable actions outside the text-only environment of language models.
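To make the tool-calling loop concrete, here is a minimal Python sketch of the pattern described above: the model proposes a tool, an orchestrator executes it, and the result is fed back so the model can plan the next step. The tool registry and the toy `call_model` policy are illustrative assumptions, not OpenAI’s actual API surface.

```python
import json
from typing import Any, Callable

# Registry mapping tool names to plain Python callables (hypothetical tools).
TOOLS: dict[str, Callable[..., Any]] = {
    "query_crm": lambda company: {"in_crm": company in {"Acme", "Globex"}},
    "send_email": lambda to, body: {"status": "sent", "to": to},
}

def call_model(messages: list[dict]) -> dict:
    # Toy stand-in for a real LLM call: route to the CRM tool once, then finish.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "query_crm", "args": {"company": "Acme"}}
    return {"content": f"Done. CRM said: {messages[-1]['content']}"}

def run_workflow(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):          # bound the loop so a bad route can't spin forever
        decision = call_model(messages)
        if "tool" not in decision:      # the model produced a final answer
            return decision["content"]
        tool = TOOLS[decision["tool"]]  # the routing step: look up the registered tool
        result = tool(**decision["args"])
        # Feed the tool's output back so the model can plan the next step.
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "stopped: step budget exhausted"

print(run_workflow("Is Acme already in our CRM?"))
```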
The ability to select the appropriate tool quickly and correctly is critical for robust AI performance. Missteps in tool routing can derail entire workflows, so precision and sophistication in managing tool usage underpin productivity gains. The author, who has personally built 58 different AI tools ranging from email processors and CRM integrations to research assistants, underscores the transformative potential of tool-calling at the scale of such workflows. Simple commands, such as analyzing an email to identify startups not in a CRM, can trigger complex automated sequences, replacing what once required lengthy manual workflows.
Further enhancing reliability is GPT-5’s self-verification loop, which checks that tasks are completed correctly, an innovation that injects into automated processes a consistency that is otherwise difficult to achieve. When deployed across large organizations with thousands of workflows and employees, the productivity impact multiplies, allowing companies to scale operations efficiently.
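The self-verification idea reduces to a generate-check-retry loop. Below is a minimal sketch, under the assumption that verification is a second pass (a model call, a test suite, or a deterministic check) that either accepts the output or returns feedback for a revision; the stand-in `generate` and `verify` functions are invented for illustration.

```python
def generate(task: str, feedback: str | None = None) -> str:
    # Stand-in generator; in practice this would be a model call.
    draft = f"draft for {task!r}"
    if feedback:
        draft += f" (revised per: {feedback})"
    return draft

def verify(output: str) -> tuple[bool, str]:
    # Stand-in verifier: accepts only revised drafts here; in practice a
    # second model pass or a test suite would score the output.
    ok = "revised" in output
    return ok, "" if ok else "first draft, needs a revision pass"

def run_with_verification(task: str, max_retries: int = 3) -> str:
    feedback = None
    for _ in range(max_retries):
        output = generate(task, feedback)
        ok, feedback = verify(output)
        if ok:
            return output  # only verified work leaves the loop
    raise RuntimeError("verification failed within the retry budget")

print(run_with_verification("summarize Q3 pipeline"))
```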
The article concludes that the future AI landscape will reward those who master tool orchestration and query routing, effectively turning managers into "agent managers" who coordinate AI agents rather than manually handling every task. This shift promises not only enhanced efficiency but a fundamental reshaping of how human labor and AI collaboration operate in practice.
Key takeaways:
GPT-5 surpasses traditional benchmarks through strategic tool-calling and multi-model selection.
Knowledge alone is reaching a performance ceiling; actionability distinguishes next-gen AI.
Tool-calling allows AI to orchestrate complex, multi-step workflows and interface with non-text systems.
Precision in selecting the right tool is crucial to avoid workflow failures.
Self-verification loops enhance reliability and trust in AI-driven processes.
Productivity gains amplify dramatically at scale, across thousands of automated workflows.
Future AI success depends on tool orchestration sophistication and operational predictability, ushering in a new era of AI agent management.
Enough
Benn • Benn Stancil • August 8, 2025
Technology•AI•Wealth•Social Impact•Finance•Essays
At a party given by a billionaire on Shelter Island, Kurt Vonnegut informs his pal, Joseph Heller, that their host, a hedge fund manager, had made more money in a single day than Heller had earned from his wildly popular novel Catch-22 over its whole history. Heller responds, “Yes, but I have something he will never have … enough.”
– from Morgan Housel’s The Psychology of Money, recounting a story from Vanguard founder John Bogle
It’s both everywhere and, somehow, still, nobody knows how to talk about it.
I don’t know what else we would say. I don’t know what we can say, other than what everyone already says: “It’s gotten so crazy,” and, “can you imagine?,” and, “man, that is a lot of money.”
But man, that is a lot of money.
Which one? I can’t keep track. It’s OpenAI, raising money that values the company at $500 billion, which is $200 billion more than its valuation just five months ago—which was, then, the largest private fundraise in history. It’s Meta, adding almost $200 billion to its market cap in a day, only to be outdone by Microsoft going up by $265 billion on the same day. It’s Microsoft, becoming a $4 trillion company, less than seven years after Apple became the first company to reach a measly one trillion. (Even Broadcom—whose website looks like a regional home security provider—is worth more than that today.) And that all happened over the last week.
The week before, it was Ramp, raising $500 million—million, with an M, how quaint—at a $22.5 billion valuation, less than two months after they raised $200 million at a $16 billion valuation. It was Meta, buying Scale AI’s CEO for $15 billion, or OpenAI, buying Jony Ive for $6 billion. It was Meta again, trying to buy an engineer from Thinking Machines for $250 million a year, and not only getting rejected, but getting rejected for economically rational reasons, because Thinking Machines is currently worth $12 billion, and their executives’ pay packages might already be worth more than Meta’s offers. It’s $200 million to poach an Apple executive, and stories about $18 million offers getting relegated to the final line of a daily beat report.
You become numb to it, until some fresh blockbuster jars you loose again. In one day, Microsoft grew by more than all of Roche ($247 billion), Toyota ($237 billion), IBM ($233 billion), and just a couple AI engineers less than LVMH ($266 billion). In one day, Mark Zuckerberg’s net worth grew by $27 billion—a full Rupert Murdoch; a full Peter Thiel; a full Steve Cohen; a full Jerry Jones and a full Marc Benioff. In a recent column about the OpenAI fundraise, Matt Levine reminded us that 1 basis point of OpenAI—one-hundredth of one percent; 0.01 percent; the amount an average employee gets when they join an average late-stage startup—is worth $50 million.
Those are the numbers now, but I don’t know what to do with any of them.
GPT-5 Hands-On: Welcome to the Stone Age
Latent • August 7, 2025
Technology•AI•Software•Agents•ToolUse•Essays
OpenAI’s long-awaited GPT-5 is here, and early access partners have been testing it in various applications such as raindrop.ai, Cursor, Codex, and Canvas. This model is seen as a significant leap towards artificial general intelligence (AGI), especially excelling in software engineering by solving complex problems and managing large codebases effectively.
However, GPT-5 is not simply "better" at everything. It surprisingly performs worse at writing than previous versions like GPT-4.5 and GPT-4. Instead of fitting conventional expectations, these flaws have reshaped the understanding of AGI development by highlighting the importance of tool use. The Stone Age marked the dawn of human intelligence because humans learned to use tools—extending their capabilities externally, trading internal memory for external aids like writing.
GPT-5 marks a new era for large language models (LLMs) and agents by not just using tools but thinking and building with them. Unlike earlier models that used web search simply as a tool call, GPT-5 conducts deep research by iterating, planning, and exploring online information as part of its thinking process. It can leverage any tool if given the right access, and these tools can be categorized as internal retrieval, web search, code interpreters, or actions that trigger side effects.
A critical feature of GPT-5 is its ability to use tools in parallel effectively, allowing it to operate on longer time horizons and reduce latency, which opens doors for new product possibilities. It requires structured guidance, or a "compass," rather than heavy context loading. This means providing GPT-5 with clear instructions about its environment, such as project purpose, file organization, and evaluation criteria, which helps it onboard complex tasks efficiently.
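The latency win from parallel tool use comes from fanning independent I/O-bound calls out concurrently rather than awaiting them one at a time, so wall-clock time is the slowest call rather than the sum. A minimal sketch, with invented tool names standing in for web search and internal retrieval:

```python
import asyncio

async def web_search(query: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for network latency
    return f"web results for {query!r}"

async def retrieve_docs(topic: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for an internal retrieval call
    return f"internal docs on {topic!r}"

async def plan_step(question: str) -> list[str]:
    # Both lookups are independent, so run them together; total latency is
    # the max of the two calls, not the sum.
    return list(await asyncio.gather(
        web_search(question),
        retrieve_docs(question),
    ))

print(asyncio.run(plan_step("dependency conflict in package X")))
```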
In coding, GPT-5 demonstrated notable prowess by resolving intricate dependency conflicts quickly and accurately, showcasing behaviors akin to deep research and iterative problem-solving. It also easily generated complex websites and applications with fully functioning features like persistence of user data, outperforming competitors and past models in speed and accuracy.
While GPT-5 is a leap forward in coding and practical tool use, it is not yet strong in creative writing, where preceding models still excel. Overall, GPT-5 represents a foundational step toward more autonomous, intelligent agents that do not just respond but actively use and build with tools, pushing the frontier of what AI can accomplish in real-world applications.
These College Professors Will Not Bow Down to A.I.
Nytimes • Jessica Grose • August 6, 2025
Education•Technology•Artificial Intelligence•Academic Integrity•Teaching Methods•Essays
In recent years, the integration of artificial intelligence (AI) into higher education has sparked significant debate among educators. While some embrace AI as a tool to enhance teaching and learning, others express concerns about its impact on academic integrity and the traditional role of educators.
A notable example is Professor Antony Aumann of Northern Michigan University, who discovered that a student had used ChatGPT, an AI chatbot, to write an essay on the morality of burqa bans. This incident led Professor Aumann to reconsider his teaching methods, opting for in-class writing assignments and incorporating AI evaluations into his lessons. He emphasized the need to adapt classroom discussions to include AI perspectives, stating, "What also does this alien robot think?" (newyorkdawn.com)
The rise of AI tools like ChatGPT has prompted universities nationwide to reevaluate their teaching strategies. Institutions such as George Washington University, Rutgers University, and Appalachian State University are shifting away from take-home assignments, favoring in-class tasks, handwritten papers, group work, and oral exams. This transition aims to mitigate the potential for AI-assisted cheating and to foster more authentic student engagement. (newyorkdawn.com)
However, the rapid adoption of AI in education has also led to challenges in maintaining academic integrity. Some educators report instances of students using AI to complete assignments, raising concerns about the authenticity of student work and the effectiveness of traditional assessment methods. This situation has led to a reevaluation of teaching practices and the development of new strategies to ensure that education remains meaningful and effective. (theatlantic.com)
In response to these challenges, some educators advocate for a more critical approach to AI integration. They suggest that while AI can be a valuable tool, it should not replace the critical thinking and creativity that human educators bring to the classroom. The emphasis should be on using AI to complement and enhance traditional teaching methods, rather than supplanting them. (insidehighered.com)
The debate over AI's role in education continues to evolve, with ongoing discussions about its potential benefits and drawbacks. As AI technology advances, it is crucial for educators and institutions to find a balance that leverages AI's capabilities while preserving the essential human elements of teaching and learning.
Venture Capital
VC in 2025 So Far Per Carta: Valuations Are Up 15-25% … But Deals Are Down -13% at Seed. Deals Are Down Everywhere.
Saastr • Jason Lemkin • August 1, 2025
Business•Startups•VentureCapital•Valuations•InvestmentTrends•Venture Capital
Carta’s latest data through the second quarter of 2025 highlights a venture capital market that is stabilizing after the turbulence of the past few years. Valuations across seed, Series A, and Series B stages have notably increased by 15 to 25 percentage points compared to 2024, signaling a significant recovery in market confidence and company worth. However, this recovery in valuation comes alongside a contraction in deal volume, with seed-stage deals down by 13%, Series A deals down by 10%, and Series B deals down by 5%. This indicates that while investors are willing to pay more for companies, they are increasingly selective about which companies to back.
When viewed in a broader timeline, the rebound in valuations from the lows of 2023 is even more striking. Series B valuations, for example, plunged as much as 50% below prior highs in late 2023 but have since bounced back to being only 20% lower, marking a 30-point swing upward. Series A has recovered from a 25% drop in 2023 to being slightly positive (+2%) in 2025. The seed stage showed resilience throughout, never falling below baseline comparisons from 2021. This recovery suggests the market has adjusted its expectations—transitioning from the deep discounts of 2022-2023 to a more balanced pricing regime where quality companies command fair valuations.
What’s driving this recovery centers on a recalibration of investor expectations. VCs now focus on paying market rates for companies that demonstrate strong fundamentals and clear potential, rather than chasing volume or opting for discounted deals. The deal volume decline reflects a stringent quality bar: investors prioritize exceptional companies and are not incentivized to pursue riskier opportunities. Series B companies, which have weathered the downturn with 2-3 years of operational performance data, lead this valuation recovery. Their demonstrated resilience and sustainable business models make them especially attractive at this stage.
The current venture capital market in 2025 operates under a distinctly different paradigm than the years prior. Key characteristics include much higher valuations reserved for companies that pass rigorous scrutiny, a sharp reduction in overall deal volume—about half compared to 2023—and faster decision-making on clear market winners paired with more deliberation on borderline cases. Importantly, investors exhibit a stronger emphasis on profitability timelines and capital efficiency, signaling a shift away from growth at any cost toward sustainable business models that can generate returns without excessive capital burn.
Looking forward, the data points to a “new normal” in venture investing. This environment features selective yet fair pricing models, a focus on quality over quantity in deal-making, stabilized valuation standards that, though higher than recent lows, remain cautious, and a permanent downsizing of total deal activity. For founders, the most relevant comparison is now between 2024 and 2025, reflecting an environment where valuations are recovering robustly but only for a smaller pool of ventures. The market rewards companies with proven strengths and punishes those that fall short, marking a mature and more disciplined phase in venture capital’s evolution.
Key Takeaways:
Valuations for seed, Series A, and Series B stages have increased 15-25 percentage points from 2024 to 2025.
Deal volume has decreased significantly across all stages; seed deals down 13%, Series A down 10%, Series B down 5%.
Series B companies lead recovery due to demonstrated resilience and multi-year performance data.
The venture market is more selective, emphasizing quality and profitability over volume and aggressive growth.
Faster decisions for clear winners coexist with longer evaluations for borderline investments.
The trend suggests a permanent "new normal" of fewer deals but fairer, more stable valuations.
This shift illustrates a more mature venture ecosystem post-correction, which balances optimism with discipline and underscores the importance of operational performance, efficiency, and sustainable growth.
Visualizing Unicorns by Country in 2025
Visualcapitalist • Dorothy Neufeld • August 1, 2025
Technology•Startups•Unicorns•AI•Global Markets•Venture Capital
Visualizing Global Unicorn Hotspots in 2025
In 2025, the global landscape of unicorn companies—private firms valued at $1 billion or more—is strongly dominated by the United States, which houses 793 unicorns, a figure that surpasses the combined total of the next 19 countries. The U.S. remains the premier hub for high-value startups, driven by technology innovation and significant investor interest, particularly in emerging fields such as artificial intelligence (AI). Leading the pack are industry giants SpaceX and OpenAI, valued at $350 billion and $300 billion respectively, underscoring the country’s capability to nurture startups into global leaders.
China is the second-largest hub with 284 unicorns, led by major companies like ByteDance, the parent of TikTok, ranked as the world’s third most valuable unicorn. Ant Group, a key player in mobile payments, is the fourth largest globally, emphasizing China’s strength in digital finance. India comes in third with 88 unicorns, highlighted by Reliance Retail, valued at $100 billion, showcasing the country’s expanding retail sector underpinned by strong economic growth and favorable investor sentiment. The UK occupies fourth place with 64 unicorns, with fintech firm Revolut standing out at a $32 billion valuation.
Key Highlights and Trends
The AI sector has rapidly accelerated unicorn creation globally in 2025. At least 36 new unicorns have emerged this year in tech sectors like robotics (Dexterity) and AI platforms (Thinking Machines, founded by former OpenAI exec Mira Murati). Thinking Machines notably reached a $10 billion valuation within six months, backed by heavyweight investors such as Nvidia and Andreessen Horowitz (a16z).
The leading countries by number of unicorns beyond the top three are the UK (64), Germany (40), France (30), and Canada (30), revealing a strong European presence in the unicorn ecosystem.
Other notable unicorn hubs include Singapore and Israel (each with 22), South Korea (21), Brazil (20), and Japan (16), indicating an increasingly global spread, though still far behind U.S., China, and India.
Implications for the Tech and Business Ecosystem
The dominance of the U.S. in the unicorn landscape reflects its mature startup ecosystem, access to capital, leading universities, and a culture fostering innovation and risk-taking. The rising influence of AI companies highlights the tech sector’s evolving priorities, shifting investment flows toward frontier technologies with high growth potential.
China’s significant presence illustrates its ongoing transformation into a digital economy powerhouse despite regulatory challenges. India’s rapid rise into the top three marks it as a critical growth frontier, with burgeoning consumer markets and digital infrastructure accelerating startup valuations.
Fintech and retail sectors are prominent among these unicorns, suggesting that technology adoption in traditional sectors continues to be a rich ground for value creation, along with AI and blockchain innovations expanding funding opportunities.
Summary
The 2025 landscape of unicorns globally confirms the United States as the unrivaled leader in private tech firms valued over $1 billion, followed by China and India. The AI boom is a notable driver of new unicorns this year, while other sectors like fintech and retail maintain strong traction. Europe, Asia, and emerging markets show steady growth, contributing to a more diversified but still U.S.-centric global unicorn ecosystem. This distribution underscores the importance of innovation hubs, economic growth, and venture capital as the key engines powering the rise of these high-valuation private companies.
The World’s 50 Most Valuable Private Companies in 2025
Visualcapitalist • Marcus Lu • July 31, 2025
Business•Startups•PrivateCompanies•Valuations•AI•Venture Capital
The race to build the next generation of global giants is on.
While public markets get most of the spotlight, private companies are quietly building massive valuations and shaping the future of industries.
This visualization ranks the world’s 50 most valuable private companies in 2025, highlighting emerging powerhouses from different countries and sectors.
Key Takeaways
31 of the 50 most valuable private companies are based in the United States.
AI-focused companies such as OpenAI, Anthropic, xAI, and Safe Superintelligence are among the most highly valued.
China has 8 entries, including ByteDance, Xiaohongshu, DJI, and Yuanfudao, showing strong representation in consumer tech and hardware.
The data for this visualization comes from CB Insights. It ranks private companies globally by their most recent reported valuations.
Top rankings include SpaceX (United States) at $350 billion, ByteDance (China) and OpenAI (United States) both at $300 billion. Other notable companies include Stripe ($70 billion, US), SHEIN ($66 billion, Singapore), Databricks ($62 billion, US), Anthropic ($62 billion, US), and xAI ($50 billion, US).
Artificial Intelligence is Taking Over
AI startups are increasingly populating the top 10, with OpenAI in third ($300 billion), Anthropic in seventh ($62 billion), and xAI in eighth ($50 billion). All three of these companies have produced some of the world’s smartest AI models in recent years.
Further down the ranking are Safe Superintelligence ($30 billion), created by former employees of OpenAI and Anthropic, and Scale AI, in which Meta recently acquired a 49% stake. That deal wasn’t captured in the source dataset, but Scale AI is now valued at roughly $29 billion, which would place it 14th in this ranking.
Additionally, many companies have AI applications as part of their products but not as their core offering. This includes Databricks (a data analytics platform), Grammarly (uses generative AI for writing assistance), and Colossal (a de-extinction biotech company).
Why Venture’s Future Is Being Decided By A Select Few
Crunchbase • July 31, 2025
Business•VentureCapital•AI•StartupFunding•MarketConsolidation•Venture Capital
The venture capital landscape in 2025 is witnessing a significant shift, characterized by a pronounced concentration of funding towards ultra-unicorns—startups valued at $5 billion or more. This trend is reshaping the dynamics of the market, leaving mid-stage companies grappling for resources.
Data from Crunchbase reveals that a mere 13% of unicorns now account for over half of the total valuation of The Crunchbase Unicorn Board, a curated list of the most valuable private companies globally. In the first half of this year, a staggering $70 billion was allocated to just 11 companies. Notably, OpenAI secured $40 billion, and Scale AI raised $14.3 billion, marking the largest private-venture deals ever. This concentration indicates a market trend where capital is increasingly funneled into a select few, creating a barbell effect.
At one end of this spectrum, early-stage founders are piecing together pre-seed and seed funding from angels and microfunds. Conversely, ultra-unicorns are absorbing capital at unprecedented rates. This bifurcation is leaving the middle market underserved, as venture dollars are not flowing evenly but are pooling at the extremes.
The allure of artificial intelligence (AI) is a significant driver of this trend. Investors are gravitating towards AI companies due to their expansive vision, massive total addressable market (TAM), and the perception of inevitability in their success. This focus on AI has led to a self-reinforcing cycle: companies with momentum attract more capital, further enhancing their growth, while solid companies with actual revenue and burn discipline struggle to secure substantial follow-on funding.
This capital concentration raises concerns about market fragility. Historically, placing substantial bets on a limited number of players has led to instability. If one of these giants falters or if their valuations are disconnected from fundamental performance, the repercussions could be widespread and severe. This scenario mirrors past market cycles, such as the dot-com boom, where overinvestment in infrastructure without clear monetization strategies led to significant corrections.
The current environment also presents opportunities for lean, capital-efficient companies that build real-world tools on top of overbuilt AI infrastructure. These companies can achieve impact and exits with more modest funding and a clear go-to-market strategy, contrasting with the massive investments directed towards foundational AI platforms.
Looking ahead, the latter half of 2025 is expected to reveal which investments are grounded in reality and which are based on speculative narratives. Firms that maintain operational efficiency and demonstrate tangible return on investment are likely to navigate this cycle successfully. Investors who identify and support these companies early may secure outsized returns. This period calls for conviction and a focus on sustainable growth, as the capital markets may appear robust at the top but are undernourished in the middle.
In summary, while the venture capital market in 2025 shows signs of recovery, it is, in reality, undergoing consolidation. The dominance of ultra-unicorns, particularly in the AI sector, is reshaping the funding landscape, posing challenges for mid-stage companies and raising questions about market stability and the future of innovation.
July Recap: LP Signals, DPI, & Seed Round Rethink
The fund cfo • Doug Dyer • July 31, 2025
Business•Startups•VentureCapital•Fundraising•Liquidity•Venture Capital
This month wasn’t about velocity. It was about filtration.
Across 8 posts, we tracked how LPs are sharpening their filters, how GPs are shifting from storytelling to structure, and how niche strategies—like QSBS optimization or operational readiness—are going mainstream.
Here’s what got read the most—and why it resonated.
🧞♂️ QSBS Went Mainstream (And Tax Alpha Became Strategy)
QSBS used to be a founder-side bonus. Now, GPs are structuring for it at the fund level—speeding up exits, smoothing alignment, and maximizing after-tax outcomes. In 2025, clean cap tables and tax alpha ≠ optional.
💸 Fundraising Rebounded (For the Right Funds)
LPs are deploying again—but selectively. Secondaries, AI-adjacency, and DPI visibility are winning the day. If you’re not hitting any of those buckets, your raise is slower (and more expensive). Dry powder ≠ easy capital.
📖 Operational Readiness Became a First Filter
LPs don’t wait for diligence to assess your firm. They’re reading signals before you pitch: staff depth, fund pacing, audit history, and tech stack. Being ready isn’t about the data room. It’s about the posture.
📈 Liquidity Windows Cracked Open
Exits are inching back. Secondaries are no longer taboo. And DPI isn’t dead—it’s just harder to manufacture. Liquidity planning is now a core part of pacing, not just an outcome.
🔍 Valuations Stabilized (But Didn’t Reprice)
Firms are still using “good enough” valuation methods. And LPs are still accepting them—at least when marks match story. Backsolves are up. Conviction is not.
🤯 AI Is Leading—But IPOs Still Lag
AI now dominates late-stage VC, with CoreWeave, xAI, and Anthropic leading the way. But the IPO window? Still cracked, not open. Private capital still beats public markets for premium deals.
💻 Data Became the Stack
Reporting is no longer a quarterly ritual—it’s a live dashboard. We explored how funds are moving from spreadsheets to real-time ops, reshaping how GPs work with LPs. The best CFOs today think like product managers.
🌿 The New Seed Round Is Pre-Series A
The classic $1M seed is gone. Founders are raising $4M–$6M with embedded milestones and pre-baked growth targets. GPs are adapting—or losing allocation. Underwriting velocity is the new edge.
Final Take:
The July posts weren’t hype-driven—they were signal-driven. What resonated most: how to raise, how to structure for liquidity, and how to operate like a firm LPs trust from day one.
What’s working now:
– QSBS-forward deal design
– DPI-aligned secondaries
– Institutional signals, not checklists
– Seed rounds priced like Series A bets
– Fund stacks run on data, not PDFs
The firms outperforming this cycle aren’t louder. They’re structurally tighter.
🔧 Turn Data Into Edge: Tools for Fund Builders
If you’re a fund manager building your next model—or an LP evaluating small fund strategies—our premium toolkit is designed to bring clarity to your process and precision to your decisions.
These are the same internal tools we use when advising funds and underwriting early-stage venture.
Miles Dieffenbach: Inside Carnegie Mellon’s $4BN Endowment & The Math Behind DPI, TVPI, Illiquidity
Youtube • 20VC with Harry Stebbings • August 4, 2025
Finance•Investment•Endowments•PrivateEquity•Fund Performance•Venture Capital
Miles Dieffenbach, Chief Investment Officer at Carnegie Mellon University, oversees the university’s $4 billion endowment. In this conversation, he breaks down key investment metrics such as DPI (Distributed to Paid-In), TVPI (Total Value to Paid-In), and the concept of illiquidity as it pertains to managing a large institutional portfolio.
The discussion provides insights into how mathematics and financial modeling are used to evaluate private equity investments, understand fund performance, and guide asset allocation decisions. Dieffenbach explains the practical impact of these metrics on assessing fund returns over time and making strategic commitments in the context of a long-term endowment investing across multiple asset classes.
By unpacking these foundational concepts, he sheds light on the complexities behind managing a university endowment, balancing growth objectives with liquidity needs and risk considerations. The walkthrough helps demystify the calculations behind fund returns and offers a framework for investors navigating private market investments.
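For readers who want the arithmetic behind the acronyms, both metrics reduce to simple ratios over paid-in capital. Here is a minimal sketch using the standard definitions; the fund figures below are invented for illustration.

```python
def dpi(distributions: float, paid_in: float) -> float:
    """Distributed to Paid-In: cash actually returned per dollar contributed."""
    return distributions / paid_in

def tvpi(distributions: float, residual_value: float, paid_in: float) -> float:
    """Total Value to Paid-In: realized plus unrealized value per dollar in."""
    return (distributions + residual_value) / paid_in

# Hypothetical fund: $100M called, $60M distributed, $90M still held at mark.
print(f"DPI:  {dpi(60, 100):.2f}x")        # 0.60x -- what LPs have received
print(f"TVPI: {tvpi(60, 90, 100):.2f}x")   # 1.50x -- total value, mostly illiquid
```

The gap between the two numbers is the illiquidity the conversation dwells on: TVPI can look healthy for years while DPI, the cash that has actually come back, lags far behind.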
Seed rounds are just the beginning. How many get from Seed to Series B?
X • PeterJ_Walker • August 7, 2025
Venture Capital
Key Takeaway: Navigating the startup funding landscape is challenging, with only a fraction of companies advancing beyond the Seed round. Market conditions greatly influence these transition rates, highlighting the importance of timing and environment in startup growth.
The journey from Seed funding to a Series B round is a critical milestone for startups, often seen as a litmus test for viability and growth potential. According to @PeterJ_Walker, the proportion of startups achieving this leap varies significantly depending on market conditions:
In a super-frothy (highly active and liquid) investment market: Approximately 34% of companies move from Seed to Series B. This elevated rate reflects an environment with abundant capital and investor appetite for growth-stage investments.
In more typical or 'normal-ish' markets: The transition rate drops to about 15-20%. These conditions suggest a more cautious investing climate, where fewer startups secure follow-on funding beyond the initial Seed round.
This data underscores the volatility and risk inherent in early-stage venture funding, emphasizing that while Seed rounds are accessible, scaling to Series B remains a significant hurdle. Founders and investors must take these market dynamics into account when planning fundraising strategies and growth trajectories.
AI
The Latest 20VC + SaaStr: The $600B AI Capex Boom, Zuck’s Unlimited Talent Budget, and the Coming Developer Spend Explosion
Saastr • Jason Lemkin • July 31, 2025
Technology•AI•VentureCapital•DeveloperTools•Monetization
We’re back on 20VC with Harry, Rory from Scale, and SaaStr’s Jason Lemkin, on the $600B AI capex boom that’s already surpassed dot-com levels, why every developer will soon spend $10K monthly on AI tools, and whether Big Tech has become too powerful to regulate.
The Bottom Line Upfront
Jason Lemkin (SaaStr): “I think every developer at a top tech company growing quickly is going to get $10,000 a month of AI credits. Everyone’s going to get $10,000, not $200, which is what many are doing today. We thought that was a lot a couple months ago. They’re all going to get $10,000 a month. Shopify is already there for some of its top developers.”
Harry Stebbings (20VC): “Speed of deployment equals relevance. And I think about this because we’re very disciplined on three-year fund cycles. Temporal diversification… but I see actually cadence of deployment can lead to relevance in a way that really benefits that manager.”
Rory O’Driscoll (Scale Venture Partners): “Sometimes rules of thumb or heuristics are in place because across cycles they’ve been proven to be correct… my guess is some part of this trend makes total sense and some part of it you’ll look back after the next down and turn and say, ‘Oh yeah, that’s why we had that rule.'”
The venture world is experiencing its most dramatic structural shift in decades. From Benchmark’s partner exodus to Elad Gil’s one-man $5 billion fund, from Anthropic’s meteoric $4B ARR trajectory to the coming $10K monthly AI spend per developer—the old rules are crumbling in real-time.
In the latest SaaStr + 20VC deep dive, Jason Lemkin, Harry Stebbings, and Rory O’Driscoll dissected the seismic changes reshaping venture capital, AI monetization, and the future of B2B software. What emerged was a masterclass in recognizing when fundamental assumptions need updating—and when they don’t.
When the Best Gig in Venture Isn’t Enough: The Benchmark Reboot
Benchmark, the gold standard of venture partnerships for three decades, now has just three partners remaining after Victor Lazarte’s departure. For context: being a Benchmark partner was previously considered one of the most coveted positions in all of venture capital.
“The stunning fact now is someone can be in the best gig in venture and decide no it’s not enough,” observed O’Driscoll. “I need something more and they can actually probably pull that off.”
Lemkin offered a contrarian take: “If it were me, if I were myself at that time at a similar spot, I wouldn’t leave Benchmark… it’s so powerful to meet a founder and say you’re from Benchmark.”
But Stebbings pushed back on the brand-first mindset: “What you find attractive about being on your own… is the ability to pour large amounts of cash into great companies. Scale of cash is almost a more attractive magnet than brand.”
The deeper insight? The market has fundamentally shifted. Top-tier talent can now raise massive funds independently, bypassing traditional partnership structures entirely. This isn’t just about individual preferences—it’s about LPs financing exactly the behaviors they spent decades telling VCs not to do.
The Elad Gil Phenomenon: Breaking Every LP Rule—And Getting Rewarded
Elad Gil represents the perfect case study of this new reality. His track record speaks for itself: seed investor in Airbnb and Stripe, with access to category-defining companies across multiple sectors.
“What it says is we will now finance you to do all the things that we spent 20 years telling every other venture firm not to do,” O’Driscoll noted. “What they’re showing by their actions… is they also like a product that says, ‘Thank you for your input on being focused. I’m ignoring it.'”
Gil’s approach defies traditional LP wisdom:
Multi-stage investing (seed to growth)
Concentrated sector betting
Single decision-maker structure
Massive fund sizes
Yet it’s working spectacularly. The reason? “Idiosyncratic success trumps bland mediocre standard advice,” as O’Driscoll put it. “Winners win.”
Lemkin added crucial context about Gil’s strategy: “He’s very similar in terms of making sure that whatever that category leader is… he is in that leader and he’s in it big… whether it’s Stripe or whether it’s Harvey, he is in that leader.”
The $10K Developer Spend Revolution: AI’s True TAM Emerges
Perhaps the most eye-opening discussion centered on AI monetization—specifically, how drastically the venture community has underestimated per-developer spending potential.
Lemkin shared his personal experience: “I was on a path to spend $8,000 a month vibe coding.” But that was just the beginning. “Now the context windows have gotten longer… you could have up to a 15-minute long context window while it’s thinking through debugging or complex stuff… Run four of them at the same time or maybe even 15. That 8K bill could easily be 10K.”
The implications are staggering. Traditional software assumptions about developer tools pricing—think $200-500 per seat—are being obliterated.
“Every leading tech company, it’s cheaper than hiring any human and you can’t find humans,” Lemkin explained. “So, if that means a 50x growth per developer spend, it’s going to be a CFO’s nightmare. But… that’s 50x growth from where we are today.”
This isn’t theoretical. Shopify’s CTO already gives developers unlimited AI credits, with top performers using $10K monthly. The CEO of Replit confirmed: “Any great developer will consume almost unlimited tokens no matter how cheap they are.”
Anthropic’s $4B Trajectory: The Reacceleration Story
Anthropic’s jump from $1B to $4B ARR—and its valuation increase from $100B to $180B—exemplifies this new reality.
“The growth rate appears to have accelerated which is stunning,” O’Driscoll observed. “Most your expectation is you start off growing at 300% then 200% then 100%. These guys appear to have reaccelerated at scale.”
The market is pricing in not just growth, but accelerating growth at massive scale—a phenomenon that validates the $10K developer spend thesis.
Stebbings noted the token economics: “There’s infinite demand for this product right now at the price point of 200 bucks and the cost of generating those tokens the economics doesn’t work for the model provider.” But Moore’s Law provides the solution: costs decline while demand and usage explode.
Mark Zuckerberg Finally Submits His AI Manifesto
Nymag • July 31, 2025
Technology•AI•Innovation•Ethics•Corporate Strategy
Mark Zuckerberg’s recently released AI manifesto attempts to join the chorus of tech leaders outlining a vision for artificial intelligence’s future. However, his articulation comes across as both derivative and somewhat awkward, lacking the originality and clarity seen in similar statements by peers in the field. The manifesto showcases Zuckerberg’s intention to project a profound understanding of AI’s transformative potential, but it often leans on familiar clichés and broad generalities without delving deeply into novel insights or concrete strategies.
The manifesto seeks to address how AI can enhance human life and society, emphasizing the promise of groundbreaking technological advancements. Yet, instead of carving out a unique stance or presenting a strong, innovative perspective, Zuckerberg’s language mirrors common AI futurism rhetoric. His writing contains grand and poetic ambitions, yet the ambitious tone feels disconnected from the practical realities and nuanced challenges of AI development and governance today.
Despite the lack of distinctive substance, the document underscores a few critical themes: the importance of responsible AI development, opportunities for AI to improve global connectivity and information access, and the necessity for collective ethical stewardship. Zuckerberg reiterates Meta’s commitment to pushing AI boundaries while maintaining safety, fairness, and privacy—a narrative familiar to those tracking the company’s strategic positioning amid rising regulatory and societal scrutiny.
The manifesto highlights some broad principles, such as collaboration between governments, industry, and academia to shape AI’s trajectory. Yet, it offers little in the way of specific policies or actionable frameworks. This generality means the statement risks being overlooked in an increasingly crowded field of AI manifestos and policy papers that provide more precise guidance or bold calls to action. Zuckerberg’s message seems to aim more at reassuring stakeholders and aligning Meta with mainstream AI discourse rather than challenging or advancing it.
In conclusion, Zuckerberg’s AI manifesto reflects an effort to articulate a visionary and ethical AI future but ultimately falls short of delivering a compelling, original narrative. It functions more as a reiteration of existing ideas with a poetic but vague tone than as a definitive statement that could influence broader conversations on AI governance and innovation. For Meta and Zuckerberg, this document may serve as a baseline signal of engagement with AI’s future but leaves the demand for a more substantive, actionable vision unmet.
The Next Trillion Dollar Marketplace Will Put SKUs on Services
Danhock • Dan Hockenmaier • July 31, 2025
Business•Marketplace•Services•AI•Innovation
The services industries should be home to massive marketplaces. They are enormous markets: annual freelance labor spend is $1.3T and home improvement spend is $600B in the US alone. They have highly fragmented buyers and sellers that would benefit greatly from a better way to find and transact with each other.
So after thousands of attempts by some of the smartest teams in the world, where are they? You might argue that Uber is a service marketplace. But other than that, none of the ~10 US public marketplaces with market caps over $10B are in services. None of the top 10 private marketplaces are either.
This essay explores what is holding services marketplaces back, how they might get unstuck, and why this would produce some of the largest businesses in the world.
You can buy a lightbulb in one click on Amazon, but hiring an electrician is still about as hard as it was 100 years ago. You don’t really know what you need, how long it will take, or how much it should cost, and you have to just start talking to electricians to find out.
Marketplaces dealing in physical things like products and properties solved this problem by bringing an incredible amount of information online. Everything they sell has a “SKU” - a unique identifier attached to descriptions, photos, reviews, and prices. In other words, they make everything they sell legible to customers in real time.
This was not a trivial undertaking. Amazon built its own classification system for products called ASIN (Amazon Standard Identification Number) and has used over 1 trillion of them. Doordash pulled every menu onto its platform, first by literally driving around and picking them up, and later with the help of restaurants. Airbnb famously provided photography services to help bring new units of supply online for the first time.
This is much harder to do with services, because they are so diverse. Each one of those electrical projects (or web design or tutoring or almost every other service) is one of a kind, resisting categorization.
As I outlined in my last essay, marketplaces have evolved in four stages, which increasingly make it easier for buyers and sellers to transact, and usually get much bigger in the process:
Lead gen marketplaces help sellers list their goods or services and help buyers discover them
Checkout marketplaces provide prices, terms, and reviews upfront, allowing for real time checkout
Managed marketplaces bear the risk of something going wrong with guarantees
Heavily managed marketplaces directly participate in distribution
Legibility is the great filter between Lead Gen and everything else. Without it, you can’t provide enough information for customers to be comfortable purchasing in real time, so the best you can do is make introductions between buyers and sellers and let them take it from there.
Purchasing services requires a lot of steps today: searching for suppliers, scoping the project, negotiating the price, making the hire, completing the job, and sending final payment.
Today, everything after the search happens offline, usually over multiple days or weeks. Ultimately, everything except for the job itself must happen online, in real time.
Evolving to a Checkout Marketplace requires enabling a customer to hire in real time by surfacing sufficient information about the project, the potential suppliers, and their quotes. Today, buyers get this information through a messy chain of messages, phone calls, and in person visits that is highly costly for both them and potential suppliers.
Many marketplaces are trying to streamline this process by collecting as much information as possible from suppliers in advance about what they offer, how they price, and when they are available, and then collecting as much information as possible from buyers in real time through a series of scoping questions. Upwork, for example, walks buyers through a set of intake questions before they can post a web design project.
Usually, these flows essentially make the marketplace more effective at search, by filtering the potential set of suppliers that buyers can explore. Occasionally it starts to do some of the work of scoping and negotiating, by delivering a price range or the ability to schedule a meeting. But very rarely does it result in a “hire now” button that a customer will be comfortable clicking.
This is because the messy back and forth between buyers and sellers is a feature, not a bug. In most cases, the customer doesn’t actually know what they want, and talking to service providers is part of the process to figure it out. And exploring possible options is a decision tree with so many branches that you can’t write all of the if/then logic in advance.
What is really good at exploring a messy decision space through back-and-forth dialogue? Humans. And, increasingly, LLMs trained on records of humans doing exactly that.
The solution is going to look more like that original messy back and forth, but instead of talking to a bunch of service providers, the customer will chat with an AI trained by the marketplace.
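A minimal sketch of that idea, assuming an OpenAI-compatible chat API; the model name, prompt, and output schema below are all assumptions for illustration, not any marketplace's actual system:

```python
# The marketplace's AI runs the messy back-and-forth, then emits a legible,
# priceable job spec. Hypothetical prompt and schema; not a vendor's API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM = (
    "You are a services-marketplace intake agent. Ask one clarifying "
    "question at a time. When the job is fully scoped, reply with JSON: "
    '{"scope": ..., "materials": ..., "est_hours": ..., "price_range_usd": ...}'
)

def next_turn(history: list[dict]) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[{"role": "system", "content": SYSTEM}] + history,
    )
    return resp.choices[0].message.content

history = [{"role": "user", "content": "I need an electrician for my kitchen."}]
print(next_turn(history))  # e.g. "Are you adding outlets, lighting, or both?"
```

The loop ends where the essay says it should: with a structured record, a SKU for a service, that the marketplace can price and guarantee.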
OpenAI to Open-Source Some of the A.I. Systems Behind ChatGPT
Nytimes • August 5, 2025
Technology•AI•OpenSource•ArtificialIntelligence•OpenAI
OpenAI has announced the release of two new open-weight AI models, marking its first open releases since GPT-2 in 2019. These models, named "gpt-oss-120b" and "gpt-oss-20b," are optimized for advanced reasoning tasks and capable of running on laptops. While not fully open-source, as they lack training data and full code, these models are free for developers to use and customize. (reuters.com)
The larger model can run on a single GPU, while the smaller one is suitable for personal computers. They deliver performance similar to OpenAI’s proprietary o3-mini and o4-mini models, excelling in coding, math competitions, and health-related queries. The models were trained on a text-only dataset with an emphasis on science, math, and coding. (reuters.com)
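For readers who want to try the weights, a hedged local-inference sketch with Hugging Face transformers; the hub id openai/gpt-oss-20b and the memory assumptions are mine, and this is not an official example:

```python
# A minimal sketch: load the smaller open-weight model and run one chat turn.
# Requires transformers + accelerate and enough RAM/VRAM for a 20B model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumption: Hugging Face hub id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "Explain chain-of-thought in two sentences."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```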
This release positions OpenAI amidst a competitive landscape where rivals like Meta and China’s DeepSeek have released strong open models. The move responds to growing demand for transparent AI tools and China’s lead in open-source AI, highlighted by systems from DeepSeek, Alibaba, and Moonshot. (ft.com)
OpenAI emphasized safety, noting the models underwent rigorous testing for potential misuse. The release had been delayed twice for additional safety assessments. Open-weight models, while more flexible, pose higher risks, which OpenAI addressed by simulating malicious use cases and consulting external experts. (ft.com)
This strategic move reflects OpenAI's commitment to developing artificial general intelligence that benefits humanity, aiming to distribute advanced AI more broadly and accommodate users seeking alternative, self-hosted solutions. (axios.com)
ElevenLabs launches an AI music generator, which it claims is cleared for commercial use
Techcrunch • August 5, 2025
Technology•AI•MusicGeneration•Licensing•Innovation
ElevenLabs, a leader in AI audio technology, has expanded its offerings with the launch of Eleven Music, an AI music generator designed for commercial use. This development signifies a strategic move beyond their initial focus on AI audio tools, including text-to-speech products and conversational bots. (techcrunch.com)
The Eleven Music platform enables users to generate studio-grade music across various genres and styles, with or without vocals, in multiple languages, all within minutes. This service is accessible to businesses, creators, artists, and everyday users, aiming to democratize music creation. (spacedaily.com)
To address potential legal and ethical concerns associated with AI-generated music, ElevenLabs has secured licensing agreements with Merlin Network and Kobalt Music Group. These partnerships grant ElevenLabs access to extensive music libraries for training its AI models, ensuring that the generated content is cleared for commercial use. Notably, artists represented by these organizations must voluntarily opt in for their music to be licensed for AI use. (techcrunch.com)
The introduction of Eleven Music comes amid ongoing legal challenges in the AI music generation space. Companies like Suno and Udio have faced lawsuits from the Recording Industry Association of America (RIAA) for allegedly training their models on copyrighted material without proper authorization. By proactively establishing licensing agreements, ElevenLabs aims to navigate these challenges and set a precedent for responsible AI development in the creative sector. (techcrunch.com)
In summary, ElevenLabs' launch of Eleven Music represents a significant advancement in AI-driven music generation, emphasizing legal compliance and ethical considerations. Through strategic partnerships and a commitment to protecting intellectual property rights, ElevenLabs is positioning itself as a responsible leader in the evolving landscape of AI-generated content.
GPT-5's Router: how it works and why Frontier Labs are now targeting the Pareto Frontier
Latent • August 7, 2025
Uncategorized•AI
GPT-5 introduces a sophisticated "router" architecture designed to enhance efficiency and performance. This router functions as a supervisory policy network, dynamically directing sub-requests to specialized expert modules tailored for specific tasks, such as code generation, vision processing, and mathematical computations. Additionally, it facilitates native tool-calling, enabling the model to execute external code or access databases autonomously, without explicit user instructions. (thelegaljournalontechnology.com)
The integration of this router architecture reduces latency roughly fourfold and halves energy consumption per token. This efficiency comes from the model's ability to allocate resources intelligently, ensuring that each sub-task is handled by the most appropriate specialized module. (thelegaljournalontechnology.com)
Furthermore, GPT-5's router architecture unifies multimodal branches—text, image, audio, and short video—into a single latent space. This consolidation allows the model to process and generate diverse forms of data seamlessly, fulfilling OpenAI's objective of "unified cognition." (thelegaljournalontechnology.com)
In summary, GPT-5's router architecture represents a significant advancement in AI model design, offering enhanced efficiency, reduced latency, and a unified approach to multimodal data processing.
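OpenAI has not published the router's internals, so the following is a purely illustrative toy of the dispatch structure described above, with every name hypothetical and a keyword heuristic standing in for the learned policy network:

```python
# Toy router: score a request against specialized handlers, dispatch to the
# best one, fall back to a general model. Not OpenAI's implementation.
from typing import Callable

HANDLERS: dict[str, Callable[[str], str]] = {
    "code": lambda q: f"[code module] {q}",
    "math": lambda q: f"[math module] {q}",
    "general": lambda q: f"[general model] {q}",
}

def route(query: str) -> str:
    q = query.lower()
    scores = {
        "code": sum(w in q for w in ("function", "bug", "refactor", "compile")),
        "math": sum(w in q for w in ("integral", "prove", "equation", "theorem")),
    }
    best, score = max(scores.items(), key=lambda kv: kv[1])
    return HANDLERS[best if score > 0 else "general"](query)

print(route("Refactor this function to remove the bug"))  # -> code module
print(route("What is the capital of France?"))            # -> general model
```

A production router replaces the keyword scores with a trained policy and adds tool calls and self-verification, but the economic point survives even in the toy: cheap requests never touch the expensive path.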
GPT-5: It Just Does Stuff
Oneusefulthing • August 7, 2025
Technology•AI•MachineLearning•Automation•Innovation
GPT-5 represents a significant advancement in artificial intelligence, demonstrating capabilities that extend beyond previous models. Its ability to autonomously select appropriate models and adjust processing time based on task complexity enhances user experience by streamlining interactions. This feature allows users to engage with AI more intuitively, without needing to understand the underlying model selection process.
The model's proactive nature is evident in its tendency to suggest additional tasks or improvements beyond the initial prompt. For instance, when tasked with generating startup ideas, GPT-5 not only provided the requested ideas but also created related materials such as landing pages, LinkedIn copy, and financial documents. This initiative reduces the cognitive load on users, enabling them to focus on higher-level decision-making.
In creative applications, GPT-5's capacity to generate complex outputs from minimal input is particularly impressive. A user prompted the model to create a procedural brutalist building generator with drag-and-edit functionality. Within minutes, GPT-5 produced a fully functional 3D city builder application, complete with features like neon lights, dynamic camera angles, and a save system. This level of autonomy in software development signifies a transformative shift in how AI can assist in creative and technical endeavors.
Despite these advancements, GPT-5's decision-making process regarding task complexity remains somewhat opaque. Users may find it challenging to predict when the model will opt for a more time-intensive reasoning approach, which can lead to variability in response times and output quality. While premium subscribers have the option to select more powerful models directly, this feature is not universally accessible, potentially limiting the consistency of user experience.
Overall, GPT-5's ability to "just do stuff" signifies a substantial leap in AI capabilities, offering users a more seamless and efficient interaction with technology. Its proactive and autonomous features have the potential to revolutionize various fields, from business strategy to creative design, by reducing the need for manual input and enhancing productivity.
OpenAI: ‘Introducing gpt-oss’
Openai • John Gruber • August 6, 2025
Technology•AI•LanguageModels•OpenSource•MachineLearning
We’re releasing gpt-oss-120b and gpt-oss-20b — two state-of-the-art open-weight language models that deliver strong real-world performance at low cost. Available under the flexible Apache 2.0 license, these models outperform similarly sized open models on reasoning tasks, demonstrate strong tool use capabilities, and are optimized for efficient deployment on consumer hardware. They were trained using a mix of reinforcement learning and techniques informed by OpenAI’s most advanced internal models, including o3 and other frontier systems.
The gpt-oss-120b model achieves near-parity with OpenAI o4-mini on core reasoning benchmarks, while running efficiently on a single 80 GB GPU. The gpt-oss-20b model delivers similar results to OpenAI o3‑mini on common benchmarks and can run on edge devices with just 16 GB of memory, making it ideal for on-device use cases, local inference, or rapid iteration without costly infrastructure. Both models also perform strongly on tool use, few-shot function calling, CoT reasoning (as seen in results on the Tau-Bench agentic evaluation suite) and HealthBench (even outperforming proprietary models like OpenAI o1 and GPT‑4o).
Simon Willison highlights that the long-promised OpenAI open-weight models are "very impressive." He notes that o4-mini and o3-mini are really good proprietary models, and that he did not expect the open-weight releases to be anywhere near that class, especially given their small sizes. He points out that gpt-oss-20b should run quite comfortably on a Mac laptop with 32GB of RAM.
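Willison's laptop point is easy to test. A hedged sketch, assuming the model is served locally by Ollama ("ollama run gpt-oss:20b"), which exposes an OpenAI-compatible endpoint on port 11434:

```python
# A minimal sketch: call a locally served gpt-oss-20b through the OpenAI
# Python client. Assumes Ollama is running with the "gpt-oss:20b" tag.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # key unused locally

resp = client.chat.completions.create(
    model="gpt-oss:20b",  # assumption: Ollama's tag for the 20B weights
    messages=[{"role": "user", "content": "Give three ideas for on-device AI apps."}],
)
print(resp.choices[0].message.content)
```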
OpenAI in talks for share sale valuing ChatGPT maker at $500bn
Ft • August 5, 2025
Business•Technology•ArtificialIntelligence•OpenAI•ShareSale•AI
OpenAI is reportedly in early discussions to conduct a secondary share sale that could value the company at approximately $500 billion. This move aims to provide liquidity for current and former employees, allowing them to sell shares and realize gains from the company's rapid growth. The proposed valuation marks a significant increase from OpenAI's previous valuation of $300 billion, achieved during a $40 billion funding round led by SoftBank Group Corp. (business-standard.com)
The secondary sale is intended to reward staff and bolster retention amid a competitive market for AI talent. Companies like Meta Platforms have been actively recruiting OpenAI employees, offering substantial compensation packages. By facilitating this share sale, OpenAI aims to retain its top talent and maintain its position in the rapidly evolving AI industry. (business-standard.com)
In addition to the secondary sale, OpenAI has secured significant funding in recent months. The company finalized an $8.3 billion tranche of its $40 billion financing round, with investors such as Dragoneer, Altimeter Capital, and D1 Capital Partners contributing to the funding. This capital infusion underscores the strong investor confidence in OpenAI's future prospects and its leadership in the AI sector. (theinformation.com)
OpenAI's rapid expansion is also evident in its user base. The company announced that ChatGPT now has 700 million weekly active users, up from 500 million in March, and processes more than three billion messages per day. This growth highlights the widespread adoption and reliance on OpenAI's AI technologies across various industries. (business-standard.com)
As OpenAI continues to innovate and expand its offerings, including the upcoming release of GPT-5, the company's valuation and strategic initiatives reflect its pivotal role in shaping the future of artificial intelligence.
OpenAI could soon be worth a half-trilly
Cautiousoptimism • Alex Wilhelm • August 6, 2025
Technology•AI•Investment•Startup•Valuation
OpenAI is reportedly in early discussions to conduct a secondary share sale that could value the company at approximately $500 billion. This move would allow current and former employees to sell their shares, providing them with an opportunity to realize gains from the company's rapid growth. Venture capital firm Thrive Capital is among the investors exploring potential purchases. (business-standard.com)
This potential valuation marks a significant increase from OpenAI's previous $300 billion valuation, achieved in a $40 billion funding round led by SoftBank Group. The proposed transaction aims to reward staff and bolster retention amid a competitive market for AI talent. Notably, Meta Platforms has reportedly hired at least eight OpenAI employees this year for its 'Superintelligence' division, offering substantial compensation packages. (business-standard.com)
In recent developments, OpenAI secured an additional $8.3 billion in funding from a group of private equity and venture capital investors, bringing its valuation to $300 billion. This funding includes significant contributions from Blackstone, TPG, Fidelity, T. Rowe Price, and a $2.8 billion investment from Dragoneer Investment Group. The company is also in negotiations with Microsoft to revise their partnership terms, potentially allowing Microsoft to acquire up to a third of OpenAI's equity. (ft.com, reuters.com)
OpenAI's revenue has surged to $12 billion annually, and the company plans to release GPT-5 soon. Additionally, OpenAI has introduced two open-source models capable of human-like reasoning, further solidifying its position in the AI sector. (business-standard.com)
Anthropic Releases Claude Opus 4.1
Anthropic • John Gruber • August 5, 2025
Technology•AI•Software Development•Product Management•AI Model Updates
GitHub notes that Claude Opus 4.1 improves across most capabilities relative to Opus 4, with particularly notable performance gains in multi-file code refactoring. Rakuten Group finds that Opus 4.1 excels at pinpointing exact corrections within large codebases without making unnecessary adjustments or introducing bugs, with their team preferring this precision for everyday debugging tasks. Windsurf reports Opus 4.1 delivers a one standard deviation improvement over Opus 4 on their junior developer benchmark, showing roughly the same performance leap as the jump from Sonnet 3.7 to Sonnet 4.
Nothing spectacular here, but incremental improvements add up. Mike Krieger—best known as a co-founder of Instagram, now chief product officer at Anthropic—in an interview with Bloomberg:
“In the past, we were too focused on only shipping the really big upgrades,” said Anthropic Chief Product Officer Mike Krieger. “It’s better at coding, better at reasoning, better at agentic tasks. We’re just making it better for people.” [...]
“One thing I’ve learned, especially in AI as it’s moving quickly, is that we can focus on what we have—and what other folks are going to do is ultimately up to them,” Krieger said when asked about OpenAI’s upcoming release. “We’ll see what ends up happening on the OpenAI side, but for us, we really just focused on what can we deliver for the customers we have.”
I’m on board with the idea that Apple need not acquire any of these AI startups, but if they do, Anthropic—not Perplexity—seems the one most aligned with Apple’s values. And I don’t mean values in just an ethical sense, but their entire approach to product development in general.
OpenAI launches GPT-5 model
Youtube • CNBC Television • August 7, 2025
Technology•AI•Machine Learning•Natural Language Processing•Innovation
OpenAI has officially launched its much-anticipated GPT-5 model, marking a significant milestone in the evolution of artificial intelligence. The new model boasts substantial improvements in understanding and generating human-like text, promising to enhance applications across various industries from customer service to creative content creation.
GPT-5 has been developed to address key limitations seen in previous versions, including better contextual comprehension, reduced biases, and more accurate responses. The model leverages an advanced architecture that enables deeper understanding of nuanced queries and more coherent, contextually relevant outputs.
OpenAI emphasizes that GPT-5 is designed to be safer and more aligned with user intentions. Enhanced safety features include mechanisms to prevent misuse and reduce the generation of harmful or misleading content. This upgrade reflects OpenAI's commitment to responsible AI deployment, recognizing the increasing influence such models have on society.
The launch of GPT-5 also underscores OpenAI's continued focus on collaboration with external developers and researchers. The model is being made available via API, allowing seamless integration into existing platforms and fostering innovation across sectors including education, healthcare, and business intelligence.
With GPT-5, OpenAI aims to push the boundaries of natural language processing, enabling machines to better assist with complex problem-solving, brainstorming, and decision-making tasks. This advancement is expected to accelerate AI adoption and open new pathways for automated workflows and interactive experiences.
Box CEO on OpenAI's GPT-5 launch, AI use in the workplace and the future of the tech
Youtube • CNBC Television • August 7, 2025
Technology•AI•WorkplaceAutomation•GPT
The CEO of Box, a cloud content management and file sharing service, discusses the launch of OpenAI’s GPT-5 and the growing role of artificial intelligence in the workplace. The conversation focuses on how AI technologies like GPT-5 are influencing business operations, improving productivity, and transforming how companies manage and use information.
He emphasizes the importance of integrating advanced AI tools responsibly to enhance employees' capabilities rather than replace them. AI is seen as a partner that can handle routine tasks, analyze vast amounts of data, and provide insights that help workers make better decisions. This synergy between AI and humans aims to create a more efficient and innovative work environment.
The CEO also highlights Box’s own initiatives in embedding AI within their platform, aiming to leverage these technologies to automate workflows, improve content searchability, and maintain security and compliance. According to him, the future of tech revolves around combining AI with cloud infrastructure to deliver smarter, safer, and more user-friendly tools for businesses of all sizes.
Looking ahead, he envisions continued rapid advancements in AI capabilities, with upcoming models like GPT-5 expanding the possibilities for automation and creativity in the workplace. These technologies hold the promise to reshape industries by unlocking new levels of productivity and enabling workers to focus on higher-value tasks.
The Cloud Wars Update: Who’s Winning the AI-Driven Growth Battle
Tanayj • Tanay Jaipuria • August 4, 2025
Technology•Cloud Computing•Artificial Intelligence•AWS•Azure•Cloud•AI
The cloud computing sector is witnessing intensified competition among the three major hyperscalers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. Recent earnings reports highlight how artificial intelligence (AI) is reshaping growth trajectories and market share dynamics.
AWS remains the largest cloud provider, generating approximately $30.9 billion in revenue last quarter, an annualized run rate of about $124 billion, with 17.5% year-over-year growth. Microsoft Azure is rapidly closing the gap, with estimated quarterly revenue of $21 billion, an annualized run rate near $84 billion, and 39% year-over-year growth, the fastest of the three. Google Cloud reported $13.6 billion in revenue, an annualized run rate of about $54.4 billion, on 32% year-over-year growth. Note that Google's figure includes Workspace (formerly G Suite), which may inflate the headline relative to AWS's and Azure's pure cloud businesses.
Focusing on the net new run rate added over the past twelve months, the three providers collectively added approximately $55 billion in incremental annualized run rate. Azure captured the largest share, adding about $23.6 billion, representing roughly 43% of the incremental spend. AWS followed with around $18.4 billion, equating to 34% of the net new pool, indicating a smaller share of growth despite its substantial base. Google Cloud secured about $13 billion, accounting for 24% of the new dollars, demonstrating its growing presence in the market.
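Those shares follow directly from the dollar figures; a quick worked check:

```python
# Net-new annualized run rate added over the trailing twelve months ($B),
# per the figures above; shares are each provider's slice of the total.
net_new = {"Azure": 23.6, "AWS": 18.4, "Google Cloud": 13.0}
total = sum(net_new.values())  # 55.0
for provider, dollars in net_new.items():
    print(f"{provider}: ${dollars}B -> {dollars / total:.0%} of new spend")
# Azure: 43%, AWS: 33% (the article rounds up to 34%), Google Cloud: 24%
```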
Management insights reveal that AI demand is a significant driver of cloud growth, with each provider facing capacity constraints. AWS acknowledges that demand currently exceeds capacity, citing power and chip constraints. Microsoft emphasizes its leadership in AI infrastructure, noting continuous market share gains, while also facing capacity constraints despite expanding data center capacity. Google highlights strong customer demand driven by product differentiation and a comprehensive AI product portfolio, with significant growth in large deal sizes and new customer acquisitions.
In summary, Azure is leading in capturing new market share, while AWS continues to maintain the highest overall revenue. Google Cloud is rapidly expanding its footprint, indicating a dynamic and evolving cloud landscape influenced heavily by AI advancements.
AI and Publishers
Cloudflare: ‘Perplexity Is Using Stealth, Undeclared Crawlers to Evade Website No-Crawl Directives’
Cloudflare • John Gruber • August 5, 2025
Technology•AI•Web•Ethics•Transparency•AI and Publishers
Cloudflare has publicly accused Perplexity, an AI-powered answer engine, of engaging in stealth crawling behaviors that deliberately evade website no-crawl directives. According to Cloudflare’s observations, Perplexity initially identifies itself through declared user agents but upon encountering network blocks, it switches to undisclosed user agents, including generic browsers impersonating Google Chrome on macOS, to continue crawling undetected. This behavior includes repeated modifications of user agents and changes in Autonomous System Numbers (ASNs), as well as ignoring or failing to even fetch websites' robots.txt files, which are standard mechanisms for sites to communicate crawler access preferences.
Cloudflare emphasizes that the internet functions on trust, with web crawlers expected to be transparent and respectful of website owner directives such as robots.txt. Because Perplexity’s actions violate these norms, Cloudflare has removed Perplexity’s status as a verified bot and introduced heuristics in their web application firewall (WAF) to block this stealth crawling behavior. Their testing was conducted on domains explicitly forbidding automated access and applying WAF rules blocking Perplexity’s known crawlers. Despite this, Perplexity allegedly resorted to stealth tactics to bypass the blocks, shifting from declared user agents to stealth user agents to maintain scraping.
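For context on the mechanism in dispute: robots.txt is a voluntary protocol, and honoring it takes a few lines. A sketch using only the Python standard library, with example.com as a placeholder domain:

```python
# A well-behaved crawler consults robots.txt before fetching a page.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's crawl directives

for agent in ("PerplexityBot", "Mozilla/5.0"):
    ok = rp.can_fetch(agent, "https://example.com/article")
    print(f"{agent}: {'fetch' if ok else 'skip, disallowed'}")
# The stealth behavior Cloudflare alleges is precisely skipping this check,
# or rotating the user agent string until one is no longer disallowed.
```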
Perplexity responded to Cloudflare’s report with strong rebuttals, accusing Cloudflare of incompetence and seeking publicity at their expense. They argue that Cloudflare’s analysis is flawed, stating that Cloudflare might have misattributed millions of daily automated requests belonging to a third-party service, BrowserBase, to Perplexity. Perplexity labels Cloudflare’s technical explanations as embarrassing and disqualifying, asserting that Cloudflare fundamentally misunderstands how modern AI assistants operate and the scale of web traffic they generate.
However, Cloudflare contrasts Perplexity’s approach with that of OpenAI’s ChatGPT, which, according to their tests, respects robots.txt directives by stopping crawling entirely when disallowed and halting further attempts after encountering blocks. This behavior from ChatGPT is presented as a model of crawler transparency and respect for website owner preferences, which Perplexity allegedly fails to match. Cloudflare highlights that Perplexity’s failure to explain their use of false generic user agents when blocked undermines the company’s credibility and raises questions about the ethical implications of such stealth crawling.
The conflict reflects broader tensions around AI and web data usage, as AI systems depend heavily on web data but must balance aggressive data gathering with respect for site owners' policies. Cloudflare’s stance is rooted in maintaining trust and transparency on the web, suggesting that stealth crawling not only violates ethical norms but also risks damaging relationships between AI service providers and the web ecosystem. On the other hand, Perplexity’s defensive reaction frames the issue as a misunderstanding or mischaracterization of their operations, underscoring potential conflicts between business interests, technical implementation, and web standards.
In summary, Cloudflare’s detailed critique of Perplexity points to a significant challenge in regulating AI web crawlers: how to ensure compliance with site directives while supporting emerging AI technologies. The case underscores the need for clearer norms and potentially new frameworks that address stealth crawling and respect for digital content ownership in the AI era.
Some people are defending Perplexity after Cloudflare ‘named and shamed’ it
Techcrunch • August 5, 2025
Technology•AI•Web Scraping•Ethics•Content Access•AI and Publishers
Cloudflare has accused AI search engine Perplexity of "stealthily scraping websites" by circumventing site-specific blocking methods. In a recent blog post, Cloudflare detailed how Perplexity's crawlers, initially identifying as "PerplexityBot," altered their user agent strings to mimic legitimate browsers like Google Chrome on macOS when faced with network blocks. This behavior was observed across tens of thousands of domains and millions of requests daily. (blog.cloudflare.com)
In response, Perplexity denied the allegations, suggesting that Cloudflare's claims were a "sales pitch" and that the identified bots were not theirs. They argued that their platform uses "user-driven agents" that fetch content only when a person asks a question requiring real-time information, and that the fetched data is not stored or used for training AI models. (indiatoday.in)
The controversy has sparked debate within the tech community. Some defend Perplexity, contending that accessing public websites on behalf of users should not be equated with bot activity. They draw parallels to human users accessing content through browsers, questioning why AI assistants should be treated differently. (techcrunch.com)
This incident highlights the evolving challenges in AI and web content access. As AI agents become more prevalent, the lines between legitimate user requests and automated data scraping are increasingly blurred, raising questions about ethical standards and the future of web content accessibility.
Perplexity accused of scraping websites that explicitly blocked AI scraping
Techcrunch • August 4, 2025
Technology•AI•Web Scraping•Data Privacy•Content Protection•AI•AI and Publishers
Perplexity, an AI-powered answer engine, has been accused by Cloudflare of circumventing website restrictions to scrape content from sites that have explicitly blocked such activities. Cloudflare's research indicates that Perplexity employs stealth techniques, including altering its user agent and changing its autonomous system networks (ASNs), to evade detection and bypass robots.txt directives. This behavior has been observed across tens of thousands of domains and millions of requests per day. (blog.cloudflare.com)
In response to these findings, Cloudflare has de-listed Perplexity's bots from its verified list and implemented new measures to block their crawling activities. The company emphasizes the importance of respecting website preferences and maintaining trust in internet interactions. (blog.cloudflare.com)
Perplexity's spokesperson, Jesse Dwyer, dismissed Cloudflare's claims as a "sales pitch," asserting that the screenshots provided show no content was accessed. He further stated that the bot identified in Cloudflare's report is not associated with Perplexity. (techcrunch.com)
This incident highlights the ongoing tension between AI companies seeking vast amounts of data for training models and website owners aiming to protect their content. Cloudflare's actions reflect a broader industry effort to balance the needs of AI development with the rights of content creators. (wired.com)
AI and Jobs
As AI Comes for Consulting, McKinsey Faces an “Existential” Shift
Wsj • Conor Grant • August 8, 2025
Business•Management•Artificial Intelligence•Consulting•Innovation•AI and Jobs
McKinsey & Company is undergoing a significant transformation as it integrates artificial intelligence (AI) into its consulting operations. The firm has deployed approximately 12,000 AI agents to assist consultants in tasks such as building PowerPoint presentations, taking notes, and summarizing research documents. This shift aims to enhance efficiency and reduce reliance on junior staff for routine tasks. (livemint.com)
The integration of AI has led to a reduction in McKinsey's workforce, with the firm trimming its headcount from about 45,000 to 40,000 employees. This downsizing reflects the firm's efforts to adapt to the evolving consulting landscape and improve profitability. (ft.com)
AI now contributes approximately 40% of McKinsey's revenue, signaling a strategic shift towards technology-driven consulting services. The firm is focusing on outcomes-based arrangements, where payment depends on achieving specific results rather than billable hours. This approach aligns with the growing demand for consultants who can implement and manage change, not just provide strategic advice. (livemint.com)
Despite these advancements, some industry leaders express skepticism about AI fully replacing consultants. Elon Musk, CEO of xAI, argues that AI will not render consultants obsolete, as they provide CEOs with validation and accountability. (webpronews.com)
In response to geopolitical tensions, McKinsey has halted its China-based operations from engaging in consultancy work related to generative AI. This decision reflects the firm's cautious approach to AI deployment in sensitive regions. (ft.com)
Overall, McKinsey's embrace of AI represents a significant shift in the consulting industry, highlighting the need for firms to adapt to technological advancements to remain competitive.
Opinion | AI Is Here, and a Quiet Havoc Has Begun
Wsj • Peggy Noonan • August 7, 2025
Technology•AI•JobMarket•Automation•Reskilling•AI and Jobs
Artificial intelligence (AI) is rapidly transforming various sectors, leading to significant shifts in the job market. While many anticipate that AI will eventually replace numerous jobs, the timeline for this change remains uncertain. A 2017 study surveyed AI experts, revealing that they predict AI will outperform humans in tasks like language translation by 2024 and writing high-school essays by 2026. However, the complete automation of all human jobs is not expected for another 120 years. (arxiv.org)
Despite these projections, recent developments suggest that AI's impact on employment is already underway. Companies are increasingly integrating AI into their operations, leading to the displacement of certain roles. For instance, AI is being utilized to analyze competitors' earnings reports and generate quarterly earnings calls, tasks traditionally performed by human employees. (podcastworld.io)
The sectors most affected by AI are those involving routine and repetitive tasks. Entry-level positions in customer service, data entry, and basic analysis are particularly susceptible. This trend is evident in the surge of unemployment among new college graduates, as AI technologies increasingly replace these roles. (linkedin.com)
However, AI is also creating new job opportunities. Roles such as AI trainers, human-AI interaction specialists, and forward-deployed engineers are emerging, requiring different skill sets than traditional entry-level positions. This shift underscores the importance of reskilling and continuous learning to adapt to the evolving job landscape. (linkedin.com)
In summary, while AI is poised to revolutionize the workforce, its full impact on job displacement and creation will unfold over time. Staying informed and adaptable will be crucial for workers navigating this technological transformation.
Substack
★ The Substack Branding and Faux Prestige Trap
Daringfireball • John Gruber • August 2, 2025
Technology•Web•Substack•Branding•IndependentPublishing
Among the handful of oft-discussed problems with Substack:
Their 90/10 subscription revenue split isn’t usurious, but it’s high compared to competing platforms — especially once you reach an even mid-tier level of popularity.
The only way this independent publishing game works, by any credible definition of “the long run”, is to build your own audience. Substack has indies convinced that they build an audience for you through some sort of secret-sauce network effects, but I’ve seen no evidence that’s true. Writers with established reputations and readerships joining Substack are helping build Substack’s brand, not their own.
The whole, you know, Nazi thing.
Less commented upon but just as bad is the branding trap. Substack is a damn good name. It looks good, it sounds good. It’s short and crisp and unique. But now they’ve gotten people to call publications on Substack not “blogs” or “newsletters” but “substacks”. Don’t call them that. And as I griped back in December, even the way almost all Substack publications look is deliberately, if subtly, Substack-branded, not per-publication or per-writer branded.
Consider Paul Krugman. Krugman was an op-ed columnist for The New York Times from 2000–2024. But last year the Times wanted to cut him back from writing two columns a week plus his Times-hosted blog/newsletter to writing just one column per week and killing the blog. In an interview with Columbia Journalism Review early this year, Krugman also revealed that after over two decades, he’d started butting heads with his editors over style, tone, and even subject matter:
“I’ve always been very, very lightly edited on the column,” he said. “And that stopped being the case. The editing became extremely intrusive. It was very much toning down of my voice, toning down of the feel, and a lot of pressure for what I considered false equivalence.” And, increasingly, attempts “to dictate the subject.”
So Krugman rightfully and righteously told the Times, politely, to fuck off and struck out on his own. Reading Krugman this year, on his own site, has been like rediscovering cane sugar Coca-Cola after drinking the cheaper-to-produce corn syrup variant for a few years. This is The Real Thing — the unadulterated tart-tongued and sharp-elbowed Krugman I remember devouring during both the Bush and Obama administrations. The only hitch: Krugman hung out his independent shingle at Substack — which makes it a shingle under a shingle.
My suspicion is that for a certain class of writers and media commentators who, heretofore, have spent their careers at big-name publications — newspapers and magazines dating back to the print era, TV networks from the cable-is-king era — they actually find comfort writing under the auspices of Substack. See also: Terry Moran, who bounced to Substack after ABC News declined to renew his contract — despite 28 years at the network, including this recent classic — because of a tweet decrying Donald Trump and White House ghoul Stephen Miller. I suspect Moran, and perhaps even Krugman, perceive Substack as conveying a sort of badge of legitimacy. Self-published books, for example, used to be the refuge for kooks and no-talent hacks. I think some who spent their careers working at prestige outlets — especially those like Krugman and Moran, who are a bit older (than me) — feel a bit naked without one. But there’s no real prestige at Substack and never will be. I, for one, am fine with Substack’s liberal philosophy of letting anyone write there, but that means, well, anyone can write there.
….
★ Substack Raised Another $100 Million, Which, I Bet, Is Already Being Flushed Down the Same Toilet as Their First $100 Million
Daringfireball • John Gruber • August 3, 2025
Business•Startups•Strategy•Funding•Subscription Models•Substack
One last post on my recent Substack kick. Yesterday I linked to Ana Marie Cox’s scathing analysis of Substack’s financials. She published that on June 23, and wrote about Substack’s $100 million funding round, with a $1 billion valuation, in the future tense. Substack indeed closed that round in mid July, raising $100 million, with a valuation of $1.1 billion. After which their triumvirate of cofounders sat for an apparently brief “interview” with Benjamin Mullin and Jessica Testa of The New York Times:
Substack’s business model is simple: Users subscribe to follow creators on the platform, and the company takes a 10 percent cut of the revenue when those creators charge for a newsletter subscription or access to a podcast. That approach initially made Substack a writer’s haven, resulting in more than five million paid subscriptions and a stable of publishers, including the short story master George Saunders, the historian Heather Cox Richardson and an exodus of journalists from traditional newsrooms.
But the latest investors are betting on an emerging product that could amplify its business. Substack’s app, introduced in 2022, allows users to chat with their favorite creators, watch live video conversations and write and share posts on their own feeds through Notes, a feature similar to X or Bluesky.
If their business model were actually as simple as described, they’d already be profitable and wouldn’t have needed to raise another $100 million. They’ve already got a lot of subscribers. They’ve already got a stable of high-profile writers. They already keep 10 percent of what subscribers pay. And pointing to Twitter/X as the future model doesn’t exactly say “Well that’s the path to enormous profits.”
The sharp increase in Substack’s valuation — nearly 70 percent higher than its 2021 valuation of $650 million — is a validation of that strategy from Substack’s investors.
Or, this could be like when a guy who just lost every dollar in his pocket playing blackjack withdraws a few more grand from the ATM in the casino. That doesn’t “validate the strategy”. What would validate Substack’s strategy is showing proof of actual profits and profitable growth. And if they had actual profits and profitable growth they wouldn’t have needed to raise another $100 million.
“The network is growing,” Mr. McKenzie said. “We’re in this new phase where people can come to Substack and not just publish, but also find new audiences and find new opportunity.” The company today is more interested in taking on YouTube than MailChimp.
This, to me, is the nut graf of the whole NYT piece. Substack since its debut has presented itself as a platform for writers to build publications for readers. That’s how everyone I know who would endorse Substack would describe it. Hamish McKenzie, the cofounder quoted here, proudly claims the job title “Chief Writing Officer”. Does a company “more interested in taking on YouTube than MailChimp” sound like a company focused on writers as talent and readers as users to you? (And it’s a little thing, but Mailchimp doesn’t style their name camel-case. You’ll be unsurprised to be reminded that The New York Times dismantled its previously crackerjack copy desk in 2017.)
….
The Why of Substack
Om • August 3, 2025
Technology•Media•Digital Platforms•Content Creation•Substack
Substack, the newsletter platform, has recently faced criticism for hosting and inadvertently promoting Nazi content, with John Gruber of Daring Fireball leading the charge. This controversy is not new; Substack has encountered similar issues before, leading to public outrage and some users leaving the platform. Despite these challenges, Substack continues to grow. This raises the question: why does Substack persist and expand?
To understand Substack's resilience, it's essential to examine several key aspects:
What is Substack really selling?
Why do they have 5 million people paying for subscriptions on their platform?
Why are they growing?
What have they built?
Why did they receive $100 million in new funding at a valuation of $1 billion?
In 2011, Eric Schmidt, former CEO of Google, predicted that advancements in fiber optics and wireless technology would disrupt traditional media companies. He stated that over five to ten years, these technologies would "completely crush the business models of old media companies and industries." This insight highlights the evolving landscape of media consumption and the challenges faced by traditional outlets.
Substack has effectively capitalized on this shift by aggregating "reading attention" on its platform. Similar to how Tumblr created a network effect through interconnected blogs, Substack has built a network that drives significant user engagement. By attracting independent writers and their audiences, Substack has fostered a community where readers willingly share their email addresses and payment details, inadvertently promoting the platform's growth.
Critics often overlook Substack's network effects. Despite the influx of unsolicited newsletter invitations, this strategy, known as growth hacking, has proven effective. Substack's network now drives 50% of all new subscriptions and 25% of new paid subscriptions on the platform.
The growth trajectory is impressive: Substack's paid subscriptions grew from 11,000 in July 2018 to over 500,000, hit 1 million by November 2021, 2 million by February 2023, and 5 million by March 2025, a nearly 500-fold increase in less than seven years…
Ghost 6.0
Ghost • John Gruber • August 5, 2025
Technology•Software•Open Source•Independent Publishing•Business Model•Substack
When we announced Ghost 5.0 a few years ago, we were proud to share that Ghost’s revenue had hit $4M — while publisher earnings had surpassed $10M. It felt great to have such a clear sign that our goal to create a sustainable business model for independent creators was succeeding.
Today, Ghost’s annual revenue is over $8.5M while total publisher earnings on Ghost have now surpassed $100M.
Unlike our venture-backed peers obsessed with growth at all costs, we’re structured as a non-profit foundation that serves publishers directly with open source software. We believe independent media cannot be beholden to proprietary tech companies, so Ghost publishers don’t just “own their email list” — they own the entire software stack that underpins their business, end to end.
Not a centralized platform controlled by a single corporation, but open infrastructure that’s shared by everyone.
Aside from my feelings about Substack — clearly the main target of Ghost’s shade-throwing here — it’s just great to see so many indie publishers and writers thriving on Ghost.
Geopolitics
Marc Andreessen: The US is in an AI Arms Race & It Decides The World's Future
Youtube • a16z • August 5, 2025
Technology•AI•Innovation•National Security•Geopolitics
The video features Marc Andreessen discussing the critical role of artificial intelligence (AI) in shaping the future of the United States and the world. He frames the current global landscape as an "AI arms race," emphasizing how the development and deployment of AI technologies are now pivotal factors that will determine geopolitical and economic leadership for decades to come.
Andreessen highlights that AI is more than just a technological innovation; it represents a fundamental shift in capabilities across multiple sectors including defense, commerce, education, healthcare, and beyond. This AI arms race is not just about military superiority but also about economic dominance and societal transformation. He suggests that the country which leads in AI technology will likely influence the global order, set standards for responsible AI usage, and drive future innovation cycles.
Key points from his discussion include the urgent need for the US to double down on AI investments. Andreessen stresses that the US must mobilize government, academia, and private sector resources efficiently to maintain its edge. He warns against complacency because other nations, specifically China, are aggressively accelerating their AI research and integration programs, potentially outpacing the US if it does not respond robustly.
Andreessen shares insights into the nature of AI technology’s disruptive potential. He notes how recent advancements in AI models have accelerated innovation speed dramatically, leading to novel applications previously thought implausible. This transformation demands new regulatory frameworks that balance fostering innovation with addressing ethical and security concerns. He advocates for proactive policy making that encourages innovation while managing the risks AI poses, such as misuse or unintended consequences.
Further, he discusses the societal implications of AI, underscoring both the opportunities and risks. On the opportunity side, AI can revolutionize productivity, enhance scientific discovery, and democratize access to knowledge and services globally. On the risk side, without strategic leadership and international cooperation, there could be geopolitical instability, economic inequality, and major disruptions in labor markets.
Andreessen’s remarks also touch on the importance of education and workforce development to prepare society for the AI-driven future. He argues that equipping individuals with digital literacy and AI-related skills should be a national priority to ensure inclusive growth and prevent widening disparities in opportunity.
In conclusion, the video underscores a call to action for the United States to recognize AI’s transformative power and the strategic imperative of leadership in this technology race. Andreessen frames AI not just as a commercial opportunity but as a national security priority and a determinant of global influence. Successful navigation of this AI arms race will require coordinated efforts across sectors and proactive governance to harness AI’s benefits while mitigating its risks.
Key Takeaways:
The US is engaged in an AI arms race critical to global economic and geopolitical leadership.
AI’s impact spans military, economic, and societal domains, requiring urgent investment and innovation.
China is a significant competitor, accelerating AI research with strong government backing.
New policies are needed to balance innovation with ethical and security concerns.
Societal impacts include potential productivity gains and challenges like inequality and workforce disruption.
Education and skills development are essential to prepare the future workforce for an AI-driven world.
Leadership in AI technology will shape future global standards, national security, and innovation ecosystems.
Trump to Announce Additional $100 Billion Apple Investment in U.S.
Nytimes • August 6, 2025
Business•Manufacturing•Investment•Technology•Geopolitics
Apple has announced a significant expansion of its U.S. manufacturing investments, committing an additional $100 billion over the next four years. This brings the company's total U.S. investment to $600 billion, marking one of the largest corporate commitments in American history. (cbsnews.com)
The new investment is part of Apple's "American Manufacturing Program," which aims to enhance the U.S. supply chain and attract more global companies to manufacture critical components domestically. The initiative includes collaborations with key industrial partners such as Corning, Coherent, Applied Materials, Texas Instruments, Samsung, GlobalFoundries, Amkor, and Broadcom. Additionally, Apple plans to invest in rare earth magnets from MP Materials to bring critical component manufacturing back to the U.S. (ainvest.com)
As part of this expansion, Apple will build a new 250,000-square-foot manufacturing facility in Houston, Texas, set to open in 2026. This facility will produce servers that support Apple's AI services, marking a significant step in reshoring critical technology manufacturing to the U.S. (washingtonpost.com)
The investment is expected to create 20,000 new jobs across the country, focusing on research and development, silicon engineering, software development, and AI and machine learning. Apple's CEO, Tim Cook, expressed optimism about the future of American innovation, stating, "We are bullish on the future of American innovation, and we're proud to build on our longstanding U.S. investments with this $500 billion commitment to our country's future." (axios.com)
President Donald Trump praised Apple's decision, asserting that it reflects the company's confidence in his administration's policies. He highlighted the significance of the investment in bolstering America's dominance in artificial intelligence and advanced manufacturing. (whitehouse.gov)
This announcement follows Apple's previous commitment in February to invest $500 billion in the U.S. over the next four years, which included plans to build a production line for AI computer servers in Houston and a multibillion-dollar commitment to buy computer chips built at an Arizona factory run by Taiwanese chipmaker Taiwan Semiconductor Manufacturing Co. (washingtonpost.com)
Apple's expanded investment underscores the company's dedication to strengthening its manufacturing capabilities within the United States and supporting the growth of the domestic technology sector.
Defense Tech
Palantir reports $1 billion in revenue for the first time
Youtube • CNBC Television • August 4, 2025
Business•Strategy•DataAnalytics•Growth•Technology•Defense Tech
Palantir Technologies reported surpassing $1 billion in revenue for the first time, marking a significant milestone in the company's financial growth. This achievement reflects the increasing demand for Palantir's data analytics software, which is widely used by government agencies and enterprises for extracting actionable insights from large datasets.
The firm's expansion has been driven by robust sales in both public and commercial sectors. Palantir's platform helps organizations improve decision-making in areas such as defense, intelligence, healthcare, and finance. The company has been steadily growing its customer base and expanding the applications of its technology.
Key highlights include substantial contract renewals and new partnerships that underscore the company's strategic positioning in the data intelligence market. Palantir's CEO emphasized the focus on developing scalable, integrated solutions that address complex challenges faced by clients across various industries.
This $1 billion revenue milestone underscores Palantir's transition from a startup to a mature company with a global footprint and a diversified customer portfolio. It also signals strong investor confidence in Palantir's long-term growth prospects as demand for sophisticated data analytics continues to rise worldwide.
China Tech
China Proposes Global AI Governance Plan as U.S. Pursues Dominance
Medium • ODSC - Open Data Science • July 31, 2025
Technology•AI•Governance•InternationalRelations•Innovation•China Tech
China has put forward a proposal for a global artificial intelligence (AI) governance framework, aiming to establish international standards and lead the global conversation on AI regulation. This initiative was announced at the World AI Conference (WAIC) in Shanghai by Premier Li Qiang, who highlighted the fragmented and uneven nature of current AI regulatory efforts worldwide. He stressed the importance of global coordination and forming a consensus-based governance structure to address these disparities. Although Li did not explicitly mention the United States, his comments came amidst escalating tensions between the two powers, particularly driven by the U.S. unveiling its own AI strategy meant to enhance domestic leadership by reducing regulatory barriers.
A key concern outlined by Li was the concentration of critical AI resources and capabilities in a small number of countries and corporations, which could lead to monopolistic control that excludes developing nations from the benefits of AI progress. This statement underlines the geopolitical stakes surrounding AI technology development, especially given ongoing disputes over U.S. sanctions on AI chip exports to China. Recent trade talks in Stockholm produced limited progress, including a partial easing of U.S. restrictions on Nvidia chip sales and a pause on a Chinese antitrust probe into DuPont, signaling tentative moves toward easing trade frictions but underscoring the persistence of challenges.
China's AI ambitions are backed by significant investments and innovation. By April 2025, China had over 5,000 AI companies with the domestic industry valued at 600 billion yuan (approximately $84 billion). Public sector funding alone is expected to exceed 400 billion yuan ($56 billion) this year. Despite the U.S. outspending China more than tenfold in private AI investments during 2024, China leads globally in generative AI patent filings, publishing more patents annually since 2017 than all other countries combined, according to the World Intellectual Property Organization. Chinese startups like DeepSeek and Moonshot underscore the nation’s rapid technical progress, with DeepSeek’s R1 AI model and Moonshot’s Kimi K2 model receiving international acclaim for their performance relative to those developed by Meta, Anthropic, OpenAI, and Google. Investment analysts such as Morgan Stanley forecast a 52% return on investment by 2030 in China’s AI sector, suggesting the country could close the innovation gap with the West faster than anticipated.
Global leaders attending WAIC emphasized the need for collaborative international governance to effectively manage AI’s risks, including misinformation and cybersecurity threats. ASEAN Secretary-General Dr. Kao Kim Hourn pointed to the potential of AI to boost ASEAN’s GDP by 10-18%, contingent on robust governance frameworks. Former Google CEO Eric Schmidt warned of the global security risks from fragmented governance and underscored the necessity of cooperation between powers like the U.S. and China to maintain world stability and human control over AI technologies. AI pioneers like Geoffrey Hinton and French AI envoy Anne Bouverot also voiced support for unified oversight, framing such cooperation as critical to balancing rapid AI development with ethical and societal responsibilities.
In conclusion, China’s global AI governance proposal presents a contrasting vision to the U.S.’s more dominance-focused approach, highlighting a growing rift in the management of AI’s future. The outcome of this contest—whether collaborative frameworks can be established or whether rivalries deepen—will likely shape the trajectory of international AI innovation, security, and ethics over the coming decade.
Stablecoins
Circle's CEO on the Booming Business of Stablecoins
Omny • July 31, 2025
Finance•Cryptocurrency•Stablecoins•Regulation•DigitalCurrency
Stablecoins have emerged as a dynamic segment within the cryptocurrency landscape, capturing the attention of both traditional financial institutions and policymakers. The recent passage of the GENIUS Act, which establishes a regulatory framework for stablecoins, underscores this growing interest. To delve into the opportunities and operational models of stablecoin providers, we spoke with Jeremy Allaire, co-founder and CEO of Circle, the company behind USDC, the second-largest stablecoin in the market.
In our discussion, Allaire elaborated on Circle's business model, emphasizing the company's commitment to transparency and regulatory compliance. He highlighted that Circle's revenue primarily stems from interest earned on short-term securities, a model that has been both a strength and a challenge, especially during periods of market volatility. This approach has exposed Circle to significant interest rate risk and revenue fluctuations, as the company does not offer yield to USDC holders. (ft.com)
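The shape of that model, and the interest rate risk Allaire acknowledges, is simple float-times-yield arithmetic. A sketch with purely hypothetical numbers, not Circle's actual figures:

```python
# Illustrative only: a stablecoin issuer's reserve revenue scales with the
# float in circulation and the short-term rate earned on reserves.
float_usd = 60e9      # hypothetical USDC in circulation
tbill_yield = 0.05    # hypothetical short-term Treasury yield

reserve_revenue = float_usd * tbill_yield
print(f"${reserve_revenue / 1e9:.1f}B/yr")  # 3.0B/yr at these inputs
# Halve the yield and revenue halves too, since holders earn no yield and
# bear no rate exposure; that sensitivity is the risk described above.
```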
Addressing concerns about financial stability, Allaire discussed the measures Circle has implemented to safeguard its reserves. Following the collapse of Silicon Valley Bank in early 2023, Circle partnered with BlackRock to create a secure reserve fund for USDC. Currently, 90% of USDC reserves are held in an SEC-registered fund, with the remaining 10% in cash at globally significant banks. This strategy aims to provide a robust foundation for USDC, ensuring its stability and reliability in the market. (financefeeds.com)
Looking ahead, Allaire expressed optimism about the potential of stablecoins to revolutionize payments and commerce. He envisions a future where stablecoins serve as the primary medium for internet transactions, facilitating instant, low-cost, and borderless payments. This vision aligns with Circle's efforts to expand USDC's reach, particularly in regions like Europe and Japan, where regulatory clarity is advancing. Allaire noted that Circle is the first to issue a legal digital dollar under European stablecoin laws, positioning the company at the forefront of this financial evolution. (financefeeds.com)
In summary, Circle's strategic initiatives and Allaire's insights highlight the company's proactive approach to navigating the complexities of the stablecoin market. Through a focus on regulatory compliance, financial stability, and global expansion, Circle aims to leverage the transformative potential of stablecoins in the evolving digital economy.
Interview of the Week
A reminder for new readers. Each week, That Was The Week, includes a collection of selected essays on critical issues in tech, startups, and venture capital.
I choose the articles based on their interest to me. The selections often include viewpoints I can't entirely agree with. I include them if they make me think or add to my knowledge. Click on the headline, the contents section link, or the ‘Read More’ link at the bottom of each piece to go to the original.
I express my point of view in the editorial and the weekly video.