Hi from deep July. I just returned from two weeks in Tuscany and thought some of you might be missing That Was The Week due to our July Hiatus. So, just for you, here are the things that would have appeared in the past 3 weeks had we been publishing. Plus the bonus of a Notebook LM podcast.
Best
Keith
Contents
Venture Capital
AI
Vibe Coding is the Future. But “Roll Your Own?” That’s More Complicated.
How Anthropic Rocketed to $4B ARR — And Why Your B2B Playbook May Already Be Obsolete
xAI's Grok 4: The tension of frontier performance with a side of Elon favoritism
NotebookLM adds featured notebooks from The Economist, The Atlantic and others
AI filmmaker Kavan released a trailer of "Untold - The Immortal Blades Saga", an
🧠 David Friedberg explains the great hope of Artificial Superintelligence
When AI Agents Knock, Will Your Data Platform Answer? – Venrock’s investment in Collate.
OpenAI to take cut of ChatGPT shopping sales in hunt for revenues
Cognition Buys Windsurf, Nvidia Can Sell to China, Grok 4 and Kimi
Trump AI Czar David Sacks Defends Reversal of China Chip Curbs
OpenAI Just Released ChatGPT Agent, Its Most Powerful Agent Yet
It’s Time to Take Anthropic and OpenAI’s Wild Revenue Projections Seriously
Tokenization
Essays
GeoPolitics
Attention Economy
Legislation
Self Driving Cars
IPO
Automotive
Browser Wars
Startup of the Week
Education
Regulation
Venture Capital
SpaceX heads to $400bn valuation in share sale
Ft • July 8, 2025
Business•Aerospace•SpaceX•Starlink•Valuation•Venture Capital
SpaceX, the aerospace company founded by Elon Musk, is preparing for a $1 billion share sale that would value the company at $400 billion. This valuation marks a significant increase from the $210 billion valuation in mid-2024 and the $350 billion valuation in December 2024. The current share sale, priced at $212 per share, includes a tender offer for employee shares, allowing employees to sell their holdings to a select group of investors. SpaceX plans to purchase some shares as part of the transaction, similar to the $500 million worth of employee shares bought back in December. Despite political risks stemming from Musk's support of President Donald Trump, investor confidence remains high, reflecting the company's strong position in the aerospace industry.
Founded in 2002 with $100 million from Musk's PayPal proceeds, SpaceX aims to revolutionize space travel with reusable rockets and envisions enabling human colonization of Mars. The company's rapid growth is driven by its Starlink satellite internet service, which has deployed around 7,000 satellites and serves approximately 5 million subscribers across 114 countries. Starlink is projected to generate $6.6 billion in hardware and subscription revenue in 2024, contributing significantly to SpaceX's valuation. Additionally, SpaceX's Starship program continues to advance, with recent test flights demonstrating progress toward developing a next-generation reusable rocket system.
The upcoming share sale positions SpaceX among the top 20 companies in the S&P 500, surpassing major corporations like Bank of America and Procter & Gamble. This valuation underscores the company's dominant position in the space industry and its potential for future growth. While the company did not immediately respond to requests for comment, the share sale reflects ongoing investor confidence in SpaceX's vision and achievements.
VC Dollars Are Up, Yes. But VC Rounds? They Are At a 7+ Year Low
Saastr • July 7, 2025
Business•VentureCapital•FundingTrends•InvestmentStrategies•Venture Capital
Recent data from Carta reveals a significant decline in the number of venture capital (VC) funding rounds, reaching a seven-year low, despite an increase in total VC dollars raised. This trend indicates a shift in the venture capital landscape, with fewer but larger investments being made.
Between Q1 2018 and Q2 2025, early-stage VC rounds per day decreased from 9.1 to 7.4; measured against the 2021 peak, that amounts to a 53% decline. Growth-stage rounds fell by 53% and late-stage rounds by 33% over the same period. This reduction suggests a more selective investment approach, with VCs concentrating on fewer, larger deals.
The decline in the number of funding rounds has led to longer fundraising cycles, as startups face increased competition for available capital. Additionally, the funding funnel has narrowed, resulting in fewer companies entering the venture-backed ecosystem, which may impact growth and late-stage activity in the future.
Despite the decrease in the number of rounds, the total dollar volume has remained robust, driven by mega-rounds and record fund sizes. However, this concentration of capital means that while some startups secure substantial funding, many others find it more challenging to raise capital.
In summary, the venture capital market is experiencing a structural shift towards fewer, larger investments, making it more competitive for startups to secure funding. Entrepreneurs should be prepared for longer fundraising processes and consider alternative funding strategies to navigate this evolving landscape.
Revolut in talks to raise new funding at $65bn valuation
Ft • July 9, 2025
Business•Fintech•Revolut•Funding•Expansion•Venture Capital
Revolut, Europe's most valuable start-up, is in discussions to raise approximately $1 billion in a new funding round that would value the company at $65 billion. This fundraising effort, involving newly issued shares and the sale of some existing stock, is aimed at supporting the company's global expansion. US investment firm Greenoaks is expected to lead the round, while Mubadala, Abu Dhabi's sovereign wealth fund and a previous investor, is also in discussions to participate. The valuation is a blended figure, with a higher valuation for new capital and a lower one for existing share sales.
The fintech, which gained a UK banking licence in 2024 after a prolonged regulatory process, plans to bolster its presence in the US market, leveraging its user-friendly app. Despite regulatory challenges, including unresolved licensing for credit services in the UK, Revolut has posted strong financial results, with pre-tax profits doubling to £1 billion and revenues increasing to £3.1 billion in 2024, largely driven by cryptocurrency trading. Customer growth has reached 50 million globally, though the company continues to face difficulties attracting users to deposit their wages into Revolut accounts, affecting potential fee income. However, Revolut maintains that holding primary accounts is not central to its strategy.
A Crisis Moment for Seed VC
Nextview • July 16, 2025
Technology•Startup•SeedFunding•VentureCapital•AI•Venture Capital
The article "A Crisis Moment for Seed VC" explores a pivotal challenge facing seed-stage venture capital, driven by four interconnected forces reshaping the industry landscape. These forces threaten the traditional model and dynamics of seed investing, suggesting the need for adaptation and strategic reevaluation.
Industry Maturation
The seed venture capital space is undergoing significant maturation as the startup ecosystem evolves. Over time, early-stage investing has become more competitive and sophisticated, leading to a more crowded market. Returns have begun to normalize, making the outsized gains once common at this stage harder to achieve. Early-stage funds now face pressure to deliver performance comparable to later-stage and growth funds, despite inherently higher risk and uncertainty.
The Two Unstoppable Forces
The article identifies two major forces that seed VC must navigate: the increasing specialization and scale of technology companies, and the macroeconomic environment that influences capital allocation. The rise of dominant platforms and ecosystems concentrates value and power in fewer winners, pushing seed investors to be more discerning and precise in their bets. Simultaneously, broader economic conditions such as tightening capital markets and shifts in interest rates impact fund availability and valuations. These forces create an environment where only the most strategic and well-positioned investors will thrive.
Power Law as Consensus
A fundamental aspect highlighted is the centrality of the "power law" in venture outcomes—the notion that a small number of investments generate the vast majority of returns. This consensus shapes investor behavior and fund strategy. However, the increasing maturity of the seed market means that the power law dynamic is evolving, with amplified competition for the few potential breakout successes. Investors are challenged to identify and back these rare high-potential startups early in their journey, requiring deeper diligence and differentiated networks.
The AI Platform Shift
One of the most transformative trends disrupting seed VC is the rapid emergence of AI as a foundational platform. AI technologies are not only creating new categories of startups but also impacting how startups operate, enabling faster innovation cycles and broader market opportunities. This AI platform shift demands new expertise and investment theses, pushing seed investors to incorporate AI understanding into their evaluation and support processes. It also intensifies competition as scale AI companies dominate attention and capital distribution, influencing where seed capital flows.
Implications and Strategic Considerations
Together, these factors signify a critical juncture for seed VC, where industry participants must rethink their approaches. Traditional seed strategies might no longer suffice; emphasizing specialization, AI expertise, and adaptable fund structures become crucial. Moreover, investors should prepare for a potentially more consolidated landscape where fewer, larger winners emerge from seed rounds. Recognizing power law dynamics and the AI platform's influence may allow VCs to better position themselves for sustained success despite growing headwinds.
In conclusion, the article underscores an existential moment for seed venture capital driven by industry maturation, macroeconomic dynamics, evolving power law realities, and the AI revolution. Successful seed investors will be those who can anticipate these trends, innovate their investment models, and maintain agility in a rapidly shifting ecosystem.
A media future to believe in
Post • Chris Best • July 17, 2025
Technology•Software•CreatorsEconomy•Funding•MediaInnovation•Venture Capital
Today, we’re announcing $100 million in Series C funding, led by investors at BOND and The Chernin Group (TCG), with participation from Andreessen Horowitz, Rich Paul, CEO and founder of Klutch Sports Group, and Jens Grede, CEO and co-founder of SKIMS. BOND’s Mood Rowghani will join our board. We’re thrilled to partner with these investors, who bring a wealth of experience across tech, media, and culture, as we put this capital to work serving creators and their communities.
We’re living through a time of rapid technological change, one that’s reshaping how we communicate, create, and live. Every leap forward brings both promise and peril. The tools we hoped would uplift and enrich us have too often degraded or dehumanized us instead. Now, as powerful new technologies emerge daily, they arrive freighted with both hope and anxiety. The challenges ahead are real.
But this time of flux also holds tremendous opportunity. A growing number of people are navigating the chaos by choosing independence. Audiences are investing their attention and money in what they value, not just what addicts them. Creators are building livelihoods based on trust, quality, and creative freedom. They know the future belongs to those who build it.
At Substack, we believe the heroes of culture are the ones who shape it. Technology should serve them, not the other way around. That’s why we’re building tools and a network to protect their independence, amplify their voices, and foster deep and direct relationships. These are the people who will lead us to a better culture, and a future we can believe in.
This funding is our chance to get behind them. We’ll invest in better tools, broader reach, and deeper support for the writers and creators driving Substack’s ecosystem. Already, hundreds of millions of dollars flow from audiences to creators there every year. Millions use the app weekly, and pay for the work they discover. But this is just the beginning.
The model is working—across writing, audio, video, and communities—and this funding lets us go further. We’re doubling down on the Substack app, which is designed to help audiences reclaim their attention and connect with the creators they care about. We aim to prove that a media app can be fun and rewarding without melting your brain. An escape from the doomscroll, and a place to take back your mind.
We’re also building tools that give superpowers to anyone who has something important to say. Creators face enough challenges without juggling logistics and expenses. Substack should feel like a studio in your pocket—we take care of everything except the hard part: the creative work itself.
Most importantly, we’re building an economic engine to power this entire cultural ecosystem. Our model is simple: creators make money by serving their communities, and Substack succeeds only when they do. Audiences vote with attention and money for the culture they want, acting as collaborators in shaping a media ecosystem rooted in intention and connection. And everyone is part of a network that rewards trust, not manipulation. Substack is growing fast around the world, and we’re accelerating our work to bring the platform to new markets, so more people everywhere can support the creators they care about.
Independence shouldn’t mean going it alone. We’re building technologies that work for you, not against you—helping you carve out your own space on the internet, where you set the rules. It’s a system that rewards integrity, curiosity, and courage.
None of this happens without you. To everyone publishing on, subscribing to, or just exploring Substack: thank you. We’re honored to play a part, and excited for what this funding will unlock. The future of media belongs to you, and it can’t come soon enough.
Only 11% of Unicorn Exits Are IPOs Now (Down from 53%)
Saastr • July 9, 2025
Business•Startups•UnicornExits•IPOs•VentureCapital•Venture Capital
Recent research by Professor Ilya Strebulaev at Stanford University's Graduate School of Business reveals a significant shift in how unicorns—startups valued at over $1 billion—exit the market. The data, compiled by Stanford's Venture Capital Initiative, indicates that the proportion of unicorn exits via initial public offerings (IPOs) has dropped dramatically from 83% in 2010 to just 11% in 2024. This trend suggests a fundamental restructuring of the exit landscape with lasting implications for SaaS founders.
The decline in IPOs is attributed to several key factors:
Abundant Private Funding: The availability of substantial late-stage private funding has reduced the pressure on companies to go public for growth capital.
Increased Compliance Costs: The financial and regulatory burdens associated with going public have escalated, making IPOs less attractive.
Emergence of Secondary Markets: Secondary markets now offer liquidity options for employees and early investors without the need for a public offering.
Rise of Strategic Acquisitions: Large corporations are increasingly acquiring unicorns, providing an alternative exit strategy.
The volatility in IPO exits from 2021 to 2024 underscores this shift:
2021: 39% of exits were IPOs.
2022: Only 8% of exits were IPOs.
2023: A brief recovery to 24% IPO exits.
2024: A return to 11% IPO exits.
This pattern suggests that the era of IPOs as the default exit strategy for unicorns may have ended, even before the market downturn.
Secondary markets have played a pivotal role in this transformation. They provide liquidity to stakeholders without necessitating a public offering, allowing companies to remain private longer and reducing the reliance on IPOs. This shift has significant implications for equity compensation strategies, as employees may no longer expect IPOs for liquidity.
Additionally, contractual clauses in venture capital agreements can impede IPOs, especially when valuations decline. Clauses such as down-round protection, liquidation preferences, and board consent requirements can make public offerings economically unfeasible. Founders are advised to thoroughly understand these provisions to navigate potential exit constraints effectively.
The data indicates that the preference for IPOs among unicorns is no longer the norm. Companies now have more alternatives, including strategic acquisitions and secondary market options, which offer different advantages and challenges. For SaaS founders, this evolving landscape necessitates a reevaluation of exit strategies, emphasizing the importance of flexibility and preparedness for various scenarios.
The trend is not your friend
Signalrankupdate • July 21, 2025
Business•VentureCapital•InvestmentStrategy•EmergingTrends•Venture Capital
In 2010, Mark Suster of Upfront Ventures advised venture capitalists (VCs) to focus on "lines"—multiple data points over time indicating a trajectory—rather than "dots," which are isolated data points. This approach encourages early engagement with founders to assess progress and build relationships. However, this advice is increasingly overlooked in today's competitive VC landscape, where rapid investment processes are common.
The venture ecosystem's interconnectedness has led to the rapid rise and fall of "hot" themes, often without substantial value creation. For instance, the "____ for X" model, such as "Uber for X," has become prevalent, with numerous companies adopting this template. This trend reflects a mimetic approach to venture capital, where investors chase popular themes rather than seeking unique opportunities.
The most successful investors focus on identifying exceptional founders capable of building category-defining companies. They evaluate founders on an individual basis, independent of prevailing themes, and resist the urge to follow the crowd. This strategy involves building networks and relationships in unconventional areas, allowing investors to discover opportunities before they become mainstream.
Data supports this approach. For example, the defense sector gained prominence only after Anduril's Series C funding, and AI investments surged following OpenAI's release of ChatGPT. These instances demonstrate how the best investors can identify and capitalize on emerging trends before they gain widespread attention.
In summary, while following trends may seem appealing, the most successful investors prioritize building relationships and identifying unique opportunities, focusing on the long-term trajectory of founders and companies.
It’s Not Just You. Lead VCs Are Taking More of Each Round
Saastr • July 5, 2025
Business•Startups•VentureCapital•Fundraising•Venture Capital
If you've been fundraising lately and noticed your lead investor wants to take a bigger chunk of your round than expected, you're not imagining things. New data from Carta analyzing 17,896 primary priced rounds from Q1 2021 to Q2 2025 shows a clear trend: lead investors are systematically taking larger portions of the rounds they lead.
Here's what's happening across every funding stage:
Seed rounds: Lead investor participation jumped from 52% in 2021 to 61% in 2025. That's nearly a 10-percentage-point increase in just four years.
Series A: More stable but still climbing from 54% to 59% over the same period.
Series B: Despite some volatility, trending upward from 50% to 51%.
Lead VCs aren't just leading rounds anymore—they're dominating them. Sharper elbows, more ownership.
What This Means for Founders
Your round composition is changing. Where you might have had 4-5 investors splitting a round a few years ago, you're increasingly looking at 2-3 investors with your lead taking the lion's share.
Syndicate dynamics are shifting. With leads taking bigger bites, there's less room for other investors. This can make it harder to bring in strategic investors, additional value-add partners, or maintain optionality for future rounds.
Power concentration is real. When one investor controls 60%+ of your round, they wield significantly more influence over your company's direction, board composition, and future fundraising decisions.
Why This Is Happening
Several factors are driving this trend:
Flight to quality. In uncertain markets, top-tier VCs are being more selective and going deeper on fewer deals rather than spreading capital thin.
Larger fund sizes. Many VCs have raised bigger funds and need to deploy more capital per deal to achieve meaningful ownership percentages.
Competitive dynamics. To win competitive deals, leads are offering to take larger allocations to give founders more certainty and simplify the fundraising process.
Risk management. By taking bigger positions, VCs can exert more control and better protect their investments in volatile market conditions.
The Seed Stage Story
The most dramatic shift is happening at seed stage. Lead investors have gone from taking $1.6M of a $3.0M round in 2021 to $2.3M of a $3.8M round in 2025.
This isn't just about round sizes growing—it's about leads claiming an ever-larger slice of the pie. Even when round sizes stay flat, lead allocations keep growing.
What Founders Should Do
Plan your syndicate early. If you want multiple investors in your round, start conversations early and be explicit about allocation expectations upfront.
Understand the trade-offs. A lead taking 60% of your round isn't inherently bad—it can mean faster decisions, cleaner terms, and stronger support. But it does mean fewer voices around your table.
Negotiate thoughtfully. Don't just focus on valuation. The composition of your round matters for governance, future fundraising, and strategic flexibility.
Keep optionality in mind. Consider how round composition affects your ability to bring in strategic investors or sector specialists in follow-on rounds.
The Bottom Line
This trend toward lead concentration isn't necessarily good or bad—it's just the new reality of venture fundraising. The most successful founders are those who understand this shift and plan accordingly.
NFDG: The $1.1B VC Fund That 4X’d in Two Years—Then Got Acquired by Meta
Saastr • July 6, 2025
Business•VentureCapital•ArtificialIntelligence•StartupInvesting•Venture Capital
In the rapidly evolving landscape of venture capital, few stories are as remarkable as that of NFDG—a $1.1 billion fund that achieved a stunning 4x return in just two years (at least on paper)—only to see its founders recruited by Meta in one of the most unusual acqui-hire arrangements in Silicon Valley history.
The deal was done in less than a week—and the NFDG website is already down:
June 29: Gross leaves Safe Superintelligence, which he co-founded and which is NFDG’s crown-jewel investment
This week: Zuckerberg announces Friedman joining Meta
July 4: Friedman confirms on X he’s started at Meta
July 5: News breaks about the tender offer
Today: NFDG website is down
NFDG was founded by two of Silicon Valley’s most respected figures: Nat Friedman, former CEO of GitHub, and Daniel Gross, formerly a partner at Y Combinator. The duo launched their venture fund in 2023, raising an impressive $1.1 billion in their debut fund—a testament to their combined reputation and track record in the tech industry.
Friedman brought deep experience in developer tools and enterprise software from his time leading GitHub through its acquisition by Microsoft and subsequent growth. Gross contributed his expertise in early-stage investing and startup acceleration from his tenure at Y Combinator, where he helped identify and nurture some of the accelerator’s most successful companies.
From the outset, NFDG positioned itself as an AI-focused venture fund, anticipating the massive wave of innovation that would sweep through the artificial intelligence sector. This strategic focus proved prescient as the fund launched just as the generative AI boom was beginning to reshape entire industries.
The fund’s most spectacular success story is undoubtedly Safe Superintelligence, co-founded by Daniel Gross himself alongside Ilya Sutskever (former OpenAI co-founder and chief scientist). The company’s valuation trajectory tells a remarkable story:
Previous round: $5 billion valuation
Current valuation: $30 billion (as of 2024)
Growth multiple: 6x increase
NFDG’s role: Early investor through Gross’s direct involvement
This investment exemplifies NFDG’s unique approach, combining financial backing with direct operational involvement to drive exceptional growth.
The rapid success of NFDG and its strategic investments highlight a broader trend in Silicon Valley, where the lines between venture capital and operational roles are increasingly blurred. The allure of directly shaping the future of transformative technologies, particularly in the AI sector, is drawing top talent away from traditional fund management roles. This shift underscores a profound change in how industry leaders perceive the balance between financial returns and the opportunity to influence technological innovation at its core.
Accel, General Catalyst Topped Increasingly Busy Active Investor Ranks In Q2
Crunchbase • July 11, 2025
Business•VentureCapital•InvestmentTrends•ArtificialIntelligence•SpaceTechnology•Venture Capital
In the second quarter of 2025, venture capital activity saw a significant uptick, with nine of the ten most active investors increasing their deal counts compared to the first quarter. Notably, Accel and General Catalyst emerged as leaders in this surge, each taking prominent roles in multiple high-profile funding rounds.
Accel led 20 rounds during this period, including substantial investments such as a $500 million financing for generative AI company Perplexity and a $260 million Series C for spacetech startup True Anomaly. Similarly, General Catalyst was at the forefront, leading 16 rounds, with its most significant being a $1 billion financing for AI writing assistant Grammarly. These substantial commitments underscore the firms' confidence in sectors like artificial intelligence and space technology.
While Accel and General Catalyst were the most active in terms of deal count, the largest individual investments were led by other firms. Meta made headlines with a $14.3 billion investment in Scale AI, marking a strategic and financial partnership that also saw Scale AI's founder, Alexandr Wang, join Meta. Following Meta, Founders Fund and Andreessen Horowitz were among the top spenders, with Founders Fund leading a $2.5 billion round for defense tech company Anduril, and Andreessen Horowitz leading a $2 billion seed financing for Thinking Machines Lab.
In terms of deal volume, Y Combinator stood out by participating in 45 post-seed financings, the highest among investors in Q2. This included significant rounds for companies like HR platform Rippling and AI robotics startup Gecko Robotics. General Catalyst and Accel also maintained high activity levels, participating in 38 and 31 deals, respectively.
The seed-stage investment landscape was similarly active, with Y Combinator leading with 50 seed investments, followed by Antler with 33. It's important to note that reporting practices in seed-stage investments can vary, as accelerators often report investments in batches, leading to fluctuations in deal counts.
Overall, the second quarter of 2025 demonstrated a robust venture capital environment, characterized by increased deal activity and substantial individual investments. This trend reflects growing confidence among investors in sectors such as artificial intelligence, defense technology, and space exploration.
AI
Vibe Coding is the Future. But “Roll Your Own?” That’s More Complicated.
Saastr • Jason Lemkin • July 12, 2025
Technology•Software•VibeCoding•SaaS•AI
I spent the other day deep in vibe coding on Replit for the first time — and I built a prototype in just a few hours that was pretty, pretty cool.
Getting into commercial-grade, enterprise-grade shape is different, though.
But to start it’s amazing:
You can build an “app” just by, well, imagining it in a prompt
Replit QAs it itself (super cool), at least partially with some help from you
and … then you push it to production — all in one seamless flow.
That moment when you click “Deploy” and your creation goes live? Pure dopamine hit.
First, I built a lightweight Cluely clone just for fun (emphasis on lightweight — it’s pretty rough around the edges, but the learning was the point). It was easy to build and … sort of worked:
Then I kicked off my second project, this time for real: to build a real, enterprise-grade product folks would actually pay for. I’m maybe 5%-10% of the way there, 2.5 days and $200 of Replit credits in.
And whatever I build, needs to be novel. It needs to be something you can’t get elsewhere. Because as cool as vibe coding is, I won’t be able to build something better than Notion or Slack, let alone something with crazy compute like an Opus Clip or Higgsfield. All of which are … dirt cheap.
Vibe Coding Also Only Takes You So Far With Complicated Workflows, Enterprise Use Cases, Etc. For Now.
Over time, apps like Adobe Sign and DocuSign that were simple-ish at first have become incredibly deep workflow engines with 1000s and 1000s of workflows, maybe more. Vibe coding all of that is probably close to impossible. It might get you prototyped, it might get you going, it might help you learn. But I think that’s as far as most will get today with 100% vibe coding.
Who has the time to rebuild those 1000s of intricate workflows? Make them actually secure? Enterprise-grade? Handle every edge case that real users will inevitably find?
I can vibe code a few. But not 1000s.
The learning: if it’s already built, I’d much rather spend $20-$200 a month for an app that already exists and is bulletproof.
If it’s already built. My time is worth more than $20/month.
In fact, I’ve already spent $200 in the past 72 hours building my next app on Replit, and I’m only 10% done. This time I’m trying to build something commercial grade.
SaaS in the Vibe Coding Age in Fact Almost Seems Cheap Again. Great, Cost-Effective SaaS at least.
Many on X claim they could vibe code tons of their stack themselves now. “Why pay for Slack when we can build our own chat app?” “Why use Notion when we can create our own knowledge base?”
But vibe coding actually proves the opposite point. Yes, you can build at least some portion of almost any workflow + database app now. At least a simpler variant. The end-to-end vibe coding tools really are incredible.
But should you?
Notion is $0-$20 a month. And it’s really good. I can guarantee it will be 100% better than anything almost anyone can vibe code.
The New Calculus
For now, let’s be clear. There is no way non-engineers vibe coding apps in a few hours, or even a week, will replace major SaaS apps. No way. Hopefully it will put some pressure on legacy CRMs to stop nickel-and-diming customers, but even there, I have my doubts.
But if nothing else, vibe coding apps is unleashing new apps at an unprecedented pace. And they will all keep getting better and better, faster and faster. Especially for niche vertical apps that just don’t exist in the market, they may already be there. And for true developers that use them mainly to prototype, vibe coding is already epic.
For now, probably:
Vibe coding is perfect for:
Rapid prototyping (testing ideas)
Custom internal tools (helping teams work better)
Net-new workflow + database solutions that don’t exist yet, especially very niche tools
Learning and experimentation
The Real Insight
“Time is the new currency. Building is fun but buying saves sanity.”
This isn’t about replacing core business functions. For apps that need to work flawlessly at 3 AM when your biggest customer calls? Still buying.
The democratization of coding doesn’t make established SaaS obsolete — it makes us appreciate just how much complexity those companies have solved for us. “All we see is icebergs.”
Vibe coding is the future of creation. But the future of operations for now at least is still powered by companies that spent years getting the details right.
SaaS is becoming stronger, not weaker, thanks to vibe coding. When anyone can build the basics, the value of getting the advanced stuff right becomes even more apparent.
That $200/month Salesforce seat? Still a bargain. Not cheap. But a bargain.
The Vibe App Revolution is Real. Where It Goes Is The Only Murky Part.
If you need proof that AI code creation is absolutely on fire: Replit went from $10M to $100M ARR in just the first 6 months of this year alone.
10x growth in less than half a year.
When you see numbers like that, you realize we’re not just talking about a new tool or trend. We’re witnessing a fundamental shift in how software gets built. The barriers to creation have collapsed, and the market is responding accordingly.
The vibe coding revolution is here. The question isn’t whether it will change software production; it already has. The only real question is exactly where it will go for true production-grade, commercial apps. This year, and after.
For now, I’m going to keep vibe coding. And I’m also going to keep happily paying $20 a month for the best B2B apps. That I could never vibe code myself for $240 ($20 x 12). Not really. Nor could you.
How Anthropic Rocketed to $4B ARR — And Why Your B2B Playbook May Already Be Obsolete
Saastr • Jason Lemkin • July 12, 2025
Technology•AI•EnterpriseAI•SaaS•GrowthStrategy
When Anthropic hit a reported $4 billion in annual revenue at the end of 1H’25, it marked more than just another AI milestone. It validated a completely new category of B2B growth that’s operating by fundamentally different rules than anything we’ve seen before.
Let’s break down the numbers that should make every SaaS founder rethink their growth assumptions:
The Growth Trajectory That Breaks Every SaaS Model
Anthropic’s Revenue Timeline:
2022: $10M (founding year revenue)
2023: $100M (10x growth)
Dec 2024: $1B ARR (10x growth again)
July 2025: $4B ARR (300% growth in 7 months)
That’s 100x growth in three years. To put this in perspective, it took Snowflake—one of the fastest SaaS companies in history—six quarters to go from $1B to $2B ARR. Anthropic did $1B to $4B in seven months.
The Enterprise-First Strategy That Worked
While OpenAI captured headlines with consumer ChatGPT adoption, Anthropic quietly built an enterprise juggernaut. Here’s how they did it:
API-First Revenue Model
Unlike the subscription-heavy models of traditional SaaS, 70-75% of Anthropic’s revenue comes from API calls through pay-per-token pricing. This creates several advantages:
Immediate scalability: No lengthy enterprise sales cycles
Usage-based pricing: Revenue scales directly with customer success
Lower customer acquisition costs: Developers can start using APIs instantly
Key Pricing: Claude Sonnet 4 is priced at $3 per million input tokens and $6 per million output tokens. When customers are processing complex code generation or multi-file operations, single sessions can consume 5,000-20,000 tokens.
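To make the token economics concrete: at the quoted prices, even a heavy code-generation session costs pennies, which is what lets a customer scale from a credit card to a large bill purely through usage. A quick back-of-the-envelope sketch (the 60/40 input/output split is my own illustrative assumption, not an Anthropic figure):

```python
# Back-of-the-envelope cost for a Claude Sonnet 4 session at the
# quoted pay-per-token prices ($3/M input, $6/M output tokens).
INPUT_PER_M = 3.00   # USD per million input tokens
OUTPUT_PER_M = 6.00  # USD per million output tokens

def session_cost(total_tokens: int, input_share: float = 0.6) -> float:
    """Estimate one session's cost; input_share is an assumed split."""
    input_tokens = total_tokens * input_share
    output_tokens = total_tokens * (1 - input_share)
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# A heavy code-generation session in the 5,000-20,000 token range:
for tokens in (5_000, 20_000):
    print(f"{tokens:>6} tokens -> ${session_cost(tokens):.4f}")
```

Fractions of a cent per session, which is why the growth driver is volume: millions of developer sessions per day, not per-seat pricing.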
Code Generation as the Primary Growth Driver
While everyone talks about general AI adoption, Anthropic identified code generation as the killer use case. Here’s why this matters:
Token intensity: Code generation consumes 10-50x more tokens than typical chat
Enterprise necessity: Companies can’t avoid automating development workflows
Stickiness: Once integrated into developer workflows, switching costs are massive
Major customers like Sourcegraph, GitLab, Replit, and Bridgewater Associates leverage Claude’s 200,000-token context window for complex coding tasks and financial analysis.
Channel Partnership Strategy
Rather than building massive direct sales teams, Anthropic distributes primarily through:
AWS Bedrock: Leveraging Amazon’s enterprise relationships
Google Vertex AI: Tapping into Google Cloud’s customer base
Direct API access: For developer-first adoption
This reduces sales costs while accelerating enterprise adoption through existing trusted relationships.
The SaaS Metrics That Don’t Apply
Traditional SaaS metrics break down when analyzing AI infrastructure companies:
CAC/LTV Becomes Irrelevant
When developers can start using your API with a credit card and scale to millions in usage, traditional customer acquisition cost calculations don’t work. Anthropic’s “customers” can go from $0 to $100K+ monthly usage without ever talking to a salesperson.
Churn vs. Expansion Revenue
In token-based models, expansion revenue isn’t about selling more seats—it’s about customers consuming more tokens as they build larger applications. One customer’s successful product launch can 10x their token usage overnight.
Gross Margins at Scale
AI infrastructure operates with different margin profiles than traditional SaaS. While Anthropic likely operates at 40-60% gross margins today (vs. 80%+ for typical SaaS), the absolute dollar margins are massive given the revenue scale.
xAI's Grok 4: The tension of frontier performance with a side of Elon favoritism
Interconnects • Nathan Lambert • July 12, 2025
Technology•AI•FrontierModels•ModelPerformance•AIAdoption
Elon Musk’s xAI launched Grok 4 on Wednesday, July 9th, with the fanfare of leading benchmarks and 10X the RL compute for reasoning, but even so it is unlikely to substantively disrupt the current user bases of the frontier model market. On top of the stellar scores, Grok 4 comes with severe brand risk, a lack of differentiation, and mixed vibe tests, highlighting that catching up on benchmarks is one thing, but finding a use for expensive frontier models isn’t automatic. That is the singular challenge as model performance becomes commoditized.
In this post we detail everything about Grok 4, including:
Performance overview and a survey of early vibe checks,
Testing Grok 4 Heavy and how xAI’s approach to parallel compute compares to o3 pro,
xAI’s lack of differentiated products, and
MechaHitler and culture risk.
At its core this is a very impressive model, but it is also the frontier model plagued with the most serious behavioral risks and cultural concerns in the AI industry since ChatGPT’s release.
Grok 4 is the leading publicly available model on a wide variety of frontier model benchmarks. It was trained with large scale reinforcement learning on verifiable rewards with tool-integrated reasoning.
Swyx at Smol AI and Latent.Space summarized the performance perfectly:
Rumored to be 2.4T params (the second released >2T model after 4 Opus?), it hits new high water marks on HLE, GPQA (leading to a new AAQI) HMMT, Connections, LCB, Vending-Bench, AIME, Chest Agent Bench, and ARC-AGI, and Grok 4 Heavy, available at a new $300/month tier, is their equivalent of O3 pro (with some reliability issues). What else is there to say about it apart from go try it out?
A few others include it being top overall by ArtificialAnalysis and dethroning Gemini 2.5 Pro on long context. It also launches with an API version (a first for xAI).
This is an extremely impressive list, and something we don’t see regularly in AI model releases now that the landscape has become more competitive. The only models to have wiped the floor on benchmarks like this are the likes of o1, o3 (which was just an announcement, not a release), and Gemini 2.5 Pro. Benchmark progress is in many ways going faster than ever; the previous major step like these models was arguably GPT-4 itself.
In order to achieve this, xAI put up a slide saying that they increased the RL compute from Grok 3 reasoning by 10X to create this model.
This plot is not something that should be taken as precise. Even if they did use the exact same pretraining compute as Grok 3 and scaled up RL by exactly the stated amount of compute, it is definitely the case that this is not representative of any “RL is saturating already” or other timeline comments. The benchmarks and speed of releases speak for this: RL is enabling a new type of rapid hillclimbing, and all of the leading labs are committing large personnel and compute resources to exploiting it.
We have no indication that we are near the top of the RL curve, a counterweight to the GPT-4.5 release, which showed that “simple parameter scaling alone” (without RL) isn’t the short-term path forward.
The Humanity’s Last Exam plot, while showcasing overall peak performance, is also a beautiful example of scaling both training time (RL) and test-time (inference time scaling, CoT, parallel compute) with and without tools. This is the direction leading models are going.
The main question with this release was then: does the usefulness of the model in everyday queries match its on-paper numbers?
Immediately after the release there were a lot of reports of Grok 4 fumbling over its words. Soon after, the first crowdsourced leaderboards (Yupp in this case, a new LMArena competitor), showed Grok 4 as very middle of the pack — far lower than its benchmark scores would suggest.
My testing agrees with this. I didn’t find Grok 4 particularly nice to use like I did the original Claude 3.5 Sonnet or GPT 4.5, but its behavior with tools was immediately of interest to me. Grok 4 is a model that is very reminiscent of o3 in its search-heavy style — this is a milestone I’ve been specifically monitoring, and again confirms that major technical differentiation doesn’t last long across frontier model providers. Maybe making an o3 style model isn't so hard, but making one that has style and taste is.
It’s the sort of behavior where the model almost always searches, e.g. for the simple query below. Grok 4 uses search, so does o3, but Claude 4 and Gemini 2.5 do not.
At the same time, it doesn’t seem quite as extensive in its search as o3, but much of this could be down to UX and inference settings rather than the underlying model’s training. The reasoning is far more interpretable than OpenAI and some other providers which is nice to understand how the model is using tools (e.g. the exact search queries).
Overall, the vibe tests indicate that Grok 4 is a bit benchmaxxed and overcooked, but this doesn’t mean it is not a major technical achievement. It makes adoption harder.
Along with the new model itself, xAI announced a new “Heavy” mode which “dynamically spawns multiple agents” to help solve problems. This new offering combined with the search-heavy behavior represented an important item to test explicitly.
In summary, Grok 4 Heavy behaves like a hybrid between Deep Research products and o1/3-Pro style models on open domains. This points to a new era of technical uncertainty as users and companies race to understand how the top models behave at inference. No longer is it enough to only serve long chains of thought at inference — Grok 4 Heavy shows substantial improvements across all of the reasoning benchmarks.
We don’t have enough information on Grok Heavy, o3-Pro, or Deep Research to know exactly which of these are close to each other. The operating assumptions in industry are that two types of parallel compute exist:
Multi-agent systems with an orchestrator model: In this case, which I interpret as being close to Claude Code with parallelism enabled or Deep Research, one central orchestrator manages parallel search agents assigned sub-tasks.
Parallel, ranked generation: In this case, the same prompt is provided to multiple copies of the model and the best answer is selected by a verifier or reward model.
Both of these will be impactful for different domains, but the former is far closer to general agents that the industry is collectively striving for and anticipating.
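The second pattern reduces to what is often called best-of-n sampling. A minimal sketch of the idea, where `generate` and `score` are hypothetical stand-ins for a model and a verifier/reward model rather than any real API:

```python
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 4) -> str:
    """Parallel, ranked generation: sample n candidate answers to the
    same prompt, then let a verifier pick the highest-scoring one."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))

# Toy demonstration with deterministic stand-ins: the "model" cycles
# through canned answers and the "verifier" scores by length.
canned = iter(["short", "a much longer candidate answer", "mid-sized one"])
best = best_of_n("prompt",
                 generate=lambda p: next(canned),
                 score=lambda p, a: float(len(a)),
                 n=3)
print(best)  # the longest candidate wins under this toy verifier
```

In a real system the n generations run in parallel across model replicas, which is why this mode multiplies inference cost roughly n-fold.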
Here I like Grok 4 Heavy’s answer better than ChatGPT Deep Research. They have similar information, but Grok 4 Heavy is more concise.
In this case and overall, Grok 4 outperforms OpenAI Deep Research. Grok 4 simply got far more of the correct links and presented them in the requested form. Combined with the live information graph on X, there are multiple groups who would benefit from this substantially and immediately.
This example above is one of the first times a single request to an AI model has done a “wide” search over source materials. A factor that eventually will come into play here is both user price and effective margins. We don’t know the costs to serve any of these models.
All in, the performance of Grok 4 is very spiky. It has incredible performance on benchmarks, and on some tests it is the best an AI has ever been at certain information retrieval tasks, but it falls on its face in some simple ways when compared to its peers like o3 or Claude 4 Opus.
Despite all of this success, xAI and Grok still face a major issue — making a slightly better model in performance that is comparable on price isn’t enough to unseat existing usage patterns. In order to make people switch from existing applications and workflows, the model needs to be way better. This sort of gap I have only experienced with the original Claude 3.5 Sonnet pulling me from ChatGPT (until better applications and ecosystem pulled me back to ChatGPT). The question is — how does xAI monetize this technical success?
Grok’s differentiation is still that it doesn’t have many of the industry standard guardrails. This is great from a consumer perspective, but presents challenges at the enterprise level (even if the lack of alignment was only a minor worry).
With the current offerings, the performance of ChatGPT at $20/month is similar to Super Grok at $300/month. Where is their market?
Claude Code with higher tiers of Anthropic plans is the most differentiated offering among the paid chatbots. For many people, Claude Code is the most fun and useful way to use a language model right now. This is a minority group, but at least one that is willing to pay. For coding, I don’t think Grok 4’s search behavior is as good (same reason I don’t recommend using o3 in something like Cursor), where Claude is still king.
The xAI team did say “Grok 4 coding model soon” before and during the livestream (easily found via Grok 4), so they understand this. Still, the timeliness of this model with product-market-fit matters far more than all the benchmarks, as seen by Claude 4’s lackluster benchmark release. Claude 4 has only become more popular since its release day — I don’t see Grok 4 being the same.
Another timely example of a model that’ll have immediate and practical real-world uptake is the new open-weight Kimi K2 model. Moonshot AI describes their new, mostly permissively licensed 1T-total, 32B-active MoE model as “Open Agentic Intelligence.” This model rivals Claude 4 Sonnet and Opus on coding and reasoning benchmarks.
This makes an impact by being far and away the best open-weight model in this class. Similar to DeepSeek, though not at the same magnitude, there will be a rush to deploy this model and build new products off the backbone of cheap inference from an optimized stack similar to the DeepSeek MoE architecture.
AI adoption and market share downstream of modeling success come from differentiation in AI.
Grok 4 is the culmination of an ethos that will lead to more dangerous outcomes for AI with little upside on added performance. xAI, in the livestream, announced they have gotten extended security compliance tests commonly referred to as System and Organization Controls 2 (SOC 2) in order to sell into enterprises. This is a wholly useless endeavor when the underlying technology isn’t trustworthy for cultural reasons.
How To Navigate The AI Distribution Shift
Brianbalfour • Brian Balfour • June 29, 2025
Technology•AI•PlatformStrategy•DistributionShift•CompetitiveAdvantage
After I wrote The Next Great Distribution Shift, the most common thing people asked me was: “What do I do about it? How are you preparing?”
Fareed Mosavat and I tackled this in our recent episode of Unsolicited Feedback. The TL;DR: If we're right, you have months (not years) to get your platform strategy right. The gates are opening. And the innovation replication cycle has never been faster.
In this post (and episode), we take some steps to help you learn how to play the game before the game plays you.
Listen On: Apple | Spotify | YouTube
Recap: The Next Great Distribution Shift
In The Next Great Distribution Shift, I laid out four points (skip this section if you’ve already read that article).
The AI Tech Shift Happened. The AI Distribution Shift Is Just Beginning.
The AI technology shift has transformed product capabilities, business models, and competitive landscapes. Yet despite this technological revolution, we have not experienced its corresponding distribution shift. This isn't unusual. Every major platform transition follows the same pattern—technology first, distribution second.
Distribution shifts consistently lag technology shifts a couple of years. We witnessed this delay during the emergence of social networks, search engines, and mobile platforms. The technology foundation establishes first, creating new possibilities and disrupting existing workflows. The distribution revolution follows, determining which companies capture and control the value created by these technological capabilities.
We Are Now Approaching The Inflection Point Where The AI Distribution Shift Will Emerge
The competitive conditions necessary for distribution shift emergence are now aligned. We have achieved market consensus that AI chat experiences represent a massive, transformative category while the ultimate winner remains unclear.
Multiple major players are actively competing for platform dominance: OpenAI with ChatGPT, Anthropic with Claude, Google with Gemini, Meta with Llama, and Apple's strategic positioning remains uncertain. This competitive dynamic creates the exact environment that historically triggers distribution platform development.
Simultaneously, traditional distribution channels are experiencing significant degradation. SEO effectiveness has declined, app store discovery has become increasingly difficult, and paid advertising channels are delivering diminishing returns. This distribution scarcity creates market pressure for alternative channels, accelerating adoption of emerging platforms.
The convergence of competitive uncertainty and distribution scarcity establishes ideal conditions for platform emergence and ecosystem development.
The Distribution Shift Will Follow The Same 3 Step Cycle
Every successful distribution platform follows an identical progression that reflects structural market dynamics rather than platform-specific strategies.
Step 1: Moat Identification
Market consensus emerges around category importance while multiple players compete for dominance. The eventual winner identifies sustainable competitive advantages that differentiate from feature parity competition. In the AI landscape, this moat is shifting from model intelligence to context and memory accumulation—the platforms that can gather and leverage user context most effectively will achieve escape velocity.
Step 2: Opening The Gates
The leading platform creates an open ecosystem to accelerate moat development. This involves establishing value exchange mechanisms where third-party developers receive capabilities and organic distribution in exchange for extending platform functionality and contributing to competitive advantage accumulation. We are beginning to see early signals of this phase with integration announcements and platform development hiring.
Step 3: Closing The Gates
Once competitive position is secured, platforms optimize for monetization and control. Organic distribution becomes artificially constrained toward paid channels, successful third-party applications get absorbed into first-party features, and revenue sharing terms deteriorate significantly. This phase is inevitable once platforms achieve market dominance.
There Is No Opting Out
Platform participation represents a strategic prisoner's dilemma where individual rational decisions create collective competitive dynamics that force market-wide participation.
If competitors integrate with emerging platforms and gain competitive advantages through enhanced capabilities or distribution access, non-participating companies face systematic disadvantage. Market forces compel integration regardless of long-term platform control concerns.
This dynamic explains why established companies like HubSpot integrate with ChatGPT despite obvious risks of user relationship displacement. The competitive pressure of potential customer preference for integrated experiences outweighs platform dependency concerns.
The strategic choice is not whether to engage with emerging platforms, but how to optimize timing, resource allocation, and competitive positioning within an inevitable participation framework. Understanding cycle progression enables proactive strategy development rather than reactive competitive responses.
How Do You Play The Game?
As I said on Unsolicited Feedback, my intention is not to paint a picture that these platforms are “good” or “evil.” I’m passing no judgement. This cycle happens due to competitive and capitalistic incentives.
My personal mentality is that “it is what it is.” But it’s a game. They are playing you, so you need to know how to play them. Given that, the most common question I got on the original post was: so what do I do next?
What’s Your Betting Strategy?
While my personal prediction centers on ChatGPT achieving platform dominance, this outcome is not guaranteed. The current competitive environment mirrors historical distribution shifts where multiple viable candidates compete for market control before a winner emerges.
OpenAI with ChatGPT
Anthropic with Claude
Google with Gemini
Meta with Llama and Meta AI
Apple with ??? (it will come eventually)
This competitive dynamic directly replicates previous distribution shifts. Facebook competed against MySpace, Orkut, Hi5, and Friendster before achieving dominance. Google emerged victorious despite Yahoo's early distribution supremacy. The pattern demonstrates that initial market position does not guarantee long-term platform control. That means at this stage you are taking bets and you need a betting strategy.
DIVERSIFY OR YOLO?
The two most common reactions to this stage are:
I’ll be early, but diversify across platforms until a winner emerges.
I’ll just wait to dedicate any resources until it’s clear who the winner is.
These are rarely the right strategies. Waiting until the winner is clear means you are typically too late. The early arbitrage advantages are probably gone and you will be fighting an uphill battle.
Being early and diversifying might be possible for very large companies that have large experimental budgets to spread around. Startups don’t have this advantage. They have resource constraints and need to focus. You need to bet, and you need to bet right. Higher risk, higher reward. In other words, portfolio strategy does not apply to startups.
Fareed Mosavat gave a couple of great examples in the Unsolicited Feedback Episode:
"In the mobile shift, if you bet on just Apple, you could be a winner, like Instagram. If you bet on just Android, you could not. If you did Apple and Android, you would grow faster, it turned out, because both those platforms are dominant. But building for both took a lot of resources at the time. And building on Windows Phone was a huge waste of time."
He went on to give another example in the shift to Social:
“The same was true on the Facebook platform. Some people did OK just on MySpace in the early days, but all of those developers shifted to Facebook, right? And there were a million other networks, and there was a lot of encouragement from investors, from thought leaders, et cetera, saying you should build as much distribution across them as possible. It turned out the winning moves were made by those that just built on the Facebook platform.”
The point Fareed is making is that every ounce of energy you spend building on a platform that is not the ultimate winner comes at a massive cost to the energy you could spend on the platform that does end up winning.
How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI
Garymarcus • July 13, 2025
Technology•AI•NeurosymbolicAI•DeepLearning•ArtificialIntelligence
o3 and Grok 4’s recent developments have quietly confirmed the validity of neurosymbolic AI, an approach that integrates neural networks and symbolic reasoning, marking a significant shift in the AI landscape after decades of debate. Neurosymbolic AI asserts that the strengths of neural networks—learning from large-scale data—complement the strengths of symbolic AI—explicit representation and rigorous reasoning—forming a crucial hybrid for advancing artificial intelligence beyond the limitations of either approach alone.
Neurosymbolic AI has been historically overshadowed by the prevailing enthusiasm for pure deep learning, which champions neural networks scaled by massive data and compute power as the route to true AI. Early proponents of deep learning like Geoffrey Hinton and Ilya Sutskever argued that neural networks would eventually surmount their flaws with enough parameters and training, citing the vast complexity of the human brain compared to current models. Opposing this, Gary Marcus, an early and persistent advocate of neurosymbolic AI, argued that deep learning’s inability to understand causality and reason abstractly is inherent and not just a matter of scale. Marcus’s position has long been marginalized within an ecosystem where funding and scientific focus favored purely connectionist methods.
The article traces the roots of AI into two traditions: connectionist neural networks inspired loosely by brain structures, and symbolic AI grounded in formal logic and computational abstractions developed by figures like Alan Turing and John McCarthy. Neurosymbolic AI proposes harmonizing these by embedding symbolic reasoning capabilities within neural systems. Marcus has emphasized three indispensable symbolic elements: algebraic systems (explicit variables and operations), structured symbolic representations (compositional and systematic), and database-like distinctions (individuals vs. kinds) to avoid overgeneralization and hallucinations—persistent problems in neural-only models.
Despite resistance—especially from influential figures like Hinton, who dismissed symbolic integration as misguided—some notable progress using neurosymbolic principles emerged, notably from Google DeepMind’s successes (AlphaGo, AlphaFold) that blend symbolic algorithms with neural components. Still, the broader AI field persisted with the “scale is all you need” ideology, betting on ever-larger language models like GPT-3 and GPT-4. Yet, even with astronomical computational resources (Elon Musk claims Grok 4 employed 100 times the compute of Grok 2), recent benchmarks reveal diminishing returns on pure scaling. Performance plateaus and persistent errors in reasoning, hallucination, and misalignment underscore that simply "scaling up" neural networks falls short of the path to artificial general intelligence (AGI).
What has shifted is the subtle but crucial adoption of symbolic tools in state-of-the-art models. OpenAI’s “code interpreter,” which allows language models to call and execute symbolic Python code, exemplifies neurosymbolic AI in practice—despite industry reluctance to openly label it as such. These models leverage symbolic algorithms explicitly, improving accuracy in tasks requiring structured reasoning, such as solving the Tower of Hanoi puzzle or generating crossword grids—tasks where pure neural networks struggle and hallucinate. This fusion approach enriches the representational and logical capacity beyond what raw pattern-matching in neural nets can achieve.
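Tower of Hanoi illustrates why the hybrid helps: the optimal move sequence has a short recursive definition that executes exactly, while a neural net emitting moves token-by-token tends to drift on long sequences. A sketch of the kind of symbolic routine a code interpreter would run:

```python
def hanoi(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list:
    """Exact optimal move list for the n-disk Tower of Hanoi,
    moving disks from src to dst using aux as scratch."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, then restack.
    return (hanoi(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi(n - 1, aux, src, dst))

moves = hanoi(4)
print(len(moves))  # 2**4 - 1 = 15 moves, correct by construction
```

The point is not that the puzzle is hard, but that delegating it to executed code yields a guaranteed-correct answer where pure generation can hallucinate, which is exactly the division of labor the neurosymbolic argument predicts.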
Quantitative evidence from recent model launches, such as Grok 4, suggests that the addition of symbolic processing dramatically boosts performance on challenging benchmarks, affirming the decades-long argument for neurosymbolic methods. The diminishing returns from purely data-driven training and the clear gains from integrating symbolic computations illustrate the complementary strengths of these traditions. However, the journey to AGI remains far from complete. Challenges like symbol grounding, reliable spatial reasoning, and constructing dynamic cognitive models “on-the-fly” need deeper research and breakthroughs beyond the current bolt-on approaches.
Sociologically, the article reflects on how investment trends and scientific orthodoxy influenced AI’s trajectory, with the dominant “scaling” narrative favored because it provided a clear, monetizable roadmap for investors and companies. Admitting reliance on symbolic methods complicates this narrative and potentially dilutes the simplistic message that compute alone can solve AI’s challenges. Consequently, neurosymbolic AI’s full potential has been underexplored and underfunded, even as top labs quietly implement hybrid approaches in recent years.
In conclusion, the recent successes of models like o3 and Grok 4 unexpectedly validate the neurosymbolic paradigm long championed by Marcus. These advances underscore the necessity of embracing hybrid systems combining data-driven learning and symbolic reasoning to overcome fundamental AI limitations. The article suggests opening scientific and industrial minds to a broader spectrum of approaches beyond pure deep learning is vital for sustained progress toward AGI. Encouraging transparency, interdisciplinary collaboration, and wider acceptance of neurosymbolic AI may catalyze the next wave of AI innovation.
Key Takeaways:
AI historically split into connectionist (neural net) and symbolic traditions; neurosymbolic AI merges these strengths.
Pure deep learning faces inherent limits in reasoning, abstraction, and causality; symbolic systems provide crucial complementary abilities.
Influential figures opposed neurosymbolic AI, delaying progress and skewing funding toward pure scaling approaches.
Recent models like Grok 4 and OpenAI’s code interpreter feature symbolic reasoning tools integrated with neural networks, improving performance on complex reasoning tasks.
Benchmarks show diminishing returns from scaling compute alone, with symbolic integrations offering significant improvements.
Neurosymbolic AI is not a single method but a set of approaches combining neural nets and symbolic tools; more research is needed on integration specifics and unresolved challenges.
Industrial and academic communities would benefit from greater openness to neurosymbolic approaches to foster breakthroughs toward true AGI.
Kimi K2 and when "DeepSeek Moments" become normal
Interconnects • July 14, 2025
Technology•AI•OpenSourceAI•ChineseAI•DeepSeek
The DeepSeek R1 release earlier this year was more of a prequel than a one-off fluke in the trajectory of AI. Last week, a Chinese startup named Moonshot AI dropped Kimi K2, an open model that is permissively licensed and competitive with leading frontier models in the U.S. If you're interested in the geopolitics of AI and the rapid dissemination of the technology, this is going to represent another "DeepSeek moment," where much of the Western world, even those who consider themselves up to date with the happenings of AI, will need to change their expectations for the coming years.
In summary, Kimi K2 shows us that High-Flyer, the organization that built DeepSeek, is far from a uniquely capable AI laboratory in China; that China continues to approach (or has reached) the absolute frontier of modeling performance; and that the West is falling even further behind on open models.
Kimi K2, described as an "Open-Source Agentic Model," is a sparse mixture-of-experts (MoE) model with 1 trillion total parameters (~1.5x DeepSeek V3/R1's 671 billion) and 32 billion active parameters (similar to DeepSeek V3/R1's 37 billion). It is a "non-thinking" model, meaning it doesn't generate a long reasoning chain before answering, but it was still trained extensively with reinforcement learning, and it posts leading performance numbers in coding and related agentic tasks (earning it many comparisons to Claude 3.5 Sonnet). It clearly outperforms DeepSeek V3 on a variety of benchmarks, including SWE-Bench, LiveCodeBench, AIME, and GPQA, and comes with a base model released as well. It is the new best-available open model by a clear margin.
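From the figures above alone, the sparsity comparison is easy to make explicit; a rough calculation on the reported (rounded) parameter counts, not official specs:

```python
# Active-parameter fraction per token for the sparse MoE models above
# (parameter counts as reported in the text, rounded).
models = {
    "Kimi K2":        (1_000e9, 32e9),  # ~1T total, 32B active
    "DeepSeek V3/R1": (671e9, 37e9),    # 671B total, 37B active
}
for name, (total, active) in models.items():
    print(f"{name}: {active / total:.1%} of parameters active per token")
```

Roughly 3% of Kimi K2's parameters fire per token versus about 5.5% for DeepSeek V3/R1, which is how a 1T-parameter model keeps per-token inference cost in the same ballpark as a much smaller dense model.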
These facts, taken with the points above, have useful parallels for what comes next. Controlling who can train cutting-edge models is extremely difficult; more organizations will join the list of OpenAI, Anthropic, Google, Meta, xAI, Qwen, DeepSeek, Moonshot AI, and others. Wherever there is a concentration of talent and sufficient compute, excellent models are very possible. This is easier somewhere like China or Europe, where the talent already exists, but it is not restricted to those localities.
Kimi K2 was trained on 15.5 trillion tokens and has a very similar number of active parameters to DeepSeek V3/R1, which was trained on 14.8 trillion tokens. Better models are being trained without substantial increases in compute — these are referred to as "algorithmic gains" or "efficiency gains" in training. Compute restrictions will certainly slow this pace of progress for Chinese companies, but they are clearly not a binary on/off bottleneck on training.
The gap between the leading open models from the Western research labs versus their Chinese counterparts is only increasing in magnitude. The best open model from an American company is, maybe, Llama-4-Maverick? Three Chinese organizations have released obviously more useful models with more permissive licenses: DeepSeek, Moonshot AI, and Qwen. A few others such as Tencent, Minimax, Z.ai/THUDM may have Llama-4 beat too but are a half step behind the leading Chinese models on some combination of license and performance.
This comes at the same time that new inference-heavy products are coming online that'll benefit from the potential of cheaper, lower margin hosting options on open models relative to API counterparts (which tend to have high profit margins).
Kimi K2 is set up for a much slower-burning "DeepSeek Moment" than the DeepSeek R1 release in January of this year, because it lacks two culturally salient factors. First, DeepSeek R1 was revelatory partly because it was the first model to expose its reasoning trace to users, which drove massive adoption outside the technical AI community. Second, the broader public already knows that training leading AI models is surprisingly cheap once the technical expertise is built up (recall the $5M training cost figure for DeepSeek V3 — the final training run is the cheap part), so similar training-cost numbers in the forthcoming Kimi K2 report should provoke a smaller reaction.
Still, as more noise builds around the K2 release (Moonshot is expected to publish a technical report soon), this could evolve very rapidly. We've already seen quick experiments slotting it into the Claude Code application (because Kimi's API is Claude-compatible) and K2 topping many "vibe tests" and creativity benchmarks. There are also tons of fun technical details that I don't have time to go into — from the use of a relatively unproven optimizer, Muon, to the scaled-up self-rewarding LLM-as-a-judge pipeline in post-training. A tidbit that shows how much this matters relative to the noisy Grok 4 release last week: Kimi K2 has already surpassed Grok 4 in API usage on the popular OpenRouter platform.
Later in the day on the 11th, following the K2 release, OpenAI CEO Sam Altman shared the following message regarding OpenAI's forthcoming open model: "we planned to launch our open-weight model next week. we are delaying it; we need time to run additional safety tests and review high-risk areas. we are not yet sure how long it will take us. while we trust the community will build great things with this model, once weights are out, they can’t be pulled back. this is new for us and we want to get it right. sorry to be the bearer of bad news; we are working super hard!"
Many read this as a reactive move by OpenAI to get out from under the shadow of Kimi K2's excellent release and another DeepSeek media cycle.
Someone at OpenAI has said the rumor that Kimi caused the delay of their open model is very likely untrue, but this is what being on the back foot looks like: when you're on the back foot, narratives like this are impossible to control.
We need leaders at the closed AI laboratories in the U.S. to rethink some of the long-term dynamics they're battling with R&D adoption. We need to mobilize funding for great, open science projects in the U.S. and Europe. Until then, this is what losing looks like if you want The West to be the long-term foundation of AI research and development. Kimi K2 has shown us that one "DeepSeek Moment" wasn't enough for us to make the changes we need, and hopefully we don't need a third.
NotebookLM adds featured notebooks from The Economist, The Atlantic and others
Techcrunch • July 14, 2025
Technology•AI•Research•Notebooks•Google
Google is enhancing its AI-powered research and note-taking assistant, NotebookLM, by introducing a series of featured notebooks. These curated collections, developed in collaboration with various authors, publications, researchers, and nonprofits, aim to provide users with in-depth explorations across a wide range of topics, including health, travel, financial analysis, and more.
The initial lineup of featured notebooks includes:
Longevity advice from Eric Topol, bestselling author of "Super Agers: An Evidence-Based Approach to Longevity"
Expert analysis and predictions for the year 2025 as shared in The Economist's annual report, "The World Ahead"
An advice notebook based on Arthur C. Brooks' "How to Build A Life" columns in The Atlantic
A science fan's guide to visiting Yellowstone National Park, complete with geological explanations and biodiversity insights
An overview of long-term trends in human well-being published by the University of Oxford-affiliated project, Our World In Data
Science-backed parenting advice based on psychology professor Jacqueline Nesi's popular Substack newsletter, Techno Sapiens
The Complete Works of William Shakespeare, for students and scholars to explore
A notebook tracking the Q1 earnings reports from the top 50 public companies worldwide, for financial analysts and market watchers alike
These featured notebooks are designed to offer users working examples of how NotebookLM can be utilized to delve deeper into subjects of interest. Users can read the original source material, pose questions, explore topics, and receive answers that include citations. Additionally, they can listen to pre-generated Audio Overviews or browse the notebook's main themes using the app's Mind Maps feature.
This initiative builds upon the recently launched feature that allows users to publicly share their notebooks with others on the app. Since its debut last month, Google reports that more than 140,000 public notebooks have been shared. The company plans to expand its collection of featured notebooks in the coming months, including more collaborations with The Economist and The Atlantic.
The featured collection of notebooks is rolling out to NotebookLM on the desktop starting today.
🔮 Kimi K2 is the model that should worry Silicon Valley
Exponentialview • Azeem Azhar • July 15, 2025
Technology•AI•MachineLearning•OpenSource•Innovation
In October 1957, Sputnik 1 proved that the USSR could breach Earth’s gravity well and shattered Western assumptions of technological primacy. Four years later, Vostok 1 carried Yuri Gagarin on a single loop around the Earth, confirming that Sputnik was no fluke and that Moscow’s program was accelerating.
In today’s AI, DeepSeek plays the Sputnik role – as we called it in December 2024 – as an unexpectedly capable Chinese open-source model that demonstrated a serious technical breakthrough.
Now AI has its Vostok 1 moment. Chinese startup Moonshot’s Kimi K2 model is cheap, high-performing and open-source. For American AI companies, the frontier is no longer theirs alone.
In today’s analysis, we’ll get you up to speed on Kimi K2, including:
What Kimi K2 is and how it works – its architecture, optimizer and training process, and how it was developed inexpensively and reliably on export-controlled chips.
Why Kimi K2 matters strategically – how it shifts the centre of AI gravity, particularly on efficiency, and why it’s a wake-up call for US incumbents.
What comes next – the implications for open-source versus closed-source, AGI strategy, and China’s growing AI advantage.
What’s so special about Kimi K2?
First off, it’s not a Kardashian 😂. But it is engineered for mass attention. Only here, the mechanism is literal. Like DeepSeek, Kimi K2 uses a mixture-of-experts (MoE) architecture, a technique that lets it be both powerful and efficient. Instead of processing every input with the entire model (which is slow and costly), MoE allows the model to route each task to a small group of specialized “experts.” Think of it like calling in just the right specialists for a job, rather than using a full team every time.
K2 packs one trillion parameters, the largest for an open-source model to date. It routes that capacity through 384 experts, of which eight – roughly 32 billion parameters – activate for each query. Each expert hones a niche. This setup keeps each forward pass fast while recovering depth through selective expert activation, delivering top-tier performance at a fraction of the compute cost.
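The routing idea is easy to sketch. Below is a minimal, hypothetical illustration in Python: a toy router with K2's 384-expert / 8-active shape, not Moonshot's actual implementation (real MoE layers route per token inside each transformer block and add load-balancing machinery).

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=8):
    """Route one token through a sparse mixture of experts.

    x        : (d,) token representation
    gate_w   : (d, n_experts) router weights
    experts  : list of callables, experts[i](x) -> (d,)
    Only top_k experts run; the other 376 are skipped entirely.
    """
    logits = x @ gate_w                      # score every expert
    top = np.argsort(logits)[-top_k:]        # pick the k highest-scoring
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over chosen experts only
    # Weighted sum of just the selected experts' outputs
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy setup matching K2's shape: 384 experts, 8 active per token
rng = np.random.default_rng(0)
d, n_experts = 16, 384
gate_w = rng.normal(size=(d, n_experts))
experts = [lambda x, W=rng.normal(size=(d, d)) / d: W @ x
           for _ in range(n_experts)]
out = moe_forward(rng.normal(size=d), gate_w, experts)
print(out.shape)  # (16,)
```

The compute saving is the point: only 8/384 of the expert parameters touch any given token, which is how a trillion-parameter model can run with ~32B active parameters.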
But Kimi K2 didn’t start from scratch. It built directly on DeepSeek’s open architecture.
One of the most beautiful curves in ML history
It’s a textbook case of the open innovation feedback loop, where each release seeds the next and shared designs accelerate the whole field. That loop let Kimi K2 focus on the next innovation: its approach to training.
Training a large language model is like adjusting millions of tiny knobs – each one a parameter that nudges the model toward fewer mistakes. The optimizer decides how large each adjustment should be. The industry standard, AdamW, updates each parameter based on recent trends and gently nudges it back toward zero. But at massive scale, this can go haywire. Loss spikes – sudden jumps in error – can derail training and force costly restarts.
Moonshot’s MuonClip optimizer introduces two innovations to improve the training and stability of AI systems.
First, it adds “second-order” insight, meaning it doesn’t just look at how the model is learning (via gradients), but also how those gradients themselves are changing. This helps the model make sharper, more stable updates during training to improve both speed and reliability.
Second, it adds a safety mechanism called QK-clipping to the attention mechanism. Normally, when the model calculates how words relate to each other (by multiplying ‘query’ and ‘key’ weights), those values can sometimes become too large and destabilize the system. QK-clipping caps those scores before they spiral out of control, acting like a circuit-breaker to keep the model focused and stable.
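The circuit-breaker idea can be sketched in a few lines. This is an illustrative standalone version under a simple assumption (rescale both projections when the largest logit exceeds a threshold); the real MuonClip applies the cap per attention head inside the training loop, so treat the function below as a sketch, not Moonshot's implementation.

```python
import numpy as np

def qk_clip(W_q, W_k, X, tau=100.0):
    """Sketch of QK-clipping: if the largest query-key attention logit
    exceeds a threshold tau, rescale the projection weights so it cannot.

    W_q, W_k : (d, d_head) query/key projection weights
    X        : (seq, d) token representations
    """
    Q, K = X @ W_q, X @ W_k
    max_logit = np.abs(Q @ K.T).max() / np.sqrt(W_q.shape[1])
    if max_logit > tau:
        # Logits scale with the product of the two projections, so each
        # side takes the square root of the correction factor.
        gamma = np.sqrt(tau / max_logit)
        W_q, W_k = W_q * gamma, W_k * gamma
    return W_q, W_k

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 64)) * 10          # deliberately large activations
W_q, W_k = rng.normal(size=(64, 16)), rng.normal(size=(64, 16))
W_q2, W_k2 = qk_clip(W_q, W_k, X, tau=100.0)
Q, K = X @ W_q2, X @ W_k2
print(np.abs(Q @ K.T).max() / np.sqrt(16) <= 100.0 + 1e-6)  # True
```

The cap never fires on well-behaved batches; it only intervenes when logits start to spiral, which is what makes it behave like a circuit-breaker rather than a constant regularizer.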
The result is “[o]ne of the most beautiful loss curves in ML history,” as AI researcher Cedric Chee put it. Training runs longer, more reliably, and at an unprecedented scale for open source.
These gains would have unlocked massive compute savings: research from earlier this year estimates that Muon optimizers are roughly twice as computationally efficient as AdamW. That matters under export controls, since Moonshot likely had to train K2 on compliant A800 and H800 hardware instead of flagship H100s. The training ran on more than 15.5 trillion tokens, roughly 50x GPT-3’s intake, without a single loss spike, catastrophic crash or reset. Given all this, training was likely relatively inexpensive — probably in the low tens of millions of dollars.
Beyond its architecture and optimizer, Kimi K2 was trained with agentic capabilities in mind. Moonshot built simulated domains filled with real and imaginary tools, then let competing agents solve tasks within them. An LLM judge scored the outcomes, retaining only the most effective examples. This taught K2 when to act, pause, or delegate. Even without a chain-of-thought layer, where the model generates intermediate reasoning steps before answering, the public Kimi-K2-Instruct checkpoint performs impressively on tool-use, agentic, and STEM-focused benchmarks, matching or exceeding GPT-4.1 and Claude 4 Sonnet. In a quite different register, it also ranks as the best short-story writer.
Artificial Analysis notes that Kimi K2 is noticeably more verbose than other non-reasoning models like GPT-4o and GPT-4.1. In their classification it sits between reasoning and non-reasoning models. Its token usage is up to 30% lower than Claude 4 Sonnet and Opus in maximum-budget extended-thinking modes, but nearly triple that of both models when reasoning is disabled.
Still, it currently doesn’t leverage chain-of-thought reasoning. Moonshot will likely release a model which adds this in the future. If it mirrors DeepSeek’s leap from V3 to R1, it could close the gap with closed-source giants on multi-step reasoning and potentially become the best overall model. But that’s not guaranteed.
Even with its verbosity, pricing remains one of Kimi K2’s key strengths. Moonshot has taken DeepSeek’s foundation and improved it across the board, pushing out the price-performance frontier. The public API lists rates at $0.15 per million input tokens and $2.50 per million output tokens. This makes it 30% cheaper than Gemini 2.5 Flash on outputs, and more than an order of magnitude cheaper than Claude 4 Opus ($15 in / $75 out), GPT-4o ($2.5 in / $10 out), or GPT-4.1 ($2 in / $8 out).
However, in practice, K2’s higher token output makes it more expensive to run than other open-weight models like DeepSeek V3 and Qwen3, even though it significantly outperforms them.
Still, it sits right on the edge of the cost-performance frontier, as it delivers near-frontier capability on agentic and coding tasks at unit economics that make sense for everyday product workloads. And those economics improve further if you run it on your own hardware.
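The pricing claims above are easy to sanity-check. A small sketch using the article's list prices; the per-query token counts are hypothetical, chosen to reflect the verbosity gap Artificial Analysis notes (K2 answering roughly 3x more verbosely than Claude with reasoning disabled).

```python
def query_cost(in_tokens, out_tokens, in_price, out_price):
    """Cost in dollars for one request; prices are $ per million tokens."""
    return (in_tokens * in_price + out_tokens * out_price) / 1e6

# List prices from the article ($/M input tokens, $/M output tokens)
kimi_k2 = (0.15, 2.50)
claude_opus = (15.0, 75.0)

# Hypothetical workload: same 2,000-token prompt, K2 ~3x more verbose output
k2_cost = query_cost(2_000, 1_500, *kimi_k2)
opus_cost = query_cost(2_000, 500, *claude_opus)
print(f"K2: ${k2_cost:.5f} per query, Opus: ${opus_cost:.5f} per query, "
      f"ratio: {opus_cost / k2_cost:.1f}x")
```

Even after tripling the output length, the per-query gap stays above an order of magnitude under these assumptions, which is why the verbosity caveat matters mostly against other cheap open-weight models, not against closed frontier pricing.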
As I said in my introduction, Kimi K2 is China’s Vostok 1 moment: it is proof that China can not only match but push forward the state of the art under real-world constraints. And like Vostok, what matters most isn’t just the launch – but the chain reaction it sets off.
Why Kimi K2 matters
Within weeks of Gagarin’s Vostok flight, the US scrambled to close the gap. Alan Shepard’s 15-minute Freedom 7 hop on 5 May 1961 put the first American in space. Just twenty days later, President John F. Kennedy asked Congress to commit the nation to landing a man on the Moon before 1970.
K2 has now confirmed that China leads in AI-efficiency innovations. DeepSeek R1 proved that you could graft full chain-of-thought reasoning onto a sparse mixture-of-experts model simply by inventing a new reinforcement objective – Group Relative Policy Optimization (GRPO) – and reducing the reliance on expensive human-written dialogues.
Last week, Kimi K2 repeated the trick on the training side: the MuonClip optimizer keeps the gradient so well behaved that a trillion-parameter MoE can process 15.5 trillion tokens without a single loss spike while using about half the FLOPs of AdamW. Two genuine algorithmic advances, six months apart, both published under permissive licenses, have shifted the centre of gravity for efficiency innovation to Beijing rather than Palo Alto.
The next question is whether that shift actually matters.
AI Filmmaker Kavan Unveils Trailer for "Untold - The Immortal Blades Saga," an Entirely AI-Made Movie
X • Rainmaker1973 • July 4, 2025
X•AI
Key Takeaway: The boundaries of filmmaking are being pushed as AI filmmaker Kavan releases the trailer for "Untold - The Immortal Blades Saga," an innovative movie created completely using artificial intelligence technologies.
On July 4, 2025, AI filmmaker Kavan, known online as @Kavanthekid, shared a groundbreaking trailer for "Untold - The Immortal Blades Saga", a full-length film generated entirely by AI. This marks a significant milestone in the creative and entertainment industries, showcasing how emerging AI tools can produce complex, cinematic content without traditional human involvement.
The trailer, available through Kavan’s Twitter announcement, illustrates the possibilities of AI-driven storytelling, blending automated scriptwriting, AI-powered animation, voice synthesis, and video editing into a cohesive narrative. This project demonstrates not only technical innovation but also sparks discussion about the future role of AI in artistic expression.
Audiences and industry observers are keen to see how this AI-generated movie performs and influences filmmaking norms moving forward. Kavan’s work invites creators and technologists alike to rethink collaboration, creativity, and the very definition of a filmmaker in an AI-enhanced future.
In Favor of Forgetting
Usv • Rebecca Kaden • July 21, 2025
Technology•AI•Memory•Personalization•Privacy
A few months ago, I asked ChatGPT to “remember” that my 7-year-old son, Max, is a sports fanatic. We were stuck in traffic on a long drive, and my best bet to delay the inevitable iPad handover was to get ChatGPT to generate math word problems. I added, “Use sports in the examples, especially basketball, hockey, and soccer. And teams like the Knicks, Sharks, and Real Madrid.”
ChatGPT nodded (metaphorically). The little memory dots pulsed in agreement. We spent the next 20 minutes happily figuring out how many players you’re left with if you start with five Knicks starters, add two subs, and then someone fouls out. Victory.
ChatGPT really took that memory to heart.
Since that April afternoon, I have not been able to escape the Knicks. They sneak into travel itineraries. They pop up in analogies while I’m doing diligence on robotics startups. Last week, I asked for an image of a “sticky spider web” to accompany a blog post on systems of record and the spiders were all wearing Knicks jerseys. It took multiple prompts to get them to take off the uniforms.
I’ve even told ChatGPT to forget Max’s fandom. No luck. Like that one friend from middle school who will never stop bringing up your misguided talent show solo, ChatGPT is fully committed to my past. It remembers Max’s fandom and, by association, mine.
This is no doubt a minor bug, destined for the fix-it list. But it’s gotten me thinking more broadly about memory in AI.
At USV, we’ve been deep in conversation about how memory—specifically, personal data aggregated over time—might be the golden ticket in this new platform race. The product that remembers us—our preferences, our quirks, our contradictions—may be the one that wins the longevity game.
But in that future, what happens to the beauty of forgetting?
In a hyper-personalized ecosystem, how do we keep our preferences dynamic and implied rather than rigid and engraved? What about the ones that are under the surface, still forming, not yet ready to be named?
@Teknosaur on X nailed it, referencing Inside Out: memory isn’t passive storage; it’s “an active, emotionally weighted system.”
So maybe the best AI memories won’t be flawless databases. Maybe they’ll be artful editors, skilled at remembering imperfectly. Knowing which signals are core, which are passing whims, and which should fade gently into the background.
Ultimately, we want our tools to know us better than we know ourselves. But we also want the freedom to evolve, change our minds, and shed old identities (without having to explicitly declare that we’re now, god forbid, kind of into the Warriors.)
As AI gets more personal, I’m hoping it learns not just what to remember but what to quietly forget.
The Endless Rebranding of AI
Spyglass • July 21, 2025
Technology•AI•ArtificialIntelligence•AGI•Superintelligence
The term "Artificial Intelligence" (AI) has undergone significant evolution since its inception in the 1950s. Initially a niche concept, AI gained prominence in the 1980s and experienced a surge in public awareness following IBM's 'Deep Blue' defeating Garry Kasparov in 1997. This event, coupled with the release of the film "A.I." directed by Steven Spielberg, brought AI into mainstream discourse. Larry Page's 2000 vision for Google's future further highlighted AI's growing importance.
As AI technology advanced, the terminology expanded to encompass more specific concepts. The rise of Machine Learning and Deep Learning provided more granular distinctions within the field. This progression led to the introduction of "Artificial General Intelligence" (AGI), denoting systems capable of performing tasks at human-level proficiency. However, the rapid development of Large Language Models (LLMs) has introduced ambiguity in defining AGI, complicating legal agreements and partnerships.
For instance, Microsoft's agreements with OpenAI include clauses that allow OpenAI to deny technology to Microsoft if new models achieve AGI, defined as "a highly autonomous system that outperforms humans at most economically valuable work." This vague definition has led to disputes, with OpenAI's board having significant control over determining AGI's achievement. Elon Musk has also utilized this term in lawsuits against OpenAI, highlighting the contentious nature of AGI's definition.
The ambiguity surrounding AGI has contributed to the emergence of the term "Superintelligence." While AGI refers to human-level intelligence, Superintelligence implies capabilities beyond human intelligence. This distinction has been emphasized in various contexts, including discussions about the future of AI and its potential impact on society.
In the corporate arena, companies are adopting these evolving terms to position themselves strategically. OpenAI co-founder Ilya Sutskever left to establish "Safe Superintelligence," signaling a focus on developing advanced AI safely. Meta has also rebranded its AI efforts around "Superintelligence," with CEO Mark Zuckerberg discussing "Personal Superintelligence" to differentiate their approach. Microsoft, through Mustafa Suleyman, introduced "Humanist Superintelligence," emphasizing AI's role in addressing societal challenges.
This trend of rebranding reflects the industry's attempt to define and differentiate their AI initiatives amidst the evolving landscape. The fluidity of these terms underscores the challenges in establishing clear definitions and the competitive nature of the AI sector.
🧠David Friedberg explains the great hope of Artificial Superintelligence
Youtube • All-In Podcast • July 21, 2025
Technology•AI•ArtificialSuperintelligence•Future•Innovation
When AI Agents Knock, Will Your Data Platform Answer? – Venrock’s investment in Collate.
Venrock • July 15, 2025
Business•Investment•DataIntelligence•AI•OpenSource
In the rapidly evolving field of artificial intelligence (AI), traditional data platforms—designed for manual processes and engineer-driven dashboards—are increasingly inadequate. As organizations adopt AI agents to automate tasks ranging from customer interactions to risk monitoring, they encounter a critical challenge: ensuring their data is not only accessible but also trustworthy and actionable for these autonomous systems.
Recognizing this need, Venrock has led a $10 million Series A funding round in Collate, the creator of OpenMetadata, an open-source project focused on data intelligence. This investment aims to accelerate Collate's mission of providing AI-powered data solutions tailored for enterprise environments. The funding round also saw participation from Unusual Ventures and Karman Ventures. (prnewswire.com)
Collate's platform addresses several key issues faced by modern data teams:
Disconnect Between Business and Technical Teams: Facilitating seamless collaboration to bridge the gap between business objectives and technical execution.
Lack of Trust in Data: Enhancing data quality and governance to build confidence across the organization.
Manual, Fragmented Tooling: Automating tasks and integrating workflows to improve productivity and reduce manual errors.
By leveraging AI, Collate's platform automates routine tasks and fosters collaboration among data teams, enabling them to deliver data that is ready for AI applications. Built upon the open-source foundation of OpenMetadata, Collate offers a unified solution that combines agent-driven workflows with human workspaces. OpenMetadata has experienced rapid growth, recently receiving a $10,000 grant from Bloomberg's Free and Open Source Software Contributor Fund, underscoring its significance in the open-source community. (prnewswire.com)
The leadership team at Collate brings extensive experience to the table. Co-founders Suresh Srinivas and Sriharsha Chintalapani have over four decades of combined experience, having contributed to the development of industry-standard tools like Apache Hadoop, Apache Kafka, and Uber DataBook. They were also founders of Hortonworks, a notable player in the big data ecosystem. Their vision for Collate is to deliver AI-powered data discovery, observability, and governance, providing modern data teams with the tools necessary to manage data in the AI era. (prnewswire.com)
The investment from Venrock is poised to accelerate Collate's growth and innovation in the data intelligence space. The new capital will be allocated across several strategic areas:
Accelerating the OpenMetadata Community: Expanding the open-source community to foster collaboration and innovation.
Expanding Engineering Investment in AI Agent Development: Enhancing the platform's AI capabilities to better serve enterprise needs.
Scaling Go-to-Market Operations: Targeting enterprise and cloud-native organizations to broaden the platform's reach.
Enhancing Customer Success Services: Providing robust support to Fortune 500 customers to ensure successful implementation and utilization of the platform. (prnewswire.com)
By addressing the critical challenges of data accessibility, trust, and automation, Collate is well-positioned to empower organizations in their AI initiatives, enabling them to unlock the full potential of their data assets.
The Question I Ask Myself Before I AI
Tomtunguz • July 19, 2025
Technology•AI•Automation•Collaboration•ContextManagement
In working with AI, I’m stopping before typing anything into the box to ask myself a question: what do I expect from the AI?
2x2 to the rescue! Which box am I in?
On one axis, how much context I provide: not very much to quite a bit. On the other, whether I should watch the AI or let it run.
If I provide very little information & let the system run: ‘research Forward Deployed Engineer trends,’ I get throwaway results: broad overviews without relevant detail.
Running the same project with a series of short questions produces an iterative conversation that succeeds - an Exploration.
“Which companies have implemented Forward Deployed Engineers (FDEs)? What are the typical backgrounds of FDEs? Which types of contract structures & businesses lend themselves to this work?”
When I have a very low tolerance for mistakes, I provide extensive context & work iteratively with the AI. For blog posts or financial analysis, I share everything (current drafts, previous writings, detailed requirements) then proceed sentence by sentence.
Letting an agent run freely requires defining everything upfront. I rarely succeed here because the upfront work demands tremendous clarity - exact goals, comprehensive information, & detailed task lists with validation criteria - an outline.
These prompts end up looking like the product requirements documents I wrote as a product manager.
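The 2x2 can be written down directly. A small sketch; only "Exploration" is the essay's own label, and the other quadrant names are illustrative glosses of the behavior described above.

```python
def quadrant(context: str, mode: str) -> str:
    """Map the two axes to a working style.

    context : "low" or "high"  (how much you give the AI up front)
    mode    : "watch" or "run" (iterate alongside it, or let it go)
    """
    table = {
        ("low", "run"): "Throwaway: broad overviews without relevant detail",
        ("low", "watch"): "Exploration: an iterative conversation of short questions",
        ("high", "watch"): "Careful collaboration: full context, sentence by sentence",
        ("high", "run"): "Agent with a PRD: everything defined upfront",
    }
    return table[(context, mode)]

print(quadrant("low", "watch"))
```

Asking "which box am I in?" before typing is just evaluating this function by hand.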
The answer to ‘what do I expect?’ will get easier as AI systems access more of my information & improve at selecting relevant data. As I get better at articulating what I actually want, the collaboration improves.
I aim to move many more of my questions out of the top left bucket - how I was trained with Google search - into the other three quadrants.
I also expect this habit will help me work with people better.
OpenAI to take cut of ChatGPT shopping sales in hunt for revenues
Ft • July 16, 2025
Technology•AI•ECommerce•BusinessModel•Innovation
OpenAI is pursuing new revenue streams by integrating shopping functionalities into ChatGPT and taking a cut from sales generated through the AI platform. This strategic move comes as part of OpenAI’s broader efforts to diversify income sources beyond subscriptions and enterprise deals. By embedding commerce capabilities, ChatGPT users can interact directly with products and brands, facilitating a seamless shopping experience within the conversational AI interface.
Key Details and Strategic Context
OpenAI plans to monetize ChatGPT-enhanced shopping by taking commissions from sales transactions initiated via the AI chatbot.
This approach aims to capitalize on the surge in AI-driven consumer interactions and the increasing integration of AI in e-commerce.
The monetization strategy reflects the company's need to establish sustainable revenue streams amidst the high costs of AI development and growing competition.
Embedding shopping features into ChatGPT positions the platform not just as a conversational tool but as a transactional gateway linking users with retailers.
Implications for the Market and Users
OpenAI’s revenue-focused shift signals a broader trend of AI platforms becoming more commerce-oriented, potentially reshaping how consumers shop online. This integration could increase conversion rates for retailers by providing personalized recommendations and instant purchasing options directly through AI dialogue. However, it also raises questions about user privacy, data usage, and the influence of AI on consumer choices, which OpenAI and partners will need to address responsibly.
For consumers, the convenience of embedded shopping tools in ChatGPT may enhance online purchasing experiences by streamlining product discovery and checkout processes. From OpenAI’s perspective, taking a share of these transactions could provide a lucrative income stream that supports continued innovation and investment in AI technology.
Broader Industry Impact
OpenAI’s move may encourage other AI firms to explore similar commerce integrations, intensifying competition in the AI-powered shopping space. It also highlights the evolving role of conversational AI from purely informational assistants to interactive platforms with transactional capabilities. This evolution could lead to new partnerships between AI developers, retailers, and brands aiming to leverage AI for enhanced customer engagement and sales growth.
Overall, embedding shopping and taking a cut from sales positions OpenAI to benefit from the ongoing digital transformation of retail, while extending ChatGPT’s role in everyday digital interactions.
China Is Spending Billions to Become an A.I. Superpower
Nytimes • July 15, 2025
Technology•AI•Investment•China•GlobalLeadership
China is making substantial investments to establish itself as a global leader in artificial intelligence (AI). In January 2025, the government launched the National AI Industry Investment Fund with an initial capital of 60 billion yuan (approximately $8.2 billion). This fund aims to support technological innovation in areas such as AI, quantum technology, and hydrogen energy storage. (scmp.com)
Major Chinese technology companies are also significantly increasing their AI expenditures. ByteDance, the parent company of TikTok, plans to invest over $12 billion in AI infrastructure in 2025, including $5.5 billion for AI chip purchases in China and $6.8 billion overseas to enhance foundational model training capabilities using Nvidia chips. (ft.com) Tencent has announced plans to boost its capital expenditure in 2025, with AI as a key focus of strategic investments. (investing.com)
These investments are part of China's broader strategy to reduce reliance on foreign technology and strengthen its domestic AI capabilities. The country's AI capital spending is projected to reach between $84 billion and $98 billion by 2025, reflecting a significant commitment to this goal. (scmp.com)
Cognition Buys Windsurf, Nvidia Can Sell to China, Grok 4 and Kimi
Stratechery • Ben Thompson • July 15, 2025
Technology•AI•Acquisition•EnterpriseSoftware•Coding
Cognition AI has announced its acquisition of Windsurf, an integrated development environment (IDE) platform, aiming to strengthen its position in the enterprise software and AI-driven coding markets. This move follows Google's $2.4 billion deal with Windsurf, which focused on talent acquisition and technology licensing. Windsurf, backed by investors like Kleiner Perkins and General Catalyst, was last valued at $1.25 billion and generates $82 million in annual recurring revenue with over 350 enterprise clients. Cognition's acquisition includes Windsurf's intellectual property, product line, brand, and experienced teams across engineering, product, and go-to-market functions. Although financial terms were not disclosed, the acquisition highlights intensifying competition among major tech firms like Alphabet and Meta to acquire top AI talent. Windsurf, which had also been in acquisition talks with OpenAI possibly valuing it at $3 billion, will initially operate independently. Cognition plans to invest heavily in integrating Windsurf technology into its offerings, including its key product, the autonomous agent Devin. Windsurf interim CEO Jeff Wang expressed strong support for the acquisition, describing Cognition as the ideal partner to advance Windsurf’s growth.
Trump AI Czar David Sacks Defends Reversal of China Chip Curbs
Bloomberg • Brunella Tipismana Urbano, Edward Ludlow • July 15, 2025
Technology•AI•Semiconductors•TradePolicy•Innovation
White House AI adviser David Sacks defended the Trump administration’s decision to allow Nvidia Corp. and Advanced Micro Devices Inc. to resume sales of certain artificial intelligence chips to China, reversing earlier export restrictions. In an interview, Sacks stated that permitting Nvidia to restart shipments of its H20 chips would enable the U.S. to compete more effectively internationally and counteract efforts by Chinese tech giant Huawei Technologies Co. to expand its global market share. He emphasized that the U.S. is not selling the most advanced chips to China but aims to prevent Huawei from gaining a larger portion of the market. (news.bloomberglaw.com)
Sacks also downplayed concerns about potential smuggling of AI chips, noting that these are large server racks weighing up to two tons, making them difficult to conceal. He expressed apprehension that overly restrictive regulations could impede U.S. technological progress and inadvertently push global markets toward China. He criticized the previous administration's export restrictions, stating that such measures could backfire and push countries into China's arms. (benzinga.com)
Furthermore, Sacks highlighted China's rapid advancements in AI, noting that Chinese AI models are only three to six months behind those of the U.S., indicating a very close race in AI development. He warned that excessive regulation of AI in the U.S. could potentially hinder American innovation in the field. (reuters.com)
The Rise of the Agent Manager
Tomtunguz • July 13, 2025
Technology•AI•AgentManagement•Productivity•Automation
If 2025 is the year of agents, then 2026 will surely belong to agent managers.
Agent managers are people who can manage teams of AI agents. How many can one person successfully manage?
I can barely manage 4 AI agents at once. They ask for clarification, request permission, issue web searches—all requiring my attention. Sometimes a task takes 30 seconds. Other times, 30 minutes. I lose track of which agent is doing what & half the work gets thrown away because they misinterpret instructions.
This isn’t a skill problem. It’s a tooling problem.
Physical robots offer clues about robot manager productivity. MIT published an analysis in 2020 suggesting the average robot replaced 3.3 human jobs. In 2024, Amazon reported its pick, pack, and ship robots replaced 24 workers.
But there’s a critical difference: AI is non-deterministic. AI agents interpret instructions. They improvise. They occasionally ignore directions entirely. A Roomba can only dream of the creative freedom to ignore your living room & decide the garage needs attention instead.
Management theory often guides teams to a span of control of 7 people.
Speaking with some better agent managers, I’ve learned they use an agent inbox, a project management tool for requesting AI work & evaluating it. In software engineering, Github’s pull requests or Linear tickets serve this purpose.
Very productive AI software engineers manage 10-15 agents by specifying 10-15 tasks in detail, sending them to an AI, waiting until completion & then reviewing the work. Half of the work is thrown away, & restarted with an improved prompt.
The agent inbox isn’t popular - yet. It’s not broadly available.
But I suspect it will become an essential part of the productivity stack for future agent managers because it’s the only way to keep track of the work that can come in at any time.
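The workflow described above — specify tasks, dispatch them to agents, review, and restart rejected work with a better prompt — can be sketched as a minimal agent inbox. This is a toy illustration only; the class and status names are invented, and real tooling would sit on top of something like GitHub pull requests or Linear tickets.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    QUEUED = "queued"
    RUNNING = "running"
    NEEDS_REVIEW = "needs_review"
    ACCEPTED = "accepted"
    REJECTED = "rejected"  # thrown away; restarted with an improved prompt

@dataclass
class Task:
    agent: str
    prompt: str
    status: Status = Status.QUEUED
    attempts: int = 1

class AgentInbox:
    """Tracks which agent is doing what, so work isn't lost mid-flight."""

    def __init__(self) -> None:
        self.tasks: list[Task] = []

    def submit(self, agent: str, prompt: str) -> Task:
        task = Task(agent=agent, prompt=prompt)
        self.tasks.append(task)
        return task

    def pending_review(self) -> list[Task]:
        # The manager's queue: finished work waiting for human evaluation.
        return [t for t in self.tasks if t.status is Status.NEEDS_REVIEW]

    def reject_and_retry(self, task: Task, improved_prompt: str) -> Task:
        # Half the work gets thrown away & restarted with a better prompt.
        task.status = Status.REJECTED
        retry = self.submit(task.agent, improved_prompt)
        retry.attempts = task.attempts + 1
        return retry
```

Even this crude version shows why an inbox helps: the bottleneck is not dispatching work but keeping review state straight across 10-15 concurrent agents.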
If ARR per employee is the new vanity metric for startups, then agents managed per person may become the vanity productivity metric for individual workers.
In 12 months, how many agents do you think you could manage? 10? 50? 100? Could you manage an agent that manages other agents?
The Decade of Data with Tomasz Tunguz
Generalist • Mario Gabriele • July 22, 2025
Technology•AI•Investment•VentureCapital•DataInfrastructure
Tomasz Tunguz has spent almost two decades turning data into investment insights. After an impressive run at Redpoint Ventures, where he backed Looker, Expensify, Monte Carlo, and more, Tomasz launched Theory Ventures in 2022. His debut fund, which closed at $238 million, was followed 19 months later by a $450 million second fund.
Theory’s goal is simple but striking: to build an “investing corporation” where researchers, engineers, and operators sit alongside investors, arming the partnership with real‐time market maps, in‑house AI tooling, and domain expertise. Centered on data, AI, and crypto infrastructure, the firm operates at the very heart of many of today’s most consequential technological shifts.
In our conversation, we explore:
How Theory’s “investing corporation” model works
Why crypto exchanges could create a viable path to public markets for small-cap software companies
The looming power crunch—why data centers could consume 15% of U.S. electricity within five years
Stablecoins’ rapid ascent as major banks route 5‑10% of U.S. dollars through them
Why Ethereum faces an existential challenge similar to AWS losing ground to Azure in the AI era
Why Tomasz believes today’s handful of agents will become 100+ digital co‑workers by year‑end
Why Meta is betting billions on AR glasses to change how we interact with machines
How Theory Ventures uses AI to accelerate market research, deal analysis, and investment decisions
Much more
OpenAI Just Released ChatGPT Agent, Its Most Powerful Agent Yet
Youtube • Sequoia Capital • July 22, 2025
Technology•AI•MachineLearning•NaturalLanguageProcessing•Innovation
OpenAI has launched the ChatGPT Agent, its most powerful agent yet, designed to bring advanced AI capabilities into a more interactive and functional framework. This new iteration enhances the user experience by integrating deeper contextual understanding and more dynamic task management capabilities.
The ChatGPT Agent is built to operate with increased autonomy, enabling it to navigate complex scenarios with minimal user input. This marks a significant step forward in AI development, as it can now perform sophisticated tasks, provide thoughtful responses, and offer valuable assistance in real-time applications.
OpenAI’s innovative approach with ChatGPT Agent focuses on improving both the accuracy and relevance of its outputs. It utilizes enhanced models to interpret user queries more effectively, delivering precise and context-aware interactions. This addresses common challenges faced by previous versions that sometimes struggled with nuanced or multi-layered requests.
The deployment of the ChatGPT Agent also highlights OpenAI’s commitment to advancing AI ethics and safety. The new agent incorporates robust safeguards to ensure responsible AI behavior, minimizing risks related to misinformation or misuse. OpenAI continues to improve transparency and control mechanisms to offer users a secure and trustworthy AI experience.
By integrating these improvements, the ChatGPT Agent aims to serve a wide range of applications across industries, from customer support to professional content creation, enhancing productivity and enabling more personalized user interactions.
Studio level commercials with Veo3
X • EHuanglu • July 22, 2025
X•AI
AI Advances: Create Studio-Level Commercials with One Click Using JSON Prompts
Key takeaway: AI technology has reached a breakthrough allowing users to generate professional studio-quality commercials with a single click by using structured JSON prompts, demonstrating a remarkable leap in automated creative production.
Thread summary by @EHuanglu (2025-07-22):
Excitement is building as AI capabilities continue to accelerate. The recently shared innovation enables users to craft highly polished commercials—on par with professional studio standards—simply by providing a JSON-formatted prompt. This process can be done with a single click, drastically cutting production time and effort.
The Twitter thread highlights a practical example whereby the user can input JSON scripts to automatically generate full commercials, showcasing a seamless integration of creative prompting and AI-driven video production technology.
Included are 10 example prompts demonstrating the versatility and quality achievable through this approach, accessible via the link shared in the thread.
This advancement signals a transformative shift in content creation workflows, promising to democratize access to premium commercial production by reducing dependence on traditional manual video editing and direction.
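The thread does not reproduce its exact schema, but a structured JSON prompt of the kind it describes might look roughly like the following. The field names here are illustrative guesses, not Veo 3's actual input format or the prompts from the thread.

```python
import json

# Illustrative structure only -- the real schema used in the thread
# (and Veo 3's actual prompt format) may differ.
commercial_prompt = {
    "style": "studio commercial, cinematic lighting",
    "duration_seconds": 30,
    "scenes": [
        {"shot": "close-up", "subject": "product on marble counter",
         "camera": "slow dolly-in"},
        {"shot": "wide", "subject": "customer smiling at golden hour",
         "camera": "static"},
    ],
    "audio": {"music": "upbeat, modern", "voiceover": "warm, confident"},
}

print(json.dumps(commercial_prompt, indent=2))
```

The appeal of the approach is that a structured prompt like this pins down shot list, pacing, and audio direction in one machine-readable artifact, rather than a loose paragraph of natural language.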
Grok is the smartest
Therandomwalk • July 11, 2025
Technology•AI•MachineLearning•Innovation•ConsumerBehavior
In the rapidly evolving landscape of artificial intelligence, Grok, developed by xAI, has emerged as a leading model, consistently ranking at the top in various AI intelligence tests. This achievement underscores the relentless pace of innovation in the AI sector, making it increasingly challenging to predict future developments.
The swift advancement of AI models like Grok suggests a trend toward commoditization. However, this commoditization is occurring at a rapid pace, leading to models that are both expensive and short-lived.
On the consumer front, recent data indicates stability in credit card delinquencies, with figures remaining steady or even declining. This trend reflects consumers' prudent management of credit, despite economic uncertainties.
In the automotive sector, concerns about auto loan delinquencies have been linked to negative equity situations. As vehicle values depreciate, some consumers find themselves owing more on their loans than their cars are worth, leading to defaults. This phenomenon highlights the complexities of the auto loan market and the impact of vehicle valuation on loan performance.
Conversely, the housing market remains resilient. Nationwide negative mortgage equity is low, and even a 10% drop in home values would be manageable for most homeowners. This stability suggests that, barring significant economic downturns, the housing market is not facing a crisis.
These insights reflect a dynamic interplay between technological advancements and consumer behavior, emphasizing the need for continuous monitoring and analysis to navigate the complexities of the modern economic landscape.
Tech Philosophy and AI Opportunity
Stratechery • Ben Thompson • July 8, 2025
Technology•AI•TalentAcquisition•MarketDynamics•BusinessStrategy
One of the most paradoxical aspects of AI is that while it is hailed as the route to abundance, the most important financial outcomes have been about scarcity. The first and most obvious example has been Nvidia, whose valuation has skyrocketed while demand for its chips continues to outpace supply.
Another scarce resource that has come to the forefront over the last few months is AI talent; the people who are actually building and scaling the models are suddenly being paid more than professional athletes, and it makes sense:
The potential financial upside from "winning" in AI is enormous
Outputs are somewhat measurable
The work-to-be-done is the same across the various companies bidding for talent
It’s that last point that is fairly unique in tech history. While great programmers have always been in high demand, and there have been periods of intense competition in specific product spaces, over the past few decades tech companies have been franchises, wherein their market niches have been fairly differentiated:
Google and search
Amazon and e-commerce
Meta and social media
Microsoft and business applications
Apple and devices
This reality meant that the company mattered more than any one person, putting a cap on individual contributor salaries.
AI, at least to this point, is different: in the long run it seems likely that there will be dominant product companies in various niches, but as long as the game is foundational models, then everyone is in fact playing the same game, which elevates the bargaining power of the best players. It follows, then, that the team they play for is the team that pays the most, through some combination of money and mission; by extension, the teams that are destined to lose are the ones who can’t or won’t offer enough of either.
Apple + Anthropic?, Apple’s Fall, Apple’s Options
Stratechery • Ben Thompson • July 9, 2025
Technology•AI•ArtificialIntelligence•Siri•Partnership
Apple is considering integrating external artificial intelligence (AI) technologies into Siri, its voice assistant, by partnering with companies like Anthropic or OpenAI. This shift marks a significant departure from Apple's traditional reliance on in-house AI models. Discussions with both companies have focused on adapting their large language models (LLMs) to operate on Apple's cloud infrastructure for testing purposes. (cnbc.com)
The potential collaboration with Anthropic's Claude or OpenAI's ChatGPT models represents a substantial change in Apple's AI strategy. Historically, Apple has developed its AI features using proprietary technology, known as Apple Foundation Models, and had planned to introduce a new version of Siri utilizing this approach by 2026. (cnbc.com)
This exploration into third-party AI models is still in its early stages, and no final decision has been made. Apple continues to develop its internal project, codenamed "LLM Siri," which aims to enhance Siri using its own models. However, integrating external models could enable Apple to offer Siri features comparable to those of AI assistants on Android devices, potentially improving its competitive position in the AI market. (cnbc.com)
The consideration of external AI models has also led to internal challenges. Reports indicate that some engineers have left the company, and there are concerns about the impact on Apple's AI development team. (cincodias.elpais.com)
In summary, Apple's potential partnership with Anthropic or OpenAI to power Siri signifies a notable shift in its AI strategy, moving away from exclusive reliance on in-house models to incorporating external technologies. This approach aims to enhance Siri's capabilities and competitiveness in the rapidly evolving AI landscape.
How A.I. Could Make Us Dumber
Nytimes • July 3, 2025
Technology•AI•CognitiveDecline•CriticalThinking•Education
The increasing integration of artificial intelligence (AI) into daily life has sparked concerns about its impact on human cognition and education. While AI offers convenience and efficiency, there is a growing apprehension that overreliance on these technologies may lead to cognitive atrophy and diminished critical thinking skills.
A notable example is the case of a Paris schoolboy who used ChatGPT during a test, highlighting the challenges educational institutions face in enforcing anti-cheating rules due to widespread AI use. This incident underscores a broader issue: as AI tools become more prevalent, there is a risk that individuals may outsource essential cognitive functions, leading to a decline in intellectual engagement and problem-solving abilities. (ft.com)
Research supports these concerns. A study involving 319 knowledge workers found that increased reliance on AI for work-related tasks was associated with a decrease in self-reported critical thinking efforts. Participants noted that while AI could synthesize ideas and enhance reasoning, it also led to a reduction in the cognitive effort applied to their work, potentially eroding their critical thinking skills over time. (forbes.com)
Similarly, a study by Microsoft and Carnegie Mellon University researchers revealed that overreliance on AI tools could result in the deterioration of cognitive faculties that ought to be preserved. The study emphasized that while AI can enhance efficiency, it is crucial to use these tools in a manner that encourages critical engagement and preserves human judgment. (fastcompany.com)
The phenomenon of "algorithmic complacency" further illustrates this trend. This occurs when individuals allow AI to make decisions for them, leading to a passive consumption of information and a decline in independent thinking. For instance, users may rely on AI-generated summaries without critically evaluating the content, thereby diminishing their analytical skills. (mergesociety.com)
To mitigate these risks, experts suggest using AI as a tool to augment human intelligence rather than replace it. By engaging with AI-generated content critically and maintaining active participation in the learning process, individuals can preserve and even enhance their cognitive abilities. This approach involves using AI as a guide or mentor, prompting users to think more deeply and independently. (psychologytoday.com)
In conclusion, while AI offers significant benefits, it is essential to use these technologies thoughtfully to prevent the erosion of critical thinking skills. By maintaining an active role in processing and evaluating information, individuals can harness the advantages of AI without compromising their cognitive development.
It’s Time to Take Anthropic and OpenAI’s Wild Revenue Projections Seriously
Theinformation • Stephanie Palazzolo • July 3, 2025
Technology•AI•RevenueGrowth•BusinessModels•AIModels
Six months ago or so, when OpenAI and Anthropic projected how much revenue they would generate this year, the figures might have sounded crazy.
Not anymore! This week, Natasha and I reported that Anthropic has passed $4 billion in annualized revenue, up from a $3 billion rate it was at just a month ago and about $1 billion at the start of the year. At that growth rate, the company should easily pass its “base case” projection of $2 billion in revenue for the year, and hitting $4 billion in actual revenue—the “optimistic case”—looks well within sight.
OpenAI is a similar story, having just passed $10 billion in annualized revenue. That makes its goal of generating $13 billion in revenue in 2025 also look more than reasonable—even without the guaranteed $3 billion sum that SoftBank promised to pay OpenAI annually for its products, starting this year.
While both companies are burning a tremendous amount of cash to develop and run their artificial intelligence, people clearly want what they are selling, and their recent growth gives us no choice but to take their financial projections for the next couple of years seriously.
It’s easy to forget these companies were hardly generating any revenue at all in 2022. For 2027, the two companies have optimistically projected a combined $85 billion in revenue.
Both of these companies are growing for entirely different reasons, however, and that raises a number of interesting questions.
The driver of virtually all of OpenAI’s revenue is ChatGPT app subscriptions while Anthropic’s revenue is mostly coming from developers accessing its application programming interface—companies buying access to its models, which specialize in generating, debugging or reformatting computer code.
Both companies’ projections imply that these growth drivers won’t change. So where does that leave us on the big question of where value will accrue in the AI “stack?”
Nvidia, for now, has a near monopoly on profits from AI, thanks to its specialized chips that no one seems to be able to emulate.
The next question is what’s more valuable: the AI models that power the likes of ChatGPT or the consumer and enterprise apps and features powered by those models e.g. ChatGPT, 365 Copilot, Perplexity, Cursor and Glean?
Clearly, ChatGPT is here to stay and that makes a strong argument for the value of apps that ordinary people or businesses pay for. But Anthropic has, for now, defeated the naysayers who argued that models will be commoditized.
Even so, it’s worth watching Anthropic to see if it shifts more focus to generating revenue from the app layer of the stack. If it goes that way, it will be because of its role powering Cursor, which sells an AI-powered integrated development environment for software engineers.
Cursor’s revenue growth has been nothing short of astounding this year. And while that is directly contributing to Anthropic’s growth, Cursor is racing as fast as it can to wean itself off Anthropic models to improve its gross profit margins, which are probably not great right now.
Cursor’s success is almost certainly the reason Anthropic is doing its best Cursor impression these days: Anthropic in May made its own coding product, Claude Code, generally available and it seems to be going strong.
Will Claude Code be strong enough to generate the $15 billion in revenue Anthropic projected it would earn from Claude enterprise subscriptions in 2027? Maybe not.
But now that Cursor has hired away Claude Code’s leaders from Anthropic, the race is on.
Some investors think that AI coding models can be replicated by the likes of Cursor, thanks to the well-known techniques researchers use to get models to automatically generate lots of coding examples. However, making models that are good at more complex coding tasks, not just simple autocomplete, is more of an art than a science, researchers tell me.
That sort of work consists of teaching AI models how to perform coding tasks in an entire operating system of a computer, not just an isolated environment, and how to break down vague requests, like “can you fix this web server timing out on requests” into more manageable steps.
Such work requires model developers to spend a lot of time with developer customers to understand their coding processes and then figure out how to turn those processes into training data. That’s why all eyes are on Cursor as it tries to develop its own coding models.
Here’s what else is going on…
Lovable, which lets users without coding experience create apps with AI, is raising $150 million at a valuation of $1.8 billion, in a funding round led by Accel, with participation from 20VC and Creandum, the Financial Times reported.
Talon.One, which uses AI to help companies with customer loyalty programs, raised $135 million in funding led by Silversmith Capital Partners and Meritech Capital, with participation from CRV.
Wonderful, which develops multilingual AI for customer service, raised $34 million in a seed funding round led by Index Ventures.
Cyngn, which develops autonomous vehicles for industrial settings, raised $32 million in a recent funding round.
Dexter Energy, which uses AI to help energy companies forecast their energy needs, raised $27.1 million in a Series C funding round led by Klima, with participation from Mirova, ETF Partners, Newion and PDENH.
BetterYeah, which develops AI agents to help employees with tasks like visualizing data, raised $13.8 million in a Series B funding round led by Alibaba Cloud, with participation from Maintrend Capital, Tech In Asia reported.
OpenAI plans to rent an additional 4.5 gigawatts of compute capacity from Oracle in the U.S. as part of its Stargate data center plan, according to Bloomberg, a sign that the effort, which was announced in January, is progressing.
Tesla has halted purchasing components for its Optimus humanoid robot, as the company is making changes to the robot’s hardware and software designs, Chinese tech media LatePost reported.
The FTC has launched an in-depth investigation of SoftBank’s acquisition of Ampere, Bloomberg Law reported.
Perplexity launched a $200 per month subscription plan, which gives users unlimited access to Perplexity’s spreadsheet features and the most advanced AI models.
Google is rolling out its Veo 3 video AI model to users around the world.
A U.S. district court decided that it would not dismiss a lawsuit alleging that Huawei committed racketeering and fraud, and stole trade secrets. Huawei has denied the allegations.
The Future of Engineering Leadership
Oreilly • July 7, 2025
Technology•AI•SoftwareDevelopment•EngineeringManagement•Leadership
As we continue to explore the integration of Generative AI (GenAI) into software development, it's evident that Large Language Models (LLMs) are poised to significantly transform both the creation of software and the management of engineering organizations. This evolution raises important questions about the future of engineering roles, career trajectories, and organizational structures.
Over the past year, discussions with various engineering leaders have highlighted emerging patterns in this transformative landscape.
One notable shift is the automation of routine tasks. While engineering leaders will still engage in performance reviews, coaching, and one-on-one meetings, many of the repetitive aspects are becoming automated. In the near future, LLMs could analyze data from code commits, pull request comments, and communication platforms like Slack to generate initial recommendations, thereby streamlining the review process.
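As a rough illustration of that idea, a tool could aggregate per-engineer activity from commits and pull requests and hand the summary to an LLM as a starting point for a review draft. The data shape below and the final LLM call are hypothetical; a real system would pull these numbers from the Git host's API and keep a human manager in the loop.

```python
from collections import Counter

def build_review_prompt(engineer: str, commits: list[dict]) -> str:
    """Summarize commit activity into a prompt for a draft performance review.

    `commits` uses a made-up shape for illustration:
    {"repo": str, "files_changed": int, "review_comments": int}.
    """
    repos = Counter(c["repo"] for c in commits)
    total_files = sum(c["files_changed"] for c in commits)
    total_comments = sum(c["review_comments"] for c in commits)
    summary = (
        f"{engineer}: {len(commits)} commits across {len(repos)} repos, "
        f"{total_files} files changed, {total_comments} review comments received."
    )
    return (
        "Draft an initial performance-review summary from this activity. "
        "Flag anything a human manager should verify directly.\n\n" + summary
    )

# A real tool would then call an LLM with this prompt, e.g.:
# draft = ask_llm(build_review_prompt("alice", commits))  # hypothetical API
```

The point is not that the model writes the review, but that it compresses the mechanical data-gathering so the manager's time goes to judgment and coaching.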
Another critical aspect is the growing emphasis on technical fluency among engineering leaders. Companies are increasingly expecting directors and vice presidents to have a solid understanding of the technical stack, architectural decisions, and engineering trade-offs. This technical acumen enables leaders to remain closely connected to the work and enhances their credibility within the team.
Simultaneously, there's a heightened focus on strategic thinking. Organizations are looking for leaders who can comprehend the broader business context and guide their teams to improve business outcomes, even if it doesn't involve producing more code.
Maintaining team morale is also becoming more complex. As workflows become more automated and trustworthy, the role of individual contributors may evolve to resemble that of engineering managers, involving tasks like clarifying requirements, answering questions, and reviewing code produced by various software agents. This shift necessitates a reevaluation of job roles and career progression within engineering teams.
To navigate these changes effectively, engineering leaders can take several proactive steps:
Engage with the codebase: LLMs can assist leaders in quickly familiarizing themselves with unfamiliar codebases and programming languages, allowing them to apply their broad technical knowledge even when they are not experts in specific areas.
Focus on business objectives: Understanding the industry and business goals enables leaders to identify efficient ways to deliver meaningful results, aligning technical efforts with organizational success.
Enhance strategic rigor: In a rapidly changing technical landscape, it's crucial to adopt formal approaches to developing and validating strategies, ensuring they are robust and adaptable.
Cultivate kindness: Leading with empathy helps teams navigate transitions smoothly and retains top talent, fostering a positive and productive work environment.
By embracing these strategies, engineering leaders can position themselves and their organizations to thrive in the evolving landscape shaped by Generative AI.
Apple Loses Top AI Models Executive to Meta’s Hiring Spree
Bloomberg • July 7, 2025
Technology•AI•ArtificialIntelligence•TalentAcquisition•Meta
Ruoming Pang, Apple's top executive overseeing artificial intelligence models, is departing to join Meta Platforms Inc., marking a significant shift in the competitive AI landscape. Pang, who managed a team of approximately 100 engineers, was responsible for developing Apple's large language models that power features like Apple Intelligence, including email summaries, Priority Notifications, and Genmoji. (macrumors.com)
Meta has been aggressively expanding its AI capabilities, offering Pang a compensation package reportedly exceeding $200 million over multiple years to secure his expertise. This move is part of Meta's broader strategy to build a "superintelligence" division, with CEO Mark Zuckerberg personally involved in recruiting top AI talent. (macrumors.com)
The departure of Pang follows other notable hires by Meta, including Alexandr Wang, former CEO of Scale AI, and Daniel Gross, a startup founder. These efforts underscore Meta's commitment to advancing AI technologies and competing with industry leaders like OpenAI and Google. (reuters.com)
Apple's AI strategy is now overseen by Craig Federighi, head of software engineering, and Mike Rockwell, who led the development of the Apple Vision Pro headset and now leads engineering for Siri. The company has also opened its language models to external developers, aiming to expand applications on devices like the iPhone and iPad. (ndtv.com)
This development highlights the intensifying competition for AI talent in Silicon Valley, with companies offering substantial incentives to attract leading experts in the field.
Tokenization
Robinhood's OpenAI-linked token drop triggers demand debate
Theblock • July 12, 2025
Finance•Cryptocurrency•Tokenization•PrivateEquity•InvestmentAccess
Robinhood offering European investors tokenized shares linked to OpenAI grabbed headlines and sparked debate last week — especially after the AI company distanced itself from the project, saying the tokens don’t represent real equity.
But with the tokenization of real-world assets (RWAs) undoubtedly on the rise, Robinhood's efforts to attract customers in Europe with tokens tied to buzzy tech companies like OpenAI and SpaceX provoked plenty of conversation around private investors gaining access to investing in early-stage, private companies through a tokenization mechanism.
One key question to be answered: Are there a lot of retail investors interested in purchasing tokenized equity in private companies?
Dinari CEO Gabe Otte is not so sure.
"The key factor with any approach to tokenization is scalability … this approach is generalizable to private securities, and in fact one of our early business lines focused on this," Otte told The Block. "The issue we found was more about supply and demand. While this could change, we found a couple of years ago that most pre-IPO companies are hesitant to take the risk of dedicating a chunk of their cap table to buyers of tokenized assets."
Alongside major cryptocurrency exchanges like Coinbase and Kraken, Dinari is aiming to be a chief supplier for investors who want to buy tokenized equity in publicly traded companies, a slice of RWA investing that many believe could be a huge area for growth, maybe even worth more than USD-pegged stablecoins.
Last month, Dinari said it had become the first company to secure U.S. approval to offer clients tokenized equities. So far most companies offering tokenized shares have focused on selling tokens that track the price of popular stocks like Tesla, Apple, and Nvidia to non-U.S. traders.
According to Otte, Dinari operates differently from some platforms offering access to tokenized equities. "We use a tokenization-on-demand model, which means that when one of our partners’ customers wants to purchase shares, we go out to the traditional market, place that buy order, and tokenize the shares that were purchased specifically for that customer."
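The tokenization-on-demand flow Otte describes can be sketched as a toy ledger. This is a hypothetical illustration of the invariant he implies (tokens are only minted against shares actually purchased for that customer, so supply stays fully backed), not Dinari's actual system; all names and the broker interface are made up.

```python
from dataclasses import dataclass, field

@dataclass
class TokenLedger:
    """Toy sketch of tokenization-on-demand: tokens are minted only
    against shares already purchased on the traditional market."""
    shares_held: dict = field(default_factory=dict)    # custody account
    tokens_issued: dict = field(default_factory=dict)  # per (customer, ticker)

    def buy_on_market(self, ticker, qty):
        # Stand-in for routing a real buy order to a broker.
        self.shares_held[ticker] = self.shares_held.get(ticker, 0) + qty

    def mint_for_customer(self, customer, ticker, qty):
        # Unbacked minting is refused: tokens outstanding can never
        # exceed shares sitting in custody.
        outstanding = sum(q for (c, t), q in self.tokens_issued.items()
                          if t == ticker)
        if self.shares_held.get(ticker, 0) - outstanding < qty:
            raise ValueError("cannot mint more tokens than shares in custody")
        key = (customer, ticker)
        self.tokens_issued[key] = self.tokens_issued.get(key, 0) + qty

    def handle_order(self, customer, ticker, qty):
        # Tokenization on demand: purchase first, then tokenize exactly
        # what was purchased for this specific customer.
        self.buy_on_market(ticker, qty)
        self.mint_for_customer(customer, ticker, qty)
```

The key design choice is the order of operations in `handle_order`: the market purchase precedes minting, which is what distinguishes this model from a derivative that merely tracks a valuation.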
To be clear, Robinhood's OpenAI tokens are not tied to actual equity in the artificial intelligence firm, but rather they are a derivative that tracks the company's valuation. That means, unlike what Dinari is doing, Robinhood didn't buy shares in OpenAI and then tokenize them (something that wasn't initially clear when the company's CEO Vlad Tenev told European users they'd have a chance to acquire the stock tokens during a live broadcast last week).
Plume Network CEO Chris Yin largely views Robinhood's initiative as a net positive regardless of how it pans out. But he is doubtful about the actual demand for tokenized private equity.
"If you look at the numbers, you look in the stats, you look what's going on, it's very clear, the demand is not there," Yin told The Block. Plume Network's mainnet launched last month with $150 million in RWAs onchain. The company is tokenizing a diverse portfolio of real-world assets like solar farms, Medicaid claims, mineral rights, and consumer credit.
U.S. Securities and Exchange Commission Chairman Paul Atkins offered a brief but much different assessment last week when asked about the tokenization of private equity. "There is a lot of demand on the investor side to be able to invest in private products," he said in a televised interview with CNBC.
While Atkins didn't specifically address Robinhood's initiative, the SEC chairman said he views tokenization as an innovation and he's optimistic about it helping to bring forth more investment products.
If a push to tokenize private equity in growing companies is coming down the pike, Injective Head of Business Mirza Uddin is confident the demand will be there.
"By turning equity into digital tokens, it gives everyday investors a chance to participate in markets once limited to venture capital and private equity," Uddin told The Block. "This is not about creating new markets, but about making existing ones more accessible and liquid. That said, tokenized shares often do not carry traditional rights like voting power."
Injective is a technology provider in the burgeoning RWA space.
Kevin Rusher, founder of RWA startup RAAC, which is working to provide private investors with the chance to access a range of tokenized assets, including gold and subsidized housing, also sees growth potential. "Tokenization in private markets has surged significantly this year. Private credit is already the largest RWA category on RWA.xyz,” he told The Block.
According to analytics platform RWA.xyz, the real-world asset market is closing in on $25 billion with $14.5 billion of that total coming from private credit.
"We can expect to see a similar trend in private equity... it will continue to grow, but will come with a lot of risks which need to be addressed, and transparency is key," Rusher added.
Whether tokenized private equity plays can attract significant retail interest may come down to brand recognition because very few private companies command the kind of attention OpenAI and SpaceX do, according to Dinari's Otte.
"Unless the company is a major brand name already, we find that demand for these opportunities tends to lean more on broader sector or asset class plays as opposed to buying equity in individual private companies," he said.
Demand or no demand, to Fundrise co-founder and CEO Ben Miller, the tokenizing of private equity seems pointless.
"It makes no sense to me, honestly," Miller said in an interview with CNBC on Monday. "Because what you're doing is taking something that is inherently a long-term investment and making it something you can trade 24/7."
Fundrise, which launched a $1 billion fund in 2022, gives accredited and unaccredited investors access to private companies, but investors’ money goes into a venture fund that appears on a startup’s cap table like any other VC.
Robinhood Founder & CEO, Vlad Tenev: Robinhood’s $85BN Resurgence & Tokenizing SpaceX & OpenAI
Youtube • 20VC with Harry Stebbings • July 14, 2025
Technology•FinTech•Investment•Tokenization•Blockchain
Vlad Tenev, the founder and CEO of Robinhood, shares insights into the company’s resurgence with a valuation of $85 billion. He discusses Robinhood’s mission to democratize finance for all and how the company overcame challenges to achieve its current success.
Tenev explains the evolution of Robinhood’s platform and the importance of creating accessible financial tools for a broad audience. He highlights the role of technology in transforming investment opportunities and empowering retail investors.
A key focus of the conversation is how Robinhood is innovating by tokenizing assets such as SpaceX and OpenAI. This move aims to provide investors with new ways to participate in cutting-edge companies that traditionally have been difficult to access.
Throughout the discussion, Tenev emphasizes the intersection of finance and technology, illustrating how tokenization and blockchain can revolutionize the investment landscape. He envisions a future in which financial markets are more inclusive, transparent, and liquid.
The interview also covers regulatory challenges and the efforts Robinhood is making to maintain compliance while pushing the boundaries of financial innovation. Tenev reflects on lessons learned from past hurdles and the importance of building trust with customers.
Additionally, the role of community engagement and education in Robinhood’s strategy is underscored, as the company strives to equip users with knowledge to make informed investment decisions.
The overall message conveys a strong belief in technology as a force for positive change in finance, enabling broader participation and new asset classes through tokenization.
Congress Just Injected Crypto Into the Most Stable Part of the U.S. Economy
Nymag • Matt Stieb • July 18, 2025
Finance•Cryptocurrency•Stablecoins•Legislation•TreasuryMarket•Tokenization
There would be a great irony if cryptocurrency — which was created in 2008 to provide an alternative to the mainstream financial system that had just failed — led to another economic crash. But that is what a handful of experts fear could happen now that Congress has passed the GENIUS Act, a major piece of crypto legislation.
The goal of the new legislation, as stated by pro-crypto lawmakers on both sides of the aisle, is to regulate a growing sector of the economy that already has $238 billion at stake: stablecoins, which are so named because, unlike bitcoin, they are never supposed to fluctuate from the value of $1. The bill requires that stablecoins be tethered to safe, liquid assets that can keep them, well, stable. (The safest such assets are U.S. Treasury securities, which represent U.S. government debt.)
That’s a good thing, in the abstract, considering stablecoins have already failed many times to do the one thing they’re supposed to do. Tether, the largest stablecoin issuer by market cap, has lost its peg to the dollar on two important occasions, leading to a ban in New York. (It’s also currently the go-to currency for world-class money launderers.) Another stablecoin, terra, lost its $1 value in 2022, leading to billions of dollars of losses across crypto firms as well as a liquidity crisis that helped tank FTX, Sam Bankman-Fried’s company. Circle also lost its dollar peg when it reported that it had 8 percent of its holdings wrapped up in Silicon Valley Bank, which collapsed in 2023. The government bailed out Circle and other depositors to the tune of $15.8 billion.
The crypto industry, which spent hundreds of millions last election backing pro-crypto candidates, is thrilled by the passage of the new bill. As is Donald Trump, who is expected to sign it. (The president has his own stablecoin that could benefit from the legislation, currently advertised as “No games. No gimmicks. Just real stability.”)
“The thing that the bill really does is signal to traditional finance that the water’s safe to go in,” says J. Christopher Giancarlo, who served as the commissioner of the Commodity Futures Trading Commission under presidents Obama and Trump. JPMorgan Chase, Bank of America, Citigroup, and Wells Fargo have reportedly met to discuss issuing a joint stablecoin. Visa and Mastercard are also reportedly testing stablecoin plans, as are big-box companies like Amazon and Walmart.
“Even if you didn’t have national legislation, it’s not like it’s going to stop stablecoins from becoming a consumer of Treasury securities,” says Giancarlo, a proponent of the legislation. He pointed to a Coinbase report showing that the total volume of transfers through stablecoins was over $27.6 trillion in 2024 — more than Visa and Mastercard combined. “It’s only been growing, and we’ve done nothing so far,” he says.
But critics of the GENIUS Act, who include Democratic senator Elizabeth Warren and Republican senator Josh Hawley, fear that it will simply supercharge a volatile, sketchy asset. If stablecoins fail in similar ways after taking on trillions in Treasury bonds, the shocks could reverberate out in a way that they did not with Bankman-Fried’s contained collapse.
Some experts see the massive expansion of stablecoins tied to the Treasury market as a recipe for another 2008-style crash. In a paper published in May, GW Law professor Arthur Wilmarth argued that the GENIUS Act could “trigger systemic financial crises and require costly government bailouts.”
The law, he argued, will allow stablecoin issuers to sell derivatives, which would “produce a pile of highly-leveraged, speculative bets on crypto-assets, resembling the toxic pyramid of bets on subprime mortgages created during” the early 2000s.
Corey Frayer, the former right-hand man of SEC chair Gary Gensler in the last administration, sees a similar scenario. “A fundamental problem in the financial crisis was leverage, right?” he says. “Banks weren’t just making risky loans; they were using bad assets as collateral to make more investments. And so as those base assets lose value, the collateral up the chain starts to fall apart.” Another bill proposed in the House this week would allow federal mortgage lenders to consider an applicant’s crypto holdings when applying for a loan. “That is how you build leverage,” Frayer says.
Referring to stablecoins, he adds, “You are creating this money that doesn’t actually have any real value and has this counterparty risk such that it could lose that value.”
Yesha Yadav, a Treasury-market expert at Vanderbilt Law School, co-published a paper in June arguing that interdependence between stablecoins and Treasurys could create the biggest risk to the Treasury market since the first weeks of COVID. She says that a “nightmare situation” would resemble the bond market of March 2020, when “investors came to the floor to get a sale and nobody picked up the phone to honor that request.”
If a major stablecoin issuer loses public confidence after-hours and investors who want to sell are left holding a stablecoin that is losing its $1 peg, Yadav says, it could be an ugly situation that could result in a run on banks — and permanently damage the bond market. “The Treasury market is supposed to be the one market that needs to work when every other market’s falling apart,” she says. “It is the one market that’s supposed to produce liquid cash and Treasurys whenever any institution needs it.”
Frayer says, “It’s a great business: printing fake money and trading it to people for real money.” But if the fake money isn’t worth what the crypto industry says it is, the rest of us could end up paying the difference.
Big Bang 2.0
Netinterest • Marc Rubinstein • July 11, 2025
Technology•Blockchain•Ethereum•Tokenization•FinanceInnovation
“I believe tokenization is the greatest capital markets innovation since the central limit order book.” — Vlad Tenev, Co-Founder and CEO of Robinhood Markets, July 8
Ten years ago this month, Ethereum went live as a new kind of blockchain. Marketed as “a censorship-proof world computer that anyone can program,” it launched with plenty of promise. Unlike Bitcoin, whose capability was limited mostly to transferring funds between accounts, Ethereum came with an expressive programming language that invited developers to build applications on top. As Apple did with its App Store, Ethereum’s founders encouraged developers to write and run apps – they wanted it to be “the underlying and imperceptible medium for every application, just what medieval scientists thought ether was,” according to one account.
Although its scope was wide-ranging, the technology lent itself to financial applications. The key innovation was smart contracts – self-executing programs that automatically enforce agreements when conditions are met. A smart contract could hold funds in escrow until both parties fulfill their obligations, automatically distribute loan payments based on preset terms, or execute trades when certain price thresholds are reached. Unlike traditional contracts that require intermediaries to enforce, smart contracts run on code, making them faster, cheaper, and accessible to anyone with an internet connection.
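The escrow pattern described above can be sketched as a toy simulation. Python stands in here for an actual smart-contract language, and the parties, amounts, and method names are purely illustrative; the point is the self-executing logic: funds are locked until both parties confirm, then released automatically with no intermediary.

```python
class Escrow:
    """Toy model of a smart-contract escrow: funds are held until
    both parties confirm, then released automatically by code."""

    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.balance = 0
        self.confirmed = set()

    def deposit(self, party, amount):
        # Only the buyer can fund the escrow, for the agreed amount.
        if party != self.buyer or amount != self.amount:
            raise ValueError("deposit must come from buyer for the agreed amount")
        self.balance += amount

    def confirm(self, party):
        if party not in (self.buyer, self.seller):
            raise ValueError("unknown party")
        self.confirmed.add(party)
        # Self-executing clause: release the moment both sides confirm.
        if self.confirmed == {self.buyer, self.seller} and self.balance > 0:
            released, self.balance = self.balance, 0
            return ("released", self.seller, released)
        return ("pending", None, 0)
```

On an actual chain this logic would live in a deployed contract whose code no party can alter after the fact, which is what removes the need for a trusted enforcer.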
Yet despite its promise, Ethereum faced several challenges gaining real-world adoption. First, it was slow. At launch, Ethereum could support only approximately 15 transactions per second. Five years later, that had increased to 1,000, but it was still too slow for mainstream financial use. Second, its regulatory status was uncertain. To motivate people to operate it, Ethereum launched its own digital currency, ETH, but because the Securities and Exchange Commission offered only informal hints and no binding rulemaking, ETH languished in regulatory limbo – neither formally classified as a security nor clearly exempt – leaving mainstream institutions wary.
Finally, Ethereum suffered from what tech investor Chris Dixon characterizes as the battle between “the casino and the computer.” The casino side – focused on trading and speculation – often overshadowed the computer side, which was building serious infrastructure for the long term. The casino culture manifested in wild price swings and speculative manias. Even Ethereum’s principal founder, Vitalik Buterin, was troubled by this dynamic: during a 2017 boom that pushed ETH’s market cap past half a trillion dollars, he asked, “have we earned it?” Four years later, amid another speculative surge, he warned of the “dystopian potential” of digital assets if implemented incorrectly.
Now, though, these obstacles are being resolved.
For one, regulation is becoming clearer. The Securities and Exchange Commission recently hosted a roundtable on institutional crypto adoption. One panel member reflected: “I’ve been in this space since 2013, and if you told me that I’d be sitting on this panel today, back then, I probably would have bought more.” The underlying technology is also improving. In experimental configurations, Ethereum can now handle 65,000-100,000 transactions per second, with performance improving over time. The system has also proven remarkably resilient – Ethereum has not experienced a complete outage in its history, affording its applications extremely reliable uptime and accessibility.
Unsurprising then that institutions are beginning to take it more seriously:
Fidelity, Nasdaq, Invesco, Franklin Templeton, BlackRock and Apollo were all at the SEC’s roundtable alongside the crypto bro who should have bought more.
Last month, Robinhood hosted a presentation at the historic Château de la Croix-des-Gardes in Cannes – the setting of Hitchcock’s To Catch a Thief – where CEO Vlad Tenev launched a new tokenized asset product: blockchain-based representations of traditional investments that can be traded (and settled) 24 hours a day.
BlackRock’s institutional digital liquidity fund – a tokenized money market fund launched on Ethereum in March 2024 – has grown to a market cap of $2.8 billion. According to the company, its overall digital assets offering is now a $250 million revenue business.
BlackRock’s founder and CEO, Larry Fink, is bullish about the prospects for tokenized assets. “Every stock, every bond, every fund – every asset – can be tokenized,” he writes in his latest shareholder letter. “If they are, it will revolutionize investing. Markets wouldn’t need to close. Transactions that currently take days would clear in seconds. And billions of dollars currently immobilized by settlement delays could be reinvested immediately back into the economy, generating more growth. Perhaps most importantly, tokenization makes investing much more democratic.”
We touched on tokenization as a theme back in December. To explore it further – including the specific risks and opportunities it presents – read on.
Essays
#27: Long Google
Loeber • July 12, 2025
Technology•AI•ArtificialIntelligence•Google•Innovation•Essays
Two weeks ago, I put 10% of my net worth into Google stock. This is a first for me: while I have held positions in other big tech companies over time, I’ve always shied away from Google because I don’t really understand advertising.
In recent months, many other people have also shied away from Google: ChatGPT is eating into Google Search, and Google’s public response has been tepid. Is this a textbook example of the Innovator’s Dilemma? Will Google’s empire crumble?
Such fear and doubt are reflected in the stock: Google is now trading at a 19x P/E ratio, when its historical average over the past decade is 28x, and today’s S&P average is 26x. In other words, the street ascribes a much lower value to Google’s profits than to those of other companies, implicitly anticipating a collapse in Google’s profitability.
But this is myopic, a view far too fixated on legacy conceptions of Google’s Search and advertising business. While the near term is anyone’s guess, the street substantially undervalues the totality of what Google has built, and how that positions Google for the future. My view is this:
AI poses threats to Google’s Search business, but they are overrated and solvable;
In fact, AI may supercharge Google’s existing Search business;
Google is best-positioned to win the AI race;
If Google wins the AI race, it may become a $20T+ company in 5-10 years;
Oh, and, by the way, Waymo is a trillion-dollar company hiding in plain sight.
Points three and four are the ones that really matter. The AI community’s AGI timeline is now only seven years out. Most people do not understand:
These years will pass quickly;
As we get closer to AGI, trillions of dollars in potential revenue become unlocked. The first firm(s) to the finish line will win the largest economic prize in history.
Google is in the lead to win.
It’s easy to miss the value. Many investors are bearish on Google because they are fixated on Search as an immutable one-trick-pony, and Search appears paralyzed in a changing world. But Google’s position for AGI is wildly underrated, and it presents opportunities that make questions like whether Search makes money or not unimportant. There is a much larger game in play now. My bet is that Google slowly but surely turns the ship, and in this essay I’ll chart their path from here to a $20T+ world.
Many commentators view AI as disruptive to Google Search: people are going to ChatGPT rather than to Google Search because it provides better answers, and the answers are exhaustive such that no monetizable click on an advertisement can occur. But this misses a few things:
Net search volume is still growing. Google’s Search volume increased by 20% from 2023 to 2024. This may feel like a mature industry, but in some respects it is still early! People are still coming online. Software continues eating the world.
If Search becomes more like a ChatGPT-style experience, that may decrease link clicks, but not necessarily ad clicks: only ~20% of searches show an ad, and fewer yet result in an ad click. Today, most searches are not monetizable at all.
ChatGPT-style queries and answers may turn out more monetizable than traditional searches because the questions are higher-intent, and the answers surface far fewer links, which better nudges the user toward any displayed link. The prose of the answer can further nudge the user. As this matures, I’d expect higher click-through-rates/overall value for advertisers.
Google has the world’s best dataset on queries, ads, and user behavior, and Google’s ads are already partially AI-generated today. The advertiser only has limited ability to provide guidance. Advances in AI further empower Google’s existing advertising flywheel.
Finally, Google may eventually capture far more value by not getting paid for an ad click, but by closing the loop and offering the product or service that the user is looking for. This enables Google to capture the full amount the user is willing to pay, rather than just the partial margin ceded to an ad click.
In short, the future of Search seems to come down to two questions:
If ChatGPT offers a superior form factor, can Search move toward that form factor and avoid disruption? I think so, and it seems to already be happening.
Can advertising work just as well in that form factor as in traditional Search? Early results suggest yes, and it may work even better. The ChatGPT form factor is more powerful in how it can present the result to persuade user action.
Finally, if there’s a lesson from the last twenty years: whether for countries or big tech companies, betting on the collapse of an incumbent with great momentum rarely works out. Google has colossal momentum—old user habits die hard, and Google’s services are among the most deeply entrenched in the day-to-day lives of consumers.
But forget about Google’s Search business for a minute, and consider what Google has:
The most visited website on earth, the default entry-point to the internet for most humans for 25 years and counting;
The #1 consumer brand in the world;
Gemini: arguably the best AI models;
YouTube: the world’s biggest repository of video data;
Google Search: the world’s biggest store of internet data, having scraped the entire internet for the past 25 years;
Google Books: the world’s biggest store of published words;
Gmail: the most popular email client with 1.8B active users;
Google Drive/Docs/Sheets: the most popular workplace suite in the world;
Android: the most widely used mobile phone operating system on earth;
A mature devices business including phones, laptops, watches, home assistants…
Google Chrome: the most popular web browser in the world;
GCP: their own cloud, behind AWS and Azure;
TPUs: their own chips for machine learning, now used by OpenAI;
Global data centers representing about $200-290B in investment-to-date and another $75B committed;
$100B on their balance sheet;
~$110B in annual operating profit that they could plow into AI if they so wished;
~180,000 employees including some of the very best and brightest machine learning researchers and engineers on the planet;
A truly massive amount of user behavior and ad performance data;
Endless weird dark horse projects that aren’t even on the public radar right now.
Don’t be distracted by existing revenue or product-in-market. The more you think about Google’s structural advantage in AI, the more staggering it is. They own the whole vertical stack required to win.
The full strength of this competitive advantage against Anthropic, OpenAI and others is yet to become apparent: where other firms top out, Google can keep pushing. Right now, the big AI labs are all focused on making better use of their not-fully-exhausted resources in terms of data, capital, and compute. Therefore, model performance is pretty competitive, and the perceived market leader switches every few months. But eventually, these firms will fully saturate the data, capital, or compute available to them. And however much they may have, Google has a lot more. Similar to how Mistral, Cohere, and others once looked competitive and then couldn’t keep up against superior resources, the same fate may play out at much larger scale — companies worth tens or even hundreds of billions of dollars exhaust their resources while Google’s products and distribution keep improving.
For the last few weeks, Meta has given us a taste of what it means for a trillion-dollar company with conviction to flex its weight: raiding competing labs to the point that OpenAI shut down for a week. Google has barely begun seriously competing; the world will look different when it does.
Philosophy Mondays: Universalism and Moral Progress
Paragraph • continuations@newsletter.paragraph.com (Albert Wenger) • July 21, 2025
Culture•Philosophy•Ethics•Universalism•MoralProgress•Essays
After an excursion into qualia we are now back to our regularly scheduled programming here on Philosophy Mondays. As a brief refresher, let’s retrace our steps. My goal behind Philosophy Mondays is to help myself and hopefully others (including artificial intelligences) answer what I consider to be the fundamental question:
How should we choose our actions in light of our understanding of reality and the potential impact of our actions on this reality (which happens to include how we and others are feeling)?
In order to tackle this we started by looking at how language allows us to construct maps of reality which form the basis of understanding. As humans we can therefore make choices over which actions we should take. This requires us to exercise judgment informed by values which are based on knowledge.
So this leads to an important question: is it possible to have universal values? By universal values I mean values that could and should be embraced by all humans (and also by all artificial intelligences).
Much of ancient philosophy was directed at such universalism. When Greek philosophers asked what it means to live the good life, they didn’t think the answers they came up with were restricted in time or space but rather should be applicable to everyone.
Now there is an important caveat here: “everyone” had some limitations, in much the same way that “all men” did in the Declaration of Independence. For much of history it meant only a subset of humans, namely free men, with other groups, including women and slaves, being owned and controlled. This limitation proved highly significant in attempts at universalism during and after the Enlightenment: a truly universal approach to humanism was at the root of the feminist and abolitionist movements, demonstrating its potential. But alas humanism was not strong enough to prevent the horrors of colonialism, fascism, and the Holocaust. This soured a great many philosophers entirely on the idea of universal values. It prompted the rise of various critical theories, such as deconstruction, which rightly asked questions about power: how did some groups use values to justify oppressing or even exterminating others?
These new theories unfortunately went too far in their counter-reaction. Instead of questioning the exercise of power, their aggregate effect was to undermine claims of truth and of universality altogether. It is hard to overstate how far this has moved many people toward moral relativism, the idea that all philosophy (or religion) is simply narrative and that all narratives are equally valid. Here are two illustrations of how far we have come on this. First, my wife Gigi and I support an effort called the Valueslab, which is bringing together philosophers and computer scientists on questions of values and artificial intelligence. During outreach, one professor wrote back that “if there is even a hint of universalism I will have nothing to do with this.” Second, Yuval Noah Harari’s book Sapiens was widely praised, despite its full-on embrace of moral relativism (for which I consider it a dangerous book).
The flaw of humanism wasn’t its attempt at universalism. Its flaw was that it failed to achieve universalism for a moral core. Isn’t there maybe some middle ground, such as moral pluralism? No. Either there are some universal values or we are relegated to relativism. Suggestions of a possible compromise are really just relativism in disguise.
Why am I so bought into universalism? Because moral relativism stands opposed to moral progress. If all values are equally valid then we can never hope to pick better ones and make them become widely adopted. And without moral progress, technological progress will have horrible consequences. Relativism with regional moral experimentation was a great source of progress when our technologies had mostly local and at best regional reach. But today much of human technology has global implications. And this means we desperately need global moral progress. This, in retrospect, should be the correct lesson from the 20th and early 21st century. The following quote by E.O. Wilson sums it up well:
The real problem of humanity is the following: we have Paleolithic emotions; medieval institutions; and god-like technology.
Today our technology is ever more god-like given our ability to program cells and our rapid progress in building artificial intelligence systems. It is quite possible now that these systems will soon achieve self-improvement, unleashing an intelligence explosion. Their powers would then far outstrip ours at the very moment that our institutions are weaker than they have been in a long time and when we are going through a period of moral decay.
Values derived from an objective feature of reality can make a credible claim to universality. As argued previously, the existence of human knowledge is that feature. In the coming posts in Philosophy Mondays I will further explore what knowledge is and how to derive values from it.
Illustration by Claude Sonnet 4 based on this post.
Content and Community
Stratechery • Ben Thompson • July 21, 2025
Technology•AI•ContentCreation•Publishing•CommunityBuilding•Essays
The old model for content sprung from geographic communities; the new model for content is to be the organizing principle for virtual communities.
One of the oldest and most fruitful topics on Stratechery has been the evolution of the content industry, for two reasons: first, it undergirded the very existence of Stratechery itself, which I’ve long viewed not simply as a publication but also as a model for a (then) new content business model.
Second, I have long thought that what happened to content was a harbinger for what would happen to industries of all types. Content was trivially digitized, which means the forces of digital — particularly zero marginal cost reproduction and distribution — manifested in content industries first, but were by no means limited to them. That meant that if you could understand how the Internet impacted publishing — newspapers, books, magazines, music, movies, etc. — you might have a template for what would happen to other industries as they themselves digitized.
AI is the apotheosis of this story and, in retrospect, it’s a story the development of which stretches back not just to the creation of the Internet, but hundreds of years prior and the invention of the printing press. Or, if you really want to get crazy, to the evolution of humanity itself.
The AI Unbundling and Content Commoditization
In September 2022, two months before the release of ChatGPT, I wrote about The AI Unbundling, and traced the history of communication to those ancient times:
As much as newspapers may rue the Internet, their own business model — and my paper delivery job — were based on an invention that I believe is the only rival for the Internet’s ultimate impact: the printing press. Those two inventions, though, are only two pieces of the idea propagation value chain. That value chain has five parts: the creation, substantiation, duplication, distribution, and consumption of an idea.
The evolution of human communication has been about removing whatever bottleneck is in this value chain. Before humans could write, information could only be conveyed orally; that meant that the creation, vocalization, delivery, and consumption of an idea were all one-and-the-same. Writing, though, unbundled consumption, increasing the number of people who could consume an idea.
Now the new bottleneck was duplication: to reach more people whatever was written had to be painstakingly duplicated by hand, which dramatically limited what ideas were recorded and preserved. The printing press removed this bottleneck, dramatically increasing the number of ideas that could be economically distributed.
The new bottleneck was distribution, which is to say this was the new place to make money; thus the aforementioned profitability of newspapers. That bottleneck, though, was removed by the Internet, which made distribution free and available to anyone.
What remains is one final bundle: the creation and substantiation of an idea. To use myself as an example, I have plenty of ideas, and thanks to the Internet, the ability to distribute them around the globe; however, I still need to write them down, just as an artist needs to create an image, or a musician needs to write a song. What is becoming increasingly clear, though, is that this too is a bottleneck that is on the verge of being removed.
It’s a testament to how rapidly AI has evolved that this observation already feels trite; while I have no idea how to verify these numbers, it seems likely that AI has substantiated more content in the last three years than was substantiated by all of humanity in all of history previously. We have, in other words, reached total content commoditization: the chatbot of your choice will substantiate any content you want on command.
Copyright and Transformation
Many publishers are, as you might expect, up in arms about this reality, and have pinned their hopes for survival on the courts and copyright law. After all, the foundation for all of that new content is the content that came before — content that was created by humans.
The fundamental problem for publishers, however, is that all of this new content is, at least in terms of a textual examination of output, new; in other words, AI companies are soundly winning the first factor of the fair use test, which is whether or not their output is transformative. Judge William Alsup wrote in a lawsuit against Anthropic:
The purpose and character of using copyrighted works to train LLMs to generate new text was quintessentially transformative. Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different. If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use. The first factor favors fair use for the training copies.
Judge Vince Chhabria wrote a day later in a lawsuit against Meta:
There is no serious question that Meta’s use of the plaintiffs’ books had a “further purpose” and “different character” than the books — that it was highly transformative. The purpose of Meta’s copying was to train its LLMs, which are innovative tools that can be used to generate diverse text and perform a wide range of functions. Users can ask Llama to edit an email they have written, translate an excerpt from or into a foreign language, write a skit based on a hypothetical scenario, or do any number of other tasks. The purpose of the plaintiffs’ books, by contrast, is to be read for entertainment or education.
The two judges differed in their view of the fourth factor — the impact that LLMs would have on the market for the copyright holders — but ultimately came to the same conclusion: Judge Alsup said that the purpose of copyright law wasn’t to protect authors from competition for new content, while Judge Chhabria said that the authors hadn’t produced evidence of harm.
In fact, I think that both are making the same point: Judge Chhabria clearly wished that he could rule in favor of the authors, but to do so would require proving a negative — sales that didn’t happen because would-be customers used LLMs instead. That’s something that seems impossible to ascertain, which gives credence to Judge Alsup’s more simplistic analogy of an LLM to a human author who learned from the books they read. Yes, AI is of such a different scale as to be another category entirely, but given the un-traceability of sales that didn’t happen, the analogy holds for legal purposes.
Publishing’s Three Eras
Still, just because it is impossible to trace specific harm, doesn’t mean harm doesn’t exist. Look no further than the aforementioned history of publishing. To briefly compress hundreds of years of history into three periods:
Printing Presses and Nation States
In the Middle Ages the principal organizing entity for Europe was the Catholic Church. Relatedly, the Catholic Church also held a de facto monopoly on the distribution of information: most books were in Latin, copied laboriously by hand by monks. There was some degree of ethnic affinity between various members of the nobility and the commoners on their lands, but underneath the umbrella of the Catholic Church were primarily independent city-states.
The printing press changed all of this. Suddenly Martin Luther, whose critique of the Catholic Church was strikingly similar to Jan Hus 100 years earlier, was not limited to spreading his beliefs to his local area (Prague in the case of Hus), but could rather see those beliefs spread throughout Europe; the nobility seized the opportunity to interpret the Bible in a way that suited their local interests, gradually shaking off the control of the Catholic Church.
Meanwhile, the economics of printing books was fundamentally different from the economics of copying by hand. The latter was purely an operational expense: output was strictly determined by the input of labor. The former, though, was mostly a capital expense: first, to construct the printing press, and second, to set the type for a book. The best way to pay for these significant up-front expenses was to produce as many copies of a particular book as could be sold.
How, then, to maximize the number of copies that could be sold? The answer was to print using the most widely used dialect of a particular language, which in turn incentivized people to adopt that dialect, standardizing language across Europe. That, by extension, deepened the affinities between city-states with shared languages, particularly over decades as a shared culture developed around books and later newspapers. This consolidation occurred at varying rates — England and France several hundred years before Germany and Italy — but in nearly every case the First Estate became not the clergy of the Catholic Church but a national monarch, even as the monarch gave up power to a new kind of meritocratic nobility epitomized by Burke.
The printing press created culture, which itself became the common substrate for nation-states.
Copyright and Franchises
It was nation-states, meanwhile, that made publishing into an incredible money-maker. The most important event in common-law countries was The Statute of Anne in 1710. For the first time the Parliament of Great Britain established the concept of copyright, vested in authors for a limited period of time (14 years, with the possibility of a 14 year renewal); the goal, clearly stated in the preamble, was to incentivize creation:
Whereas printers, booksellers, and other persons have of late frequently taken the liberty of printing, reprinting, and publishing, or causing to be printed, reprinted, and published, books and other writings, without the consent of the authors or proprietors of such books and writings, to their very great detriment, and too often to the ruin of them and their families: for preventing therefore such practices for the future, and for the encouragement of learned men to compose and write useful books; may it please your Majesty, that it may be enacted, and be it enacted by the Queen’s most excellent majesty, by and with the advice and consent of the lords spiritual and temporal, and commons, in this present parliament assembled, and by the authority of the same…
Three-quarters of a century later America’s founding fathers would, for similar motivations, and in line with the English tradition that undergirded the United States, put copyright in the Constitution:
[The Congress shall have power] To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.
These are noble goals; at the same time, it’s important to keep in mind that copyright is an economic distortion, because it is a government-granted monopoly. That, by extension, meant that there was a lot of money to be made in publishing if you could leverage these monopoly rights to your advantage. To focus on the U.S.:
The mid-1800s, led by Benjamin Day and James Gordon Bennett Sr., saw the rise of advertising as a funding source of newspapers, which dramatically decreased the price of an individual copy, expanding reach, which attracted more advertisers.
The turn of the century brought nationwide scale to bear, as entrepreneurs like Joseph Pulitzer and William Randolph Hearst built out nation-wide publishing empires with scaled advertising and reporting.
In the mid-20th century Henry Luce and Condé Montrose Nast created and perfected the magazine model, which combined scale on the back-end with segmentation and targeting on the front-end.
The success of these publishing empires was, in contrast to the first era of publishing, downstream from the existence of nation-states: the fact that the U.S. was a massive market created the conditions for publishing’s golden era, and companies that were franchises. That’s Warren Buffett’s term, from a 1991 letter to shareholders:
An economic franchise arises from a product or service that: (1) is needed or desired; (2) is thought by its customers to have no close substitute and; (3) is not subject to price regulation. The existence of all three conditions will be demonstrated by a company’s ability to regularly price its product or service aggressively and thereby to earn high rates of return on capital. Moreover, franchises can tolerate mis-management. Inept managers may diminish a franchise’s profitability, but they cannot inflict mortal damage.
In contrast, “a business” earns exceptional profits only if it is the low-cost operator or if supply of its product or service is tight. Tightness in supply usually does not last long. With superior management, a company may maintain its status as a low-cost operator for a much longer time, but even then unceasingly faces the possibility of competitive attack. And a business, unlike a franchise, can be killed by poor management.
Until recently, media properties possessed the three characteristics of a franchise and consequently could both price aggressively and be managed loosely. Now, however, consumers looking for information and entertainment (their primary interest being the latter) enjoy greatly broadened choices as to where to find them. Unfortunately, demand can’t expand in response to this new supply: 500 million American eyeballs and a 24-hour day are all that’s available. The result is that competition has intensified, markets have fragmented, and the media industry has lost some — though far from all — of its franchise strength.
Given that Buffett wrote this in 1991, he was far more prescient than he probably realized, because the Internet was about to destroy the whole model.
The Internet and Aggregators
The great revelation of the Internet is that copyright wasn’t the only monopoly that mattered to publishers: newspapers in particular benefited from being de facto geographic monopolies as well. The largest newspaper in a particular geographic area attracted the most advertisers, which gave them the most resources to have the best content, further cementing their advantages and the leverage they had on their fixed costs (printing presses, delivery, and reporters). I explained what happened next in 2014’s Economic Power in the Age of Abundance:
One of the great paradoxes for newspapers today is that their financial prospects are inversely correlated to their addressable market. Even as advertising revenues have fallen off a cliff — adjusted for inflation, ad revenues are at the same level as the 1950s — newspapers are able to reach audiences not just in their hometowns but literally all over the world.
The problem for publishers, though, is that the free distribution provided by the Internet is not an exclusive. It’s available to every other newspaper as well. Moreover, it’s also available to publishers of any type, even bloggers like myself.
To be clear, this is absolutely a boon, particularly for readers, but also for any writer looking to have a broad impact. For your typical newspaper, though, the competitive environment is diametrically opposed to what they are used to: instead of there being a scarce amount of published material, there is an overwhelming abundance. More importantly, this shift in the competitive environment has fundamentally changed just who has economic power.
In a world defined by scarcity, those who control the scarce resources have the power to set the price for access to those resources. In the case of newspapers, the scarce resource was readers’ attention, and the purchasers were advertisers... The Internet, though, is a world of abundance, and there is a new power that matters: the ability to make sense of that abundance, to index it, to find needles in the proverbial haystack. And that power is held by Google.
Google was an Aggregator, and publishers — at least those who users visited via a search results page — were a commodity; it was inevitable that money from advertisers in particular would increasingly flow to the former at the expense of the latter.
There were copyright cases against Google, most notably 2006’s Field v. Google, which held that Google’s usage of snippets of the plaintiff’s content was fair use, and furthermore, that Blake Fields, the author, had implicitly given Google a license to cache his content by not specifying that Google not crawl his website.
The crucial point to make about this case, however, and Google’s role on the Internet generally, is that Google posting a snippet of content was good for publishers, at least compared to the AI alternative.
Cloudflare and the AI Content Market
Go back to the two copyright rulings I referenced above: both judges emphasized that the LLMs in question (Claude and Llama) were not reproducing the copyrighted content they were accused of infringing; rather, they were generating novel content by predicting tokens. Here’s Judge Alsup on how Anthropic used copyrighted work:
Each cleaned copy was translated into a “tokenized” copy. Some words were “stemmed” or “lemmatized” into simpler forms (e.g., “studying” to “study”). And, all characters were grouped into short sequences and translated into corresponding number sequences or “tokens” according to an Anthropic-made dictionary. The resulting tokenized copies were then copied repeatedly during training. By one account, this process involved the iterative, trial-and-error discovery of contingent statistical relationships between each word fragment and all other word fragments both within any work and across trillions of word fragments from other copied books, copied websites, and the like.
Judge Chhabria explained how these tokens contribute to the final output:
LLMs learn to understand language by analyzing relationships among words and punctuation marks in their training data. The units of text — words and punctuation marks — on which LLMs are trained are often referred to as “tokens.” LLMs are trained on an immense amount of text and thereby learn an immense amount about the statistical relationships among words. Based on what they learned from their training data, LLMs can create new text by predicting what words are most likely to come next in sequences. This allows them to generate text responses to basically any user prompt.
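As a vastly simplified sketch of the token-prediction idea the judges describe, here is a toy bigram model (my illustration only; real LLMs use subword tokenizers and neural networks trained on trillions of tokens, not raw frequency counts):

```python
from collections import Counter, defaultdict

def tokenize(text):
    # Crude whitespace tokenization; real LLMs use subword schemes like BPE.
    return text.lower().split()

def train_bigram(corpus):
    # For each token, count which tokens follow it and how often.
    counts = defaultdict(Counter)
    tokens = tokenize(corpus)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    # Return the statistically most likely next token, if any.
    followers = counts.get(token)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the press made books and the press made news and the press made nations"
model = train_bigram(corpus)
print(predict_next(model, "press"))  # prints "made": its most frequent successor
```

The point of the sketch is the one both judges seized on: the model stores statistical relationships between tokens, not copies of the text, and generates output by prediction rather than retrieval.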
This isn’t just commoditization: it’s deconstruction. To put it another way, publishers were better off when an entity like Google was copying their text; Google summarizing information — which is what happens with LLM-powered AI Search Overviews — is much worse, even if it’s even less of a copyright violation.
This was a point made to me by Cloudflare CEO Matthew Prince in a conversation we had after I wrote last week about the company’s audacious decision to block AI crawlers on Cloudflare-protected sites by default. What the company is proposing to build is a new model of monetization for publishers; Prince wrote in a blog post:
We’ll work on a marketplace where content creators and AI companies, large and small, can come together. Traffic was always a poor proxy for value. We think we can do better. Let me explain. Imagine an AI engine like a block of swiss cheese. New, original content that fills one of the holes in the AI engine’s block of cheese is more valuable than repetitive, low-value content that unfortunately dominates much of the web today. We believe that if we can begin to score and value content not on how much traffic it generates, but on how much it furthers knowledge — measured by how much it fills the current holes in AI engines’ “swiss cheese” — we not only will help AI engines get better faster, but also potentially facilitate a new golden age of high-value content creation. We don’t know all the answers yet, but we’re working with some of the leading economists and computer scientists to figure them out.
Cloudflare is calling its initial idea pay per crawl:
Pay per crawl, in private beta, is our first experiment in this area. Pay per crawl integrates with existing web infrastructure, leveraging HTTP status codes and established authentication mechanisms to create a framework for paid content access. Each time an AI crawler requests content, they either present payment intent via request headers for successful access (HTTP response code 200), or receive a 402 Payment Required response with pricing. Cloudflare acts as the Merchant of Record for pay per crawl and also provides the underlying technical infrastructure…
At its core, pay per crawl begins a technical shift in how content is controlled online. By providing creators with a robust, programmatic mechanism for valuing and controlling their digital assets, we empower them to continue creating the rich, diverse content that makes the Internet invaluable…The true potential of pay per crawl may emerge in an agentic world. What if an agentic paywall could operate entirely programmatically? Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho — and then giving that agent a budget to spend to acquire the best and most relevant content. By anchoring our first solution on HTTP response code 402, we enable a future where intelligent agents can programmatically negotiate access to digital resources.
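The handshake Cloudflare describes can be sketched roughly as follows. Note that the header names and price below are invented for illustration; the actual private-beta protocol isn't specified in the post:

```python
# Rough sketch of the pay-per-crawl handshake described above.
# "Crawler-Payment-Intent" and "Crawler-Price-USD" are hypothetical
# header names; Cloudflare's actual protocol may differ.

PRICE_USD = "0.01"  # hypothetical per-request price for this resource

def handle_crawler_request(headers):
    """Return (status_code, payload) for an AI crawler's request."""
    if headers.get("Crawler-Payment-Intent") == "accepted":
        # Crawler presented payment intent: serve the content (HTTP 200).
        return 200, "<article>full content</article>"
    # No payment intent: refuse with 402 Payment Required and quote a price.
    return 402, {"Crawler-Price-USD": PRICE_USD}

status, payload = handle_crawler_request({})
print(status)  # 402: crawler must present payment intent to proceed
status, payload = handle_crawler_request({"Crawler-Payment-Intent": "accepted"})
print(status)  # 200: content served
```

Anchoring on the long-dormant 402 status code is the interesting design choice: it lets the refusal and the price quote travel over ordinary HTTP, so an agent with a budget could retry programmatically.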
I think there is value in Cloudflare’s efforts, which are very much in line with what I proposed in May’s The Agentic Web and Original Sin:
What is possible — not probable, but at least possible — is to in the long run build an entirely new marketplace for content that results in a new win-win-win equilibrium.
First, the protocol layer should have a mechanism for payments via digital currency, i.e. stablecoins. Second, AI providers like ChatGPT should build an auction mechanism that pays out content sources based on the frequency with which they are cited in AI answers. The result would be a new universe of creators who will be incentivized to produce high quality content that is more likely to be useful to AI, competing in a marketplace a la the open web; indeed, this would be the new open web, but one that operates at even greater scale than the current web given the fact that human attention is a scarce resource, while the number of potential agents is infinite.
I do think that there is a market to be made in producing content for AI; it seems likely to me, however, that this market will not save existing publishers. Rather, just as Google created an entirely new class of content sites, Amazon and Meta an entirely new class of e-commerce merchants, and Apple and Meta an entirely new class of app builders, AI will create an entirely new class of token creators who explicitly produce content for LLMs. Existing publishers will participate in this market, but won’t be central to it.
Consider Meta’s market-making as an example. From 2020’s Apple and Facebook:
This explains why the news about large CPG companies boycotting Facebook is, from a financial perspective, simply not a big deal. Unilever’s $11.8 million in U.S. ad spend, to take one example, is replaced with the same automated efficiency that Facebook’s timeline ensures you never run out of content. Moreover, while Facebook loses some top-line revenue — in an auction-based system, less demand corresponds to lower prices — the companies that are the most likely to take advantage of those lower prices are those that would not exist without Facebook, like the direct-to-consumer companies trying to steal customers from massive conglomerates like Unilever.
In this way Facebook has a degree of anti-fragility that even Google lacks: so much of its business comes from the long tail of Internet-native companies that are built around Facebook from first principles, that any disruption to traditional advertisers — like the coronavirus crisis or the current boycotts — actually serves to strengthen the Facebook ecosystem at the expense of the TV-centric ecosystem of which these CPG companies are a part.
In short, Meta advertising made Meta advertisers; along those lines, the extent to which Cloudflare or anyone else manages to create a market for AI content is the extent to which I expect new companies to dominate that market; existing publishers will be too encumbered by their existing audience and business model — decrepit though it may be — to effectively compete with these new entrants.
Content-Based Communities
So, are existing publishers doomed?
Well, by and large, yes, but that’s because they have been doomed for a long time. People using AI instead of Google — or Google using AI to provide answers above links — make the long-term outlook for advertising-based publishers worse, but that’s an acceleration of a demise that has been in motion for a long time.
To that end, the answer for publishers in the age of AI is no different than it was in the age of Aggregators: build a direct connection with readers. This, by extension, means business models that maximize revenue per user, which is to say subscriptions (the business model that undergirds this site, and an increasing number of others).
What I think is intriguing, however, is the possibility to go back to the future. Once upon a time publishing made countries; the new opportunity for publishing is to make communities. This is something that AI, particularly as it manifests today, is fundamentally unsuited to: all of that content generated by LLMs is individualized; what you ask, and what the AI answers, is distinct from what I ask, and what answers I receive. This is great for getting things done, but it’s useless for creating common ground.
Stratechery, on the other hand, along with a host of other successful publications, has the potential to be a totem pole around which communities can form. Here is how Wikipedia defines a totem pole:
The word totem derives from the Algonquian word odoodem [oˈtuːtɛm] meaning “(his) kinship group”. The carvings may symbolize or commemorate ancestors, cultural beliefs that recount familiar legends, clan lineages, or notable events. The poles may also serve as functional architectural features, welcome signs for village visitors, mortuary vessels for the remains of deceased ancestors, or as a means to publicly ridicule someone. They may embody a historical narrative of significance to the people carving and installing the pole. Given the complexity and symbolic meanings of these various carvings, their placement and importance lies in the observer’s knowledge and connection to the meanings of the figures and the culture in which they are embedded. Contrary to common misconception, they are not worshipped or the subject of spiritual practice.
The digital environment, thanks in part to the economics of targeted advertising, the drive for engagement, and most recently, the mechanisms of token prediction, is customized to the individual; as LLMs consume everything, including advertising-based media — which, by definition, is meant to be mass market — the hunger for something shared is going to increase.
We already have a great example of this sort of shared experience in sports. Sports, for most people, is itself a form of content: I don’t play football or baseball or basketball or drive an F1 car, but I relish the fact that people around me watch the same games and races that I do, and that that shared experience gives me a reason to congregate and commune with others, and is an ongoing topic of discussion.
Indeed, this desire for a communal topic of interest is probably a factor in the inescapable reach of politics, particularly what happens in Washington D.C.: of course policies matter, but there is an aspect of politics’ prominence that I suspect is downstream of politics as entertainment, and a sorting mechanism for community.
In short, there is a need for community, and I think content, whether it be an essay, a podcast, or a video, can be artifacts around which communities can form and sustain themselves, ultimately to the economic benefit of the content creator. There is, admittedly, a lot to figure out in terms of that last piece, but when you remember that content made countries, the potential upside is likely quite large indeed.
GeoPolitics
Nvidia gets nod from Washington to resume sales of H20 China chip
Ft • July 14, 2025
Technology•AI•Semiconductors•USChinaRelations•Nvidia•GeoPolitics
Nvidia has received approval from the U.S. government to resume exports of its H20 AI chip to China, following a temporary ban in April that led to a $4.5 billion inventory charge. The H20 chip, designed to align with earlier U.S. export controls, had found strong demand among major Chinese tech firms such as ByteDance, Alibaba, and Tencent. Nvidia CEO Jensen Huang has actively lobbied in both the U.S. and China, emphasizing the importance of maintaining U.S. competitiveness in AI technology. As part of these efforts, Huang is currently in Beijing to meet officials and customers, aiming to secure talks with Premier Li Qiang. Nvidia also announced a new compliant GPU, based on the newer Blackwell RTX Pro 6000 processor, which lacks advanced features but caters to industrial applications and smaller AI models. Despite growing pressure from Beijing to adopt domestic AI chips, Nvidia’s robust software ecosystem continues to make it the preferred choice for AI workloads in China. However, uncertainties remain around the licensing and delivery timelines for the H20 chip, highlighting ongoing geopolitical tensions in the US-China tech landscape.
Amazon Faces a Complex China Calculus as Trade War Continues
Bigtechnology • Kristi Coulter • July 11, 2025
Business•ECommerce•Amazon•ChinaTradeWar•Tariffs•GeoPolitics
Over the past decade, Amazon's reliance on Chinese sellers has significantly increased, with over 60% of its sellers now based in China. This strategic move aimed to expand product selection, reduce prices, and counteract Chinese competitors selling directly to U.S. consumers. In late 2024, Amazon launched Amazon Haul, a platform offering Chinese products priced under $20, positioning itself against competitors like Temu and Shein.
However, the introduction of tariffs by the Trump administration has complicated this strategy. While the initial 145% tariff on Chinese goods has been reduced, current rates remain above 50%. These tariffs have led to increased costs, which are partially passed on to consumers, diminishing the benefits of Amazon's pricing strategy. In response, Amazon's Haul team considered itemizing tariff costs on product listings, a proposal that was eventually shelved due to political backlash.
This year, U.S. prices for Chinese-made goods on Amazon have risen faster than inflation, according to a DataWeave analysis. Additionally, the company has canceled some orders for Chinese products intended for its Amazon Basics brand. To navigate these challenges, Amazon is leveraging its strengths, including long-term seller relationships, robust logistics, an expanding physical presence, and a diverse global market reach. The ongoing trade war is prompting Amazon to adapt its business model in complex ways, potentially leading to positive outcomes in certain areas.
Attention Economy
From Dollar Dominance to the Slop Machine
Kyla • kyla scanlon • July 8, 2025
Finance•Investment•EconomicPolicy•AttentionEconomy•EnergyPolicy•Attention Economy
[Image caption: She wasn't consuming content so much as being consumed by it.]
This is Part 2 of a 2 Part series exploring attention as infrastructure and a main source of value creation across politics, markets, and the economy. The audio version of this essay will be up here.
I had the chance to go on the Ezra Klein Show and talk with Ezra about all of the below. Please check it out, it was a lot of fun to go back and forth on all of these topics.
UFC at the White House
Everything feels like that now. We're living in this constant scroll, trying to make sense of the world around us within a world confined by the limitations of an algorithm that doesn't care about truth, coherence, or consequences, only engagement.
And logically, this also became how we govern.
The Trump Administration has taken full advantage of this algorithm brain. We’ve entered the pure extraction phase of the economy, where things are created solely for consumption rather than purpose. I don't mean this as a moralistic argument, it's purely incentives, but it's puzzling.
Take the picture below. This is a fan account for the Department of Homeland Security tweeting about the 2026 White House x UFC Fight Night. It's a perfect image of the present moment. This is the most powerful building in the world, a representation of freedom, lit up during an ominous storm, the flag drooping above, crowds gathered around a UFC-branded octagon - it's a creepy AI fever dream, but it’s very, very real.
It's the perfect crystallization of what America has become: the world's most powerful content creator. Roland Barthes, Marshall McLuhan, and Guy Debord would be totally floored here. This image truly has everything - the medium is the message and society is defined by a social relationship to images and the digital world is increasingly disconnected from the physical world, and it’s hard to tell what is real and what isn’t.
That image is about extraction versus creation and it represents a fundamental choice about what kind of economy we want to be.
Act 1: The Show
The US has become an extraction economy.
We extract value from our existing position through dollar dominance, military supremacy, and technological leadership and now are choosing to tear down the foundations that created that position in the first place.
We extract attention through spectacle without creating the trust that makes spectacle meaningful.
We extract wealth from our own institutions without replenishing the capacity that generated that wealth.
The UFC image captures this well - it takes the symbolic power of American institutions and converts it into entertainment value, with no consideration for what that conversion costs us in terms of credibility or coherence.
China, meanwhile, has become a creation economy.
They're building electrical generation capacity, training engineers, developing industrial policy that spans decades.
They're creating an “electrostate” with an economy driven by the technologies that will determine 21st-century competitive advantage.
The tariff letters that President Trump sent around yesterday rely on the same extraction mechanism - telling other countries that tariffs “may be modified, upward or downward, depending on our relationship” is not a great way to do business.
I keep thinking about something Ezra said in our conversation - Trump embodies the attention economy so completely that he's become indistinguishable from it. The stock market didn’t take the tariff letters seriously - because yes, narrative is capital and attention is infrastructure, but there are real-world constraints on all of that.
Eventually, the attention games stop working. What do we do next?
In Part 1 of this series, Trump, Mamdani, and Cluely, I mapped how attention became infrastructure, narrative became capital, and speculation became the operating system between them.
Here in Part 2, we will hopefully answer some more questions and provide some more solutions.
What happens when an entire civilization optimizes for extraction over creation?
What are the material consequences when your resource allocation system rewards virality over productive capacity?
China is building the infrastructure for 21st-century economic dominance. As we're financializing everything, they're electrifying everything. We're optimizing for attention while they're optimizing for capacity. We must figure out how to channel the dynamics of the attention economy toward creation rather than extraction.
So… how?
Act 2: The Dollar, Energy, and Trust
The Big Beautiful Bill
Somewhere along the way, the United States decided that the most sophisticated thing you could do with economic power was to financialize it and to create increasingly complex mechanisms for extracting value from existing systems rather than building new ones.
We took the incredible wealth-generating capacity that built America and the industrial logic that made us globally dominant, and we turned it into a machine for redistributing wealth from the future to the present and from the young to the old, with much of it funneling to the already-wealthy.
This is the governing philosophy of extraction. Take the Big Beautiful Bill that just passed. We're adding $4.1 trillion to the national debt (potentially $5.5 trillion) to fund tax cuts. We are funding it (or at least, some of it) by cutting billions from SNAP, stripping healthcare from millions of Americans, and slashing the National Science Foundation budget.
The BBB very clearly establishes that more money will go toward extraction than creation. We will not beat China with tax cuts! And rather than building systems that create new wealth, we're building systems that redistribute existing wealth to those who already have the most political power.
It's completely economically incoherent and politically brilliant, which tells you everything you need to know about how our resource allocation system actually works. Senator Lisa Murkowski, who was the deciding vote, captured this approach perfectly when she said:
“Do I like this bill? No. But I tried to take care of Alaska’s interests. But I know that in many parts of the country, there are Americans that are not going to be advantaged by this bill.”
And that’s the conundrum of an extractive economy. Everyone protects what they have instead of incentivizing what they could make.
The Dollar
We're also extracting value from America's position in the global financial system without rebuilding the foundations that created that position in the first place. Karthik’s piece on Monetizing Primacy is a great read on the complicated relationship the Trump administration has with the dollar.
He writes about the stablecoin legislation that passed through the GENIUS Act. It’s a pretty distilled version of extraction. The theory, according to Treasury Secretary Scott Bessent, is that stablecoins will create massive new demand for US Treasuries, maybe $3.7 trillion worth over the next few years, which is substantial! Sure!
Foreign capital will buy these digital dollars
Stablecoin issuers will pocket the yield
And Treasury gets a new source of funding!
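A back-of-envelope sketch of the yield capture those three bullets describe. Only the $3.7 trillion figure comes from the piece; the Treasury yield used here is my illustrative assumption, not a number from Karthik or Bessent:

```python
# Rough sketch of the stablecoin yield-capture mechanism described above.
# Issuers hold the Treasuries backing the coins and keep the interest;
# the coin holders, who supplied the capital, earn nothing.
reserves = 3.7e12         # projected stablecoin-driven Treasury demand (from the piece)
assumed_yield = 0.045     # hypothetical ~4.5% short-term Treasury yield (my assumption)

issuer_revenue = reserves * assumed_yield
print(f"~${issuer_revenue / 1e9:.0f}B per year accrues to issuers, not holders")
```

At any plausible yield the number lands in the low hundreds of billions per year, which is the extractive part: the spread is harvested from an existing system (dollar primacy, Treasury demand) rather than generated by anything new.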
Stablecoins take advantage of an existing system, rather than building upon it. And the dollar is very valuable to the United States. The problem is the Trump administration doesn’t know what it wants from the dollar. It both wants a weak dollar to encourage reindustrialization and a strong dollar to prevent inflationary fallout from tariffs. And the dollar, as Karthik explains, is valued by
Fiscal and monetary policy interactions
The politics of central banking
Expectations of the rate of return on American assets.
So when you take that equation:
Fiscal policy via blowing out the deficit with the BBB
Monetary policy which is frozen because of the tariffs we can’t get an answer on
The Trump administration actively threatening Jerome Powell
You get a weak dollar. And you get higher bond yields because everyone is like “um, hello?” That combination is a crisis-of-confidence signal.
If trust in the dollar begins to erode, and if other countries begin to believe the dollar is being governed by a reality-TV feedback loop, then the whole system begins to shift.
That will weaken the dollar too. A weaker dollar raises the cost of living, shrinks geopolitical leverage, and chips away at the safety net while making it harder for people to understand why things suddenly feel worse.
So to the point of stablecoins - it's a classic attention-speculation play to create a new financial instrument that generates engagement while (maybe) theoretically serving strategic goals. But the underlying mechanism is fundamentally extractive rather than productive.
Extraction only works for so long. Eventually, you have to create again.
Energy
And this creation begins with energy! At the exact moment when AI is creating unprecedented demand for electricity (and is the backbone of the entire S&P 500), the Big Beautiful Bill is dismantling America's capacity to generate power from some of the cheapest, fastest-to-deploy sources available. As Thomas Friedman wrote:
There has never been a more intimate connection between the amount of cheap, clean electricity a nation can generate for A.I. models and its future economic and military might.
The bill ignores all of that and instead prioritizes the past over the future:
Phases out clean electricity tax credits for wind and solar
Adds complex restrictions to battery storage credits
Bans fees on methane emissions
Opens up federal lands and waters to oil and gas drilling
Orders 4 million additional acres of federal land be made available for coal mining
All because clean energy is coded as lib or is aesthetically unpleasing or something. It completely misunderstands where energy comes from - 93% (!) of the electricity capacity added to the grid in 2025 will come from wind, solar, and battery storage. Texas was the top solar state in the nation, adding 10,000 megawatts of power in the last year, mostly from solar-plus-batteries, and saw brownouts decrease as a result.
But rather than scaling this success nationally, we're making it more expensive going forward. Energy Innovation projects that Trump's bill will increase wholesale electricity prices by roughly 50% by 2035, with cumulative consumer energy costs rising more than $16 billion by 2030. Some 830,000 renewable energy jobs will be lost or not created. We are weakening our competitive position to play tribalism games in the name of the attention economy.
We're trapped in the “infinite AI TikTok slop machine” - an information environment that makes long-term strategic thinking nearly impossible while rewarding the kind of attention-seeking behavior that undermines our competitive position. We need more infrastructure around real ideas.
Legislation
The Bills That Could Change Crypto in The U.S.
Nytimes • July 16, 2025
Politics•Legislation•Cryptocurrency•Stablecoins•DigitalAssets
The U.S. House of Representatives is currently considering several bills that could significantly reshape the cryptocurrency landscape in the United States. These legislative efforts aim to establish a comprehensive regulatory framework for digital assets, focusing particularly on stablecoins.
One of the key pieces of legislation is the Guiding and Establishing National Innovation for U.S. Stablecoins (GENIUS) Act. Passed by the Senate in June 2025 with a bipartisan vote of 68–30, the GENIUS Act seeks to create a federal framework for stablecoins, ensuring they are backed 1:1 by real assets like U.S. dollars or Treasuries. This measure aims to integrate stablecoins into the financial system while protecting consumers from potential collapses. (en.wikipedia.org)
Another significant bill is the Digital Asset Market Clarity Act (CLARITY Act), which aims to define the classification of digital assets and establish a regulatory framework for the cryptocurrency market. This legislation seeks to clarify the roles of the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) in overseeing digital assets, providing a clearer structure for the industry. (washingtonpost.com)
Additionally, the Anti-CBDC Surveillance State Act proposes to prohibit the issuance of a central bank digital currency (CBDC) by the Federal Reserve. Proponents argue that such a move could infringe on privacy rights and lead to excessive government control over personal finances. (washingtonpost.com)
These legislative efforts coincide with a surge in cryptocurrency valuations. Bitcoin recently reached a new all-time high of over $123,000, marking a significant increase from $108,000 just a week prior. This rally has positioned Bitcoin as the fifth most valuable asset in the world, with a market capitalization of $2.4 trillion, surpassing Amazon. (apnews.com)
The legislative process has encountered some challenges. On July 16, 2025, a critical procedural vote to open debate on cryptocurrency legislation stalled in the House, despite an earlier victory in clearing an initial hurdle. The delay is attributed to internal Republican disputes over whether to address the bills separately or collectively, and concerns surrounding a proposal to ban a central bank-issued digital currency. (reuters.com)
As these bills progress through Congress, they have the potential to significantly impact the cryptocurrency industry, providing clearer regulations and potentially fostering greater integration of digital assets into the traditional financial system.
Trump Hails $90 Billion in AI Infrastructure Investments at Pennsylvania Summit
Nytimes • July 15, 2025
Technology•AI•Infrastructure•Energy•Investment•Legislation
At the Pennsylvania Energy and Innovation Summit held at Carnegie Mellon University in Pittsburgh, President Donald Trump announced over $90 billion in investments aimed at bolstering the United States' position in artificial intelligence (AI) and energy infrastructure. This initiative underscores the administration's commitment to integrating AI advancements with energy production, particularly through fossil fuels and nuclear power.
The summit featured significant contributions from major corporations:
Google committed $25 billion to develop AI and data center infrastructure within the PJM Interconnection, the nation's largest electricity market. Additionally, Google partnered with Brookfield Asset Management to invest $3 billion in modernizing two hydroelectric plants on Pennsylvania's Susquehanna River, aiming to supply power to these data centers. (washingtonpost.com)
Blackstone announced a $25 billion investment in data centers and natural gas-fired power plants in northeastern Pennsylvania, with construction slated to begin by the end of 2028. (fox19.com)
CoreWeave pledged $6 billion to establish a new AI-focused data center in Lancaster, Pennsylvania. (fox19.com)
These investments are part of a broader strategy to ensure the U.S. remains a global leader in AI by enhancing energy production capabilities. President Trump emphasized the necessity of increasing energy output to support the AI revolution, highlighting the role of coal, natural gas, and nuclear energy in this endeavor. (eenews.net)
The summit also addressed environmental concerns. Protests erupted at Carnegie Mellon University, with demonstrators criticizing the ecological impact of expanding data centers, including increased fracking and water usage. Protesters voiced fears that city efforts to attract tech companies could raise housing costs and displace lower-income residents. (axios.com)
In summary, the Pennsylvania Energy and Innovation Summit highlighted a significant convergence of technology and energy investments, aiming to position Pennsylvania—and by extension, the United States—as a dominant force in the global AI landscape.
Self Driving Cars
Uber agrees multibillion deal with Lucid for electric robotaxi fleet
Ft • July 17, 2025
Technology•Automotive•ElectricVehicles•AutonomousVehicles•Sustainability•Self Driving Cars
Uber has entered into a significant agreement with Lucid Group to purchase 20,000 electric vehicles (EVs) and invest $300 million in the company. This partnership aims to establish a fleet of electric robotaxis, marking a substantial advancement in Uber's autonomous vehicle initiatives.
The collaboration with Lucid Group, a prominent EV manufacturer, is expected to enhance Uber's capabilities in the electric vehicle sector. By integrating Lucid's advanced EV technology, Uber plans to offer a more sustainable and efficient transportation option to its users.
In addition to the vehicle acquisition, Uber's $300 million investment in Lucid Group signifies a deepening commitment to the future of electric mobility. This financial support is intended to bolster Lucid's research and development efforts, facilitating the production of high-quality electric vehicles that meet the demands of the evolving market.
The establishment of an electric robotaxi fleet aligns with Uber's strategic objectives to reduce its carbon footprint and provide eco-friendly transportation solutions. By leveraging Lucid's expertise in electric vehicle manufacturing, Uber aims to accelerate the deployment of autonomous, electric ride-hailing services in urban areas.
This partnership also reflects a broader industry trend towards collaboration between ride-hailing platforms and electric vehicle manufacturers. Such alliances are crucial for the development and scaling of electric autonomous transportation, addressing both technological challenges and consumer adoption barriers.
The integration of electric robotaxis into Uber's service offerings is anticipated to have a significant impact on urban mobility. It promises to provide riders with a cleaner, more efficient mode of transportation while contributing to the reduction of urban congestion and pollution.
As Uber and Lucid Group move forward with this partnership, the industry will be closely monitoring the outcomes to assess the viability and scalability of electric robotaxi services. The success of this initiative could set a precedent for future collaborations aimed at revolutionizing urban transportation through sustainable and autonomous technologies.
IPO
Figma’s Dylan Field will cash out about $60M in IPO, with Index, Kleiner, Greylock, Sequoia all selling, too
Techcrunch • July 21, 2025
Technology•Business•Startups•IPO•VentureCapital
When Figma announced its initial hoped-for price range on Monday ($25-$28), it also revealed an unusual decision for its highly anticipated IPO.
It will allow existing shareholders to sell more shares than the company plans to sell - by nearly a two-to-one ratio. The company plans to offer about 12.5 million shares, yet existing shareholders will be allowed to cash out of nearly 24.7 million shares, it said.
In addition, should this IPO be as hot as everyone thinks it will be, existing shareholders will get the option to sell, collectively, up to 5.5 million more shares.
Figma founder and CEO Dylan Field has disclosed that he plans to sell 2.35 million shares. At the midrange he’ll be cashing out of over $62 million. (That number could be much higher if the IPO prices above $28, too.)
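For what it's worth, the arithmetic checks out against the disclosed range (a quick sketch; only the $25-$28 range and the 2.35 million shares come from the filing):

```python
# Quick check of Field's disclosed sale against the initial price range.
low, high = 25.0, 28.0        # Figma's initial hoped-for range, per the article
shares_sold = 2_350_000       # shares Field plans to sell

midpoint = (low + high) / 2   # $26.50
proceeds = shares_sold * midpoint
print(f"~${proceeds / 1e6:.1f}M at the ${midpoint:.2f} midpoint")  # just over $62M
```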
Even with that sale, he will still own an enormous number of shares and control the company. He will hold 74% of the voting rights after the IPO. This is thanks to supervoting rights of 15 votes per share for the Class B stock he controls, plus the right to vote the Class B shares of his co-founder, Evan Wallace, the company says in its S-1.
Figma’s biggest venture investors are all cashing out some shares, as well, including Index, Greylock, Kleiner Perkins, and Sequoia. Should the demand be there for the over-allotment, they will cash out 1.7 million to 3.3 million shares apiece. That should allow them to return some cash to their investors in this liquidity-starved venture market.
It should be noted, though, that each of these investors is keeping the lion’s share of their Figma holdings. One way to interpret this largely secondary sale is that if the company hadn’t opened up share sales to existing investors, it might not have had enough shares to meet the demand.
As you might expect, the company will not make money from the shares its stockholders sell. But should it price above its announced range (as often happens with hot IPOs), Figma will raise more, as will its shareholders.
Prior to pricing, IPO experts expected Figma to sell around $1.5 billion worth of stock. Should it price above range and exceed that, Figma would be the biggest IPO of 2025 to date. The IPO could happen next week, so we shall soon see. Figma declined further comment.
Automotive
Rivian is getting a new navigation system with Google Maps.
Blog • July 15, 2025
Technology•Software•NavigationSystem•Automotive•GoogleMaps
In collaboration with Rivian, we're excited to announce Rivian Navigation with Google Maps, a brand new navigation system for Rivian drivers built on Google Maps automotive technology. This integration aims to enhance driving experiences by providing more accurate routes, real-time traffic updates, and seamless navigation directly through Rivian’s vehicles.
The system leverages Google's powerful mapping data and machine learning capabilities to deliver precise directions and up-to-date road condition information. Rivian drivers will benefit from dynamic rerouting, lane guidance, and easy access to points of interest, making road trips and daily drives more efficient and enjoyable.
Additionally, the fusion of Rivian and Google Maps technology is designed to be user-friendly, integrating smoothly with the vehicle’s digital interface to ensure safety and convenience while driving. This new navigation system underscores a growing trend in the automotive industry, where advanced digital tools are becoming essential components of the electric vehicle experience.
Browser Wars
Live Demo & Review of the Newest AI Browsers: Dia vs Comet
Youtube • a16z • July 22, 2025
Technology•AI•WebBrowsers•Productivity•Research•Browser Wars
In the rapidly evolving landscape of web browsers, two AI-powered contenders have emerged: Perplexity's Comet and The Browser Company's Dia. Both aim to revolutionize the browsing experience by integrating artificial intelligence directly into their interfaces, but they adopt distinct approaches to achieve this goal.
Comet by Perplexity
Launched on July 9, 2025, Comet is a Chromium-based browser that seamlessly incorporates Perplexity's AI search engine. This integration allows users to query any webpage's content instantly, ask follow-up questions, or request summaries without leaving the page. The browser maintains context across multiple tabs, enabling complex research sessions where information from various sources can be intelligently synthesized. Additionally, Comet includes smart citation tracking, automatically organizing sources as users research, and can generate comprehensive reports from browsing sessions. Its "Focus Mode" filters out ads and distractions while highlighting relevant information based on the current research topic. (linkedin.com)
Dia by The Browser Company
In contrast, Dia is designed to be an AI-first browser, positioning itself as a productivity powerhouse that happens to browse the web. Developed with remote workers and digital professionals in mind, Dia emphasizes workflow automation and intelligent task management alongside browsing. Its "Smart Macros" feature can learn from browsing patterns and automate multi-step processes. For instance, if a user regularly checks multiple news sites each morning, Dia can create a summary dashboard automatically. The browser includes built-in tools for note-taking, task management, and calendar integration, all enhanced by AI. Its "Context Switching" feature intelligently saves and restores entire browsing sessions based on projects or contexts, making it easy to jump between different work modes. Dia also excels at form filling and data extraction, using AI to understand web forms and populate them intelligently. (linkedin.com)
Comparative Analysis
While both browsers integrate AI to enhance user experience, their core focuses differ. Comet excels in research-intensive tasks, offering deep AI integration for information synthesis and analysis. Its ability to maintain context across multiple tabs and generate comprehensive reports makes it invaluable for users engaged in extensive research. On the other hand, Dia is tailored for productivity, automating repetitive tasks and managing workflows to streamline daily activities. Its focus on task management, note-taking, and calendar integration positions it as a tool for users seeking to optimize their work processes.
In conclusion, the choice between Comet and Dia depends on individual user needs. Those requiring advanced research capabilities may find Comet more suitable, while users looking to enhance productivity through task automation and workflow management might prefer Dia. Both browsers represent significant advancements in integrating AI into the browsing experience, each offering unique features to cater to different user preferences.
Perplexity AI Launches Comet Browser to Challenge Google Chrome’s Dominance
Medium • ODSC - Open Data Science • July 10, 2025
Technology•Software•AI•Privacy•WebBrowsers•Browser Wars
Perplexity AI, a rising player in the AI space, has officially entered the competitive browser market with the launch of Comet, a new AI-powered web browser designed to compete directly with Google Chrome. The announcement was made on Wednesday as the company expands its product ecosystem beyond its existing AI-powered search platform.
Backed by Nvidia, Jeff Bezos, and SoftBank, Perplexity is positioning Comet as a transformative browser that replaces traditional navigation with agentic AI technology capable of acting, thinking, and deciding autonomously on behalf of users.
Google Chrome, the global leader in the browser space, held a 68% market share in June 2025, according to StatCounter. Perplexity aims to challenge that dominance by reimagining the browsing experience. Comet enables users to perform advanced tasks through a single interface, from product comparisons to meeting bookings, all powered by natural language prompts.
While the initial rollout is limited to Perplexity Max subscribers, who pay $200 per month, broader access will be introduced by invitation over the summer.
The launch of Comet comes amid a wave of AI integrations across search and browser platforms. OpenAI has expanded access to its ChatGPT search features, while Google debuted its AI Overviews feature last year. Perplexity’s approach differentiates itself through a tightly integrated assistant that simplifies research workflows and executes tasks within a conversational UI.
A key selling point for Comet is its privacy-first architecture. Unlike traditional AI platforms, Comet stores data locally and avoids training its models on personal information. This design is expected to resonate with users concerned about data surveillance and model misuse.
Comet also opens up new monetization pathways for Perplexity, including advertising and e-commerce integration. However, the company’s aggressive growth has not been without backlash. Media organizations, including News Corp, Forbes, Wired, and the Wall Street Journal’s parent company, Dow Jones, have criticized Perplexity for using their content without consent or compensation.
In response, Perplexity has launched a publisher partnership program aimed at offering formal collaboration opportunities to content creators. The company says this move is part of a broader strategy to build long-term trust with media outlets while maintaining content quality within its AI ecosystem.
With Comet, Perplexity enters a high-stakes race to redefine how users interact with the internet. By embedding AI capabilities directly into the browsing experience, the company is betting on a future where AI agents serve not just as search engines but as digital co-pilots across every aspect of online activity.
Whether Perplexity’s Comet browser can erode Google Chrome’s market lead remains to be seen. But its emphasis on intelligent automation, privacy, and media engagement signals that the browser wars are evolving—and AI is at the center of it.
Is A.I. the Future of Web Browsing?
Nytimes • July 11, 2025
Technology•AI•WebBrowsers•Innovation•Browser Wars
The Browser Company has introduced Dia, an AI-powered web browser designed to revolutionize online interactions by integrating artificial intelligence directly into the browsing experience. Unlike traditional browsers, Dia offers a conversational interface that allows users to perform tasks through natural language commands, enhancing both efficiency and personalization.
One of Dia's standout features is its ability to remember user activities, enabling the AI to provide summaries of recent browsing sessions and adapt to individual preferences over time. This personalized approach aims to make the browser feel more intuitive and responsive to each user's needs. (forbes.com)
Additionally, Dia integrates task automation capabilities, allowing users to set up workflows that handle repetitive actions. For example, users can instruct the browser to organize bookmarks, prepare research summaries, or draft content using built-in AI tools, streamlining daily tasks and boosting productivity. (ki-ecke.com)
The browser also features a command-based address bar, enabling users to execute complex tasks using natural language commands. This functionality allows for actions such as retrieving documents by description, sending emails through preferred clients, and scheduling meetings, all directly from the address bar. (techtimes.com)
Dia's autonomous web actions further set it apart by performing tasks like adding items to shopping carts or emailing multiple recipients without additional clicks. This capability aims to save time and reduce the need for manual intervention in routine online activities. (techtimes.com)
Currently, Dia is in beta and available exclusively for macOS users, with support for other operating systems expected in the future. The Browser Company continues to refine Dia's features, focusing on enhancing user experience and integrating AI more deeply into the web browsing process. (ghacks.net)
Startup of the Week
Miro’s CEO Andrey Khusid on navigating explosive growth
Seedcamp • July 8, 2025
Business•Startups•Leadership•Growth•Culture•Startup of the Week
In an insightful discussion with Seedcamp's Managing Partner, Carlos Eduardo Espinal, Andrey Khusid, co-founder and CEO of Miro, delved into the company's remarkable journey from its inception to becoming a global leader in visual collaboration. Starting with the simple idea of bringing whiteboards into browsers, Miro has evolved into a platform valued at nearly $20 billion.
Reflecting on Miro's rapid expansion, Khusid highlighted the period between 2020 and 2021, during which the team grew from 200 to 1,800 employees in just 18 months. This explosive growth presented challenges in maintaining company culture and cohesion. Khusid emphasized the importance of hiring individuals who align with the company's values and mission, focusing on mindset over experience.
Transitioning from a product-led growth model to incorporating sales-led strategies was another significant shift for Miro. Khusid discussed the balance between customer feedback and product vision, noting that while user input is invaluable, it's crucial to maintain a clear product direction. He also touched upon the challenges of expanding into enterprise markets and the necessity of adapting to market changes, including the integration of AI technologies.
Building a strong company culture was central to Miro's success. Khusid shared insights into fostering a collaborative environment, learning from failures, and the importance of leadership in guiding the company through its growth phases. He concluded by reflecting on the significance of founder-led companies and the unique perspectives they bring to scaling a tech startup globally.
Education
Is ChatGPT killing higher education?
Vox • Sean Illing • July 5, 2025
Education•AI•AcademicIntegrity•HigherEducation•Cheating
What’s the point of college if no one’s actually doing the work?
It’s not a rhetorical question. More and more students are not doing the work. They’re offloading their essays, their homework, even their exams, to AI tools like ChatGPT or Claude. These are not just study aids. They’re doing everything.
We’re living in a cheating utopia — and professors know it. It’s becoming increasingly common, and faculty are either too burned out or unsupported to do anything about it. And even if they wanted to do something, it’s not clear that there’s anything to be done at this point.
So what are we doing here?
James Walsh is a features writer for New York magazine’s Intelligencer and the author of the most unsettling piece I’ve read about the impact of AI on higher education.
Walsh spent months talking to students and professors who are living through this moment, and what he found isn’t just a story about cheating. It’s a story about ambivalence and disillusionment and despair. A story about what happens when technology moves faster than our institutions can adapt.
I invited Walsh onto The Gray Area to talk about what all of this means, not just for the future of college but the future of writing and thinking. As always, there’s much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday.
This interview has been edited for length and clarity.
Let’s talk about how students are cheating today. How are they using these tools? What does the process look like?
It depends on the type of student, the type of class, the type of school you’re going to. There are plenty of students who take the prompt from their professor, copy and paste it into ChatGPT, say, “I need a four- to five-page essay,” and then copy and paste the resulting essay into their submission without ever reading it. Whether or not a student can get away with that is a different question.
One of the funniest examples I came across is that a number of professors are using this so-called Trojan horse method, where they drop non-sequiturs into their prompts. They mention broccoli or Dua Lipa, or they say something about Finland in the essay prompts, just to see if people are copying and pasting the prompts into ChatGPT. If they are, ChatGPT or whatever LLM they’re using will say something random about broccoli or Dua Lipa.
Unless you’re incredibly lazy, it takes just a little effort to cover that up.
Every professor I spoke to said, “So many of my students are using AI and I know that so many more students are using it and I have no idea,” because it can essentially write 70 percent of your essay for you, and if you do that other 30 percent to cover all your tracks and make it your own, it can write you a pretty good essay.
And there are these platforms, these AI detectors, and there’s a big debate about how effective they are. They will scan an essay and assign some grade, say a 70 percent chance that this is AI-generated. And that’s really just looking at the language and deciding whether or not that language is created by an LLM.
But it doesn’t account for big ideas. It doesn’t catch the students who use AI to ask, “What should I write this essay about?”, skip the actual thinking themselves, and then just write it up. It’s like paint by numbers at that point.
Did you find that students are relating very differently to all of this? What was the general vibe you got?
It was a pretty wide range of perspectives on AI. I spoke to a student at the University of Wisconsin who said, “I realized AI was a problem last fall, walking into the library and seeing that at least half of the students were using ChatGPT.” And it was at that moment that she started thinking about her classroom discussions and some of the essays she was reading.
The one example she gave that really stuck with me was that she was taking some psych class, and they were talking about attachment theories. She was like, “Attachment theory is something that we should all be able to talk about [from] our own personal experiences. We all have our own attachment theory. We can talk about our relationships with our parents. That should be a great class discussion. And yet I’m sitting here in class and people are referencing studies that we haven’t even covered in class, and it just makes for a really boring and unfulfilling class.” That was the realization for her that something is really wrong. So there are students like that.
And then there are students who feel like they have to use AI because if they’re not using AI, they’re at a disadvantage. Not only that, AI is going to be around no matter what for the rest of their lives. So they feel as if college, to some extent now, is about training them to use AI.
What’s the general perspective among professors? They all seem to share something pretty close to despair.
Yes. Those are primarily the professors in writing-heavy classes or computer science classes. But there were professors I spoke to who were actually really bullish on AI. One professor, who doesn’t appear in the piece, is at UCLA, teaches comparative literature, and used AI to create her entire textbook for her class this semester. And she says it’s the best class she’s ever had.
So I think there are some people who are optimistic, [but] she was an outlier in terms of the professors I spoke to. For the most part, professors were, yes, in despair. They don’t know how to police AI usage. And even when they know an essay is AI-generated, the recourse there is really thorny. If you’re going to accuse a student of using AI, there’s no real good way to prove it. And students know this, so they can always deny, deny, deny. And the sheer volume of AI-generated essays or paragraphs is overwhelming. So that, just on the surface level, is extremely frustrating and has a lot of professors down.
Now, if we zoom out and think also about education in general, this raises a lot of really uncomfortable questions for teachers and administrators about the value of each assignment and the value of the degree in general.
Regulation
Apple, as Promised, Formally Appeals €500 Million DMA Fine in the EU
9to5mac • John Gruber • July 7, 2025
Technology•Software•Regulation•AppStore•EUCompliance
Here’s the full statement, given by Apple to the media, including Daring Fireball:
“Today we filed our appeal because we believe the European Commission’s decision — and their unprecedented fine — go far beyond what the law requires. As our appeal will show, the EC is mandating how we run our store and forcing business terms which are confusing for developers and bad for users. We implemented this to avoid punitive daily fines and will share the facts with the Court.”
Everyone — including, I believe, people at Apple — agrees that the policy changes Apple announced at the end of June are confusing and seemingly incomplete in terms of fee structures. What Apple is saying in this statement is that it needed to launch these policy changes now, before the full fee implications were worked out, to avoid the daily fines it was facing over the steering rules.
Chance Miller, reporting for 9to5Mac:
Apple also reiterates that the EU has continuously redefined what exactly it needs to do under the DMA. In particular, Apple says the European Commission has expanded the definition of steering. Apple adjusted its guidelines to allow EU developers to link out to external payment methods and use alternative in-app payment methods last year. Now, however, Apple says the EU has redefined steering to include promotions of in-app alternative payment options and in-app webviews, as well as linking to other alternative app marketplaces and the third-party apps distributed through those marketplaces.
Furthermore, Apple says that the EU mandated that the Store Services Fee include multiple tiers. [...] You can view the full breakdown of the two tiers on Apple’s developer website. Apple says that it was the EU who dictated which features should be included in which tier. For example, the EU mandated that Apple move app discovery features to the second tier.
Like I wrote last week, “byzantine compliance with a byzantine law”.
A reminder for new readers: each week, That Was The Week includes a collection of selected essays on critical issues in tech, startups, and venture capital.
I choose the articles based on their interest to me. The selections often include viewpoints I can't entirely agree with. I include them if they make me think or add to my knowledge. Click on the headline, the contents section link, or the ‘Read More’ link at the bottom of each piece to go to the original.
I express my point of view in the editorial and the weekly video.