Hi from even deeper in July (sending on 1 August). Here are additional July readings worth capturing. Andrew is back, so we discuss them in the vlog above. These are the pieces that would have appeared over the past week had we been publishing.
Best
Keith
Contents
Essays
Venture Capital
European Weakness
AI
As Anthropic goes, so goes the generative AI trade, says Big Technology's Alex Kantrowitz
a16z GP, Martin Casado: Anthropic vs OpenAI & Why Open Source is a National Security Risk with China
Balaji Srinivasan: How AI Will Change Politics, War, and Money
Iconiq set to lead $5bn funding round for AI start-up Anthropic
OpenAI’s IMO Team on Why Models Are Finally Solving Elite-Level Math
Pew Study: Google Users Click Less When AI Summaries Appear in Search Results
New AI architecture delivers 100x faster reasoning than LLMs with just 1,000 training examples
Tesla signs a $16.5 billion chip contract with Samsung Electronics
Chinese Tech
Media
IPO
Education
Interview of the Week
Regulation
M & A
Essays
A Summer of AI in San Francisco
Joe Lonsdale • July 21, 2025
Technology•AI•EnterpriseSoftware•Automation•Innovation•Essays
I’ve been spending a lot of time out in San Francisco this summer with our 8VC team, and it's been energizing to meet truly impressive founders building at the frontier of AI. At 8VC, we always evaluate companies by first asking: Why now? The best answers highlight clear technological shifts that unlock entirely new products & solutions that were previously impossible but are now strategically necessary.
Over the past decade, many enterprise SaaS companies succeeded by focusing narrowly and building products for specific verticals, niche user segments, or overlooked workflows. This approach has produced great businesses which thrive precisely because they are specialized, capturing vertical data and improving their workflows.
Recent advances in AI like document understanding, reliable legacy software automation, conversational agents, and structured reasoning dramatically expand the range of tasks software can execute directly. Instead of merely assisting workers, software today can autonomously work alongside them, managing complex workflows. Founders are now confidently transforming how entire departments work.
Our Frameworks
When people ask about investing in AI, we share a simple framework to help them understand all the options, and where value accrues in the AI ecosystem, organized into six tiers, 0-5:
Tier 0: Energy Infrastructure – critical to support and scale the entire AI ecosystem.
Tier 1: Chips – the fundamental hardware, GPUs, and specialized processors powering AI compute.
Tier 2: Data Centers – the physical infrastructure and computing environments that host and scale AI workloads.
Tier 3: Foundation Model Companies – organizations building large language models and AI capabilities, such as xAI, OpenAI, Anthropic, Google, and Meta.
Tier 4: Software Infrastructure – tools and platforms that enable the deployment, orchestration, monitoring, and management of AI models (e.g., vector databases, orchestration platforms, model hosting services, Palantir).
Tier 5: AI-native Applications & Services – use-case-specific software that directly automates and executes entire workflows, moving beyond simple assistance, and full-stack services offerings that close the loop when AI can’t solve the task on its own. Often competing directly with existing businesses in the services economy (e.g. healthcare billing agencies).
The companies in Tier 3 making foundation models — like OpenAI, xAI, Anthropic, etc. — are able to compete in the basics of Tier 4 (software infrastructure): vector stores, tool calling, memory, and other software fundamentals. This makes it difficult for startups to compete in pure infrastructure. We will still fund heavy lifts by the best teams in areas like developer tooling, but the strength of the foundation model companies means that our attention climbs up to Tier 5 and AI-native apps.
Here, software lives inside the workflow and absorbs proprietary data. Companies create a beautiful flywheel as each completed task tightens feedback loops and raises switching costs. Opportunities unlocked in this shift are attracting top talent.
New categories of products & companies are possible because of new fundamental advancements at Tier 3 and Tier 4.
Better Doc Processing - Until recently, extracting structured information from unstructured documents was either unreliable or limited to simple tasks. Today’s LLMs can consistently read dense legal contracts, financial statements, medical records, and operational logs, not just to pull basic information but also to summarize key insights, categorize documents intelligently, and even generate fully formed, contextually appropriate drafts.
Intelligent Browser Automation - A significant share of business-critical data and tasks resides behind non-API, legacy systems. Historically, automating these interactions meant fragile web scraping or manual effort. Modern AI-driven browser automation products like Kaizen have become robust and reliable enough to safely extract data, navigate complex user interfaces, and complete workflows end-to-end with minimal human oversight.
Voice and Conversational Agents - Previously, interactions requiring nuanced conversational capabilities such as customer support calls, internal employee assistance, or vendor negotiations resisted effective automation. New voice and text agents can follow sophisticated multi-step instructions, adapt conversationally to specified edge cases, and handle critical operational tasks directly, reducing the need for human intermediaries.
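To make the document-processing capability above concrete, here is a minimal sketch of structured extraction from a contract. It assumes the OpenAI Python SDK and a JSON-mode-capable model; the schema, prompt, and function name are illustrative, not from the essay.

```python
# Minimal sketch: structured extraction from an unstructured contract.
# Assumes the OpenAI Python SDK; model choice and schema are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_contract_terms(contract_text: str) -> dict:
    """Ask the model to return contract fields as structured JSON."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any JSON-mode-capable model works here
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": ("Extract contract terms as JSON with keys: "
                         "parties, effective_date, term_length, "
                         "payment_terms, termination_clauses.")},
            {"role": "user", "content": contract_text},
        ],
    )
    return json.loads(response.choices[0].message.content)
```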
Each of these capabilities represents a powerful step forward on its own -- combined, they enable new shapes of automation. Instead of helping employees carry out workflows, these new systems can directly execute them. They become agents within an organization, performing tasks, making decisions, and driving meaningful productivity gains at scale.
These breakthroughs unlock vast areas of greenfield opportunity. Tasks that once demanded armies of specialists -- or were simply skipped -- are now fair game for intelligent software. Workflows that used to be manual, costly, or error-prone can finally run at machine speed and accuracy.
When we look at investment opportunities at 8VC, we begin by looking closely at what these new capabilities allow software to achieve, focusing on categories defined by:
High volumes of repetitive, document-centric tasks;
Workflows reliant on legacy interfaces and portals;
Knowledge-heavy tasks requiring judgment and decision-making, but where a process is in place or can be extracted.
What's working?
One of our clearest proofs that the world has changed is our portfolio company Cognition, which rethinks how software engineering work gets done. Software development has long relied on humans for every high-judgment step – reading requirements, creating tickets, writing and reviewing code. Tools shaved minutes, never hours or days. Cognition’s AI agent, named Devin, now reads a spec, writes code, opens a pull request, and manages feedback. Some of the largest & fastest-moving software teams run Devin in production. The result is not incremental productivity; it is a new unit of work that no longer needs a person.
We’re also backing AI Applications that leverage these new Tier 3 capabilities to systematically replace and transform entrenched workflows across major enterprise spend categories.
Outset rethinks qualitative user research. Their system moderates user interviews, asks follow-ups, and synthesizes insights in real time. What would take a research team days or weeks can now be done in a single afternoon. AI interviewers can run for < $1 a session, and the cost of multimodal inference continues to drop with each model cycle.
Glimpse starts with retail deduction disputes – a painful, error-prone process that most brands handle manually or not at all. They automate the full dispute process, then expand into adjacent workflows like trade promotions and financial ops. These deductions can be 20-30% of a brand’s revenue, and 10% of them are winnable. AI can take back a whopping 2-3% of GMV, which changes the lives of these low-margin CPG businesses.
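The implied arithmetic, as a quick sketch using only the figures quoted above:

```python
# Deductions are 20-30% of revenue and ~10% of them are winnable,
# so the recoverable slice works out to 2-3% of GMV.
winnable = 0.10
for deduction_share in (0.20, 0.30):
    recoverable = deduction_share * winnable
    print(f"{deduction_share:.0%} deductions -> {recoverable:.0%} of GMV")
```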
Ground Control focuses on regulated manufacturing, starting with first-article inspections and building into other core document workflows in the back office. Every single regulated part needs these reports, and delayed reports can hold up revenue for weeks, leaving an otherwise profitable shop underwater.
Tezi is building an AI-native recruiting platform. Their system sources candidates, engages them, evaluates fit, and moves them through the funnel, executing an entire recruiting workflow with minimal human input. Global recruiting spend burns >$200B a year, while software captures only a low single-digit percentage of that. Tezi today can shave off 80% of recruiting labor hours in the menial aspects of these workflows.
Broad-based automation is also central in our AI Services investments, where well-designed AI systems replace many functions and increase productivity by a multiple in legacy businesses.
Sequence Holdings acquires and transforms IT services businesses by embedding AI into core code generation workflows and ops, reshaping how managed IT services scale and deliver value.
Arcos reinvents transactional law practice as a technology-first legal platform, dramatically reducing transaction timelines and increasing margins. This requires a thoughtful understanding of how a workflow actually operates end-to-end: where to seamlessly involve a human with full context, and how to feed any new context back in to automate the rote work.
Although powerful AI models have made it easier to prototype impressive demos, durable companies in this space are doing something harder: deeply embedding into the real structure of work.
How do we create durable value?
We continue to reference our lessons from Palantir (which we wrote a bit about in The AI Services Wave), where understanding the ontology of a domain was essential to designing effective software. Mapping out the workflow – what can be automated, what should be augmented when model capabilities improve, and where humans still need to stay in the loop – is the foundational step. When done well, this informs not only what the product should do, but also where the moat will come from.
…
The Satya of Satya’s Layoff Memo
Om • July 26, 2025
Business•Management•AI•Layoffs•Transformation•Essays
In his recent memo, Microsoft CEO Satya Nadella addressed the company's decision to lay off approximately 9,000 employees, despite the company's strong financial performance. He acknowledged that these layoffs have been "weighing heavily" on him, emphasizing the emotional toll of such decisions. (cnbc.com)
Nadella framed the layoffs within the context of Microsoft's strategic shift towards artificial intelligence (AI). He described the current transformation as "messy but exciting," likening it to the technological shifts of the early 1990s. This period of change, he noted, is "dynamic, sometimes dissonant, and always demanding," but also presents an opportunity for the company to "shape, lead through, and have greater impact than ever before." (cnbc.com)
The CEO highlighted the necessity of adapting to the evolving tech landscape, stating that "progress isn't linear." He emphasized the importance of reimagining Microsoft's mission for the AI era, focusing on empowering individuals through accessible AI solutions. Nadella envisions a future where AI serves as an "intelligence engine," enabling users to create their own tools and solutions. (cnbc.com)
Despite the layoffs, Nadella reassured employees that Microsoft's overall headcount remains "basically flat," indicating that the company continues to hire new talent. He acknowledged the challenges of this transition but urged the workforce to embrace the changes, emphasizing the need for a "growth mindset" to navigate the complexities of transformation. (cnbc.com)
In summary, Nadella's memo sought to contextualize the layoffs within Microsoft's broader strategic vision, emphasizing the company's commitment to AI-driven innovation and the importance of adaptability in a rapidly changing industry.
Ads are inevitable in AI, and that's okay
Strange loop canon • July 28, 2025
Technology•AI•Advertising•BusinessModel•LLMs•Essays
We are going to get ads in our AI. It is inevitable. It’s also okay.
OpenAI, Anthropic and Gemini are in the lead in the AI race. Anything they produce also seems to get copied (and made open source) by Bytedance, Alibaba and Deepseek, not to mention Llama and Mistral. While the leaders have carved out niches (OpenAI is a consumer company with the most popular website; Claude is the developer’s darling and wins the CLI coding assistant), the models themselves are becoming increasingly interchangeable.
One solution is to go deeper and create product variations that others don’t, such that people are attracted to your offering. OpenAI is trying with Operator and Codex, though it’s unclear whether that’s a net draw or rather a cross-sell for usage.
Another option is to introduce new capabilities that will attract users. OpenAI has Agent and Deep Research. Claude has Artifacts, which are fantastic. Gemini is great here too, despite its reputation: it also has Deep Research, but more importantly it lets you talk to Gemini Live directly, show yourself on a webcam, and share your screen. It even has Veo 3, which can generate videos with sound today.
The models are decreasing in price extremely rapidly. They’ve fallen by anywhere from 95 to 99% or more over the last couple of years. This hasn’t hit the revenues of the larger providers because they’re releasing new models rapidly at higher-ish prices, and because usage is growing extraordinarily.
What could happen is that the training gets expensive enough that these half dozen (or a dozen) providers decide enough is enough and say we are not going to give these models out for free anymore.
Now, by itself this is fine. Because instead of being a SaaS-like high-margin business making tens of billions of dollars, it’ll be an Amazon-like low-margin business making hundreds of billions of dollars and growing fast. A Costco for intelligence.
There’s another option, which is to bring the best business model we have ever invented into the AI world. That is advertising.
It solves the problem of differential pricing, the hardest problem for all technologies but especially for AI, where a few providers are all fighting to be the cheapest in order to win the most market share while trying to get more people to use it. And AI has a unique challenge in that it is a strict catalyst for anything you might want to do!
For instance, imagine Elon Musk using Claude to have a conversation, the answer to which might well be worth trillions of dollars to his new company. If he only paid you $20 for the monthly subscription, or even $200, that would be grossly underpaying you for the privilege of providing him with the conversation. It’s presumably worth 100 or 1000x that price.
Or if you're using it to just randomly create stories for your kids, or to learn languages, or to write an investment memo, those are widely varying activities in terms of economic value, and surely shouldn't be priced the same. But how do you get one person to pay $20k per month and another to pay $0.20? The only way we know how to do this is via ads.
And if you do it, it helps in another way: it lets you open up even your best models, if rate-limited, to a much wider group of people. Subscription businesses are a flat edge that only captures part of the pyramid.
Long-term cost curves suggest another 3× drop in cash cost per token by 2027. And ads apply well beyond product sales: to news recommendations or even service links.
What would it look like? The ads themselves could be AI-generated with better recommendations, contain expositions from products or services or content engines, include direct purchase links, upsell the provider’s own products, or run a second simultaneous chat about the existing chat.
A large part of purchasing already happens via ChatGPT, or at least starts there. Conversion rates are likely to be much higher than even social media, since this is content, and it’s happening in an extremely targeted fashion.
I predict this will work best for OpenAI and Gemini. They have the customer mindshare. And an interface where you can see it, unlike Claude via its CLI.
Putting all these together, I feel ads are inevitable. I also think this is a good thing. Whether it’s ads or not, every company wants you to use their product as much as possible. That’s what they’re selling!
Now, a caveat. If the model providers become able to change the model’s output itself to favor advertisers, that would be bad. But I honestly don't think this is feasible. We're still in the realm where we can't successfully tell the model not to be sycophantic for long enough periods.
So if we somehow created the ability to perfectly target a model’s output, producing tailored responses that guide people towards advertised products and services without corrupting output quality much, that would constitute a breakthrough in LLM steerability!
Instead what’s more likely is that the models will try to remain ones people would love to use for everything, both helpful and likeable. And unlike serving tokens at cost, this is one where economies of scale can really help cement an advantage and build an enduring moat. The future, whether we want it or not, is going to be like the past, which means there’s no escaping ads.
Being the first name someone recommends for something has enduring consumer value, even if a close substitute exists.
The AI Search Tipping Point
Tomtunguz • July 21, 2025
Technology•AI•Search•ConsumerBehavior•MarketTrends•Essays
OpenAI receives on average 1 query per American per day.
Google receives about 4 queries per American per day.
And 50% of Google search queries now show AI Overviews, which means at least 60% of US searches are now AI.
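The arithmetic behind that 60% figure, as a sketch using only the per-capita query counts stated above:

```python
# Per-American daily queries, per the post's figures.
openai_q = 1.0          # OpenAI: ~1 query per American per day
google_q = 4.0          # Google: ~4 queries per American per day
overview_share = 0.50   # half of Google searches show AI Overviews

ai_touched = openai_q + google_q * overview_share   # 1 + 2 = 3
total = openai_q + google_q                         # 5
print(f"AI share of searches: {ai_touched / total:.0%}")  # 60%
```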
It’s taken a bit longer than I expected for this to happen. In 2024, I predicted that 50% of consumer search would be AI-enabled.
But AI has arrived in search.
If Google search patterns are any indication, there’s a power law in search behavior. SparkToro’s analysis of Google search behavior shows the top third of Americans who search execute upwards of 80% of all searches - which means AI use likely isn’t evenly distributed - like the future.1
Websites & businesses are starting to feel the impacts of this. The Economist’s piece “AI is killing the web. Can anything save it?” captures the zeitgeist in a headline.
A supermajority of Americans now search with AI. The second-order effects from changing search patterns are coming in the second half of this year & more will be asking, “What Happened to My Traffic?”
AI is a new distribution channel & those who seize it will gain market share.
William Gibson saw much further into the future! ↩︎
This is based on a midpoint analysis of the SparkToro chart, is a very simple analysis, & has some error as a result. ↩︎
The Slow Death of Social Networks
Salsop • Stewart Alsop • July 23, 2025
Technology•Web•SocialNetworks•UserEngagement•SocialMediaTrends•Essays
My son recently told me a startling piece of news, startling to me at least. Did you know the major social networks are downgrading posts that include links out of the network? Facebook, Threads, Instagram, LinkedIn, and X are trying to keep their users in their domain and not lose them to linked-out sites. In a way, they are putting walls up around their gardens. (If you don’t understand the reference, see Closed System.)
Some of you may know that son Stewart III and I are producing a podcast called Stewart Squared, which is a weekly discussion of how the history of technology helps predict the future of AI. We’ve produced more than 40 episodes, some with other notable figures from my history in the computer industry, people like Steve Case (AOL), Bill Gross (IdeaLab), Vince Kadlubek (Meow Wolf), and others, including my partner in our new venture capital firm, Jim Ward.
But Stewart Squared has also turned into a shared discovery process for the two of us to understand how to promote a podcast on the modern Internet. We’ve been trying to create more visibility for the podcast, and that process now includes a host of non-intuitive learnings about how to navigate web browsers, social networks and other ways to make yourself more visible.
My first learning, as noted, is that social networks today are jealously guarding the audience they have already built by discouraging users from leaving the network. Stewart III has been producing his own podcast, Crazy Wisdom, since 2017 and has recorded 473 episodes, so he knows a lot more than I do about this. He told me that I couldn’t post links, since the network would reduce the visibility and reach of any posts that link out.
He also told me to stop using hashtags, an early phenomenon of Twitter that allowed users to follow topics rather than people. Apparently, clicking on hashtags is not conducive to reviewing what is in your feed and engaging with the social network itself. Twitter has put out an advisory that it would downgrade posts with hashtags.
This is making it really hard for new creators (me) to get started. Everything I learned about social networks back in the heyday when they were new and exciting is now wrong. An astute observer might think that, strategically, it is stupid to try to build walled gardens around internet businesses, since the very purpose of the internet is to create an open, accessible network that puts control in the user’s hands. At some level, the existence of smartphone apps makes it seem like social networks are walled gardens, since the user can only access the network through the app.
What is the difference between using the app and using the website? I have been in the habit of using social networks’ websites, not the apps. So it’s only recently that I realized that they are trying to “trap” their users inside the apps.
That led me to wonder exactly what is going on with social networks. I was an early and enthusiastic adopter of Twitter, Facebook, Instagram, and Pinterest (even Orkut and Google Plus and others I don’t even remember). I still sign up for anything new, including Threads most recently. I was slow to sign up for Snapchat and TikTok and haven’t engaged with either except occasionally to look at posts that seem interesting; perhaps that’s a reflection of my age and cohort as a baby boomer, since I don’t easily get into meaningless videos, which seem to be the core user experience of Snapchat and TikTok.
Fact is I’m bored by most social networks. I’ve stopped using Facebook to post anything, and only look at posts from friends. I use Instagram to post cool photos and foodie things, but I don’t get much traction. And I’m permanently pissed at Meta for turning it into an advertising machine, mainly through Reels and ads, rather than its original intent, which was to share photos with friends. X/Twitter is supposed to be a cesspool, although I don’t see that stuff in my feed and think most people who complain about it are actually looking for garbage so the algorithm delivers more of it. Like Facebook, I only look at X when someone links to a post. (Indeed, X says I have >13,100 followers , but I must assume most of them have stopped using X or died, since I get very little reaction to the few posts I have made in the past few years, with or without links or hashtags. That may also be a function of X making you pay for a membership, which I haven’t done, to get your posts read and get new followers.)
Funnily enough, I actually use the Threads app when I want to turn off my brain and be entertained; I’ve learned from using Threads that Meta actually does know how to run a social app (surprise, surprise). Threads quickly figured out that I live in New Mexico and like food and art. Indeed, recent news indicates that Threads has almost as many daily active users as X. When they discover how to pump it full of ads, I’ll probably stop using that one too.
That’s my personal experience: From hours a day 10 years ago to 30 minutes or less a day now. Is that reflected in overall use of social media? Are other people as bored as I am? The answer is: yes, sort of. U.S. monthly active users of Facebook has declined from more than 200M to 160M in the past three years. But DAUs have stayed roughly the same, which probably means that really active users continue to be active but less active users (like me) tend to fade away. The most active users may well be the kind that post a lot of junk, which certainly doesn’t increase the quality of interaction.
Pinterest seems like a dead zone, although it is valued at $25B. Snap seems to be trying to follow Meta into AR glasses, but is valued at $16B. Meta’s social networks — Facebook, Instagram, WhatsApp, and Threads — are clearly the dominant force in social media and produce a ton of cash, which the company is using to build its metaverse products: Oculus VR, Orion AR, and Ray-Ban/Oakley AI glasses. The company is valued at $1.8T, partly based on its ability to generate lots of profitable revenue from its social networks. No one knows what TikTok is worth, although its remarkable growth worldwide appears to position it to take over the lead in social networks soon, so it is likely to be worth a lot.
My point being: Social networks aren’t a growth business anymore, with a stable base of billions of users worldwide, and they aren’t serving the original purpose, which is embedded in the name “social networks”. So perhaps it makes business sense to try to keep users inside semi-walled gardens in order to preserve value in the existing businesses as long as possible. However, it sure doesn’t seem the right way to treat your users, particularly since you are using their engagement to sell advertising. The evolution of social networks from fascinating ways to connect and engage with friends into massive businesses trying to eke out as much profit as possible will ultimately train users to lose interest and fade away. Slowly, but inevitably leading to the death of social networks.
10 Things I Wish I Knew Before Vibe Coding
Saastr • Jason Lemkin • July 25, 2025
Technology•Software•Vibe Coding•AI Agents•Software Development Process•Essays
If you follow us on X or LinkedIn, you may have seen we’re deep into vibe coding. Specifically, trying to build basic B2B apps — for real — using the leading vibe coding platforms like Lovable, Replit, Bolt.new, etc.
After being deep into vibe coding a somewhat complex (but not super complex) B2B product for the SaaStr community, I’ve already learned a lot.
These platforms are very cool, but super quirky. The “AI agents” that build the apps are so powerful but they are also … unpredictable. And they get more and more unpredictable the deeper you get into a project. That’s what you need to learn.
Here are the 10 things I wish I knew before I started.
1. The AI Agent Will Often Be Wrong. Deal With It.
AI agents are wrong 30-40% of the time on anything complex. API integrations, edge cases, architectural decisions — they’ll give you confident-sounding answers that are just wrong.
What to do: Always ask for multiple approaches. “Give me three ways to build this algorithm” became my default. Push back on suggestions that don’t sound right. And if it said it’s done something — verify that it actually has. Don’t move on.
Stop treating AI suggestions like they are … right. They’re hypotheses to test.
2. Slow It Down. Work on Small Pieces.
Vibe coding feels fast, so you want to keep going. “Add payments! Add user roles! Add notifications!” This is a trap.
I tried to build an entire dashboard in one conversation. I ended up with 200 lines of code that technically worked but was impossible to debug or extend. And nothing on it really worked.
What works: 15-20 minute focused sprints. Build one thing, test it, move to the next. If it takes more than 30 minutes to build and test, it’s too complicated. Even any feature that takes longer than a few minutes to build is likely too complicated. Break it down further.
3. The Agent Will Change Stuff Without Asking. Accept It.
Your code will constantly evolve. Variable names will frustratingly change. Functions get refactored. Architecture shifts. Without you ever asking or even knowing.
At first, it will be funny. You’ll log back in, and your app will have … changed. You’ll just fix it. But then as you go further and further into a project, it’s less funny. Because as you go deeper into any project with any level of complexity, there will be so many moving pieces in flux. This is why so many folks abandon their vibe coding project 70% of the way in. At that phase, it’s now changing way too many things you thought were done, or, more likely, close to done. You just have to learn to work around this.
Vibe coding may well be slower and less efficient than normal coding after you are about 70% of the way in. But if you aren’t a coder, you just need to live with that.
Reality check: As long as functionality works and tests pass, let it evolve. The AI often improves code in ways you wouldn’t think of. If it’s done enough, don’t touch it again.
4. Don’t Ask More Than Twice. Start Fresh Instead.
The pattern I see constantly, and that I’ve been ‘guilty’ of, even again today: You ask the AI agent to fix a bug. The fix doesn’t work. You ask again. It still doesn’t work. So you ask a third time… and it starts to go off the rails. Deleting things. Making up test results. Inserting “demo” user data that hides the fact the function just doesn’t work.
Stop. Don’t ask an AI agent 3 times to fix something — ever. The context window gets polluted with failed attempts. The AI builds on false assumptions. No matter what, it’s just not going to work if you have to ask the AI prompt 3 times. You’ll end up in a sea of endless bugs you can’t fix.
Rule: Never ask the same agent to solve the same problem more than twice. Start a new conversation. Describe the problem from scratch. Approach it in a simpler way, if possible.
5. Some Stuff Is Harder Than It Looks. Plan Accordingly.
Email integration, OAuth, payments — they look simple until you hit SPF records, refresh tokens, and webhook validation. A lot of this stuff just doesn’t work reliably today. Not unless you get close to getting into the code yourself. The AI will give you a basic implementation that works in dev but fails in production.
I actually don’t have all the answers here other than be aware many things that seem to work won’t actually work. And use the ‘hardpoints’ in your vibe coding platform. Use their authentication, their database, their core choice for email, everything core you can vs. writing your own.
The platform I’ve used most desperately wants to use Sendgrid for email for example, by default. Every time I lock down Resend instead, it fights me. One way or another. Giving up and using the default vendor it wants to use (Sendgrid) is the right choice of 95% of us.
Budget accordingly: Anything involving external systems takes 3-5x longer than the AI estimates. Build the simplest happy path first, then iterate on edge cases. Use the default “hard points” like OAuth, email, etc. that the vibe platform provides. Don’t import your own unless you are 100% sure you really need to.
6. Don’t Build an App That’s Hard to Test
When you’re moving fast, you skip testing. “I’ll just click through it.” This doesn’t scale. Not even past a few days of vibe coding.
By hour 20 or so, you’ll spend more time testing than building.
Solution: Design for testing from day one. Unit tests, API testing, reusable test data, easy rollbacks. I spend 60% of my time on testing infrastructure and 40% building features. 40% max.
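As an illustration of “design for testing from day one,” here is a minimal sketch using pytest. The Invoice type and create_invoice function are hypothetical stand-ins for whatever your vibe-coded app exposes, not from the article.

```python
# Minimal sketch of reusable test data with pytest.
# Invoice / create_invoice are hypothetical stand-ins for app code.
from dataclasses import dataclass

import pytest

@dataclass
class Invoice:
    customer_id: str
    total_cents: int

def create_invoice(customer: dict, amount_cents: int) -> Invoice:
    if amount_cents < 0:
        raise ValueError("amount_cents must be non-negative")
    return Invoice(customer_id=customer["id"], total_cents=amount_cents)

@pytest.fixture
def sample_customer():
    # Reusable test data: one fixture, shared by every test below.
    return {"id": "cust_123", "name": "Acme Co", "plan": "pro"}

def test_invoice_total(sample_customer):
    assert create_invoice(sample_customer, 5000).total_cents == 5000

def test_rejects_negative_amounts(sample_customer):
    with pytest.raises(ValueError):
        create_invoice(sample_customer, -1)
```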
7. Just Roll Back. Don’t Try to Fix Everything.
When you introduce a bug, resist the urge to debug in place. Instead ask yourself: “Should I just roll back?”
Vibe coding platforms are really, really good at rolling back. And they automatically make save points, just like video games. So before you debug something that isn’t working, ask yourself if instead you should just … roll back. Get used to rolling back multiple times a session. It works elegantly, so take advantage.
Strategy: When your gut tells you something isn’t going to get fixed, just roll back. Right then and there. When something breaks, you always have a recent checkpoint.
8. Accept Your App Will Never Be 100% Stable Until Production. And Probably Not After That If You Keep Vibe Coding It.
This is a tough one, and one I didn’t fully get at first. AI will keep suggesting “improvements” that introduce bugs. Features that worked yesterday break today. The further you go down a path with any complex app, the more this compounds. If your app is too complex, honestly, it will never be stable. If it changes 1-2 things per session, you can work around that. 20? It’s hopeless.
I used to try for 100% stability. These platforms are evolving and we may get there. But today it’s not possible, let alone practical. Your AI Agent will just keep changing things.
Reality: Launch at 80% stability when the core user journey works. Real user feedback beats perfect code. And make the app as simple as possible. Complex apps are not ready for prosumer vibe coding yet. Not really.
9. You May Want to Just Plan to Rebuild Entire Pages of Your App. Probably All of Them.
Sometimes it’s faster to rebuild from scratch than debug existing code. Because AI agents generate code quickly, rebuilding often takes less time than debugging complex issues. Don’t fear rewriting every single page of your app.
When to rebuild: If you’ve been debugging the same component for over an hour, start fresh. Ask the AI to build a “v2” of your page and see where it takes you. Preserve the v1 in case it doesn’t go where you want.
10. A Lot of Stuff That Looks Like It Works at First Glance … Won’t
Code that looks perfect has subtle issues that only surface under specific conditions. Lots of functions and buttons and workflows that look like they work … won’t. Your app will load with demo data in dashboards, and with algorithms that are “simulated” to make them look like they work.
You’ll become frighteningly good at breaking your own app. You’ll need to be. It will be job #1. And you’ll need to get great at seeing anything that looks wrong. It was probably made up.
Summary
Vibe coding is magical but it isn’t magic. It’s a tool that changes how you build products — more iterative, faster feedback loops, different skill requirements. But it’s early.
Finally, a big topic I’ll do a deep dive on later is security. The more your app stores any sort of customer information, practically speaking, the more important this is. And the riskier using a vibe coding tool on its own is. This will get its own stand-alone post soon. For now, be cautious if your app stores any confidential or customer information and is public facing.
And another great summary here from Sr Director of AI products at Pendo:
PageRank in the Age of AI
Tomtunguz • July 22, 2025
Technology•AI•ContentDistribution•OnlineAdvertising•Publishers•Essays
The internet is on the brink of a significant transformation, aligning more closely with the dynamics of the online advertising industry. This shift doesn't necessarily mean an increase in advertisements; rather, it signifies a change in how content is distributed and valued. The technological framework for content dissemination is evolving to mirror the structures established in online advertising.
As we reach the AI search tipping point, publishers face an existential challenge: ensuring AI systems utilize their content in responses to maintain relevance. Traditionally, when you visit a website, your browser initiates an auction. The site's supply-side platform sends your data to an exchange—a marketplace for ads. Numerous advertisers bid for the opportunity to display their messages, with the highest bidder winning. This process occurs in under 200 milliseconds.
Now, envision a similar system applied to content. Instead of bidding to display ads, publishers compete to inform AI responses. The AI evaluates submissions based on quality metrics such as relevance, accuracy, freshness, and authority. This approach resembles PageRank for real-time AI responses—algorithmic evaluation operating in milliseconds rather than batch processing.
For instance, consider a scenario where you ask an AI system like Gemini, "What are the reviews of the new Google Pixel phone?" This query is broadcast to participating publishers, including tech reviewers, consumer sites, and electronics retailers. They submit their best content to the auction. Gemini then evaluates the quality, recency, and relevance of these submissions, synthesizing the winning content into your answer.
In this model, the demand-side platform disappears. There's no longer a need for advertisers optimizing for clicks. Instead, publishers compete to be the most useful source of information. The internet experiences fewer ads, but every piece of content vies for attention in an auction measured in milliseconds.
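One way to picture the mechanics is a toy scoring loop over publisher submissions. The quality signals come from the piece above; the weights and ranking function are illustrative assumptions, not a real protocol.

```python
# Toy content auction: rank publisher submissions on quality signals,
# then hand the top-k to the model for synthesis. Weights are invented.
from dataclasses import dataclass

@dataclass
class Submission:
    publisher: str
    content: str
    relevance: float   # 0-1: semantic match to the query
    accuracy: float    # 0-1: fact-checking / track-record signal
    freshness: float   # 0-1: decays with content age
    authority: float   # 0-1: PageRank-like domain reputation

WEIGHTS = {"relevance": 0.4, "accuracy": 0.3,
           "freshness": 0.15, "authority": 0.15}

def score(s: Submission) -> float:
    return (WEIGHTS["relevance"] * s.relevance +
            WEIGHTS["accuracy"] * s.accuracy +
            WEIGHTS["freshness"] * s.freshness +
            WEIGHTS["authority"] * s.authority)

def run_auction(submissions: list[Submission], k: int = 3) -> list[Submission]:
    """Return the top-k submissions to synthesize into the answer."""
    return sorted(submissions, key=score, reverse=True)[:k]
```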
Publishers benefit from increased traffic and attribution when their content is selected, leading to indirect revenue through brand building and subscriptions. However, this is contingent upon their content consistently winning on merit.
Venture Capital
Data is actually not a great VC-backed business
Auren • July 27, 2025
Business•Data•VentureCapital•PrivateEquity•Growth•Venture Capital
Five years ago, I advocated for the growth of pure-play data businesses, believing that the increasing data-centric approach of companies would drive demand for data-as-a-service (DaaS). However, this prediction did not materialize as expected. The number of hedge funds investing significantly in alternative data has decreased, with fewer than 100 funds currently making substantial investments. Similarly, industries like real estate and retail have shown minimal adoption of data purchasing. Even with the advent of AI, the market for data has not expanded as anticipated, primarily because extracting value from data remains challenging.
Data businesses, which sell raw data, are profitable and experience steady growth but are not ideal candidates for venture capital funding. Unlike software-as-a-service (SaaS) companies, which have seen numerous unicorns, DaaS companies like ZoomInfo have remained exceptions, often achieving profitability without venture capital. The majority of large DaaS companies are privately held, with private equity firms favoring them due to predictable revenue streams and potential for cost optimization. Therefore, data businesses are more suited for private equity investment rather than venture capital.
Venture capital thrives on rapid growth and substantial losses, fueling fast expansion and deep research and development. In contrast, data companies typically grow slowly and profitably, making them less compatible with the venture capital model. The success of DaaS companies like ZoomInfo, which achieved unicorn status without venture funding, highlights this discrepancy. Most data businesses should consider alternative funding options, such as private equity or debt financing, to support their growth.
Why Every International Founder Should Spend Three Months in the U.S.
Speedrun • July 22, 2025
Business•Startups•Entrepreneurship•InternationalFounders•Innovation•Venture Capital
A guest post by investor and writer Guillermo Flor highlights why every international founder should spend a few months in the U.S. Flor, an entrepreneur and investor from Spain, shares his personal journey from working as a lawyer to building startups and eventually moving to the U.S., which transformed his career. He emphasizes the unique advantages of spending time in American startup hubs like San Francisco and New York City.
In the U.S., there is a distinct level of openness to new ideas that is less common in Europe. Founders, investors, and executives tend to have a mindset of "Let’s talk, let’s build, let’s try something," which significantly increases the chances of building something great. This openness is a key trait of successful innovators.
Flor also points out the American bias toward action. During events like New York Tech Week, he found that people act quickly on opportunities, such as a CEO of a $2 billion company agreeing to a podcast recording within a day of a cold message—something that rarely happens in Europe.
Respect for builders at any stage is another critical difference. In the U.S., early-stage founders are taken seriously and can access big customers and meetings based on their ambition, not just past outcomes. This culture encourages speed and momentum in startups.
Thinking big and aiming to build billion-dollar companies is normalized in the U.S., especially in innovation hubs. This mindset contrasts with some European views, where such ambitions may be met with skepticism or dismissal. Being around visionary entrepreneurs pushing the boundaries inspires bigger dreams and bolder actions.
Finally, the talent density in places like San Francisco is unmatched. The concentration of ambitious, talented individuals working on solving global problems with technology creates an environment where anyone can level up just by being there. The high bar and fast tempo make it clear how much more is possible.
Flor concludes that international founders don’t necessarily need to move permanently to the U.S., but spending three months annually, especially early in their career, can open their minds, expand ambition, and accelerate their learning curve in ways that many other places currently cannot match.
A Path Forward for Seed VCs
Nextview • July 23, 2025
Business•VentureCapital•SeedInvesting•AI•Innovation•Venture Capital
In my previous post, I argued that seed investing is facing an existential crisis driven by four forces:
The maturation of the venture capital industry
The formidable forces of mega-funds and YC
Power law thinking becoming consensus
The AI platform shift which multiplies the first three forces
My Partner Stephanie Palmeri described it as “the series finale cliffhanger of posts” and lots of folks are eagerly anticipating part II.
Well, here it is. But think of this as episode 1 of the new season. I’ll share part of the “answer,” but most of it will remain in the mystery box.
And let’s be honest, it would make no sense to share my specific “answer” publicly. It’s up to each investor to read this and figure out what the right path is for themselves.
But here’s how I think about the way forward. Ultimately, it comes down to a few simple and somewhat obvious things:
The magnitude and progression of the AI supercycle
Defending and increasing share through sustaining innovations
Betting the farm on some form of disruptive innovation
Why Seed Rounds Are Growing as Startups Shrink
Tomtunguz • July 27, 2025
Business•Startups•SeedFunding•VentureCapital•StartupTrends•Venture Capital
Why is the sub-$5 million seed round shrinking?
A decade ago, these smaller rounds formed the backbone of startup financing, comprising over 70% of all seed deals. Today, PitchBook data reveals that figure has plummeted to less than half.
The numbers tell a stark story. Sub-$5M deals declined from 62.5% in 2015 to 37.5% in 2024. This 25 percentage point drop fundamentally reshaped how startups raise their first institutional capital.
Three forces drove this transformation. We can decompose the decline to understand what reduced the small seed round & why it matters for founders today.
VC fundraising dynamics represent the largest driver, accounting for 46% of the decline. US venture capital fundraising nearly doubled from $42.3B in 2015 to $81.2B in 2024. The correlation of -0.68 between sub-$5M deals & VC fundraising shows a powerful relationship: as funds grew larger, small rounds became scarcer.
Larger funds need larger checks to move the needle. A $500M fund can’t build a portfolio writing $1M checks. The math simply doesn’t work for their economics.
Inflation represents the smallest contributor at just 15% of the decline. What cost $5M in 2015 requires $6.7M today. This represents a meaningful increase but not the primary culprit.
Crucially, BLS data shows software engineering salaries grew at nearly the same rate as general inflation. This means the real cost of building startups remained relatively stable. Salary inflation isn’t driving founders to raise larger rounds.
The remaining 39% stems from other market forces. These likely include heightened competition for deals, increased pre-seed valuations pushing up seed round sizes, & founders’ growing capital appetites as they chase more ambitious visions from day one. The proliferation of seed funds & the emergence of multi-stage firms investing earlier also contribute to this shift.
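A quick sketch that recomputes the decomposition from the figures quoted above (the attribution shares are the post's; the arithmetic is just multiplied out):

```python
# Sub-$5M share of seed deals, per the PitchBook figures quoted above.
share_2015, share_2024 = 0.625, 0.375
decline_pp = (share_2015 - share_2024) * 100   # 25 percentage points

# Attribution of the decline, per the post.
drivers = {"VC fundraising dynamics": 0.46,
           "inflation": 0.15,
           "other market forces": 0.39}
assert abs(sum(drivers.values()) - 1.0) < 1e-9

# Inflation check: $5M in 2015 is ~$6.7M today, i.e. ~34% cumulative.
print(f"Cumulative inflation: {6.7 / 5.0 - 1:.0%}")
for name, weight in drivers.items():
    print(f"{name}: {weight * decline_pp:.1f} points of the decline")
```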
Here’s the paradox: despite these larger rounds, startups are actually shrinking. Carta data shows SaaS companies are 20% smaller at Series A today than in H1 2020. Smaller teams are more than offsetting inflation costs through increased productivity.
This efficiency gain will accelerate with AI. As productivity tools enable founders to build more with less, we’ll see teams generate more ARR per employee while valuations continue to climb. The best founders are already achieving with five engineers what previously required twenty.
We’re witnessing a shift in startup financing. The small, disciplined seed round that launched thousands of companies in the past decade has been replaced by bigger rounds, higher valuations, compressed timelines, & loftier expansion expectations.
Why our access is improving
Signalrankupdate • Rob Hodgkinson • July 30, 2025
Business•Startups•Investment•SeedFunding•VentureCapital•Venture Capital
With our index-like strategy of systematically investing in premier Series Bs, the only question that matters is whether we can partner with seed managers who are backing the full distribution of breakout Series Bs.
This is particularly important in a power law environment where the vast majority of returns reside within a tiny subset of the companies. The ability to consistently support these companies at Series B is critical not just to our returns, but to helping seed investors fully capture the upside of their best picks.
The good news is that our access continues to improve. Figure 1 shows the top 10 ranked Series Bs per vintage per our models. The companies we invested in are in green, while companies we saw but did not invest in are in pink and companies we did not see are blank. We invested in two of the top 10 companies last year (Anrok & Bounce).
Figure 1. SignalRank’s access to top 10 ranked qualifying Series Bs in 2023 & 2024
Source: SignalRank
In 2025 to date, we have invested in two of the top 10 ranked Series Bs, including the #1 ranked company on our model for this year so far (Together AI). We are seeing our access continue to improve (Figure 2). But we know we’re not seeing everything yet. That’s why expanding our partner network is a priority.
Figure 2. SignalRank’s access to top 10 ranked qualifying Series Bs in 2025 (Jan-Mar 2025)
Source: SignalRank
So why is our access improving?
It could be because we have been in market for longer, with higher brand awareness and stronger relationships with a larger network of seed managers. This would be the feel-good answer.
Structural reasons are more likely:
Larger round sizes allow for more pro rata
AI companies with high execution velocity are able to attract ever more capital, and may need to raise more capital still to keep ahead of their competition. A $2bn pre-money valuation allows for a $100m Series B with just ~5% dilution. Larger rounds leave more space for existing investors to fill their pro rata.
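The dilution math in that example, sketched out:

```python
# A $100M Series B on a $2B pre-money valuation.
pre_money_m = 2_000   # $M
round_m = 100         # $M
post_money_m = pre_money_m + round_m
dilution = round_m / post_money_m
print(f"Dilution: {dilution:.1%}")  # ~4.8%, i.e. roughly 5%
```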
Opportunity funds are out of vogue
NextView’s Rob Go asked the question of whether we are seeing the end of seed investing as we know it, driven by the trifecta of Y Combinator attracting top talent (to the detriment of non-YC seed investors), mega-funds investing at seed (thereby squeezing out seed VCs) and AI being too capital intensive for seed investors.
He may be overstating the case somewhat, but it is clear that seed investors are fighting harder for every dollar being raised. A corollary is that the closing of new opportunity funds has ground to a halt. Seed investors are therefore left with the options of enduring dilution, trying to spin up an SPV with their LPs, or partnering with SignalRank (with full 20% deal-by-deal carry).
Pre-IPO SPV opportunities favored
Mid-stage SPVs (at Series B / Series C) have become harder to fill for managers seeking to take advantage of their pro rata directly with their LPs. These LPs are focused on liquidity, preferring to pile into household pre-IPO names (SpaceX, Stripe, Anduril, etc.) rather than backing higher-growth Series Bs with unknown liquidity timeframes.
We are now working with 200+ seed managers who are sharing 100+ post Series A / pre Series B opportunities with SignalRank per month. This is enabling us to see a high proportion of all qualifying Series Bs on our model.
By supporting our managers, we are not replacing seed investors’ own capital strategies, but offering a complementary product: our capital allows seed investors to preserve ownership in their best companies, with instant execution (without spinning up a separate SPV), and aligned interests (with our partners enjoying full 20% carry).
We’ve made a strong start. But to fully deliver on our mission, we know we must continue to earn the trust of the seed community. That’s how we’ll achieve the kind of consistent access required to execute fully on our strategy.
On 9 data-driven tips for YC startup founders
Medium • Jared Heyman • July 24, 2025
Business•Startups•YCFounders•StartupSuccess•DataDrivenTips•Venture Capital
Since Rebel Fund invests exclusively in seed-stage Y Combinator startups, the dozens of blog posts I’ve published over the years are focused mostly on how investors like us can improve their odds of investing in tomorrow’s YC unicorns. However, this post will take a different approach — I’ll share some data-driven tips that YC founders can follow to maximize their odds of success based on what we’ve learned over 5+ years building the world’s most sophisticated ML/AI algorithm for predicting YC startup success.
Tip #1 — Swing for the fences
Startup outcomes follow a very steep power law curve, such that ~6% of YC startups represent a whopping ~90% of the total valuation growth. The vast majority of startups enjoy no significant valuation growth at all, with a few big winners like Airbnb, Stripe, DoorDash, etc. dwarfing other decacorns ($10B+ valuations), which in turn dwarf other unicorns ($1B+ valuations), which in turn dwarf the minicorns ($100M+ valuations).
Startup outcomes are relatively binary: either they’re a huge success or a failure. The implication is that founders should 1) pursue opportunities with massive upside potential rather than incremental gains, and 2) give it their all. The years after you start your first venture-backed startup will probably be the most consequential of your entire career.
Tip #2 — Be patient
The median time for a YC startup to achieve an exit (acquisition or IPO) is about 3 years, but larger exits ($100M-$999M) take closer to 6 years, and unicorn exits ($1B+) typically take nearly a decade. Building a successful technology startup is a marathon, not a sprint.
You may see headlines about tech companies achieving ridiculous valuations ridiculously fast, but the reason they make headlines is they’re the exception rather than the rule. In the vast majority of cases, building a unicorn is a slow, painful, and unglamorous decade-long commitment.
Of course there are many exciting milestones along the way, but you should think of them as islands in a sea of hard work, relentless focus, and near-death experiences. If you don’t enjoy the journey of “building something people want” with its long hours and even longer odds, then you’ll never make it to the destination.
Tip #3 — Start young
We found that the average YC unicorn founder had 8 years of work experience when they started their YC startup. Assuming they got their first job after graduating college at ~22 years of age, that means they were ~30 years old. Plenty of founders had more or less than this average, and a surprising number were only a few years into their career — though few had over 15 years of experience.
Building a startup takes a lot of time, energy, and risk, so it’s better to start early in your career. Clearly some real-world work experience helps, but the trick is to properly balance energy and wisdom. Anecdotally, we’ve noticed that the current generation of “AI first” startup founders are younger than the historical average, probably because advanced AI is so fresh that relevant industry experience isn’t really to be had.
Tip #4 — Get co-founders
One of the many things we learned training our Rebel Theorem 4.0 ML/AI algorithm for predicting YC startup success is that the number of co-founders in a startup is positively correlated with successful outcomes. It’s not true in the absolute (you can have too many cooks in the kitchen) but partnering with another co-founder or two is a smart idea.
We’re working on a set of new algorithm features now that dissect exactly what characteristics of “co-founder fit” predict startup success, so I’ll have more to say on that soon. One thing we know for sure though is co-founder breakups are the #1 cause of early-stage startup failure. So at a minimum, make sure you know your co-founders well and you’re aligned in terms of long-term vision and values.
Tip #5 — Go to a top (technical) university
Another thing we learned about YC unicorn founders is they often graduated from top-ranked universities with strong engineering programs (think Stanford or MIT).
There is a long tail of other universities represented amongst unicorn founders, so going to a top school really falls into the “helpful but not necessary” category, but having co-founders with strong technical and/or product backgrounds is vital based on our data.
…
Ultra-Unicorn Investors: These Firms Have Amassed The Largest Portfolios Of $5B+ Startups
Crunchbase • July 28, 2025
Business•Private Equity•Unicorns•Investment•Venture Capital
An analysis of Crunchbase data reveals that several firms have built substantial portfolios of ultra-unicorns—private companies valued at $5 billion or more. Among the most active investors in this category are Andreessen Horowitz, Sequoia Capital, Tiger Global Management, Lightspeed Venture Partners, and Accel. Notably, Tiger Global has invested in ultra-unicorns such as Databricks, Scale AI, and Shein. This indicates a significant involvement of private equity firms in funding these highly valued private companies.
In terms of portfolio counts, private equity investors dominate: Tiger Global holds positions in 19% of these companies, Coatue in 18%, and SoftBank Vision Fund, GIC, and Andreessen Horowitz also have significant shares. This dominance reflects the broader trend of private equity firms making later-stage investments across a wider pool of companies compared to venture capital firms, which typically invest earlier and continue to support their successful companies.
At the early investment stages, Andreessen Horowitz, Accel, and Sequoia Capital lead in Series A and B rounds, having invested in ultra-unicorns like Scale AI, Databricks, Stripe, and Klarna. Andreessen Horowitz has invested in 16 unique companies at these stages, while Accel and Sequoia each have 14. This trend is particularly prominent among U.S. firms, with Chinese investors such as IDG Capital, HSG (formerly Sequoia Capital China), and Tencent also being active in this space.
At the seed level, Y Combinator leads by a large margin, representing 10% of the seed investments in the $5 billion-plus club. The startup accelerator was an early investor in companies like Rippling, Scale AI, and Deel. Other notable seed investors include SV Angel, Initialized Capital, Soma Capital, and Homebrew.
In terms of funding amounts, private equity firms, particularly SoftBank and SoftBank Vision Fund, have led the largest rounds in this asset class. This list also includes major tech companies like Meta and Microsoft, which have backed companies such as Scale AI and OpenAI, respectively. Venture capital firms like Andreessen Horowitz and Sequoia Capital have also been involved in leading investments exceeding $8 billion.
As of mid-2025, six companies valued at or above $5 billion have exited, compared to nine in 2024. Notably, companies like Figma, valued at $12.5 billion, Navan at $9.2 billion, and Klarna at $6.7 billion, have filed confidentially with the SEC for potential IPOs. With $482 billion in investor capital placed into these 211 private high-value companies since the early 2000s, a few more listings in 2025 would certainly help alleviate the venture capital liquidity crunch.
Landscape of VC-Backed M&A
Lp Club • Sarah • July 29, 2025
Business•MergersAndAcquisitions•VentureCapital•InvestmentTrends•Venture Capital
The venture capital (VC) landscape is undergoing significant transformation, and global limited partners (LPs) need a clear view of the evolving dynamics of mergers and acquisitions (M&A) to make informed investment decisions and position themselves strategically in the market.
In recent years, there has been a notable increase in VC-backed companies engaging in M&A activities. In 2024, over a third of U.S. VC-backed startup acquisitions involved another VC-backed company as the buyer, marking a significant rise from previous years. This trend reflects a strategic move by VC-backed firms to consolidate resources, expand market reach, and enhance competitive positioning. (cadetlegal.ai)
The surge in VC-backed M&A is also influenced by the availability of substantial capital reserves, often referred to as "dry powder." This financial strength enables firms to pursue larger acquisitions and implement "buy and build" strategies, particularly in sectors like Software as a Service (SaaS), fintech, and generative AI. For instance, companies such as Databricks and Stripe have been active acquirers, leveraging their capital to fuel growth and innovation. (cadetlegal.ai)
Additionally, the global M&A market has experienced a resurgence, with a 13.2% increase in deal count and a 26.8% rise in deal value year-over-year by the third quarter of 2024. This recovery indicates a renewed confidence among investors and a more favorable environment for M&A activities. (lpclub.co)
However, this evolving landscape presents challenges. The increased competition for high-quality targets has led to elevated valuations, requiring firms to be strategic in their acquisition approaches. Moreover, integrating acquired companies effectively remains a complex task, necessitating careful planning and execution to realize the anticipated synergies.
In conclusion, the landscape of VC-backed M&A is marked by increased activity and strategic consolidation. For global LPs, staying informed about these trends is vital to navigate the complexities of the market and capitalize on emerging opportunities.
European Weakness
Wise shareholders back plan to move listing from the UK to the US
Ft • July 28, 2025
Business•Strategy•Finance•StockListing•CorporateGovernance•European Weakness
Fintech company Wise has successfully overcome an investor rebellion sparked by its co-founder, who opposed the company's plan to extend its dual-class share structure. The resolution saw shareholders back the proposal to move the company's stock listing from the UK to the US. This strategic decision aligns with Wise's ambition to tap into a broader, more liquid investor base and enhance its market valuation potential, leveraging the deep capital markets and tech-focused investor ecosystem in America.
The governance shift involving the extension of the dual-class share structure proved contentious. The co-founder led the opposition, expressing concerns about the potential dilution of shareholder rights and the possible long-term implications for corporate governance. However, the majority of shareholders viewed the move as a necessary step to support Wise’s growth trajectory and access to capital. This support reflects confidence in the company’s management team and the strategic importance of listing on a US exchange, which tends to be more favorable to technology enterprises with dual-class shares.
Key points include:
Shareholders voted in favor of the plan to change the listing venue from the London Stock Exchange to a US exchange, marking a significant pivot in Wise's market strategy.
The extension of the dual-class share structure allows founders and early investors to retain disproportionate voting power relative to their economic stake, a common structure among tech companies in the US (see the worked example after this list).
The co-founder’s public opposition highlighted tensions around governance practices and investor rights but did not sway enough investors to block the proposal.
The move is expected to improve share liquidity, attract more institutional tech investors, and potentially lead to a stronger valuation.
This decision is part of a broader trend where UK-based fintech and tech companies are increasingly pursuing US listings to capitalize on broader investor interest and more favorable market conditions.
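As a stylized illustration of that dual-class voting leverage (the share-class numbers here are assumed for illustration, not Wise’s actual terms): suppose founders hold 20% of shares as Class B carrying 10 votes each, while outside investors hold 80% as Class A carrying 1 vote each. Then:
founder voting share = (0.20 × 10) / (0.20 × 10 + 0.80 × 1) = 2.0 / 2.8 ≈ 71%
So a 20% economic stake controls roughly 71% of the votes, which is exactly the disproportion that both attracts founders to the structure and draws governance criticism.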
The implications of the listing shift are multifaceted. For Wise, the US listing could provide a more vibrant secondary market, enabling easier trading and price discovery for its shares. It may also facilitate future capital raises to fund product innovation and international expansion. Conversely, this move can stir debate around governance standards, as the US dual-class share model often faces criticism for entrenching founder control at the expense of minority shareholders.
Overall, Wise's success in navigating internal dissent and winning shareholder approval indicates strong market support for its strategic direction. The decision underlines the evolving dynamics of global stock markets, where tech companies seek environments that best support their long-term growth ambitions, even if that means moving away from historic home markets. This case exemplifies how governance structures and listing venue choices are critical considerations for fintech firms aiming to maximize both operational flexibility and investor appeal.
AI
As Anthropic goes, so goes the generative AI trade, says Big Technology's Alex Kantrowitz
Youtube • CNBC Television • July 28, 2025
Technology•AI•GenerativeAI•Investment•Innovation
The discussion centers around Anthropic, a notable player in the generative AI space, and its current influence on the broader generative AI industry, as highlighted by Big Technology commentator Alex Kantrowitz. Anthropic's progress, strategic decisions, and market movements are seen as a bellwether for the entire generative AI trade, underscoring its pivotal role in driving investor sentiment and technological innovation in this domain.
Alex Kantrowitz emphasizes how Anthropic’s performance and developments often set the tone for investor confidence, affecting not only AI startups but also established tech giants heavily invested in AI. The video outlines that investors and market watchers closely track Anthropic’s milestones, product releases, and partnerships as key indicators of the generative AI trade's health and growth trajectory.
A critical insight from the discussion underscores the wider implications of Anthropic’s trajectory for the tech ecosystem. Success or setbacks at Anthropic can ripple across companies involved in AI research, investment funding, and commercial deployment. Kantrowitz notes that this influence is partly due to Anthropic’s status as a leader in creating safer and more controllable AI models—a crucial factor amid ongoing regulatory scrutiny and ethical debates around AI.
Furthermore, the conversation highlights the competitive landscape of generative AI, where Anthropic’s advancements contribute to shaping industry standards and technology benchmarks. The company’s innovative approaches to AI safety mechanisms are cited as setting a precedent other firms aim to match or exceed. This competitive posture enhances the dynamism of the generative AI field but also introduces pressures on AI firms to innovate responsibly.
Kantrowitz also discusses how Anthropic fits into the broader strategic interests of Big Tech companies that either collaborate or compete with it. The alignment or divergence between Anthropic’s strategic goals and those of major incumbents can significantly influence market positioning and partnership opportunities, further affecting the generative AI trade's overall outlook.
In summary, the video encapsulates Anthropic’s role as a touchstone for generative AI industry health. Its progress is a litmus test for investors, technologists, and policymakers watching the unfolding future of AI. With safety, competition, and market dynamics all intertwined, Anthropic's journey offers significant insights into the potential paths generative AI may take in the coming years.
a16z GP, Martin Casado: Anthropic vs OpenAI & Why Open Source is a National Security Risk with China
Youtube • 20VC with Harry Stebbings • July 28, 2025
Technology•AI•Open Source AI•Regulation•Geopolitics
In a recent discussion, Martin Casado, General Partner at Andreessen Horowitz (a16z), delved into several pressing topics within the artificial intelligence (AI) sector. He began by analyzing the current AI investment landscape, emphasizing the pivotal role of foundational models in shaping the future of AI applications. Casado highlighted the emergence of companies like Anthropic, which are developing advanced AI models, and discussed the potential implications for the AI application layer.
A significant portion of the conversation focused on the challenges and opportunities presented by open-source AI. Casado argued that open-source AI is essential for fostering innovation and competition, serving as a counterbalance to monopolistic tendencies in the tech industry. He expressed concern over regulatory efforts that might restrict open-source AI, suggesting that such measures could inadvertently stifle technological progress and grant undue advantages to large corporations. Casado drew parallels to historical instances where regulatory capture led to monopolies, underscoring the importance of regulations that promote competition and prevent monopolistic practices. (podcastworld.io)
The discussion also touched upon the geopolitical implications of AI development, particularly in relation to China. Casado highlighted China's strategic approach to AI, noting its dual-phase plan: first, leveraging AI for domestic population control through extensive surveillance, and second, exporting this technology globally to influence international norms and governance. He emphasized the need for the United States to lead in AI innovation, advocating for partnerships with private companies rather than imposing restrictive regulations. Casado stressed that embracing open-source AI could serve as a national security imperative, enabling the U.S. to maintain a competitive edge and uphold democratic values in the face of authoritarian models. (thirdway.org)
In conclusion, Casado's insights underscore the complex interplay between technological innovation, regulatory frameworks, and international dynamics in the AI sector. He advocates for policies that support open-source AI development, promote healthy competition, and position the U.S. as a leader in the global AI landscape.
Balaji Srinivasan: How AI Will Change Politics, War, and Money
Youtube • a16z • July 28, 2025
Technology•AI•Decentralization•Cryptocurrency•FutureOfWar
In a recent discussion, Balaji Srinivasan, a technologist and investor, delved into the transformative potential of artificial intelligence (AI) on politics, warfare, and finance. He emphasized that AI is not merely a technological advancement but a catalyst for significant political and economic shifts.
Srinivasan highlighted the centralization of power in current AI systems, which often reflect the biases and interests of their creators. He argued that decentralized AI could democratize technology, allowing individuals to build and control their own AI systems, thereby reducing the influence of centralized entities. This decentralization could lead to a more equitable distribution of power and resources.
In the realm of warfare, Srinivasan discussed the evolving nature of conflicts, noting that modern technologies like drones and cyber capabilities are reshaping military strategies. He pointed out that the Armenia-Azerbaijan conflict served as a glimpse into the future of warfare, where technology plays a pivotal role in determining outcomes. This shift suggests that future conflicts may be less about traditional military might and more about technological superiority.
Regarding finance, Srinivasan underscored the disruptive impact of cryptocurrencies, particularly Bitcoin. He described Bitcoin as a "political revolution," capable of challenging traditional financial systems and altering global economic dynamics. He argued that Bitcoin's decentralized nature empowers individuals, potentially reducing the influence of centralized financial institutions and altering the balance of power in the global economy.
Srinivasan also touched upon the concept of "network states," which are communities organized around shared values and goals, often facilitated by digital platforms. He suggested that these network states could emerge as new forms of political organization, offering alternatives to traditional nation-states. This idea reflects a broader trend towards digital and decentralized forms of governance.
In summary, Srinivasan's insights provide a comprehensive overview of how AI and related technologies are poised to reshape various facets of society, from governance and military engagement to financial systems and social structures.
AI that was inevitable
Jamesin • July 23, 2025
Technology•AI•MachineLearning•Productivity•Innovation
Google Sheets has introduced AI integration, allowing users to call prompts directly from cells. This feature streamlines workflows that previously required external tools and complex processes. For instance, in a Series-B analysis, instead of manually collecting data and using scripts to process it, users can now utilize the AI function within Sheets to categorize companies as AI-related or not and further classify them into specific categories.
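To make that concrete, here is a minimal sketch of the kind of one-off script an in-cell AI prompt replaces; the company names, the prompt wording, and the call_llm helper are illustrative assumptions, not details from the post:

```python
# Sketch of the manual workflow an in-cell AI prompt replaces.
# call_llm() is a hypothetical stand-in for any LLM provider's API.

def call_llm(prompt: str) -> str:
    # Stub so the sketch runs end to end; swap in a real SDK call.
    return "not-AI"

def classify_company(name: str) -> str:
    prompt = (
        f"Is '{name}' an AI company? Reply with exactly one label: "
        "AI-infrastructure, AI-application, or not-AI."
    )
    return call_llm(prompt).strip()

# e.g., a column of Series-B companies pulled from a spreadsheet export
companies = ["Example Robotics Co", "Example Payments Co"]
print({name: classify_company(name) for name in companies})
```

Inside Sheets, that whole loop collapses to a single prompt per cell, with the spreadsheet itself handling iteration down the column.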
This advancement signifies a broader trend where tech giants like Google are embedding AI into widely-used tools, enhancing efficiency and accessibility. While the rollout took longer than anticipated, likely due to thorough vetting processes, the integration of AI into Google Sheets opens up numerous possibilities for data analysis and decision-making.
The venture capital landscape is also evolving, with a noticeable shift towards vertical AI applications. Companies focusing on specialized AI solutions are gaining traction, as opposed to general horizontal workflow solutions. This trend reflects a growing recognition of the unique challenges and opportunities within specific industries, prompting investors to seek out startups that offer tailored AI solutions.
As AI continues to permeate various sectors, the importance of integrating it into everyday tools becomes increasingly evident. The seamless incorporation of AI into platforms like Google Sheets not only enhances productivity but also democratizes access to advanced data analysis capabilities, enabling a broader range of users to leverage AI in their workflows.
Iconiq set to lead $5bn funding round for AI start-up Anthropic
Ft • July 29, 2025
Business•Investment•AI•VentureCapital
Iconiq Capital, an investment group affiliated with Mark Zuckerberg and Jack Dorsey, is exploring new avenues to generate value from its $5.75 billion fund amid a decline in IPOs. This downturn has prompted startups and investors to seek alternative strategies for returns, such as mergers, acquisitions, and trading startup stock on secondary markets. (ft.com)
In this context, Iconiq is reportedly leading a $5 billion funding round for AI startup Anthropic. Anthropic, a competitor to OpenAI, is in early discussions to raise at least $3 billion, potentially up to $5 billion, which could more than double its valuation to over $150 billion. The company has received interest from several large Middle Eastern investors, including Abu Dhabi-based AI fund MGX. Although Anthropic has been cautious about accepting funding from the region due to ethical concerns, prior secondary share purchases linked to MGX have occurred. (ft.com)
Founded in 2021 by former OpenAI executives Dario and Daniela Amodei, Anthropic has positioned itself as a leader in AI safety and ethical AI development. Its flagship model, Claude, directly competes with OpenAI’s ChatGPT, gaining recognition for its focus on transparency and controllability in generative AI. (techfundingnews.com)
The potential investment from Iconiq underscores the growing interest in AI startups and the strategic importance of securing substantial funding to compete in the rapidly evolving AI industry. As the AI sector continues to attract significant investment, companies like Anthropic are positioning themselves to leverage these funds to advance their technologies and expand their market presence.
The Evolution of AI Agents: Navigating the “Fog of AI” in Rapidly Changing Foundations | Stanislas Polu and Harrison Chase
Generalist • Mario Gabriele • July 29, 2025
Technology•AI•Agents•Autonomous Systems•Innovation
What’s next for AI agents, and how will they change the way we work? In this conversation, Stanislas Polu (CEO of Dust and formerly a researcher at OpenAI) and Harrison Chase (CEO of LangChain, one of the most influential open-source AI frameworks) unpack the current state and future of AI agents. They reflect on their early conversations in the pre-ChatGPT days, how the landscape has evolved, and where it's headed next.
Stan and Harrison share lessons from building today’s agent infrastructure—from chat interfaces to the future of ambient, autonomous systems—and discuss the challenges of operating in the chaotic "fog of AI." We dig into the open questions, early insights, and messy realities of building in today’s fast-moving AI landscape.
In this conversation, we explore:
What sparked Stan and Harrison’s early interest in LLMs
The pre-ChatGPT era and how the AI landscape has evolved since late 2022
High-leverage use cases for agents inside Dust and LangChain today
The critical differences between AI workflows and true agents—and why agents may unlock more powerful, long-term solutions
Why reliability is the main blocker to ambient agents
Real-world enterprise use cases for AI agents across customer support, sales, and engineering
How to build in the “fog of AI” and the challenge of maintaining product vision when foundations shift every six months
Strategies for creating defensibility in a world where tech giants can quickly replicate features
The future of multi-agent systems and how they could transform enterprise productivity
The current state of the AI talent market
And much more
What will it take for robotaxis to go global?
Ft • July 23, 2025
Technology•AI•Autonomous Vehicles•Urban Transportation•Innovation
Robotaxis are becoming a viable part of urban transportation, with companies like Waymo, Tesla, and Zoox progressing toward commercialization after years of investment and development. Waymo leads the pack with over 250,000 rides weekly across five U.S. cities, using expensive custom vehicles but aiming to cut costs through scalable partnerships and manufacturing. Zoox is building its own high-end robotaxis from scratch, while Tesla takes a cost-sensitive approach with its camera-only autonomous system deployed in Model Ys, aiming for consumer fleet-sharing.
Key challenges remain: robotaxis must prove safety in real-world environments, achieve profitability at scale, and overcome regulatory hurdles that vary by state. While Waymo and Zoox use Level 4 autonomy, Tesla’s system is only Level 2, prompting safety concerns and scrutiny from the NHTSA. Investors are cautiously optimistic, with firms like JPMorgan forecasting profitability only when vehicle costs fall below $100,000.
Companies are experimenting with business models—including partnerships with Uber and direct-to-consumer services—to manage high costs of fleet maintenance, charging, and staffing. Despite optimistic forecasts and supportive U.S. policies under President Trump, analysts say the market is likely to consolidate with one or two leading players per region. Ultimately, robotaxis could revolutionize transport if they overcome economic, technical, and societal barriers.
The AI SDR Reality Check: How To Actually Make It Work
Saastr • Jason Lemkin • July 24, 2025
Technology•AI•SalesAutomation•B2B
I’ve now watched 20+ B2B SaaS companies try to deploy AI SDRs over the past 6 months. And we’ve rolled out several AIs successfully at SaaStr itself. Still, even with our own success, 90% of folks get absolutely nothing. Zero pipeline. Zero meetings. Complete waste of time and money.
But the other 10%? They’re booking far more qualified meetings than their human SDR teams ever did. Some are scaling to $10M+ ARR with AI doing 80% of their outbound.
What’s the difference? It’s not the tool. It’s not the data. It’s not even the ICP.
It’s whether the founder/sales leader treats the AI SDR like a $100K hire or a $29/month SaaS tool.
The “Set It and Forget It” Disaster
Here’s what 90% of companies do:
Sign up for AI SDR tool
Import contact list
Write one email template
Hit “start campaign”
Check back in two weeks
Wonder why conversion rates are 0.02%
Blame the tool and churn
This is like hiring a junior SDR, giving them zero training, no coaching, no feedback, and expecting them to crush quota. Insane.
The 10% Who Actually Scale: The Daily Discipline Framework
The companies seeing 300-500% increases in qualified pipeline follow what I call the “Daily Discipline Framework.” Here’s exactly what they do:
Week 1-2: Foundation Setting
Daily time investment: 2-3 hours
Message Architecture: Create 15+ email variants for different personas, pain points, and sequence positions. Not one template—fifteen minimum.
Daily Output Review: Read every single email the AI sends. Every. Single. One. Mark what sounds human vs. robotic.
Response Monitoring: Set up Slack alerts for every reply. Respond within 2 hours max, even if it’s just to acknowledge.
Week 3-4: Optimization Sprint
Daily time investment: 1-2 hours
Performance Analysis: Which subject lines get opens? Which CTAs get clicks? Which pain points resonate? Track everything.
A/B Test Relentlessly: Change one variable daily. Subject line, opening hook, social proof, CTA. Never test multiple variables simultaneously.
Quality Gate: Ask yourself daily: “Is this email better than what my best human SDR would send?” If no, kill it and iterate.
Month 2+: Scaling Excellence
Daily time investment: 30-60 minutes
Template Evolution: Your highest-performing emails become templates. Low performers get killed. Continuous culling.
Dynamic Personalization: Move beyond “I saw you’re hiring” to genuine research-based insights. AI should reference recent company news, competitor moves, industry trends.
Human Handoff Optimization: Perfect the transition from AI to human. Best practice: AI books the meeting, human AE takes discovery call.
The Three Non-Negotiables That Separate Winners from Losers
1. Response Velocity
If your AI SDR gets a response and no human follows up for 6+ hours, you’ve lost the deal. Period. The companies scaling AI SDR have dedicated “AI response managers” whose only job is rapid follow-up.
Winner example: Prospect replies “Tell me more” at 2:47 PM. Human SDR calls at 3:15 PM. Meeting booked by 4:00 PM.
Loser example: Prospect replies Tuesday morning. Someone gets back to them Friday afternoon. Deal dead.
2. Message Quality Control
Your AI SDR should sound like your best human SDR, not a robot trying to sound human. This requires obsessive message curation.
Winner approach: CEO reads first 100 AI emails personally. Creates style guide. Establishes voice guidelines. Reviews sample emails weekly.
Loser approach: “The AI will figure it out.” Spoiler: it won’t.
3. Honest Performance Benchmarking
You need brutal honesty about AI vs. human performance. Track response rates, meeting booking rates, and show rates separately for AI-sourced vs. human-sourced meetings.
The companies that scale ask: “Is our AI SDR outperforming our humans yet? If not, what’s missing?”
The companies that fail assume: “AI is working because it’s sending emails.”
The $10M ARR Pattern: What Actually Works
I’ve reverse-engineered the playbooks from 12 companies that scaled past $10M ARR with AI SDR as a core growth engine. Here’s the pattern:
Month 1: Foundation
40+ hours of initial setup and training
Daily message quality reviews
Rapid response system implementation
Baseline performance measurement
Month 2-3: Optimization
Weekly A/B testing cycles
Persona-specific message tracks
Dynamic data integration
Human handoff refinement
Month 4-6: Scaling
Multi-channel sequence integration (email + LinkedIn + phone)
Industry-specific messaging tracks
Automated lead scoring integration
Team expansion around AI SDR success
Month 7+: Systematic Excellence
Predictable pipeline generation
Clear ROI measurement
Documented playbooks
Hiring human SDRs to handle AI-generated pipeline
The Reality Check: This Isn’t Easy
Let me be brutally honest: Making AI SDR work requires the same discipline as building any other core business function. You wouldn’t hire a salesperson and ignore them for a month. You wouldn’t launch a product without testing it. You wouldn’t run marketing campaigns without measuring results.
Yet somehow, founders think AI SDR should work differently.
The companies that succeed treat AI SDR implementation like a $500K revenue initiative. Because that’s what it becomes.
The companies that fail treat it like a $50 monthly expense. And they get exactly what they pay for.
The Bottom Line
AI SDR isn’t magic. It’s leverage. But leverage only amplifies what you put into it.
Put in laziness? Get amplified failure. Put in daily discipline? Get amplified success.
The choice is yours. But don’t blame the tool when you chose the wrong approach. I see way, way too much of that.
OpenAI’s IMO Team on Why Models Are Finally Solving Elite-Level Math
Youtube • Sequoia Capital • July 30, 2025
Technology•AI•MachineLearning•Mathematics•OpenAI
#259: Why Data Is the Stack
The fund cfo • Doug Dyer • July 24, 2025
Technology•Software•DataStack•Investment•AI
The Decade of Data with Tomasz Tunguz
In a recent episode of The Generalist Podcast, Tomasz Tunguz—managing partner at Theory Ventures—made a compelling case that we’ve entered the decade of data. His thesis: data isn’t just a layer in the stack—it is the stack. Whether you’re building with AI, blockchain, or SaaS, the real unlock comes from how you capture, structure, and activate data.
“All of them [our theses] are underpinned by data... AI, machine learning—that’s data. And blockchains, well, that's just a different kind of database.”
The Data Stack, Visualized
Here’s a simple framework we use to think about how innovation—and value—builds on top of data:
From raw data to insights that move markets, every layer compounds the value of the one beneath.
Data Businesses Scale Differently
The biggest winners of the last cycle—Snowflake, Databricks, OpenAI, Ethereum—aren’t just great products. They’re scalable systems for managing and monetizing data. The outcomes are outsized because the leverage is built in.
“These systems create really big companies.”
It’s not about hype. It’s about structured information flow—and the operational discipline to make data a source of compounding insight, not just noise.
Internal Systems = External Advantage
One of the most important takeaways was around internal knowledge systems. Investing isn’t just about picking—it’s about tracking what you’ve seen, how it connects, and where the edge lives.
“The nuances of one wave... if you have them stored... is a huge advantage later on.”
Firms that revisit their market maps and treat research as a compound asset are playing a different game. They're not reacting to trends—they’re recognizing familiar patterns early and building conviction faster.
The Power of Triangulation
A theme I keep seeing: faster, higher-confidence decisions come from diverse expert inputs. Not more opinions—sharper ones. Think GTM operators evaluating founder narrative. Technical leads digging into infra choices. These aren’t one-off consults—they’re core to the investment loop.
“It gives us a mosaic, a better triangulation of understanding a particular business or market.”
That mosaic matters more than ever, especially in AI and deep tech, where what looks like signal is often just noise dressed up with a demo.
The Liquidity Stack Is Evolving
Finally, one of the more forward-looking points: crypto may unlock new venues for startup liquidity—especially for software companies currently stuck between late-stage stasis and IPO purgatory.
“Software startups will have a much faster path to IPO as a result of crypto.”
It’s early. But the idea of bridging public market access and crypto-native capital formation isn’t theoretical anymore.
Bottom Line
If the last decade was about mobile and cloud, this one is about data—structured, leveraged, and compounding.
The firms (and founders) who win won’t just use data—they’ll build systems that let them think with it. Faster pattern recognition. Sharper decision loops. More surface area for insight.
We’re going deeper on this thesis—how it applies to fund managers, firm design, and secondaries strategy—in this week’s subscriber-only memo.
Google Lands $1.2 Billion Cloud Contract From ServiceNow
Bloomberg • Brody Ford, Davey Alba • July 24, 2025
Technology•Cloud Computing•AI Integration•Enterprise Solutions•Google Cloud•AI
Alphabet Inc.'s Google has secured a deal exceeding $1 billion to provide cloud-computing services to ServiceNow Inc., marking a significant advancement in Google Cloud's strategy to attract major enterprises to its platform. ServiceNow has committed to spending $1.2 billion over the next six years, with the contract set to commence in 2026. This partnership is poised to enhance ServiceNow's workflow automation capabilities by integrating them with Google's advanced cloud infrastructure.
The collaboration aims to deliver AI-powered tools to millions of users, with ServiceNow's platform launching on Google Cloud Marketplace and certain offerings available on Google Distributed Cloud. This expansion is designed to address the growing demand from both private and public-sector organizations for efficient and scalable cloud solutions. (servicenow.com)
Additionally, ServiceNow plans to integrate its Workflow Data Fabric with Google Cloud's BigQuery, enabling users to connect enterprise data to AI and leverage BigQuery analytics for real-time automation and decision-making on the Now Platform. This integration is expected to enhance CRM, ITSM, and SIR solutions by adding AI Agent capabilities and supporting proactive operational actions. (investing.com)
The partnership also focuses on improving customer service experiences by automating and personalizing interactions across service channels. New integrations with Google Workspace will make ServiceNow data more accessible within productivity tools like Google Sheets and Chat, streamlining workflows for IT and HR teams. (investing.com)
ServiceNow's offerings are slated to launch on Google Cloud Marketplace throughout the second and third quarters of the year, with new integrations across BigQuery, Customer Engagement Suite with Google AI, and Workspace anticipated later in the year. The CRM, ITSM, and SIR modules for Infrastructure Operators in Google-Operated and Partner-Operated models of Google Distributed Cloud are also expected to become available in the same timeframe. (investing.com)
Pew Study: Google Users Click Less When AI Summaries Appear in Search Results
Medium • ODSC - Open Data Science • July 24, 2025
Technology•AI•SearchEngine•UserBehavior•ContentPublishing
Google’s AI Overviews are changing how users interact with search results — and not necessarily in publishers’ favor. According to a Pew Research Center study published this spring, users are significantly less likely to click on links when an AI-generated summary appears at the top of the search page.
The study, which analyzed the browsing behavior of 900 U.S. adults during March 2025, revealed that 58% of participants encountered at least one search result with an AI summary. In these cases, user interaction with links dropped sharply. Only 8% of users clicked on traditional links when an AI summary was present, compared to 15% on pages without one.
Even the links embedded within the AI summaries received little attention. Pew found that just 1% of visits to pages with AI summaries resulted in a click on a cited source. Online publishers — already grappling with declining traffic — have expressed concern that Google’s AI-generated content may be redirecting users away from original sources.
The Pew data reinforces those concerns, suggesting that users are increasingly relying on summaries rather than exploring the underlying content.
The presence of AI-generated overviews also correlates with shorter browsing sessions. The study found that users were more likely to end their session entirely after encountering a page with an AI summary. This occurred on 26% of such pages, compared to 16% of traditional search result pages.
Regardless of summary presence, the majority of searches — around two-thirds — ended without any link clicks, with users either conducting another search or leaving Google altogether.
Among the most frequently cited sources in both AI summaries and standard results were Wikipedia, YouTube, and Reddit, which collectively accounted for 15% of citations in AI overviews and 17% in regular results. However, government websites (.gov domains) appeared more frequently in AI summaries (6%) than in traditional search results (2%).
AI summaries typically contained a median of 67 words, though this ranged from as few as seven words to as many as 369. Additionally, the vast majority (88%) of AI summaries cited at least three sources, while just 1% cited a single source.
Search query structure appears to influence whether an AI summary is generated. According to the study:
Only 8% of one- or two-word searches triggered AI summaries
53% of 10+ word searches did
60% of question-form searches (e.g., beginning with “who,” “what,” or “why”) resulted in summaries
36% of searches with full sentence structure (a noun and a verb) returned an AI summary
These findings indicate that more detailed, natural-language searches are more likely to activate the AI Overview feature.
As AI summaries become a more prominent feature of Google search, user engagement patterns are shifting. Fewer clicks and shorter browsing sessions may offer convenience for users — but they present growing challenges for content publishers reliant on web traffic. The implications of this shift will likely continue to spark debate as AI integration in search expands.
Lovable and Replit Both Hit $100M ARR in Record Time. The Vibe Coding TAM: How Big Can This Market Really Get?
Saastr • July 23, 2025
Technology•AI•Software Development•Vibe Coding•Innovation
In the rapidly evolving "vibe coding" sector, two companies have achieved remarkable milestones: Lovable and Replit. Lovable, a Swedish AI startup founded in 2023, reached $100 million in annual recurring revenue (ARR) within eight months, setting a new record for the fastest software company to achieve this milestone. Similarly, Replit, an American technology company established in 2016, experienced a tenfold revenue increase, growing from $10 million to $100 million ARR in just 5.5 months following the launch of its AI-powered coding assistant, Replit Agent, in September 2024. (saastr.com)
The term "vibe coding," introduced by AI researcher Andrej Karpathy, describes a new paradigm where users can create software applications by interacting with AI through natural language prompts, eliminating the need for traditional coding skills. This approach has democratized software development, enabling both technical and non-technical users to build applications by simply describing their desired outcomes. (saastr.com)
The rapid growth of Lovable and Replit underscores the substantial demand for AI-driven development tools. Lovable's user base expanded to 2.3 million, with over 100,000 projects created daily across its platform, driven largely by organic growth and word-of-mouth. Replit's subscriber base grew by 45% monthly following the release of its AI agent, and the company is now the single largest user of Anthropic models by tokens on Google Cloud. (growthunhinged.com)
These developments highlight the transformative potential of AI in software development, suggesting that the total addressable market (TAM) for "vibe coding" could be vast. As more users adopt AI-driven development tools, the market is expected to expand significantly, offering new opportunities for innovation and growth in the tech industry.
New AI architecture delivers 100x faster reasoning than LLMs with just 1,000 training examples
Venturebeat • July 25, 2025
Technology•AI•MachineLearning•HierarchicalReasoningModel•DataEfficiency
Sapient Intelligence has introduced the Hierarchical Reasoning Model (HRM), a brain-inspired AI architecture that efficiently handles complex reasoning tasks with minimal data and computational resources. Unlike traditional large language models (LLMs) that rely heavily on extensive training data and Chain-of-Thought (CoT) prompting, HRM employs a hierarchical structure to achieve substantial computational depth without compromising training stability or efficiency.
The HRM consists of two interdependent recurrent modules: a high-level module responsible for slow, abstract planning, and a low-level module handling rapid, detailed computations. This design enables the model to alternate dynamically between automatic thinking ("System 1") and deliberate reasoning ("System 2") in a single forward pass, mirroring the human brain's processing mechanisms. (en.acnnewswire.com)
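For intuition only, here is a minimal sketch of that two-timescale structure in PyTorch, where a fast low-level recurrent module takes several steps for each single high-level update. The module types (GRU cells stand in for HRM's custom recurrent blocks), sizes, and step counts are illustrative assumptions, not Sapient's published design:

```python
import torch
import torch.nn as nn

class HierarchicalReasoner(nn.Module):
    """Toy two-timescale recurrence: a fast low-level module runs k steps
    per single high-level update. Dimensions are arbitrary for illustration."""

    def __init__(self, d_in: int, d_low: int, d_high: int, k_low_steps: int = 4):
        super().__init__()
        self.low = nn.GRUCell(d_in + d_high, d_low)   # fast, detailed computation
        self.high = nn.GRUCell(d_low, d_high)         # slow, abstract planning
        self.k = k_low_steps

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: (timesteps, batch, d_in)
        T, B, _ = x_seq.shape
        h_low = x_seq.new_zeros(B, self.low.hidden_size)
        h_high = x_seq.new_zeros(B, self.high.hidden_size)
        for t in range(T):
            # Low-level module iterates k times, conditioned on the current plan.
            for _ in range(self.k):
                h_low = self.low(torch.cat([x_seq[t], h_high], dim=-1), h_low)
            # High-level module updates once from the low-level summary.
            h_high = self.high(h_low, h_high)
        return h_high

model = HierarchicalReasoner(d_in=16, d_low=64, d_high=32)
out = model(torch.randn(10, 2, 16))  # 10 timesteps, batch of 2
print(out.shape)                     # torch.Size([2, 32])
```

The nesting is the point: many fast inner iterations per slow outer update give the network depth of computation in a single forward pass, without the long generated reasoning chains that CoT prompting relies on.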
Trained on just 1,000 examples without pre-training and comprising only 27 million parameters, HRM has demonstrated exceptional performance on complex reasoning tasks. It has achieved near-perfect accuracy on challenging benchmarks such as Sudoku-Extreme and Maze-Hard, outperforming larger models that require significantly more data and computational resources. (emergentmind.com)
The model's efficiency and effectiveness open new opportunities in fields where large datasets are scarce, yet accuracy is critical. In healthcare, HRM is being deployed to support complex diagnostics, particularly in rare-disease cases where data signals are sparse and demand deep reasoning. In climate forecasting, HRM has improved subseasonal-to-seasonal forecasting accuracy to 97%, translating directly into social and economic value. In robotics, HRM's low-latency, lightweight architecture serves as an on-device "decision brain," enabling next-generation robots to perceive and act in real time within dynamic environments. (platodata.ai)
By leveraging hierarchical processing and multi-timescale computation, HRM addresses the limitations of current LLMs, offering a more efficient and scalable solution for complex reasoning tasks. Its open-source release allows researchers and developers to explore and integrate this innovative approach into various applications, potentially revolutionizing fields that require advanced reasoning capabilities.
Tesla signs a $16.5 billion chip contract with Samsung Electronics
Youtube • CNBC Television • July 28, 2025
Technology•Semiconductors•AIChips•Tesla•Samsung•AI
Elon Musk has confirmed that Tesla has entered into a $16.5 billion chip supply agreement with Samsung Electronics. This deal, effective from July 26, 2025, and extending through December 31, 2033, involves Samsung manufacturing Tesla's next-generation AI6 chips at its new fabrication facility in Taylor, Texas. (cnbc.com)
The partnership is strategically significant for both companies. For Samsung, the contract is expected to revitalize its foundry business, which has been facing financial challenges, including losses exceeding 5 trillion won ($3.6 billion) in the first half of 2025. The deal is anticipated to reduce these losses by over 70% by 2027, assuming production yields meet expectations. (ainvest.com)
For Tesla, the agreement secures a dedicated supply of advanced AI chips essential for its Full Self-Driving (FSD) systems and AI infrastructure. By diversifying its chip supply chain, Tesla aims to reduce reliance on a single supplier and ensure a steady flow of high-performance chips critical for its autonomous driving technology. (ainvest.com)
The collaboration also aligns with broader industry trends emphasizing localized semiconductor production to mitigate geopolitical risks and supply chain disruptions. Samsung's investment in the Texas facility, supported by U.S. government incentives under the 2022 Chips and Science Act, reflects a strategic move to bolster domestic chip manufacturing capabilities. (ainvest.com)
Elon Musk highlighted the strategic importance of the partnership, stating, "The strategic importance of this is hard to overstate." (cnbc.com)
The deal is expected to commence mass production of Tesla's AI6 chips in late 2026, with the Taylor, Texas facility playing a pivotal role in meeting Tesla's future computing needs. (techpowerup.com)
The Making Of Dario Amodei
Big Technology • July 29, 2025
Uncategorized•AI
Dario Amodei is the CEO and co-founder of Anthropic, a public benefit corporation dedicated to building AI systems that are steerable, interpretable, and safe. Prior to founding Anthropic, Amodei served as Vice President of Research at OpenAI, where he led the development of large language models like GPT-2 and GPT-3. He is also the co-inventor of reinforcement learning from human feedback. Before joining OpenAI, he worked at Google Brain as a Senior Research Scientist. Amodei earned his doctorate in biophysics from Princeton University as a Hertz Fellow and was a postdoctoral scholar at the Stanford University School of Medicine. (darioamodei.com)
In April 2025, Amodei published an essay titled "The Urgency of Interpretability," emphasizing the need for understanding the inner workings of AI systems to ensure their safe and beneficial deployment. He argued that while AI technology is advancing rapidly, achieving interpretability is crucial to guide its development responsibly. (darioamodei.com)
In a recent discussion, Amodei predicted that generative AI could lead to the elimination of half of entry-level, white-collar jobs within five years. He acknowledged that this projection might be self-serving but emphasized the importance of preparing for such transformative changes in the workforce. (linkedin.com)
Amodei's work continues to influence the AI industry, focusing on creating systems that align with human values and contribute positively to society.
Chinese Tech
Kimi
Chinatalk • Irene Zhang • July 18, 2025
Technology•AI•OpenSource•LargeLanguageModels•ChineseAIDevelopment•Chinese Tech
Beijing-based Moonshot AI (literally “dark side of the moon” — a Pink Floyd reference) released Kimi K2 on July 11. K2 is a non-reasoning, open-source large language model built on the Mixture-of-Experts (MoE) technique, and it achieves benchmark scores competitive with many leading models, including DeepSeek V3. At 1 trillion parameters, it is an impressive feat. Per Nathan Lambert of Interconnects: It is a "non-thinking" model with leading performance numbers in coding and related agentic tasks (earning it many comparisons to Claude 3.5 Sonnet), which means it doesn't generate a long reasoning chain before answering, but it was still trained extensively with reinforcement learning. It clearly outperforms DeepSeek V3 on a variety of benchmarks, including SWE-Bench, LiveCodeBench, AIME, and GPQA, and comes with a base model released as well. It is the new best-available open model by a clear margin.
ChinaTalk last covered Moonshot AI in March, when we translated an expansive interview CEO Yang Zhilin gave to the online tech news platform Overseas Unicorn. In the conversation, Yang portrayed himself and his company as stubborn AGI purists who focus on “tech visions” rather than product design or short-term revenue generation. K2 is a step towards many aspects of this vision, but its story so far also reflects the jagged reality of cutting-edge model research in China. This piece discusses what distinguishes Moonshot in China’s landscape — and what the DeepSeek and Kimi moments should tell Westerners about the future of Chinese AI labs; how DeepSeek paved the way for K2, and why this is about open-source culture; why “the model is the agent” for Kimi; and what we might expect next from Chinese AI startups.
Yang Zhilin, born in coastal Guangdong in 1992, earned his bachelor’s degree from Tsinghua University and went on to a PhD at Carnegie Mellon. He worked at Meta AI and Google Brain before returning to China to begin his entrepreneurship journey. Unlike fellow Guangdong native Liang Wenfeng, CEO of DeepSeek, Yang has deep connections in both China and the US and does not only focus on hiring domestically educated talent. While Tsinghua is heavily represented in the résumés of Moonshot’s founding team, others come from more diverse global educational backgrounds.
Moonshot has no B2B offerings and does not build wrapper tools for corporate users, instead focusing directly on individual customers. From the beginning, Kimi’s selling point to Chinese users was its long context window, allowing users to upload dozens of documents and analyze long articles. But it’s not just about an awesome user experience; long-context is central to Yang Zhilin’s AI worldview. Per his comments in the Overseas Unicorn interview: To achieve AGI, long-context will be a crucial factor. Every problem is essentially a long-context problem — the evolution of architectures throughout history has fundamentally been about increasing effective context length. Recently, word2vec won the NeurIPS Test of Time award. Ten years ago, it predicted surrounding words using only a single word, meaning its context length was about 5. RNNs extended the effective context length to about 20, LSTMs increased it to several dozen, and transformers pushed it to several thousand. Now, we can reach hundreds of thousands.
Yang’s previous venture Recurrent AI was funded by seven venture capital firms, two of which also invested in Moonshot. Alibaba became Moonshot’s biggest backer in 2024. As Moonshot’s valuation rose rapidly, five of Recurrent AI’s investors — those who did not join Moonshot’s funding rounds — filed an arbitration case against Yang, alleging that Moonshot was launched without obtaining the necessary waivers from previous investors. Recall that High Flyer, the parent company of DeepSeek, is a hedge fund, and that Liang Wenfeng has rejected outside investment as of March. Moonshot operates under much more typical tech startup constraints and faces investor pressure. Still, with just $1 billion raised (much less than the likes of Anthropic and OpenAI) and pressure to ultimately deliver value to shareholders, it created a leading open model, rather than operating in the hedge fund-funded cocoon that DeepSeek researchers enjoy.
What we are starting to observe here is rather obvious in hindsight, as we move farther away from the DeepSeek moment: there is no single path to success for Chinese frontier labs. One does not necessarily have to replicate the DeepSeek recipe, whether in terms of hiring, funding, or labor practices, to create world-class models. Compute constraints, which apply across China, continue to incentivize a diverse range of research teams in China to pursue novel algorithmic research.
The team behind Kimi is very active on Zhihu, China’s Quora equivalent. According to a post by engineer Liu Shaowei, K2 essentially copied the combination of Expert Parallelism (EP) and Data Parallelism (DP) outlined by DeepSeek in V3’s technical report, with four notable changes:
Raising the number of experts from 256 to 384, as their pretraining team found that scaling laws remain valid for sparsity
Reducing the number of attention heads to compensate for the higher number of experts
Keeping only the first layer dense and using MoE for all the rest, to maximize the benefits of MoE
Keeping all experts in one group
Recall Yang Zhilin’s Overseas Unicorn interview, where he argued that “AI is essentially a pile of scaling laws laid on top of each other.” Raising the number of experts seems to reflect that. As for why they used DeepSeek’s architecture, Liu says there was no point reinventing the wheel: Before starting to train K2, we conducted a large number of scaling experiments related to model architecture. The result was that none of the proposed architectures at the time were truly able to outperform DeepSeek V3. … The reason is simple: the V3 architecture has been validated and remains effective at large scale, whereas our “new architectures” haven’t yet undergone sufficient large-scale validation. Given the presence of two massive variables — Muon optimizer and a much larger model size — we didn’t want to introduce additional unproven variables just for the sake of “being new.”
Another Zhihu comment by fellow Moonshot engineer Su Jianlin highlights other ways K2 learned from DeepSeek: Internally, we were also constantly exploring better alternatives to MLA [multi-latent attention, an architectural idea refined and scaled by DeepSeek], but since this was our first open-source large-scale model, we ultimately chose to pay tribute to DeepSeek by replicating its MLA design. As for the MoE (Mixture of Experts) component, we adopted DeepSeek-V3’s shared expert, high sparsity, and loss-free load balancing. … A special note on the Sparsity part: for quite some time, we were running experiments with Sparsity = 8 (i.e., selecting 8 out of 64 experts). It wasn’t until we resolved some infrastructure issues one day that we began trying higher sparsity levels and found the gains to be significant. So we started exploring the Sparsity Scaling Law and gradually leaned into configurations similar to DeepSeek’s projections (DeepSeek-V3 already uses 256 choose 8, 32 Sparsity; while K2 uses 384 choose 8, 48 Sparsity). It felt like fulfilling a prophecy that the DeepSeek team had already made.
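To make the sparsity arithmetic concrete, here is a generic sketch of top-k expert routing under the two configurations quoted above. Per the Zhihu comments, "sparsity" here means total experts divided by experts activated per token. This is textbook MoE gating for illustration, not Moonshot's or DeepSeek's actual code:

```python
import torch

def topk_route(hidden: torch.Tensor, gate: torch.Tensor, k: int = 8):
    """Generic top-k MoE gating sketch: score every expert, keep k.
    hidden: (batch, d_model); gate: (d_model, n_experts)."""
    scores = hidden @ gate                    # (batch, n_experts)
    weights, expert_ids = scores.topk(k, dim=-1)
    weights = torch.softmax(weights, dim=-1)  # normalize over the chosen k
    return expert_ids, weights

d_model = 128
for n_experts in (256, 384):  # DeepSeek-V3 vs K2 expert counts
    gate = torch.randn(d_model, n_experts)
    ids, w = topk_route(torch.randn(4, d_model), gate)
    print(f"{n_experts} experts, top-8: routed ids {tuple(ids.shape)}, "
          f"sparsity {n_experts // 8}")
```

Under this counting, DeepSeek-V3’s 256-choose-8 gives sparsity 32 and K2’s 384-choose-8 gives 48, matching the figures in the engineers’ posts.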
K2 would not exist without DeepSeek — and without an open-source culture and free flow of research. While Moonshot and DeepSeek are certainly competitors at the corporate level, Kimi’s engineers express deep respect for their DeepSeek colleagues.
Yang Zhilin, in February 2024, had told Tencent News that he didn’t believe open-source models could catch up to closed-source models any time soon, because “many open-source contributions may not have been validated through compute-intensive testing,” while closed-source projects “attract concentrated talent and capital.” Moreover, he remarked that if he had a leading model today, “open-sourcing it would most likely be unreasonable … it's usually the laggards who might do that—or they might open-source a small model just to stir things up.”
So what changed between then and now? The success of DeepSeek was probably an important proof of concept for open source among fellow Chinese AI entrepreneurs. While Yang himself has yet to offer updated comments on his open-source views, Moonshot engineer Justin Wong shared his “why open source” take on Zhihu:
First of all, we obviously wanted to gain some recognition. If K2 were just a closed-source service, it wouldn’t be getting nearly as much attention and discussion as it is now. Next, open-sourcing allows us to leverage the power of the developer community to improve the technical ecosystem. Within 24 hours of our release, the community had already implemented K2 in MLX, with 4-bit quantization and more—things we truly don’t have the manpower to accomplish ourselves at this stage. But more importantly: open-sourcing means holding ourselves to a higher technical standard, which in turn pushes us to build better models—aligned with our goal of AGI. This might seem counterintuitive—if we’re just releasing model weights, why would that force the model to progress? The logic is actually very simple: open source means performance comes first. You can no longer rely on superficial tricks or hacks to dazzle users. Anyone who gets the same weights should be able to easily reproduce your performance—only then is it truly valid.
With the success of DeepSeek, Chinese frontier labs now have ample, proven justification for the value of open source, both for marketing and in terms of research relevance.
China’s AI Gambit: Code as Standards
Chinatalk • Jordan Schneider • July 10, 2025
Technology•AI•Innovation•OpenSource•Standards•Chinese Tech
Today we’re running a guest translation by the great Thomas des Garets Geddes of the Sinification substack, which translates leading Chinese thinkers.
Below, Liu Shaoshan — a leading figure in China’s embodied AI research with a PhD from UC Irvine and an official state-designated “high-end overseas talent” — proposes a roadmap for Chinese AI dominance, taking its cue from America’s successful diffusion of TCP/IP protocols in the late 20th century. Just as influence over the internet afforded the USA “a truly global mechanism of discursive control,” Liu argues, AI diffusion and the standards exported along with AI systems will be key to power projection in the 21st century.
He outlines four strategic levers for achieving open source dominance — technological competitiveness, open-source ecosystem development, international standard-setting, and talent internationalisation. Acknowledging that China’s engagement in open source AI still has a long way to go, he advocates for the creation of a comprehensive “China HuggingFace” that maximizes market share by publishing toolkits for model training, embodied AI implementation, and everything in between. Finally, the author argues that Beijing should encourage Chinese AI talent to live and work abroad, especially in Belt and Road participant countries, rather than encouraging them to come back to China.
As we moved into 2025, the Trump administration reintroduced tariffs across the board on global goods while simultaneously implementing stricter controls on technology exports, delivering a dual blow to global trade and technological ecosystems. This latest round of trade protectionism and technological blockade policies has significantly increased systemic uncertainty across the global economy and technology sectors. In March 2025, the Organisation for Economic Co-operation and Development (OECD) revised its forecast for global growth in 2025 from 3.3% down to 3.1%, and further downgraded it to 2.9% in June, citing “trade policy uncertainty” and “structural barriers” as factors dragging down global investment and supply chain stability. The United Nations Conference on Trade and Development (UNCTAD) also warned that escalating trade tensions and policy volatility could slow global growth to 2.3%, leading to stagnation in both investment and innovation. Against this backdrop, a strategic window of opportunity has opened for Chinese AI to “go global.” As the United States increases obstacles to trade and to the sharing and export of technology, China should participate actively in the restructuring of the global technological ecosystem. It can do this by exporting its technology, building an open-source ecosystem, setting standards and encouraging the global mobility of Chinese AI talent. This would then mark a shift from a passive posture to a proactive strategic approach.
On 8 May 2025, during a hearing of the US Senate Committee on Commerce titled “Winning the AI Race,” Microsoft President Brad Smith issued a warning: “The number-one factor that will define whether the US or China wins this race is whose technology is most broadly adopted in the rest of the world.” His statement underscored a fundamental shift in the strategic landscape: against the backdrop of Trump's increasingly stringent technology and global trade policies, the extent to which [a country’s technology] is adopted globally has become the key determinant of [that country’s] great power status in AI. It is precisely within this climate of mounting restrictions that China’s AI industry has encountered a window of opportunity to “go global” and reshape the international technological ecosystem.
Looking back at the 1960s to 1980s, the United States successfully capitalised on the opportunity offered by its technological ascendancy in IT. First, in terms of technology export, in 1969 the US Defence Advanced Research Projects Agency (DARPA) launched ARPANET, and by 1980 had incorporated TCP/IP [its new communications protocol] into its defence communications system. On 1 January 1983, a full network-wide transition to TCP/IP was completed, laying the foundation for the global internet. Next came the construction of the open-source ecosystem: DARPA’s TCP/IP protocols were incorporated into the open-source BSD system, thereby initiating an “open-source means of dissemination” model. In 1986, the NSFNET project extended this model to the academic community. This made networking functions readily accessible at the operating system level and further stimulated broad participation from the research and developer communities. It was precisely this “code usable is code used” design philosophy that accelerated the internet’s diffusion from the laboratory to commercial and civilian domains. The evidence demonstrates that only when core communication protocols move from closed-loop research to open-source community development can exponential technological breakthroughs occur — moving from source code to [forming the core of] global infrastructure.
In addition to its advanced technology research and development (R&D) and open-source contributions, the United States also achieved widespread adoption of TCP/IP by standardising outputs and aligning government policy with open-source practice to establish a globally compatible basic infrastructure. In March 1982, the US Department of Defence officially designated TCP/IP as the standard for military communications and announced a nationwide transition scheduled for 1 January 1983 — the so-called “flag day” policy. This was not merely a technical upgrade, but a form of “compatibility mandate”: all hosts connected to the network were required to support TCP/IP, or else they would be disconnected on the day of the switch. This compulsory standardisation not only enforced synchronised upgrades across the US military-industrial and research systems but also fostered a nationwide consensus on protocol standards.
More importantly, this standards rollout did not take place in a closed-off system. Rather, it advanced in tandem with the open-source ecosystem: DARPA awarded contracts in phases to institutions such as BBN, Stanford and Berkeley to develop TCP/IP implementations for major platforms including Unix BSD, IBM systems and VAX, subsequently incorporating the code into the 4.2BSD version of Unix for public release. In 1986, the NSFNET project further accelerated the widespread deployment of this protocol across the national academic network, effectively achieving near 100% coverage.
This series of measures served collectively as the blueprint for the internationalisation of a US-developed communications standard. The government played a leading role by setting a compatibility timeline to guide synchronised upgrades to [this new] standard. Open-source practices facilitated multi-platform availability, enabling “use on demand” by research institutions and enterprises. The infrastructure for global system compatibility was deployed in parallel with the release of open-source code. This strategy — combining policy, open-source and platform integration — not only accelerated the adoption of the standard, but also rapidly established TCP/IP as the default protocol for international communication.
From the 1960s to 1980s, the United States established its leading role through the export of technical standards. Of even greater long-term significance was [its strategy of] sending IT talent abroad, which profoundly influenced the global internet architecture. This experience offers valuable lessons for Chinese AI’s [ambition to] expand overseas. America’s engineers and researchers didn’t close themselves off from the world—large numbers participated actively in international standards organisations and met with different parts of the academic community. For example, the International Network Working Group (INWG), founded in 1972 by American scholars such as Vint Cerf and Steve Crocker, played a key role in designing global network protocols and laid the groundwork for the birth of TCP/IP.
Subsequently, the Internet Engineering Task Force (IETF), established in 1986, held its first meeting in San Francisco with 21 American researchers and received government funding and support. These platforms became key frontlines through which American researchers exercised technological discourse power. American experts held core technical and leadership roles within the INWG, the IETF and its parent body, the Internet Architecture Board (IAB) — which together oversaw the development of the internet’s technical architecture. Figures such as Vint Cerf, Jon Postel, and David Clark participated in meetings for many years, published RFC documents, and oversaw the registration of technical parameters and the standardisation process. These efforts not only ensured the professionalism of the adopted standards, but also reinforced the central role of the United States within global internet governance.
More importantly, the institutional design adopted by these organisations was one of open collaboration, enabling engineers from around the world to influence the direction of standards through voluntary participation. By leveraging their first-mover advantage and influence within the research community, American researchers led the development of key standards. In doing so, the US not only exported the technology itself, but also [American] internet culture, projecting its governance discourse power [overseas]. Furthermore, this strategy of internationalising talent meant that the US exported not just protocols, but also the capability to shape their evolution and the associated rule-making process — thereby establishing a truly global mechanism of discourse control over the Internet.
How Hangzhou Spawned DeepSeek and Unitree
Chinatalk • Lily Ottinger • July 9, 2025
Technology•AI•Innovation•Startups•Regional Development•Chinese Tech
Zilan Qian is a fellow at the Oxford China Policy Lab and an MSc student at the Oxford Internet Institute.
What conditions made DeepSeek possible? Despite widespread debate, much of this discussion remains concentrated at the macro-level of nation-states or the micro-level of tech companies. In prevailing narratives, DeepSeek is either seen as a symbol of China’s rising technological prowess or a lone disruptor challenging a top-down innovation system. Hangzhou 杭州, the city where DeepSeek is based, rarely takes center stage.
A closer examination of Hangzhou’s emerging tech ecosystem reveals that DeepSeek did not appear by chance. DeepSeek is one of Hangzhou’s six “little dragons (六小龙)”; the other five are Unitree (宇树科技) and Deep Robotics (云深处科技), two of China’s leading robotics companies; Game Science (游戏科学), which produced China’s first AAA game, Black Myth: Wukong; BrainCo (强脑科技), a brain-machine interface innovator; and Manycore Tech (群核科技), the world’s largest spatial design platform as of 2023. Even earlier, in 2000, the city saw the emergence of the Alibaba Group — now the second-largest e-commerce platform in the world and the developer of another leading Chinese AI model (Qwen).
So, how much does geography matter for the emergence of these companies? After DeepSeek made headlines, the media started to name Hangzhou “China’s Silicon Valley [1][2][3].” This convenient comparison is often used to imply a zero-sum U.S.-China rivalry, as if Hangzhou is China’s secret AI and robotics hub designed to challenge the Silicon Valley-backed U.S. The label also projects a misleading image of Hangzhou as a hyper-technical, Silicon Valley-style hotspot, which obscures the fundamentally different comparative advantages and strategies at play.
The Missing Ingredients
Silicon Valley has many essential ingredients that Hangzhou lacks. Researchers argue that Silicon Valley's model has six interconnected elements: (1) venture capital, (2) human capital, (3) university-industry ties, (4) direct and indirect government support, (5) industrial structure, and (6) support ecosystem. Compared to Silicon Valley, and even to most tier 1 cities in China, Hangzhou lacks at least four of these elements, with no clear advantages in venture capital, human capital, university-industry ties, or industrial structure.
Firstly, the venture capital system in China has always been weaker than that in the US, and the gap has significantly widened in the past few years. Overseas and domestic VC fundraising for Chinese companies has drastically fallen, with RMB-denominated funds falling from 88.42 billion USD in 2022 to 5.38 billion in 2024, and USD-denominated funds from 17.32 billion to 0.75 billion during the same period. Within China, Hangzhou did not stand out as a recipient, with venture capital investment mainly flowing into Beijing, Shenzhen, and Shanghai from 2000-2022. Although Zhejiang province (which houses Hangzhou) was the biggest recipient of venture capital funding in 2024, this capital poured in only after companies like Game Science and Unitree had already begun to gain national attention. In fact, Zhejiang saw 41 new corporate venture capital funds registered in 2024, the highest among 18 mainland provinces, which highlights how investment responded to, rather than catalyzed, the region’s tech momentum.
Likewise, Hangzhou lags behind not just Silicon Valley but also major Chinese cities in terms of human capital. The city does not have a strong university cluster. Although some hype up Zhejiang University as China’s Stanford, in reality Hangzhou’s higher education is not even competitive among China’s top cities. Zhejiang University (ZJU) is the only elite university in the whole of Zhejiang province included in the national 211 project. By comparison, Beijing has 26 such universities, Jiangsu 11, and Shanghai 10. This shortage has broader implications. Although talent can migrate, the lack of top universities also makes a Hangzhou hukou (household registration) less appealing. Because China’s university entrance exams are administered provincially, living in a city with prestigious institutions (like Beijing or Shanghai) offers more educational opportunity: top universities allocate a larger share of admissions quotas to local students. For example, in 2024 Peking University and Tsinghua University recruited 580 of Beijing’s 68,000 exam takers but only 380 of Zhejiang’s 405,000. The Beijing admission rate (roughly 0.85%) was about nine times Zhejiang’s (roughly 0.09%).
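A quick worked check of those rates, using the article’s own figures:

```python
# Verify the Beijing vs. Zhejiang admission-rate comparison.
beijing_rate = 580 / 68_000      # Peking + Tsinghua admits / Beijing examinees
zhejiang_rate = 380 / 405_000    # same universities / Zhejiang examinees
print(f"Beijing {beijing_rate:.2%}, Zhejiang {zhejiang_rate:.3%}, "
      f"ratio {beijing_rate / zhejiang_rate:.1f}x")
# -> Beijing 0.85%, Zhejiang 0.094%, ratio 9.1x
```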
University-industry ties do exist in Hangzhou, but they are far less developed than in the “Silicon Valley model.” The handful of collaborations between ZJU and Alibaba is incomparable to the numerous startup accelerators and the wide range of university-industry programs run by Stanford or the University of California, Berkeley. Overall, China has relatively weak university-industry ties compared to the most prominent American research universities. Even within China, Tsinghua University and Shanghai Jiao Tong University outrank ZJU in unicorn incubation capacity.
Nor did Hangzhou’s successful tech companies spin directly out of its universities. Liang Wenfeng founded High-Flyer, the hedge fund behind DeepSeek, eight years after graduating from ZJU. Alibaba founder Jack Ma applied for, and was rejected from, 30 different jobs after graduating from Hangzhou Normal University. BrainCo was a spinout of Harvard’s Innovation Lab during its CEO’s postgraduate studies there, although he completed his undergraduate degree at ZJU. After graduating from Zhejiang Sci-Tech University, Unitree founder Wang Xingxing went to Shanghai for his master’s degree and then joined DJI, China’s leading drone company. The distance between these entrepreneurs and their Zhejiang academic backgrounds makes it hard to establish a causal link to Zhejiang’s university innovation capacity.
Lastly, unlike Silicon Valley, which grew on a base of Cold War defense industry, Hangzhou does not have a strong industrial history. In 2023, Hangzhou’s industrial output was 2,107.4 billion RMB, 12th among major Chinese cities and less than half of Shenzhen’s 4,851.0 billion RMB that year. Moreover, Hangzhou’s industrial structure is dominated by light industries such as textiles (e.g., silk) and food & beverages (e.g., the F&B giant Wahaha).
Zhipu AI’s $1.4B Raise Signals a New Wave of AI Innovation in 2025
X • bgurley • July 28, 2025
X•Chinese Tech
Key Takeaway: Zhipu AI’s recent $1.4 billion funding milestone highlights the rapid acceleration and collaborative evolution in the AI sector, joining peers like DeepSeek and Kimi Moonshot in breaking new benchmarks and setting the stage for transformative advancements.
In a brief but powerful announcement, @bgurley shared the news of Zhipu AI’s monumental success in both funding and technology performance:
Zhipu AI has raised $1.4 billion, signaling massive investor confidence and capital influx into AI research and applications.
The company is noted for “crushing benchmarks,” implying breakthroughs in AI model performance, efficiency, or both.
Zhipu joins a cohort of cutting-edge AI firms including @deepseek_ai and @Kimi_Moonshot in a dynamic ecosystem where these entities are “co-evolving,” likely learning from and inspiring each other’s progress.
The rapid succession of milestones from these companies underscores the competitive yet collaborative nature of the AI industry in 2025, fueling expectations for accelerated innovation and deployment of new technologies.
This announcement reflects broader trends in AI funding and development:
Venture capital interest continues to surge, especially in scalable models that demonstrate strong practical or research utility.
A focus on open collaboration and transparent advancement (the tweet notes the models are “Also open”) suggests the staying power of community-driven AI progress.
Successes like these mark a growing diversification of AI ventures from foundational models to specialized applications.
While the tweet itself is concise, its implications are far-reaching for those tracking AI investment cycles, model performance benchmarks, and the evolving competitive landscape among emergent AI leaders.
Xi Jinping is the main thing holding China back
Noahpinion • July 28, 2025
Politics•Leadership•China•XiJinping•Geopolitics•China Tech•Chinese Tech
During the Biden years, many anticipated that the upcoming era of global history would be largely defined by economic and geostrategic competition between the U.S. and China. However, this seems less likely now as Donald Trump has taken a more conciliatory approach toward China, abandoning many of Biden's policies like export controls and tariffs.
China is at the peak of its power. Its share of world manufacturing has surged to levels similar to those the U.S. enjoyed in the mid-20th century when it was the dominant industrial force globally. Chinese cities project a futuristic image with vast infrastructure, robots, electric vehicles, and advanced payment systems. While China’s innovation system produces fewer groundbreaking discoveries than the U.S. once did, it still leads in many science and technology fields through incremental advances. China manufactures most electric cars, drones, ships, industrial machines, and robots worldwide, and could soon produce semiconductors and aircraft as well.
Despite this power, the so-called "Chinese Century" may underwhelm in absolute terms. Most rising countries evolve from improving existing technologies to inventing new ones, but China might remain primarily a fast follower. Economically, it may wield significant global influence, yet living standards could stay below those of the U.S. and Europe. Socially, China may remain repressive and culturally stifled without a flourishing arts scene. Geopolitically, it may remain inward-looking and fail to transform the global system like other historic powers.
Much of China’s problems aren’t solely Xi Jinping’s doing—urban sprawl, the real estate bubble, and demographic challenges existed independently of him. However, Xi’s concentration of power and personal limitations significantly hamper China’s potential greatness. Unlike Deng Xiaoping, who unlocked rapid growth by liberalizing the economy and appointing technocrats, Xi has consolidated power by purging rivals and installing loyalists, cultivating a personality cult. His leadership style resembles Joseph Stalin’s in terms of ruthless consolidation and paranoia.
This unchecked power brings risks: the “Bad King” problem, where a supreme leader may make poor decisions without restraint. Xi’s zero-Covid policy, persisting long past its effectiveness, severely damaged China’s economy and possibly triggered the real estate crash. Other missteps include a failed Belt and Road initiative, aggressive "wolf warrior" diplomacy hurting regional relations, crackdowns on industries and personal freedoms, and poor management of the real estate and tech sectors.
In response, starting in 2023, Xi launched an enormous industrial policy to pivot loans and subsidies toward manufacturing, aiming to make China the world’s supreme manufacturing power for national security. Though successful in some sectors like electric vehicles, this strategy risks creating unprofitable overproduction, leading to financial problems reminiscent of Japan’s or Korea’s past crises. The political challenge of unwinding unprofitable regional champions complicates industry consolidation.
As Xi ages at 72 without a clear successor, he faces internal political challenges, purges in the military and party, and increased paranoia to maintain control. His hold on power may grow more tenuous, and his increasing inward focus could distract China from global ambitions. His approach to controlling the private sector and suppressing dissent may hinder economic vitality.
For China’s people, decades under an aging, repressive Xi threaten slower economic growth and fewer opportunities. For the world, Xi’s blunders and isolationist tendencies could offer a reprieve from Chinese hegemony. While China has the power to dominate globally, Xi’s inward focus might buy breathing room for countries like Japan, India, Vietnam, Korea, and the U.S.
China Prepares to Unseat US in Fight for $4.8 Trillion AI Market
Bloomberg • July 30, 2025
Technology•AI•ArtificialIntelligence•Innovation•GlobalTrade•Chinese Tech
China is intensifying its efforts to challenge the United States' dominance in the rapidly expanding artificial intelligence (AI) sector, which is projected to reach a valuation of $4.8 trillion. At the World Artificial Intelligence Conference in Shanghai, humanoid robots showcased their capabilities in a boxing ring, symbolizing China's commitment to integrating AI into various facets of society.
A pivotal development in this endeavor is the emergence of DeepSeek, a Chinese AI startup that has made significant strides in AI model efficiency. By employing a "mixture of experts" architecture, DeepSeek has substantially reduced training costs and computational demands, enabling the deployment of advanced AI models on less powerful hardware. This innovation has disrupted the AI industry, challenging established hardware providers like Nvidia and prompting a reevaluation of global AI infrastructure. (bloomberg.com)
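For context on that architecture: a mixture-of-experts model routes each token to a small subset of specialist sub-networks rather than through one monolithic block, so compute per token scales with the number of experts consulted, not the number that exist. The sketch below is a toy PyTorch illustration of the routing idea under our own simplified assumptions, not DeepSeek's implementation.

```python
# Toy mixture-of-experts layer: a router picks the top-k experts per
# token, and only those experts run. Illustrative only.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, dim=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)  # one score per expert, per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):  # x: (tokens, dim)
        weights, idx = self.router(x).softmax(dim=-1).topk(self.k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # tokens routed to expert e
                if mask.any():                 # run each expert only on its tokens
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

print(TinyMoE()(torch.randn(5, 64)).shape)  # torch.Size([5, 64])
```

In a full-scale model, most parameters sit idle for any given token, which is the efficiency lever the article credits to DeepSeek.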
The Chinese government's "Made in China 2025" initiative underscores the strategic importance of AI and related technologies. This program aims to indigenize key sectors such as AI, 5G, aerospace, semiconductors, electric vehicles, and biotechnology, with the goal of securing domestic market share and establishing a competitive edge in the global market. (en.wikipedia.org)
Despite facing challenges, including trade tensions and technological restrictions, China has demonstrated resilience and innovation in the AI sector. The country's rapid advancements in AI development and deployment signal a strategic shift towards technological self-sufficiency and global competitiveness.
Media
New Media: Podcasts, Politics & the Collapse of Trust
Youtube • a16z • July 25, 2025
Technology•Web•New Media•Podcasts•Politics•Trust•Media
IPO
Figma’s Auction-Like IPO Set Up to Capitalize on Strong Demand
Bloomberg • July 22, 2025
Business•Startups•IPO•Figma•Technology
Figma Inc., the San Francisco-based design software company, is adopting an auction-like approach for its upcoming initial public offering (IPO) to maximize proceeds from its highly anticipated public debut. Prospective investors are being asked to specify the number of shares they wish to purchase and at what price, a strategy aimed at accurately gauging demand and setting an optimal offering price. (news.bloomberglaw.com)
The company plans to sell approximately 37 million shares, with a price range between $25 and $28 per share, potentially raising up to $1.03 billion. This valuation would place Figma's fully diluted worth between $14.6 billion and $16.4 billion. The IPO is expected to list on the New York Stock Exchange under the ticker symbol "FIG." (investing.com)
Figma's financial performance has been strong, reporting $228.2 million in revenue for the first quarter of 2025, up from $156.2 million in the same period the previous year. Net income tripled to $44.9 million during this period. The company is also focusing on artificial intelligence, with plans for significant investment in AI and potential acquisitions. (reuters.com)
The IPO follows the termination of a proposed $20 billion acquisition by Adobe in December 2023, which was blocked by antitrust regulators in Europe and the UK. Figma's decision to proceed with an IPO reflects confidence in its business strength despite broader market challenges. (ft.com)
The auction-style IPO approach, previously used by companies like DoorDash and Airbnb, aims to align the offering price with real-time demand while reducing the risk of mispricing. By requesting limit orders, Figma seeks more granular information about how investors value the stock, potentially capturing demand that a conventional bookbuild would leave hidden. (news.futunn.com)
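To make that mechanism concrete, here is a minimal sketch of uniform-price auction clearing from limit orders. The numbers are invented for illustration; this is the generic mechanism, not Figma's actual book or its bankers' process.

```python
# Find the highest price at which cumulative limit-order demand
# covers the shares offered; every filled order pays that one price.
def clearing_price(bids, shares_offered):
    demand = 0
    for price, shares in sorted(bids, reverse=True):  # walk bids high to low
        demand += shares
        if demand >= shares_offered:
            return price
    return None  # undersubscribed

# Hypothetical book for a 37M-share deal marketed at $25-28.
book = [(28.0, 15_000_000), (27.0, 12_000_000),
        (26.0, 14_000_000), (25.0, 20_000_000)]
print(clearing_price(book, 37_000_000))  # -> 26.0
```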
In summary, Figma's auction-like IPO strategy is designed to capitalize on strong demand and accurately price its shares, positioning the company for a successful public market debut.
Education
Why You Should Still Study Computer Science
Youtube • 20VC with Harry Stebbings • July 26, 2025
Education•ComputerScience•CareerOpportunities•Innovation•Technology
In today's rapidly evolving technological landscape, pursuing a degree in computer science offers numerous advantages that extend beyond mere coding skills. This field equips individuals with a robust foundation in computing principles, including software engineering and data structures, essential for developing complex software applications. While artificial intelligence (AI) can assist in code generation, it currently cannot fully address intricate problems, underscoring the continued need for human expertise in the field. (forbes.com)
Studying computer science also hones critical thinking and problem-solving abilities. Courses covering algorithms, programming languages, and operating systems teach students to decompose complex issues into manageable components and devise logical solutions. These skills are highly valued across various industries, including project management, teaching, engineering, healthcare, and accounting. Moreover, computer science fosters innovation and entrepreneurship, enabling individuals to develop new technologies, establish tech companies, and address real-world challenges. Notable companies like Google and Amazon were founded by individuals with strong computer science backgrounds. (forbes.com)
The field's adaptability to new technologies is another compelling reason to study computer science. As technology continually evolves, a solid understanding of computing fundamentals allows individuals to quickly master emerging tools and platforms, a valuable asset in any profession. Additionally, computer science education often includes topics like ethics, privacy, and security, providing a comprehensive perspective on how technological advancements impact society. (forbes.com)
The demand for computer science professionals remains high, with roles in software development, IT administration, web development, and systems analysis. Some positions may require a bachelor's degree, while advanced roles might necessitate a master's or doctorate. This demand translates into competitive salaries and job security. For instance, the U.S. Bureau of Labor Statistics reports that computer and mathematical occupations had an average annual salary of $104,200. (bestcolleges.com)
In summary, studying computer science not only opens doors to lucrative and diverse career opportunities but also equips individuals with the skills to innovate, adapt, and make meaningful contributions to society.
Interview of the Week
"AI Is Too Busy to Take Your Job: The Electrifying Truth about our AIgorithmic Future
Keen on • July 25, 2025
Technology•AI•FutureOfWork•HumanCreativity•EnergyUsage•Interview of the Week
Yesterday, we focused on the death of the American way of work. But today the news on the AI front isn’t quite as dire. According to the New York-based economic historian Dror Poleg, AI will be too busy to take your job. That’s the provocative thesis of Poleg’s upcoming book on the radical opportunities of our AI age. He argues that AI's massive energy consumption will actually preserve human employment, as society redirects computing power toward critical tasks rather than simply replacing human labor with algorithms.
Unlike Yuval Noah Harari's pessimistic "useless class" prediction, Poleg cheerfully envisions a future where everyone becomes valuable through constant experimentation and human connectivity. He believes we're entering an era where work becomes indistinguishable from leisure, interpersonal skills command premium value, and the economy depends on widespread human creativity and feedback to determine what's truly valuable in an increasingly unpredictable world. That’s the electrifying truth about our AI era. For Poleg, AI represents something even more transformative than electrification itself—a utility that will flow like water and affect everything, reshaping not just how we work but the very nature of economic value and human purpose.
“Energy is too valuable to waste on tasks humans can do... we as an economy, as a society, will basically want to throw as much electricity as possible at the things that matter up to the point that maybe automating different tasks that human can do... we'll decide to take electricity away from today's computer, even from people using Excel today and saying, Okay, that electricity is more valuable somewhere else."
Poleg asserts that AI is more than just a technological advancement; it is a pervasive utility. "I would say it's more significant... I think it's at least as significant as electricity and electrification. And in many ways... it is more of a utility than anything else for better or worse. So it will flow like water and it will affect everything."
He envisions a future that contrasts sharply with dystopian narratives: “My view of the future is actually exactly the opposite [of Harari's useless class]. I think that in the future everyone will be valuable and almost any activity would be valuable because we will not have any idea what is or who is valuable... as a society we will need as many people as possible to constantly do whatever they feel like, create whatever they want to create."
The nature of work itself will evolve, merging with leisure and human connection. “The general trend that I see is that work will become increasingly indistinguishable from leisure if we're looking long-term... we'll see more of these types of jobs, basically giving each other attention, helping each other know that we exist and sharing with each other more and more specialized and granular types of... service that only we can give to each other."
Poleg also highlights the renewed significance of physical, in-person interactions in this AI-driven age: “If you wanna know if something is true, the only way to know that is to be there or to know someone who was there... I think that also pushes us back towards offline. In-person physical interactions that will be at a premium.”
Regulation
Boston city council members introduce a bill to require drivers in Waymos
christian brits • July 28, 2025
X•Regulation
Boston City Council Introduces Bill Mandating Waymo Drivers and AV Advisory Board Composition
Key Takeaway: Boston city council members have proposed a new bill that would require Waymo’s autonomous vehicles (AVs) to operate with drivers onboard and create an advisory board dominated by union representatives, signaling increased regulatory scrutiny and labor involvement in autonomous vehicle deployment.
On July 28, 2025, Boston city council members unveiled a legislative proposal aimed at tightening operational requirements for autonomous vehicles operating within the city, specifically targeting Waymo, one of the leading AV companies.
The bill's main provisions include:
Driver Requirement: All Waymo autonomous vehicles must have human drivers present while in operation, a measure apparently intended to maintain human oversight and address safety or labor concerns related to fully driverless operations.
Autonomous Vehicle Advisory Board: The bill establishes an AV advisory board tasked with overseeing the deployment and regulation of autonomous vehicles in Boston. Significantly, the advisory board’s membership is proposed to be heavily weighted toward union representatives.
This development indicates a strategic move by the city council to increase union influence over the emerging autonomous vehicle landscape, potentially shaping policies around employment, safety, and labor standards. The requirement of drivers in Waymo vehicles pushes back against fully driverless AV operation, reflecting ongoing tensions between innovation, workforce protections, and public safety concerns.
M & A
Palo Alto Networks agrees $25bn takeover of CyberArk
FT • July 30, 2025
Business•MergersAndAcquisitions•Cybersecurity•IdentitySecurity•PrivilegedAccessManagement•M & A
Palo Alto Networks has announced a $25 billion cash-and-stock acquisition of Israeli cybersecurity firm CyberArk Software, marking its largest deal to date under CEO Nikesh Arora. This strategic move aims to bolster Palo Alto's end-to-end security offerings, particularly in identity security and privileged access management, areas where CyberArk excels with over 10,000 clients worldwide. (ft.com)
The acquisition values CyberArk shares at a 26% premium, following a trend of significant cybersecurity takeovers, including Google's $32 billion agreement to buy Wiz and Cisco's $28 billion acquisition of Splunk. As cyber threats like ransomware attacks grow in frequency and complexity, demand for sophisticated identity management solutions is surging. CyberArk's technology limits access to sensitive data, a critical defense in today's landscape of AI agents and machine identities. With CyberArk, Palo Alto aims to expand its market reach by offering comprehensive identity-based protections. The deal caused an 8% drop in Palo Alto's stock, despite its market capitalization remaining around $120 billion. (ft.com)
CyberArk, founded in 1999, specializes in privileged access management and identity security solutions. The company went public in 2014 and has been expanding its platform through acquisitions, including the purchase of machine identity vendor Venafi for $1.54 billion in 2024 and identity governance startup Zilla Security for up to $175 million in February. (crn.com)
The acquisition is expected to close in the second half of Palo Alto Networks' fiscal 2026, pending CyberArk shareholder approval; CyberArk's own stock declined 1.8% on the news. Alongside Google’s planned purchase of Wiz, the deal marks another major consolidation in the cybersecurity industry. (apnews.com)
A reminder for new readers: each week, That Was The Week includes a collection of selected essays on critical issues in tech, startups, and venture capital.
I choose the articles based on their interest to me. The selections often include viewpoints I can't entirely agree with. I include them if they make me think or add to my knowledge. Click on the headline, the contents section link, or the ‘Read More’ link at the bottom of each piece to go to the original.
I express my point of view in the editorial and the weekly video.