This week’s video transcript summary is here. You can click on any bulleted section to see the actual transcript. Thanks to Granola for its software.
Editorial
This week, Dario Amodei’s Anthropic announced that it had developed a new model, Mythos, capable of outperforming existing cybersecurity software in discovering vulnerabilities and, by implication, in exploiting them.
Anthropic chose to restrict release of the model until 40 selected companies had the chance to use it to patch those vulnerabilities.
Amodei won plaudits for that decision. But he still plans to release software he is simultaneously describing as dangerous.
In the same week, The New Yorker published a long, rambling portrait of Sam Altman, depicting him as untrustworthy and slippery. It feeds a media narrative that increasingly seeks to demonize the OpenAI founder.
Altman is moving into Musk-like territory as an object of media frenzy.
This personality-driven circus is mostly a sideshow. It seems motivated by subjective feelings, among them jealousy, envy, and dislike. It is largely devoid of serious discussion about the transformative impact AI is having on our lives, and will have on future generations.
Shallow and gossipy is what comes to mind.
The truth is that we do not have to trust Sam Altman. We do not have to trust Dario Amodei either. What matters is whether science and innovation deliver results. Demis Hassabis, interviewed this week about DeepMind’s AlphaFold breakthrough, feels like a much more pertinent focal point. We have to trust progress.
Anthropic’s Mythos model certainly seems extraordinary. Its ability to discover vulnerabilities that other software had failed to uncover for decades led some to conclude that software approaches to cybersecurity are dead. “Software was lunch. Execution is dinner” was the most memorable line. The meaning is clear enough: AI is becoming useful in its own right, not just as an add-on to existing software.
The market already understands this. Crunchbase’s Q1 data shows capital still flooding into AI at extraordinary scale. Carta’s compensation data shows scarce AI talent being repriced in real time. Andy Jassy’s shareholder letter reads like a full-throated defense of hyperscaler capex as an execution advantage, not a speculative indulgence. Investors are rewarding capability, yes, but more specifically they are rewarding the ability to operationalize capability at scale.
Anthropic’s handling of Mythos is a useful example of the deeper issue. Holding back a powerful model can look responsible. But if that same capability can materially improve cyber defense, restraint may be less effective than deployment. In practice, legitimacy may come less from caution than from solving urgent real-world problems.
Rather than demonizing individual personalities, or lionizing them, we should focus on execution against real-world problems.
Of course, deployment is not frictionless. Azeem Azhar’s point, that the labs are already rationing access, matters because it reminds us that AI is not yet abundant where it counts.
Bloomberg’s report that OpenAI paused Stargate UK over energy costs and regulation says the same thing from another angle. The Big Technology piece on data center backlash extends it further. Execution is dinner, because dinner is physical. It depends on land, power, permits, chips, local politics, and cost. The next phase of AI will be shaped as much by infrastructure constraints as by model advances. That reality may favor more execution-centric systems, including China’s.
And even when the infrastructure exists, the institutions usually don’t. Tyler Akidau’s “We All Built Agents. Nobody Built HR.” and the Fast Company piece on managing AI as a new job both point to the same gap. It is relatively easy to buy intelligence. It is much harder to absorb it. Roles have to change. Accountability has to change. Management has to change. Companies rushed to adopt the tools before they redesigned themselves to live with the consequences. Again, this is far more important than personality profiles of CEOs.
Distribution is part of this too. AI SEO manipulation and the continued degradation of social media both show that deployment is not just about building the capability. It is about controlling the pathways through which people encounter, cite, trust, and depend on it. Retrieval, visibility, and placement increasingly shape outcomes as much as underlying quality. The winners will not just build the strongest systems. They will build the systems that become unavoidable. OpenAI and Anthropic are both good at that, even if parts of the media remain focused on more trivial issues.
None of this means the skeptics are wrong about everything. Every major technology transition looks messy in the middle. Constraints create friction. Security remains fragile. Organizations may adapt more slowly than they need to. This week’s evidence suggests that capability is outrunning the social, organizational, and political machinery needed to absorb AI cleanly. But that is not the fault of Sam Altman or Dario Amodei, both of whom are building credible businesses with real-world impact.
The winners in AI will be the actors who deploy effectively enough that the world accepts, needs, or cannot resist what they build. Trust will come from applying AI to real problems and producing good outcomes.
If that is right, then the central question is no longer whether Sam Altman, Dario Amodei, or AI in the abstract should be trusted. It is who can get AI into the world, at scale, in forms people depend on, before anyone can stop it. OpenAI and Anthropic are already on that list. So are Sam Altman and Dario Amodei.
Breaking: After a firebomb attack, Sam Altman speaks out: https://blog.samaltman.com/2279512
Contents
Editorial
Essays
Venture
AI
“Cognitive surrender” leads AI users to abandon logical thinking, research finds
🔮 Exponential View #568: The labs are rationing. Did you notice?
Sebastian Mallaby on AI Safety and the Race for Superintelligence
Moving Up the Stack: Analytics Engineering in the Age of Agents
In Japan, the robot isn’t coming for your job; it’s filling the one nobody wants
Andy Jassy says AWS AI revenue hit a $15B run rate and Amazon’s internal chip business tops $20B
Regulation
Infrastructure
Interview of the Week
Startup of the Week
Post of the Week
Essays
Sam Altman May Control Our Future. Can He Be Trusted?
The New Yorker · Apr 6 · Tags: AI, Governance, Power
Ronan Farrow’s profile matters less as founder psychodrama than as a governance document. The reporting paints Altman as a leader whose public positioning, internal maneuvering, and institutional loyalties do not always line up, which makes credibility itself part of the AI story. That is the useful frame for this issue. In a market now asking who should be trusted to build, govern, and narrate AI, leadership character is no longer a side issue. It is part of the infrastructure.
Read more: The New Yorker
Anthropic Is Warning About Future Cyber Risks. Researchers Say Claude Code Is Already Dangerous.
Upstarts Media · Apr 8 · Tags: AI, Security, Agents
This is a useful counterweight to Anthropic’s polished Glasswing rollout. While the company was warning that its unreleased Mythos Preview model could supercharge offensive cyber work and unveiling a blue-chip coalition to contain the risk, LayerX researchers were arguing that Claude Code already lowers the bar for real-world abuse. The sharp point is not that Anthropic’s future-risk framing is wrong. It is that the present-tense tooling may already be powerful enough to matter, especially for smaller teams that treat coding agents as harmless productivity software rather than dual-use systems.
The key detail is the asymmetry. Anthropic is mobilizing Apple, Google, Microsoft, Nvidia, JPMorgan, and others around hypothetical next-generation vulnerabilities, while outside researchers say the currently shipping product can already be turned into an attack tool and that Anthropic never meaningfully engaged. That gap, between public safety theater at the frontier and messy product risk in the market, is becoming one of the central patterns of the AI cycle.
Read more: Upstarts Media
Hollywood Assistants Are Using AI Despite Their Better Judgment — Including in Script Development
Author: Mia Galuppo Published: Apr 3, 2026
The thesis here is that AI’s real entry point into Hollywood is not blockbuster screenwriting or synthetic stars, but the assistant class: the overworked, underpaid support staff quietly weaving generative tools into the daily mechanics of development. Galuppo reports that, under pressure from layoffs and heavier workloads, assistants are using AI for everything from trimming florist notes to recording meetings and generating script coverage. In that sense, adoption is not arriving as a grand strategic shift from the top. It is creeping upward from the bottom, where administrative overload makes even reluctant experimentation feel necessary.
The killer detail is that some assistants are already uploading unpublished scripts, deal terms, schedules, and internal notes into public AI tools, often without training or security oversight. The piece argues that the deeper risk is not just confidentiality, but apprenticeship: if the lowest rung of the ladder starts outsourcing the work through which taste and judgment are learned, what exactly is the next generation being trained to become?
Read more: Source
Really, you made this without AI? Prove it
The Verge · Apr 4 · Tags: AI, Culture
With AI content now indistinguishable from human work, The Verge argues we need a “Fair Trade” logo for human-made content. At least 12 competing AI-free certification schemes have emerged, from the Authors Guild’s “human authored” badge to broader services like Not by AI, but none has gained critical mass. The core problem is that verifying something wasn’t made with AI is far harder than labeling what was. Most credible services require creatives to manually show their working process to human auditors. Instagram’s Adam Mosseri has acknowledged it’ll be “more practical to fingerprint real media than fake media.” The piece surfaces a deeper question: with AI embedded in creative tools everywhere, where do you even draw the line?
It’s open season for refusing AI
Blood in the Machine · Apr 4 · Tags: AI, Regulation, Culture
Brian Merchant catalogs a striking wave of AI refusal across society. Sanders and AOC have proposed a federal data center moratorium; eleven states from deep red to dark blue are considering their own. Wikipedia banned AI-generated content by a 40-2 editor vote after a flood of errors required a dedicated WikiProject AI Cleanup team. Capcom declared it “will not implement any generative AI assets” in its games. The Seminole Nation became the first tribal council to enact a data center moratorium. The movement is broad enough that an industry group (datacenterwatch.org) was launched just to track the opposition. What makes this notable isn’t any single action — it’s the breadth and bipartisan character of the backlash.
Grammarly’s sloppelganger saga
The Verge · Apr 5 · Tags: AI, Culture, Media
Grammarly’s now-disabled “Expert Review” feature is a neat case study in where AI trust breaks. The company generated writing advice supposedly “inspired by” public figures and journalists, then presented those suggestions under their names with a verified-looking checkmark — even when the people involved had never consented, and the linked “sources” were often broken or irrelevant. After reporting from Wired and The Verge, public backlash, and a class-action suit from Julia Angwin, Grammarly pulled the feature. The larger point isn’t just that this particular product was sloppy. It’s that a lot of AI product design now depends on laundering synthetic output through borrowed human authority. That may be commercially tempting, but it’s also precisely the sort of move that deepens the trust gap AI companies keep saying they want to close.
Read more: The Verge
Social media has become a freak show
Author: Nate Silver Published: Apr 5, 2026
Silver’s thesis is that today’s social media ecosystem, and Twitter/X in particular, has become structurally hostile to quality, and the accounts that thrive on it are the strange beasts you’d expect when selection pressure rewards partisanship, outrage, and on-platform engagement over anything else. He walks through his own arc — FiveThirtyEight’s frustrations with Facebook’s News Feed, Twitter’s mid-2010s sweet spot for nerdy analytical writing, and the platform’s later drift into quote-tweet dunks and groupthink — to argue that every era of social media is an ecosystem with its own rules, and the current one punishes external links and off-platform traffic so severely that publishers are being quietly starved.
The killer detail is his reworked engagement chart for 2026: Catturd is pulling far more engagement than the New York Times, whose 53 million followers now routinely generate only a few hundred interactions on breaking news. Silver reaches for an ecological metaphor — the island effect, where isolated environments breed oversized oddities like Komodo dragons — and suggests X has become exactly that kind of island. Cut off from the broader web, it’s producing mutations that wouldn’t survive anywhere else, and he’s increasingly content to watch from the mainland.
Read more: Silver Bulletin
Industrial Policy for the Intelligence Age
Author: OpenAI Published: Apr 6, 2026
OpenAI’s thesis is that the arrival of superintelligence is close enough and disruptive enough that incremental policy tweaks won’t cut it, and that the United States needs something on the scale of the Progressive Era or the New Deal to keep the transition from breaking the social contract. The document frames itself as an opening bid rather than a finished platform, but the direction is striking coming from a frontier lab: shift the tax base off wages and onto corporate profits and capital gains, set up a Public Wealth Fund so ordinary citizens hold an automatic stake in the AI buildout, harden safety nets around health care and retirement, and seriously pilot a four-day, 32-hour workweek with no loss in pay.
The killer detail is the framing around a “Right to AI” and an “open economy,” paired with calls for huge investment in the power grid and a new industrial base to support multi-gigawatt compute. OpenAI is, in effect, asking Washington to pre-absorb the shock of its own product roadmap — and doing so publicly, while Sam Altman tells Axios that the urgency is now on the order of a generational realignment. Whether this is a sincere blueprint or a sophisticated piece of pre-emptive politics, it’s the most explicit thing the company has yet published about the world it believes it is about to create.
Read more: OpenAI
Venture
Q1 2026 Shatters Venture Funding Records as AI Boom Pushes Startup Investment to $300B
Source: Crunchbase News Published: Apr 1, 2026
The first quarter of 2026 was unlike any other. Investors poured $300 billion into 6,000 startups globally — up over 150% both quarter over quarter and year over year, an all-time high that no prior quarter even approaches. Q1 alone totaled nearly 70% of all venture capital invested in 2025. AI accounted for $242 billion — 80% of the total — up from the previous record share of 55% set in Q1 2025.
The concentration is extreme. Four of the five largest venture rounds ever recorded closed in Q1: OpenAI ($122B), Anthropic ($30B), xAI ($20B), and Waymo ($16B) — collectively $188 billion, or 65% of all global venture investment in the quarter. Another 10 companies raised $1B+ rounds spanning generative AI, autonomous vehicles, semiconductors, data centers, robotics, defense, and prediction markets. The Unicorn Board added $900 billion in value during the quarter, the largest single-quarter valuation bump on record. US-based companies took 83% of global VC (up from 71% a year earlier). Late-stage dominated: $246.6 billion across 584 deals, with $235 billion going to just 158 companies raising $100M+.
At seed, funding rose 31% YoY to $12 billion — but deal counts fell 30% to 3,800. Bigger rounds, fewer companies. Early-stage was up 41% YoY to $41.3B. Despite the record investment, the IPO market slowed amid a broader software selloff — only 4 US venture-backed companies exited above $1B, versus 13 from China. M&A was stronger: $56.6B in startup acquisitions, the third-highest quarter since 2022. The pressure for the IPO window to reopen is now enormous: unprecedented private capital with nowhere to go.
Read more: Crunchbase News
Anthropic is having a moment in the private markets; SpaceX could spoil the party
TechCrunch · Apr 4 · Tags: Venture, AI
The secondary market tells the story: Anthropic is the hardest stock to source at Rainmaker Securities — “there’s just no sellers.” Buyers have signaled $2B ready to deploy into Anthropic, while $600M in OpenAI shares can’t find takers. Paradoxically, Anthropic’s public standoff with the DoD — initially seen as bad news — turbocharged demand by casting the company as a hero “taking on big government.” OpenAI shares are trading at ~$765B on secondaries, a discount to the $852B primary-round valuation. Goldman Sachs charges its customary 15-20% carry for Anthropic access; Morgan Stanley offers OpenAI shares to HNW clients fee-free. SpaceX remains the lone name that never experienced the 2022-24 private market correction. A revealing snapshot of where smart money is flowing.
How AI is changing the compensation game for VC-backed startups
Carta · Apr 8 · Tags: Venture, Labor, Compensation
Carta’s data makes the labor-market effect of AI unusually concrete. Net headcount growth at venture-backed startups has slowed sharply, some startup categories are now shrinking, and the market is bifurcating between AI-native companies and everybody else. At the same time, the price of scarce talent is rising. At startups valued between $1 million and $10 million, median equity grants for AI/ML engineers rose 59% from January 2024 to February 2026. Even GTM roles are being repriced upward at AI-native companies.
What makes the piece useful for this issue is that it shows AI reshaping not just products, but the social organization of work. Smaller teams mean each hire carries more leverage. Equity pools are being split across fewer people. The emerging division of labor is clearer: fewer general hires, more highly paid specialists, and a widening gap between firms built around AI and those trying to adapt to it.
Read more: Carta
AI
“Cognitive surrender” leads AI users to abandon logical thinking, research finds
Author: Kyle Orland Published: Apr 3, 2026
The paper behind this story makes a sharper claim than the usual hand-wringing about AI dependence: people do not just use large language models as helpers, they often relax their own scrutiny once a fluent answer appears. Across 1,372 participants and more than 9,500 trials, researchers found that subjects accepted faulty AI reasoning 73.2% of the time and overruled it only 19.7% of the time. The study’s language is useful: confident machine output can become “epistemically authoritative,” lowering the threshold at which people decide a problem no longer needs real deliberation.
One killer detail is the split between trust and capability. Participants with higher measured fluid intelligence were less likely to defer to bad answers, while people already predisposed to trust AI were much easier to mislead. That turns “cognitive surrender” into something more structural than simple laziness: the better these systems sound, the easier it becomes to offload judgment itself. The pull is not whether AI can reason on our behalf some of the time, but how often people will stop noticing when it cannot.
Read more: Source
Is ubiquitous A.I. writing “inevitable”?
Read Max · Apr 3 · Tags: AI, Culture, Media
Max Read surveys the accelerating collision between AI and professional writing. The NYT cut ties with a freelancer who used AI to write a book review that accidentally plagiarized The Guardian. Meanwhile Fortune boasts that AI-assisted stories account for 20% of its web traffic. Kevin Roose built a team of Claude agents to edit his book; Alex Heath has an AI agent connected to his Gmail, calendar, and transcription service that writes his first drafts. Hachette canceled the novel Shy Girl over AI suspicions. The question isn’t whether AI writing is coming — it’s whether the distinction between “AI-assisted” and “AI-generated” will hold.
🔮 Exponential View #568: The labs are rationing. Did you notice?
Exponential View · Apr 5 · Tags: AI, Infrastructure, Economics
Azeem Azhar’s argument is simple: stop asking whether AI is a bubble and start noticing that the frontier labs are already supply-constrained. OpenAI says it is turning away opportunities because compute is scarce; Anthropic has tightened usage caps enough that some users are newly hitting limits; H100 rental prices have rebounded to 18-month highs; and even open-weight strategy is shifting, with Alibaba closing off a model line that had been part of the open ecosystem. In other words, the bottleneck is no longer demand for AI but the physical and financial capacity to serve it. That matters because scarcity changes behavior: products get rationed, partnerships become strategic, and the economics of the next phase of the AI market start to look less like software abundance and more like industrial allocation.
Read more: Exponential View
Posthuman: We All Built Agents. Nobody Built HR.
O’Reilly Radar · Apr 8 · Tags: AI, Work, Agents
Tyler Akidau gets at one of the most under-discussed problems in the agent boom: companies rushed to build agent workflows, but almost nobody built the management layer for human-agent organizations. Roles are blurry, process ownership is vague, and the new coordination work lands in nobody’s formal job description. The useful move in the piece is to shift the conversation from model quality to operating design. The hard problem is not whether agents can do tasks, but whether institutions know how to absorb them without breaking accountability, training, and decision rights.
Read more: O’Reilly Radar
Can AI responses be influenced? The SEO industry is trying
The Verge · Apr 6 · Tags: AI, Search, Marketing
One of the better pieces yet on the next spam war. AI search is creating a fresh class of self-serving “best of” pages, prompt-laced recommendation traps, and other tactics designed to get models to cite brands as if they were neutral authorities. The important point is not merely that marketers will game AI systems (of course they will), but that retrieval pipelines are now part of the attack surface. If chat interfaces become the front door to the web, then optimization, poisoning, and citation laundering stop being bugs and start looking like structural features of the medium.
Read more: The Verge
Managing AI has become its own job
Fast Company · Apr 4 · Tags: AI, Work, Management
This Fast Company piece names a pattern showing up across companies rolling out AI: the promised productivity gains are real only if somebody absorbs the invisible coordination work when tools fail, outputs need checking, and accountability gets muddy. In practice, “using AI” has already become a layer of management overhead. That makes it a useful companion to the agent pieces in this issue. The adoption story is not just capability, but the creation of new supervision work inside organizations that were not designed for it.
Read more: Fast Company
Sebastian Mallaby on AI Safety and the Race for Superintelligence
Yascha Mounk · Apr 4 · Tags: AI, Regulation, Safety
Mallaby is good on the central contradiction of the frontier labs: almost every major lab was founded by people who said they were building AI because they were worried about what unsafe AI might become. DeepMind began with founders who met at a safety lecture; OpenAI was framed as the safer alternative to DeepMind; Anthropic emerged because OpenAI wasn’t safe enough. His claim is that this isn’t hypocrisy so much as a magnified version of the human bargain with technology: we move forward despite real risk. The sharper part of the conversation is his case against open-weight frontier models. Once dangerous capability is distributed beyond lab control, he argues, the ability to shut down abuse disappears. Even if you don’t buy his full alarm level, the interview is useful because it frames the next AI policy fight less as “progress versus safety” than as “what kinds of control do we lose once capability escapes the perimeter?”
Read more: Yascha Mounk
Moving Up the Stack: Analytics Engineering in the Age of Agents
The Analytics Engineering Roundup · Apr 5 · Tags: AI, Work, Data
Jason Ganz makes the most grounded labor argument in this batch: agents are not some distant future for data teams, they’re already changing the shape of the work. He compares the current shift to the arrival of dbt itself — a moment when repetitive, artisanal SQL gave way to more automated and leveraged ways of working — and argues that analytics engineers now have to “move up the stack” again. The concrete signals are notable: Hex says more than half of new cells are now created by agents; dbt’s MCP server is growing quickly as shared context for AI systems; and companies like Ramp are already deploying agentic analysts. The bullish read is that automation frees people for higher-value work. The harder question, which the piece doesn’t duck, is what new responsibilities replace the old ones when AI starts handling much of the query-writing and model-building layer.
Read more: Analytics Engineering Roundup
In Japan, the robot isn’t coming for your job; it’s filling the one nobody wants
Author: Kate Park Published: Apr 5, 2026
The thesis here is that Japan’s push into physical AI is being driven less by futurist ambition than by demographic necessity. Park frames the country as an early real-world testbed for AI-powered robotics because shrinking labor supply is no longer a forecast, but an operating constraint. That changes the adoption story. Instead of robots being sold primarily as efficiency tools or labor replacements, they are being bought as continuity infrastructure for factories, warehouses, inspections, and other systems that increasingly cannot find enough people to run them.
The killer detail is the scale of the demographic pressure underneath the narrative: Japan’s population fell for a fourteenth straight year in 2024, and the working-age share is down to 59.6% of the total. Investors and operators in the piece repeatedly describe physical AI as a survival response, not a moonshot. The article’s pull is that Japan may show what happens when agentic robotics leaves the lab and meets a hard macro constraint. If these deployments keep moving from pilots to customer-paid operations, physical AI starts to look less like a speculative category and more like a template other aging economies may soon copy.
Read more: Source
Andy Jassy says AWS AI revenue hit a $15B run rate and Amazon’s internal chip business tops $20B
About Amazon / Techmeme · Apr 9 · Tags: AI, Infrastructure, Economics
Jassy’s annual letter is one of the clearest defenses yet of hyperscaler AI spending. He says AWS’s AI revenue run rate reached $15 billion in Q1, that Amazon’s internal chip business is already generating more than $20 billion a year, and that much of next year’s capex is effectively pre-sold through customer demand. The significance is not just the numbers. It is the confidence behind them. Amazon is arguing that the apparent imbalance between infrastructure cost and current revenue is not a warning sign, but the shape of a market still starved for capacity.
Read more: About Amazon
Google makes it easy to deepfake yourself
The Verge · Apr 9 · Tags: AI, Media, Trust
YouTube’s new Shorts avatar tool is a neat example of where the platform wars are heading next. Google is making it simple for creators to generate a digital self that can appear in videos or star in new prompt-generated clips, with visible labeling and provenance markers attached. The product logic is obvious: if synthetic video is coming anyway, better to domesticate it inside a controlled creator workflow than let the whole thing remain a wild market of impersonation and slop.
The deeper point is that this shifts deepfakes from a fringe abuse case into a mainstream product feature. Even with consent gates, watermarking, and limits on reuse, the cultural line has moved again. The question is no longer whether realistic self-cloning should exist, but how quickly audiences normalize synthetic presence as just another mode of publishing.
Read more: The Verge
The AI Problem Matrix
Tomasz Tunguz · Apr 9 · Tags: AI, Work, Economics
Tunguz offers a simple but useful way to think about which kinds of work AI will actually expand versus merely compress. His 2x2 sorts jobs by whether demand is effectively infinite and whether correctness can be verified in a closed loop. Software engineering lands in the most explosive quadrant, because more output creates more value and tests can increasingly verify the work. Bookkeeping and tax prep, by contrast, are bounded by the number of transactions and filing cycles a company actually has.
What makes the framework worth adding this late in the week is that it cuts through a lot of vague labor-market talk. Rather than asking whether AI will “replace jobs” in the abstract, it asks where automation becomes an economic engine and where it remains a utility. That is a much better lens for thinking about why some roles are about to scale violently while others just get cheaper.
Read more: Tomasz Tunguz
Y2K 2.0: The AI security reckoning
Anil Dash · Apr 10 · Tags: AI, Security, Infrastructure
Dash argues that AI-assisted vulnerability discovery is pushing software security into a Y2K-style emergency, except this time the problem is not a known bug with a fixed deadline but a broad collapse in the old assumptions about scarce offensive expertise. If coding agents can find and chain exploits across widely used software stacks at machine speed, then the whole security model of the modern software supply chain starts to wobble at once.
The piece is strongest when it turns from spectacle to operations. Open source maintainers are already drowning in AI slop and underfunded patch work; now they may also face a surge of real vulnerabilities arriving faster than institutions can responsibly triage them. Dash’s point is not that catastrophe is guaranteed, but that we are entering a period where “just keep your software updated” stops sounding like sufficient strategy.
Read more: Anil Dash
Regulation
OpenAI pauses Stargate UK, citing energy costs and regulation
Bloomberg / Techmeme · Apr 9 · Tags: Regulation, Infrastructure, AI
This is one of the clearest reality checks on the AI buildout so far. OpenAI has put Stargate UK on hold, blaming high energy costs and the local regulatory environment. That matters because it turns a lot of vague talk about power constraints and policy friction into an actual canceled or delayed project. The useful lesson is that AI infrastructure is no longer bottlenecked only by chips and capital. It is also constrained by grids, permits, and whether governments can create conditions that make the economics viable.
Read more: Bloomberg
OpenAI made economic proposals — here’s what DC thinks of them
The Verge · Apr 8 · Tags: Regulation, AI, Politics
Tina Nguyen gets at the problem with OpenAI’s surprisingly redistributionist industrial-policy paper: in Washington, the issue is not whether some of the proposals are interesting, but whether anyone believes the company advancing them. OpenAI’s document floated heavier taxation of AI-driven capital gains, a public wealth fund, stronger worker transitions, and even a four-day week funded by productivity gains. On the merits, several policy people told Nguyen the paper added useful ideas to the debate. But the company released it into a credibility hole of its own making.
That is what makes the piece worth reading. Against the backdrop of the new New Yorker reporting on Sam Altman’s history of saying one thing in public and another in lobbying fights, the policy paper starts to look less like a roadmap and more like a test of institutional trust. In other words, OpenAI is now asking Washington to prepare society for the consequences of AI abundance at the same moment many of its critics doubt it would accept meaningful constraints on its own power.
Read more: The Verge
Infrastructure
AI companies are building huge natural gas plants to power data centers. What could go wrong?
TechCrunch · Apr 3 · Tags: Infrastructure, Energy, AI
The AI energy FOMO is having grandkids. Microsoft + Chevron are building a 5GW natural gas plant in West Texas. Google + Crusoe: 933MW in North Texas. Meta added seven gas plants to its Louisiana Hyperion site, bringing it to 7.46GW — enough to power South Dakota. Gas turbine prices are up 195% vs 2019, with new orders backed up to 2028 and six-year delivery times. The bet: AI will need exponential power forever and natural gas is essential. The risk: behind-the-meter generation still drains a shared resource, gas production growth in the three biggest shale regions has slowed, and if prices spike, everyone from hospitals to factories pays the cost. A clear-eyed look at the infrastructure bubble within the AI bubble.
The Ridiculously Nerdy Intel Bet That Could Rake in Billions
Author: Lauren Goode Published: Apr 6, 2026
The thesis here is that the next choke point in AI may not be chip design itself, but packaging: the highly specialized process of stacking and connecting chiplets, memory, and interconnects into systems that can actually deliver frontier performance. Goode argues that Intel, long cast as the company that missed the last wave, thinks advanced packaging gives it a second shot at relevance. In a market where hyperscalers increasingly want custom silicon but still depend on a handful of manufacturing bottlenecks, packaging starts to look less like back-end assembly and more like strategic leverage.
The killer detail is Intel’s internal shift in expectations. Its CFO says packaging revenue forecasts have moved from the hundreds of millions to well north of $1 billion, while sources say Intel has been in active talks with Google and Amazon for packaging work. That matters because it suggests value in the AI stack is spreading into previously obscure layers of semiconductor production. If packaging becomes the new constraint, the next question is not just who designs the smartest chip, but who can physically turn those designs into scalable systems.
Read more: Source
The AI data center backlash is now impossible to ignore
Big Technology · Apr 10 · Tags: Infrastructure, Politics, Energy
Alex Kantrowitz shows that resistance to AI infrastructure is no longer a niche environmental complaint or a handful of local zoning fights. It is becoming a visible political constraint on the AI buildout itself. The piece ties together public anger over land use, power consumption, and local quality-of-life damage with more formal policy pushback, including moratorium efforts and broader political appetite to slow construction. In other words, AI’s physical footprint is starting to create a backlash large enough to shape the pace of deployment.
What makes it useful for this issue is that it extends the scarcity story beyond chips and electricity prices into legitimacy and consent. If residents, lawmakers, and local activists increasingly treat data centers as an extractive burden rather than a civic asset, then infrastructure becomes a governance problem as much as a financing or engineering one. That is exactly the kind of friction likely to matter in the next phase of the AI race.
Read more: Big Technology
Cloudflare made a WordPress for AI agents
The Verge · Apr 10 · Tags: Infrastructure, AI, Platform Power
Cloudflare’s EmDash launch is interesting not because it obviously dethrones WordPress, but because it shows what an agent-native publishing stack now looks like. The pitch is explicit: rebuild the CMS around structured content, an MCP server, TypeScript-first code, and sandboxed plugin execution so AI agents can manipulate the site cleanly instead of scraping around legacy HTML and PHP abstractions. In other words, the content management layer itself is being redesigned for machines as much as for humans.
The revealing part is the backlash. WordPress developers do not just object to the branding or the Cloudflare self-interest; they use EmDash as a mirror to argue that WordPress’s real problem is architectural debt. That makes this a useful infrastructure story for the issue. Agentic software is now starting to pressure old web platforms at the level of data models, permissions, and extensibility, not just at the level of interface gimmicks.
Read more: The Verge
Interview of the Week
The Many Faces of AI
Source: Keen On Published: Apr 9, 2026
“Doing science is like reading the mind of God.” — Demis Hassabis, quoted in The Infinity Machine
This week’s uncomplimentary New Yorker profile of OpenAI’s CEO is entitled “The Many Faces of Sam Altman.” But not all AI leaders are quite as many-faced as slippery Sam. Take, for example, Demis Hassabis, the North London-based co-founder and CEO of Google’s DeepMind. In his new biography, The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence, the British journalist Sebastian Mallaby argues that Hassabis is, in contrast, one-faced. And that face is not only decent, but informed by the enlightened ethics of Baruch Spinoza and Immanuel Kant.
Mallaby presents Hassabis as the anti-Altman. He has stayed at DeepMind for sixteen years, lives in the same London house, and drives a decade-old car. Rather than power, Google’s AI supremo seeks scientific enlightenment. Like Spinoza’s, his God is the master watchmaker of the universe. And so doing science, Hassabis explained to Mallaby in one of their many conversations in the backroom of a North London pub, is like reading the mind of God. Decent Demis. Honest Hassabis. Let’s just hope this modest and thoughtful tech leviathan can bring Kantian ethics to Silicon Valley’s sprint for artificial general intelligence.
It is a sharp fit for this issue because it reframes the AI contest as a fight not just over models or markets, but over legitimacy. Who seems trustworthy? Who appears consistent? Who is pursuing intelligence as a public good versus a private empire? Even if you do not buy the contrast in full, it is a clean way to think about the widening split between AI leadership styles.
Listen: Keen On
Startup of the Week
Spain’s Xoople raises $130 million Series B to map the Earth for AI
Source: TechCrunch Published: Apr 6, 2026
Xoople’s raise points to a durable thesis beneath the model race: ownership of high-quality, operational geospatial data can become a compounding advantage for enterprise AI products. Instead of competing on interface novelty, the company is betting on proprietary data infrastructure and integration into existing workflows.
It fits this issue as a startup pick because it captures where defensibility may accrue in the AI stack: not just model capability, but trusted domain data and distribution into real systems.
Read more: TechCrunch
Post of the Week
I replaced two WordPress sites with a reusable RSS platform in 24 hours
Source: X (@kteare) Published: Apr 5, 2026
A reminder for new readers. Each week, That Was The Week includes a collection of selected essays on critical issues in tech, startups, and venture capital.
I choose the articles based on their interest to me. The selections often include viewpoints I can't entirely agree with. I include them if they make me think or add to my knowledge. Click on the headline, the contents section link, or the ‘Read More’ link at the bottom of each piece to go to the original.
I express my point of view in the editorial and the weekly video.