Individual Freedom and Global Companies

Can Government Help Innovation?

A reminder for new readers. Each week, That Was The Week includes a collection of selected readings on critical issues in tech, startups, and venture capital. I choose the articles based on their interest to me. The selections often include viewpoints I can't entirely agree with. I include them if they provoke me to think. The articles are snippets of varying sizes, depending on the length of the original. Click on the headline, contents link, or the ‘More’ link at the bottom of each piece to go to the original. I express my point of view in the editorial and the weekly video below.

Hat Tip to this week’s creators: @RikeFranke, @danshipper, @Kyle_L_Wiggers, @Coldeweys, @ArjunKharpal, @jeffbeckervc, @ttunguz, @steph_palazzolo, @amir, @joannachiu, @DanMilmo, @mgsiegler, @mikeisaac, @natashanyt, @danbladen, @I_Am_NickBloom, @kingthor (https://mastodon.social/@Thorin), @legind, @fredericl, @usepylon, elonmusk, @stillgray, @EndWokeness, @HillaryClinton, @GavinNewsom

Contents

Editorial


Ulrike Franke’s essay that heads up this week’s ‘Essays of the Week’ starts with a striking sub-heading:

We are living through a change in the balance of power between states and the private sector. The implications for modern conflict are vast.

This seemingly inherent contradiction between massive global technology platforms, whether measured in users or revenues, and nation-states is much discussed in this newsletter. Whether or not the companies align with national foreign policy, they cannot be ignored.

Her piece charts the impact of Starlink, Amazon’s AWS in Ukraine, and drones from DJI, a Chinese company.

We are increasingly living in a world, in which not only the most important technological breakthroughs are happening in the private sector (think ChatGPT), but one in which capabilities immediately needed for warfare are in the hands of the private sector. The primary responsibility of the state – to keep its citizens secure – is now increasingly dependent on goods and services that only companies have. While these companies still need to be headquartered somewhere geographically, they increasingly consider themselves as international, not linked – and responsible – to one state.

This fact may help explain why Hillary Clinton and Gavin Newsom both call for state power to be used against tech execs in this week’s ‘Post of the Week.’

This bifurcation and associated tensions are likely to grow as nation-states' innovation capability falls far behind that of private companies.

Tomasz Tunguz's essay ‘Writing Software for Robots,’ Dan Shipper’s ‘Why Generalists Own the Future,’ and Jeff Becker’s ‘Enter the AI-Native Founder’ all indicate that innovation is accelerating and, in the process, changing what humans do and do not do.

The Ancient Greek philosopher Heraclitus observed that the world is in a constant state of change. This is not a deterministic change created by some non-human and inevitable “technology.” The change is the consequence of billions of human decisions and the application of human intelligence to problems that need solving. Everything that ‘is’ is in the process of changing. Nations are not an exception. Human work is also going to change. AI software will address specialism more than generalism.

In 2024, nations employ hundreds of millions of people, many of whom work to preserve or enforce the status quo. Many do fabulous work maintaining the rule of law or democratic choice. Others face off against positive change that benefits humanity.

Changes put stress on systems every day, and resistance to change is expected. But in 2024, the focus of resistance to change has increasingly taken the form of demonizing technology, especially AI, or demonizing the individual freedoms that technology permits. Joanna Chiu discussed the Chinese government’s efforts to block ChatGPT, which is a great example. But Hillary Clinton’s call to jail social media users for “disinformation” is, too.

These new words in our vocabulary related to speech are worrying indications of a desire to restrict individual freedom. ‘Misinformation’ and ‘disinformation’ started as terms of abuse, used to weaponize debate with a kind of bullying mentality. Even ‘lies’ is used in that way. Now, we are moving to a discussion of criminalizing these things.

Actual misinformation and disinformation, and yes, lies, can be combated by debate and facts. Resorting to criminalizing speech is an autocratic move that will bring about not a better human future but a controlled and restricted one.

United Airlines' embrace of Starlink is a real signpost of what we can expect. Norway’s announcement that well over 90% of all new car registrations are electric vehicles is another. Discord’s implementation of end-to-end encryption also indicates the direction in which the change will occur.

The goal of all human progress is to improve life. 2024 seems to mark a moment when those improvements can accelerate and reach the globe. Most of them can benefit everybody. AI in education is a great example. However, fear-driven national bureaucracies can and will seek to slow down or stop many of the resulting individual freedoms.

Companies can be flawed, as can their leaders, but using science to deliver gains seems tied to corporations more than nations. The tail and the dog may be reversing.

Essays of the Week


How companies go to war

  • THEMES: GEOPOLITICS, WAR

We are living through a change in the balance of power between states and the private sector. The implications for modern conflict are vast.

Launch of SpaceX's Starlink. Credit: Brandon Moser / Alamy Stock Photo

On 24 February 2022, Russia invaded Ukraine. Missiles fell on cities all over the country. More than 100,000 Russian soldiers, with tanks and armoured vehicles, crossed the border, starting the largest war on European soil since the Second World War.

A few hours before Russian tanks began rolling into Ukraine, alarm bells began ringing in Microsoft’s Threat Intelligence Center. Microsoft had detected a new malware, aimed at Ukraine’s government ministries and financial institutions. The wiper malware – software developed to erase data on infected machines – was called FoxBlade. Microsoft worked quickly with the Ukrainian government, providing technical advice on how to fight the cyberattack. Within three hours, defences against the malware had been developed.

On the same day, as the invasion was ongoing, members of the Ukrainian government met with representatives of Amazon Web Services. The discussion was about bringing Amazon ‘Snowball devices’ – suitcase-sized data storage units in shock-proof gray containers – into Ukraine to help secure, store, and transfer data to the cloud, so that the physical destruction of hardware and servers within Ukraine would not destroy the data. Together, Ukraine’s and Amazon’s representatives sketched out a list of the data most essential to the Ukrainian state: the population registry, land and property ownership records, tax payment and bank records, education registries, and more.

Two days later, the first set of Snowballs arrived in Ukraine via Poland. Over the next weeks, these Snowball devices became the foundation for the effort to preserve Ukraine’s data. Ukraine’s largest bank, serving 40 per cent of the Ukrainian population, moved all its operations to the cloud. By December 2022, over 10 petabytes of data had been moved to the cloud – equivalent to several times the content of the US Library of Congress.

While Microsoft had helped thwart the FoxBlade malware attack, on 24 February another cyberattack disrupted broadband satellite internet access throughout Ukraine. Modems that communicated with US company Viasat’s satellite network went offline. The attack had significant spillovers into other areas and countries, with thousands of wind turbines in Germany going offline, but, most importantly, internet access for many people in Ukraine was cut off. When it became clear that internet connectivity would be disrupted for a prolonged period, Mykhailo Fedorov, the Ukrainian Vice Prime Minister, tweeted at SpaceX CEO Elon Musk: ‘@elonmusk while you try to colonize Mars — Russia try to occupy Ukraine! While your rockets successfully land from space — Russian rockets attack Ukrainian civil people! We ask you to provide Ukraine with Starlink stations.’

Starlink stations are internet terminals that connect to the thousands of satellites that SpaceX has put into low earth orbit over the last few years. Just hours later, at 11:33pm on 26 February, Elon Musk answered, again via a tweet: ‘Starlink service is now active in Ukraine. More terminals en route.’ Over 30,000 Starlink terminals were delivered to Ukraine in the first 15 months of the war, providing secure communications to the military as well as the government and the public. Starlink has become the backbone of Ukraine’s military communications. Ukrainian forces use it to live-stream drone feeds, correct artillery fire and communicate internally.

Microsoft. Amazon. SpaceX.

In these three stories about the first days of the war, the players – indeed, the heroes – are not states or governments. The protagonists of the stories are companies. Companies providing goods and services that are vitally important to Ukraine’s survival and its war-fighting efforts.

None of them are military companies. This is not a story about Raytheon selling missile defences to the Ukrainian government, Rheinmetall delivering tanks, or Lockheed Martin producing military equipment for the US to give to Ukraine. Amazon, Microsoft, Google, SpaceX, DJI, and many other companies operating in Ukraine primarily or exclusively produce for the civilian market. Also, none of the firms were founded or are headquartered in Ukraine or Russia, the states at war. And yet, these private, civilian companies are playing a crucial role in this war.

Civilian companies go to war. And the balance of power between the private sector and the state is shifting in a fundamental way as a result. Ukrainian soldiers and officials have testified as to the importance of these services in many instances: ‘I’d say the effectiveness of our work without Starlink would drop something like 60 per cent or more,’ a company commander of a Ukrainian mechanised brigade told the Washington Post. Mykhailo Fedorov, Ukraine’s Minister of Digital Transformation, noted that ‘cloud services basically helped Ukraine survive as a state’. And a Ukrainian platoon commander stated: ‘Without Starlink, we would have been losing the war already.’ One can find hundreds of these kinds of quotes, underlining the importance of satellite-based internet connectivity, cloud services, and company-provided cyber defences (Google, in particular, is active in the last area).

It is not just a story about software, about the cyber realm, about the internet, about the areas that seem removed from the fighting and the frontline. Civilian companies are not just providing the cloud, but also what is in the clouds. Drones have played a crucial role in this war.

..More

Why Generalists Own the Future

In the age of AI, it’s better to know a little about a lot than a lot about a little

DAN SHIPPER

September 6, 2024


A common refrain I hear is that in the age of AI, you don’t want to be a “jack of all trades and a master of none.”

For example, my good friend (and former Every writer) Nat Eliason recently argued:

“Trying to be a generalist is the worst professional mistake you can make right now. Everyone in the world is getting access to basic competence in every white-collar skill. Your ‘skill stack’ will cost $30/month for anyone to use in 3-5 years.”

He makes a reasonable point. If we think of a generalist as someone with broad, basic competence in a wide variety of domains, then in the age of AI, being a generalist is a risky career move. A language model is going to beat your shallow expertise any day of the week.

But I think knowing a little bit about a lot is only a small part of what it means to be a generalist. And that if you look at who generalists are—and at the kind of mindset that drives a person who knows a little about a lot—you’ll come to a very different conclusion: In the age of AI, generalists own the future.

What generalists are

Generalists are usually curious people who like to hop around from domain to domain. They enjoy figuring things out, especially in areas that are uncertain or new. They’re good at solving problems that domain experts struggle with, because they’re able to bring bits of knowledge from diverse fields together.

As Nat notes, because of their propensity to hop domains, generalists tend to possess a wide set of shallow skills. But measuring them against their rudimentary coding abilities or their working knowledge of French baking technique misses their true advantage: the ability to adapt to new situations, and the desire to do so.

Where generalists thrive

In Range: How Generalists Triumph in a Specialized World, David Epstein argues that generalists are especially good in what he calls “wicked” environments: “In wicked domains, the rules of the game are often unclear or incomplete, there may or may not be repetitive patterns and they may not be obvious, and feedback is often delayed, inaccurate, or both.”

According to Epstein, this is where generalists thrive. They are able to use their diverse experiences to attack problems in unique ways and see solutions that no one else can see.

He contrasts wicked environments with what he calls “kind” environments, where feedback is immediate and there are clear, repetitive patterns that lead to success. These are the domains where the experts tend to shine. They can apply their specific expertise to solving problems over and over again, because they’ve seen those problems in some form before.

Kind environments, interestingly enough, are also where LLMs thrive. A few weeks ago, I likened large language models like GPT-4o and Claude 3.5 Sonnet to having “10,000 Ph.D.’s available at your fingertips.” They are quite proficient at most areas of specialist knowledge in the world. But they are still not very good at figuring out entirely novel problems.

This view of LLMs suggests the reverse of Nat’s thesis: trouble for experts and opportunities for generalists. LLMs are weak in wicked domains and strong in kind ones where experts thrive. If you’re an expert navigating a novel problem, an LLM won’t imagine a new solution for you. But they become a gift for generalists, who can use them to get up to speed in new domains much more quickly, and resurface and apply knowledge from other fields easily. Generalists can use their adaptability and imagination to work through any “wickedness” that a language model can’t handle on its own.

In an allocation economy, where you’re compensated not based on what you know, but on your ability to deploy intelligence, language models aren’t a threat to generalists—they are a potent weapon.

The past and future of generalists

Generalists have been out of style since at least the time of Adam Smith, who popularized the notion of specialization and division of labor as a driver of economic growth in the 18th century. In fact, we probably have to go back to ancient Greece to find an example of a society where generalists were the norm instead of the exception.

Ancient Athens was a direct democracy. Citizens, as a rule, participated in all aspects of civic life, from politics to warfare. Any citizen could be judge, jury member, senator, and soldier. In The Greeks, the classicist H.D.F. Kitto writes that Athenian society was driven by the ideal that in Athens, “a man owed it to himself, as well as to the [city], to be everything in turn.” This ethos, Kitto argues, implied “a respect for the wholeness or the oneness of life, and a consequent dislike of specialization.” In short, Athens was a society of well-rounded individuals—or generalists. (Of course, it must be noted that citizenship was limited to adult males. Athens was not a utopia.)

But as time passed, and life in Athens became more economically, socially, politically, and militarily advanced, this society of generalists began to fissure. Progress equals complexity, and complexity requires specialization. As Kitto explains it, “If one man in his time is to play all the parts, these parts must not be too difficult for the ordinary man to learn. And this is where the polis broke down.”

As AI becomes more capable of specialized tasks, we might see a return to the Greek ideal of the well-rounded citizen. This time, though, we can do it in the context of an advanced and complex economy because a citizen armed with AI will be far more capable of playing many roles than one without.

..Lots More

This Week in AI: Why OpenAI’s o1 changes the AI regulation game

Kyle Wiggers, Devin Coldewey

10:05 AM PDT • September 18, 2024

People walking in a maze shaped as a brain
Image Credits: Hiroshi Watanabe / Getty Images

Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

It’s been just a few days since OpenAI revealed its latest flagship generative model, o1, to the world. Marketed as a “reasoning” model, o1 essentially takes longer to “think” about questions before answering them, breaking down problems and checking its own answers.

There’s a great many things o1 can’t do well — and OpenAI itself admits this. But on some tasks, like physics and math, o1 excels despite not necessarily having more parameters than OpenAI’s previous top-performing model, GPT-4o. (In AI and machine learning, “parameters,” usually in the billions, roughly correspond to a model’s problem-solving skills.)

And this has implications for AI regulation.

California’s proposed bill SB 1047, for example, imposes safety requirements on AI models that either cost over $100 million to develop or were trained using compute power beyond a certain threshold. Models like o1, however, demonstrate that scaling up training compute isn’t the only way to improve a model’s performance.

In a post on X, Nvidia research manager Jim Fan posited that future AI systems may rely on small, easier-to-train “reasoning cores” as opposed to the training-intensive architectures (e.g., Meta’s Llama 405B) that’ve been the trend lately. Recent academic studies, he notes, have shown that small models like o1 can greatly outperform large models given more time to noodle on questions.

So was it short-sighted for policymakers to tie AI regulatory measures to compute? Yes, says Sara Hooker, head of AI startup Cohere’s research lab, in an interview with TechCrunch:

[o1] kind of points out how incomplete a viewpoint this is, using model size as a proxy for risk. It doesn’t take into account everything you can do with inference or running a model. For me, it’s a combination of bad science combined with policies that put the emphasis on not the current risks that we see in the world now, but on future risks.

Now, does that mean legislators should rip AI bills up from their foundations and start over? No. Many were written to be easily amendable, under the assumption that AI would evolve far beyond their enactment. California’s bill, for instance, would give the state’s Government Operations Agency the authority to redefine the compute thresholds that trigger the law’s safety requirements.

The admittedly tricky part will be figuring out which metric could be a better proxy for risk than training compute. Like so many other aspects of AI regulation, it’s something to ponder as bills around the U.S. — and world — march toward passage.

..More

Google wins court challenge to the EU’s $1.7 billion antitrust fine over ad product

PUBLISHED WED, SEP 18 2024 4:06 AM EDT UPDATED WED, SEP 18 2024 9:42 AM EDT

Arjun Kharpal @ARJUNKHARPAL

KEY POINTS

  • The European Union’s second-highest court on Wednesday said a 1.5 billion euro ($1.67 billion) antitrust fine imposed on Google by regulators should be annulled.

  • The case stems from 2019 when the European Commission, the EU’s executive arm, said Google had abused its market dominance in relation to a product called AdSense for Search.

The European Union’s second-highest court on Wednesday said a 1.5 billion euro ($1.7 billion) fine imposed on Google by regulators should be annulled, siding with the U.S. tech giant after it challenged the ruling.

The case stems from 2019 when the European Commission, the EU’s executive arm, said Alphabet-owned Google had abused its market dominance in relation to a product called AdSense for Search. This product allowed website owners to deliver ads into the search results on their own pages.

Google acts as an intermediary allowing advertisers to serve ads via search on third-party websites.

But the commission alleged that Google abused its market dominance by imposing a number of restrictive clauses in contracts with third-party websites, which ultimately prevented rivals from placing their search ads on these websites.

The commission fined Google 1.49 billion euros at the time. Google appealed, sending the case to the EU’s General Court.

The General Court said Wednesday that it “upholds the majority of the findings” but “annuls the decision by which the Commission imposed a fine of” nearly 1.5 billion euros.

The court added that the commission “failed to take into consideration all the relevant circumstances in its assessment of the duration of the contract clauses” that it had deemed abusive.

A Google spokesperson told CNBC that it would review the full decision closely.

“This case is about a very narrow subset of text-only search ads placed on a limited number of publishers’ websites. We made changes to our contracts in 2016 to remove the relevant provisions, even before the Commission’s decision. We are pleased that the court has recognized errors in the original decision and annulled the fine,” the spokesperson said.

A spokesperson for the commission said it takes note of the judgement and will reflect on the possible next steps.

The commission could appeal this decision, which would send it up to the European Court of Justice, the EU’s top court.

There has been a slew of court cases involving the EU and U.S. tech companies reaching their conclusions recently.

This month, the ECJ upheld a 2.4 billion euro fine imposed on Google for abusing its dominant position by favoring its own shopping comparison service. And the same court ruled that Apple must pay 13 billion euros in back taxes to Ireland, ending a decade-long case.

Norway: electric cars outnumber petrol for first time in ‘historic milestone’

Nordic country, paradoxically a major oil producer, has set target for all new cars sold to be zero emission

Agence France-Presse in Oslo

Tue 17 Sep 2024 08.39 EDT

Electric cars now outnumber petrol cars in Norway for the first time, an industry organisation has said, a world first that puts the country on track towards taking fossil fuel vehicles off the road.

Of the 2.8m private cars registered in the Nordic country, 754,303 are all-electric, against 753,905 that run on petrol, the Norwegian road federation (OFV) said in a statement.

Diesel models remain the most numerous at just under 1m, but their sales are falling rapidly.

“This is historic. A milestone few saw coming 10 years ago,” said OFV director Øyvind Solberg Thorsen.

“The electrification of the fleet of passenger cars is going quickly, and Norway is thereby rapidly moving towards becoming the first country in the world with a passenger car fleet dominated by electric cars.”

Norway, paradoxically a major oil and gas producer, has set a target for all new cars sold to be zero-emission vehicles – mostly EVs, since the share of hydrogen cars is so small – by 2025, 10 years ahead of the EU’s goal.

In August, all-electric vehicles made up a record 94.3% of new car registrations in Norway, boosted by sales of the Tesla Model Y.

In a bid to electrify road transport to help meet Norway’s climate commitments, Norwegian authorities have offered generous tax rebates on EVs, making them competitively priced compared with petrol, diesel and hybrid cars.

Norway’s EV success is in sharp contrast to struggles seen elsewhere in Europe.

..More

Enter The AI-Native Founder

September 18, 2024

By Jeff Becker 

Evidence suggests that there’s a new breed of founder in tech.

It’s difficult to source the origins of what we might label an AI company; AI has been around for decades. In the past 10 years, there has been an increasing creation rate of companies for which AI is at the core of their technology. Since 2017 the number of such companies in the U.S. has doubled. Concurrently with this shift, there has been the advent of a new type of entrepreneur.

Jeff Becker, general partner at Antler

Today it’s the AI-native founder who wins the race. AI-natives are technically fluent in artificial intelligence and socially adept at navigating its impact. They are born with the internet in their pockets. The ability to learn anything at any moment is an expectation, rather than a desire.

Together with AI, there’s an entirely new method of company-building emerging; instead of building an AI company, these founders are building their companies with AI.

AI buildup

These companies are not building their own models. Instead, they are standing on the shoulders of giants. They are architecting systems that are more efficient and scalable than their incumbent peers — similar to the established tech players that rode the wave of the cloud and mobile supercycles. This is true of all new generations of founders who receive the technological baton from their predecessors.

AI-nativity is such a radical departure from the old ways of constructing companies that it has the potential to completely change the way we work, hire and grow.

Why have an SDR team mapped 2:1 to their AEs when you can use something like Valley and achieve the same efficiency at 1:1, saving 90% compared to the additional FTE? Or, if you have the cash, why not maintain 2:1 but gain the efficiency of 4:1, where every seller’s calendar is filled to maximum capacity? Or, when it comes to engineering, why write code without GitHub Copilot? And of course, the list of AI tools that create leverage for founders is expanding every single day.

The why culture

These new founders are embedding this culture of “why” into their businesses. They’re hiring for it too, with AI-related job postings on the rise again. AI-nativity is becoming a prerequisite to moving quickly and capitalizing on what is always an infinitely long product roadmap for the companies with the most long-term potential.

The rise of AI-native founders is also unlocking new possibilities in venture capital.

For example, traditional hard-to-back industries will start to become fair game as their operating economics improve. Consider Harvey AI, a California legal-tech firm. Pre-AI-nativity, backing a services business such as a law firm wouldn’t make sense from a VC perspective. But with AI augmenting the work, the overhead and margins make it look more attractive and scalable, and suddenly it becomes a high-leverage business. We are already seeing that trend across the S&P 500 with fewer employees needed to generate increasing amounts of revenue.

..More

Writing Software for Robots

In a few years, most feature flags & linting tools & other developer tools will be implemented predominantly by robots. Already, 50% of code at Microsoft & other large internet companies is written by AI.

This idea expands far beyond developer tools. Robots will manage sales development, paralegal work, medical intake, & many other tasks.

The tools of the last 15 years have been built to drive productivity. But in the future, these tools will drive robotic rather than human productivity. What will it mean for software vendors?

It’s still early, but here are some guesses.

First, integration with large language models will be essential. The better a feature-flagging service’s integration with a large language model, the greater the usage of the platform &, presuming the product charges as a function of usage, the better the net dollar retention & the ultimate satisfaction of the customer.

Second, documentation will need to change. Robots can ingest documentation like Neo in The Matrix: read it once & fly a helicopter - if the instructions are well structured & tested. In addition to human documentation, I wonder if software vendors will publish documentation bespoke for large language models that improves accuracy & performance.

Third, reporting, evaluation, & testing will become the dominant human UX for many of these tools. If a large language model is sending thousands of emails or implementing feature flags across a huge code base, existing ways of working cannot give a team the ability to understand all of the implications. So reporting, testing, & evals will be the dominant human UX.
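
As a concrete illustration of that shift, here is a minimal sketch of an eval-style report in Python. Everything in it is hypothetical (the flag names, the checks); the point is only that the human reviews aggregate results rather than each individual change an AI made.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    name: str
    check: Callable[[], bool]  # returns True if the AI-made change behaves

def run_evals(cases: list[EvalCase]) -> None:
    # Summarize outcomes; the human's attention goes to failures, not diffs.
    results = [(case.name, case.check()) for case in cases]
    passed = sum(ok for _, ok in results)
    print(f"{passed}/{len(results)} checks passed")
    for name, ok in results:
        if not ok:
            print(f"  FAIL: {name}")

# Hypothetical state after an AI agent rolled out feature flags:
flags = {"new_checkout": True, "beta_search": False}
run_evals([
    EvalCase("new_checkout enabled", lambda: flags["new_checkout"]),
    EvalCase("beta_search still gated", lambda: not flags["beta_search"]),
])
```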

The reshaping of tools for AI parallels the advances within manufacturing robots. Many of the tool companies that serviced GM & Ford started out selling tools to humans. When robotic labor increased in prevalence, the tools needed to be reimagined for a robotic, rather than a human, arm.

What are some other implications when AI uses software on our behalf that you foresee?

Video of the Week


AI of the Week


Why OpenAI’s Reasoning Model Is Special

image via Grok

By Stephanie Palazzolo and Amir Efrati

Sep 16, 2024, 7:00am PDT

OpenAI finally released its Strawberry reasoning artificial intelligence last week—or rather, an initial, less-complete version known as o1-preview. We first reported on the breakthrough behind Strawberry 10 months ago, when it was still called Q*, and more recently told you what was coming, though we expected a more inspiring name than o1-preview!

The reasoning model differs from prior large language models like GPT-4 in one key way: when training the reasoning model, its capabilities grow at a higher rate the more computing power you give it, thanks to the way it makes sense of, or “thinks” about, data it has already reviewed. In essence, it creates new data, or thoughts, without needing as much information as prior models did.

The same thing happens when the reasoning model is answering questions from OpenAI customers, including ChatGPT users. When o1-preview spends more time, or compute power, to answer a question, the answers improve at a higher rate compared to other LLMs.

This type of improvement is known as log-linear compute scaling, in AI parlance.
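
To make “log-linear” concrete, here is a toy sketch with made-up coefficients (not OpenAI’s numbers): accuracy rises linearly in the logarithm of compute, so each doubling of test-time compute buys roughly the same absolute gain.

```python
import math

def accuracy(compute: float, base: float = 0.30, slope: float = 0.06) -> float:
    """Toy log-linear scaling curve; both coefficients are invented."""
    return base + slope * math.log10(compute)

for c in [1, 2, 4, 8, 16, 32]:
    print(f"compute x{c:>2}: accuracy ~ {accuracy(c):.3f}")
# Each doubling adds ~0.018 (= 0.06 * log10(2)): constant gains per doubling,
# which is why letting the model "think" longer keeps paying off.
```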

OpenAI leaders themselves commented on these improvements in different ways. Boris Power, OpenAI’s head of applied research, attempted to lower expectations by saying on X that the new release is “not a mass product that just works and unlocks new value for everyone effortlessly.” CEO Sam Altman and Mark Chen, the company’s VP of frontier-model research, reacted with pride and provocation, respectively.

In some ways, the “new value” that Power was talking about is plain to see: o1-preview is better at solving complex math and coding problems and asking users clarifying questions when it needs more details.

Among the highest praise came from Terence Tao, a preeminent mathematician and professor at UCLA. He said o1-preview was like “trying to advise a mediocre, but not completely incompetent, graduate student. However, this was an improvement over previous models, whose capability was closer to an actually incompetent graduate student.” 

He could see future models acting like a competent grad student, “at which point I could see this tool being of significant use in research level tasks.” 

That’s a big deal.

Some existing OpenAI customers also had compliments. Insurance firm Oscar Health, for instance, said o1-preview would help handle complex paperwork and health rules to determine the cost of certain medical services like newborn delivery, identify fraud or waste in medical bills, and extract data from medical record charts. Oscar’s post might be partly about marketing its AI products, but the post had supporting evidence.

Speaking of health, o1-preview also appeared to score well in a test in which AI models try to diagnose patients in a simulated medical clinic.

Where o1 Falls Short

In other ways, o1-preview falls short. One early tester told me that it struggles with long questions, meaning that the questions have to be broken down into several parts. OpenAI itself has admitted that o1-preview is on par with, or even worse than, GPT-4o in some cases, such as writing or editing text. And o1-preview still gets stumped by some simple puzzles that any middle schooler could solve.

The new model and its “mini” version are also missing a number of features you’d expect in a product. Unlike some of OpenAI’s other models, the new models are text-only for now, meaning that users can’t upload pictures and files to ask questions about them. ChatGPT subscribers are limited by weekly rate limits of 30 messages for the o1-preview model and 50 for the mini version—an amount that you could easily blow through in an hour or two if you’re not careful. (The company later said it was extending the limits.)

And it’s expensive. For developers who use the o1-preview model through OpenAI’s application programming interface, the new model is more than six times more expensive than OpenAI’s GPT-4o model, its prior flagship LLM. So o1-preview is not the most financially sound option for every developer.

All this suggests that the o1-preview release may have been rushed, either because of the company’s ongoing fundraising efforts or because of growing pressure from competitors. We should also point out that there’s a fuller, better version of o1-preview (it’s just called o1) that OpenAI didn’t launch but for which it still published evaluation results.

OpenAI will have to put in extra work to make sure developers understand how to use the new models effectively. For instance, one founder of a legal AI startup I spoke with said that they don’t use o1-preview to answer every question from customers, even if it is better at reasoning. Instead, the founder uses the model to decide which smaller LLM should handle each step in the process of drafting various legal documents. (You can analogize this to a manager delegating tasks to their subordinates.) 

The founder said they also use o1-preview for tasks that might have previously taken lawyers days to complete, so customers aren’t put off by the model’s longer response times.
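
The delegation pattern the founder describes might look something like the following sketch. All names and the routing table are hypothetical, and the function bodies are stubs standing in for real model API calls; this is an illustration of the pattern, not the startup’s actual stack.

```python
# Hypothetical "manager" pattern: a strong reasoning model plans the steps
# and assigns each one to a cheaper, faster model.

def plan_steps(task: str) -> list[dict]:
    # In practice, this would prompt a reasoning model such as o1-preview
    # to decompose the task and pick a model for each step. Stubbed here.
    return [
        {"step": f"outline {task}", "model": "small-fast-llm"},
        {"step": f"draft {task}", "model": "mid-size-llm"},
        {"step": f"check citations in {task}", "model": "small-fast-llm"},
    ]

def run_step(step: dict) -> str:
    # Stub for dispatching the step to its assigned smaller model.
    return f"[{step['model']}] done: {step['step']}"

for step in plan_steps("a non-disclosure agreement"):
    print(run_step(step))
```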

..Lots More

Calif. Gov. Newsom Says He’s Worried About AI Bill

By Laura Mandaro

Source: The Information

Calif. Gov. Gavin Newsom said he is worried about the “chilling effect” SB 1047, a state bill regulating artificial intelligence developers, would have on the state’s AI industry. Newsom has until the end of the month to sign or veto the bill, and he hasn’t yet said what he plans to do.

The bill, SB 1047, would penalize the makers of large AI models if they’re found to cause harm; it has been strongly contested by the tech industry as well as some national Democrats.

Newsom made the comments while on stage with Salesforce CEO Marc Benioff at the software company’s annual Dreamforce event in San Francisco on Monday. While on stage, he also signed three bills that seek to prevent AI-powered election interference, including one that requires social media sites to label or remove deepfake content related to elections. Earlier in the day, he signed legislation preventing entertainment companies from using AI-generated likenesses of actors without their consent.

..More

New data reveals exactly when the Chinese government blocked ChatGPT and other AI sites 

Rest of World received exclusive access to a platform that tracks patterns and timing of Chinese online censorship.

By JOANNA CHIU

18 SEPTEMBER 2024

  • GFWeb provides precise tracking of when Chinese authorities block domains, including sites like ChatGPT, Hugging Face, and Perplexity.

  • Chinese censors appear focused on blocking AI tools for content generation, such as video and image editing applications.

  • Censorship spikes might be linked to significant events, such as China’s introduction of AI regulations.

A few months after OpenAI launched ChatGPT in November 2022, the service began to take off in China, with citizens using it to satirize pro-government figures and for homework help. Because OpenAI restricted access for China-based users, local developers created mirror sites to facilitate access to the service. But the ChatGPT boom in China was short-lived. The Chinese government blocked ChatGPT’s domain on March 2, 2023, new research has found.

Historically, tracking when exactly Chinese authorities blocked specific domains was difficult because researchers had to choose to test individual domains. But according to a newly launched platform, GFWeb, which granted Rest of World exclusive first access, the same month that the Chinese government blocked ChatGPT for the first time, authorities also blocked dozens of alternative chatbots and websites that use ChatGPT’s technology. Rest of World also discovered that Hugging Face, the popular machine-learning platform, was blocked in China months before the company reported issues.

GFWeb is now available to the public for free and continuously tests millions of websites from both inside and outside China to identify when exactly they are no longer available to users in China. It detects which sites are blocked by leveraging the Great Firewall’s unique filtering behaviors. The service is primarily funded by the nonprofit Open Technology Fund and received research input from faculty at the University of British Columbia, University of Toronto, University of Chicago, and Stony Brook University.

“This system not only enhances our ability to track the timing and scope of censorship events but also helps identify patterns and shifts in the strategies employed by the Great Firewall,” Nguyen Phong Hoang, the platform’s developer and a University of British Columbia computer scientist, told Rest of World. “I hope GFWeb can empower researchers, policymakers, and the general public to gain deeper insights into the evolving landscape of China censorship.” 

It was previously unclear when Hugging Face was first blocked in China. In October 2023, the company reported “regrettable accessibility issues” in the country. In fact, GFWeb data suggests that Huggingface.co was actually blocked on May 7, 2023, months before the company identified the issue. 

Data from GFWeb allows observers to spot long-term trends. For instance, it shows that Chinese authorities are particularly concerned with AI tools used for content generation. Besides websites that appear to use ChatGPT’s technology, the majority of blocked AI websites include tools that assist with video and image editing. That includes services like OpenArt and VoiceDub. 

This suggests the Chinese Communist Party is “quite sensitive to content-generation platforms not controlled by the regime. That’s the main threat,” Jeffrey Ding, assistant professor of political science at George Washington University and a leading expert on China’s technological capabilities, told Rest of World.

..More

Google says UK risks being ‘left behind’ in AI race without more data centres

Exclusive: Tech company wants Labour to relax laws that prevent AI models being ‘trained’ on copyrighted materials

Dan Milmo Global technology editor

Thu 19 Sep 2024 14.08 EDT

Google has said that Britain risks being left behind in the global artificial intelligence race unless the government moves quickly to build more datacentres and let tech companies use copyrighted work in their AI models.

The company pointed to research showing that the UK is ranked seventh on a global AI readiness index for data and infrastructure, and called for a number of policy changes.

Google’s UK managing director, Debbie Weinstein, said that the government “sees the opportunity” in AI but needs to introduce more policies boosting its deployment.

“We have a lot of advantages and a lot of history of leadership in this space, but if we do not take proactive action, there is a risk that we will be left behind,” she said.

AI is undergoing a global investment boom after breakthroughs in the technology led by the release of the ChatGPT chatbot, from the US company OpenAI, and other companies like Google, which has produced a powerful AI model called Gemini.

However, government-backed AI projects have been early victims of cost-cutting by Keir Starmer’s government. In August, Labour confirmed it would not push ahead with unfunded commitments of £800m for the creation of an exascale supercomputer – considered key infrastructure for AI research – and a further £500m for the AI Research Resource, which funds computing power for AI.

Asked about the supercomputer decision, Weinstein referred to the government’s forthcoming “AI action plan” under the tech entrepreneur Matt Clifford. “We’re hopeful to see a really comprehensive view around what are the investments that we need to make in the UK,” she said.

Google has outlined its UK policy suggestions in a document called “unlocking the UK’s AI potential”, which will be released this week, in which it recommends the creation of a “national research cloud”, or a publicly funded mechanism for providing computing power and data – two key factors in building the AI models behind products such as ChatGPT – to startups and academics.

The report adds that the UK “struggles to compete with other countries for data centre investment” and welcomes Labour’s commitment to build more of the centres as it prepares to introduce a new planning and infrastructure bill.

Other recommendations in the Google report include setting up a national skills service to help the workforce adapt to AI, and introducing the technology more widely into public services.

It also calls for changes to UK copyright laws after the abandonment this year of attempts to draft a new code for using copyrighted material to train AI models.

Data from copyright-protected material such as news articles and academic papers is seen as vital for models that underpin tools like chatbots, which are “trained” on billions of words that allow them to understand text-based prompts and predict the right response to them. The same concerns apply to models that make music or images.

The Google document calls for the relaxation of restrictions on a practice known as text and data mining (TDM), where copying of copyrighted work is allowed for non-commercial purposes such as academic research.

The Conservative government dropped plans to allow TDM for commercial purposes in 2024, amid deep concerns from the creative industries and news publishers.

“The unresolved copyright issue is a block to development, and a way to unblock that, obviously, from Google’s perspective, is to go back to where I think the government was in 2023 which was TDM being allowed for commercial use,” said Weinstein.

The report also calls for “pro-innovation” regulation, signalling support for the regulatory setup that is in place, where oversight of AI is managed by various public regulators including the Competition and Markets Authority and the Information Commissioner’s Office.

“We would encourage the government to continue looking first to the existing regulation, as opposed to creating new regulation,” said Weinstein.

UK ministers are in the process of drafting a consultation on an AI bill that is reportedly focused on making a voluntary AI model testing agreement between the UK government and tech companies legally binding, as well as making the UK’s AI Safety Institute an arm’s length government body.

..More

News of the Week


Evan Spiegel's Spectacle

Snap's presentation was arguably more impressive than the product...

Yesterday, I wrote up some initial thoughts on Snap's latest stab at their Spectacles product. Unlike the first "toy" version eight years ago, which was really just a camera, we're now fully in AR mode, mixed with some AI, naturally. I based those thoughts on some first-hand accounts of using them. But actually, that's a sort of misleading way to frame these because they're purposefully not meant for consumers, but instead for developers. And I think that's the correct way to do this in 2024 – and what Apple should have done with the first Vision Pro – because the tech just isn't quite here yet. It's closer than ever, but we still have a ways to go – as such write-ups make clear.

I also finally watched Evan Spiegel's keynote address where he showed off the latest Spectacles. And that's arguably far more interesting.

First and foremost, Spiegel is very good at this. I've praised him and Snap before in this regard, but he's natural in a way you just don't see anymore with the loss of you-know-who. I won't say he's the heir apparent to Steve Jobs, at least when it comes to such presentations at keynotes, but I won't not say that either.

Again, I've felt that way before, but that was during staged, pre-recorded events during the pandemic. This event was live. Sure, it was a friendly audience (as were all the audiences during SteveNotes), but there's something about the way Spiegel talks about this technology. And the fact that he's not only not afraid to demo it – live – he's eager to do so. You can tell he's excited about what they've built. Again, Jobs used to convey the same enthusiasm, which seems impossible to fake.

And I mean, these new Snap Spectacles are interesting, but they're not the coolest-looking glasses in the world. They're not like the first version in playful colors. These are big and blocky. Sure, that's a style, but as Spiegel goes on to showcase, it's a frame clearly borne out of the technology inside. Still, he owns the look. And again, these aren't meant for consumers right now, so I think that look is fine. The key is that they fit on your face without some extra puck and tether, as Spiegel notes – and which is likely a shot not just at the Vision Pro, but perhaps at what Meta is getting ready to show off next week. We'll see...

Anyway, the entire event is pretty good, but where Spiegel really shines is during this Spectacles unveil. If you start the video at around the 48-minute mark, you'll see him in action.

..More

Instagram, Facing Pressure Over Child Safety Online, Unveils Sweeping Changes

The app, which is popular with teenagers, introduced new settings and features aimed at addressing inappropriate online contact and content, and improving sleep for users under 18.

By Mike Isaac and Natasha Singer

Mike Isaac covers Meta and Silicon Valley. Natasha Singer covers children’s online privacy.

Published Sept. 17, 2024; updated Sept. 18, 2024

Instagram unveiled a sweeping overhaul on Tuesday to beef up privacy and limit social media’s intrusive effects for users who are younger than 18, as the app faces intensifying pressure over children’s safety online.

Instagram said the accounts of users younger than 18 will be made private by default in the coming weeks, which means that only followers approved by an account-holder may see their posts. The app, owned by Meta, also plans to stop notifications to minors from 10 p.m. to 7 a.m. to promote sleep. In addition, Instagram will introduce more supervision tools for adults, including a feature that allows parents to see the accounts that their teenager recently messaged.

Adam Mosseri, the head of Instagram, said the new settings and features were intended to address parents’ top concerns about their children online, including inappropriate contact, inappropriate content and too much screen time.

“We decided to focus on what parents think because they know better what’s appropriate for their children than any tech company, any private company, any senator or policymaker or staffer or regulator,” he said in an interview. Instagram’s new effort, called “Teen Accounts,” was designed to “essentially default” minors into age-appropriate experiences on the app, he said.

The changes are among the most far-reaching measures undertaken by an app to address teenagers’ use of social media, as scrutiny over young people’s experiences online has ramped up. In recent years, parents and children’s groups have warned that Instagram, TikTok, Snapchat and other apps have regularly exposed children and teenagers to bullying, pedophiles, sexual extortion and content promoting self-harm and eating disorders.

..More

Introducing live video in the Substack app

Broadcast live to your subscribers anytime, anywhere

ZACH @ SUBSTACK

SEP 18, 2024

Today we’re excited to announce that we’ve begun rolling out live video in the Substack app, making it easier than ever to engage with your audience in real time. 

Live video arrives as more writers and creators use Substack to reach their subscribers while events unfold, often hosting dynamic conversations about breaking news and live events via Chat. With publishers requesting ever-richer ways to connect, live video provides a new way to meaningfully engage with your audience.

Going live from the Substack app will immediately notify your subscribers, allowing you to break news as it happens, share behind-the-scenes footage, bring your audience into exclusive events, or host interactive AMAs. You can even paywall a live video to make it available only for your paid subscribers, creating an intimate viewing event.

Want to expand your reach beyond your subscribers? Try inviting another creator to go live with you. Collaborations are a powerful source of growth for Substack writers, and we expect collaborative live videos will be too. 

Streaming capabilities are now available to bestsellers in the iOS and Android apps, with plans to expand to all Substackers in the coming months. This is just the beginning, and we have more improvements and features on the way. And if you aren’t a bestseller but would like to request early access to start streaming, you can do so here.

..Lots More

Amazon Forces Teams Back to the Office

Dan Bladen, Kadence

Amazon’s Return to the Office ... is it the Exception or the new Rule?

I caught up with Kadence friend Nick Bloom - a world-leading authority on all things remote/hybrid work - to get his thoughts.

Amazon’s decision to bring employees back to the office full-time is making headlines, but for most companies, that approach won’t work.

The rest of us rely on hybrid work to attract and retain the best people.

📊 Research shows that hybrid workers are 13% more productive, and 59% of employees prefer employers offering flexibility.

But Amazon’s move might give some companies more reason to reassess their hybrid and remote policies.

As always, the key is in the data—let the numbers guide your strategy to retain top talent and drive success.

Strong End-to-End Encryption Comes to Discord Calls

BY THORIN KLOSOWSKI AND BILL BUDINGTON

SEPTEMBER 19, 2024

We’re happy to see that Discord will soon start offering a form of end-to-end encryption dubbed “DAVE” for its voice and video chats. This puts some of Discord’s audio and video offerings in line with Zoom, and separates it from tools like Slack and Microsoft Teams, which do not offer end-to-end encryption for video, voice, or any other communications on those apps. This is a strong step forward, and Discord can do even more to protect its users’ communications.

End-to-end encryption is used by many chat apps for both text and video offerings, including WhatsApp, iMessage, Signal, and Facebook Messenger. But Discord operates differently than most of those, since alongside private and group text, video, and audio chats, it also encompasses large scale public channels on individual servers operated by Discord. Going forward, audio and video will be end-to-end encrypted, but text, including both group channels and private messages, will not.

When a call is end-to-end encrypted, you’ll see a green lock icon. While it's not required to use the service, Discord also offers a way to optionally verify that the strong encryption a call is using is not being tampered with or eavesdropped on. During a call, one person can pull up the “Voice Privacy Code,” and send it over to everyone else on the line—preferably in a different chat app, like Signal—to confirm no one is compromising participants’ use of end-to-end encryption. This is a way to ensure no one is impersonating a participant or listening in on the conversation.

By default, you have to do this every time you initiate a call if you wish to verify the communication has strong security. There is an option to enable persistent verification keys, which means your chat partners only have to verify you on each device you own (e.g. if you sometimes call from a phone and sometimes from a computer, they’ll want to verify for each).
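
For readers unfamiliar with this kind of verification, here is a minimal sketch of the general pattern: a short code derived from the call's key material, compared out of band. This illustrates the concept only; it is not Discord's actual DAVE protocol or code, and the key value is a placeholder.

```python
import hashlib

def privacy_code(session_key: bytes, call_id: str) -> str:
    """Derive a short, human-comparable code from shared key material."""
    digest = hashlib.sha256(call_id.encode() + session_key).digest()
    # Render the first six bytes as digit groups, like a voice privacy code.
    return " ".join(f"{b:03d}" for b in digest[:6])

# Both participants compute the code locally from the keys their clients
# negotiated. If a man-in-the-middle substituted keys, the codes differ.
key = bytes.fromhex("8f3a" * 16)  # placeholder for the real negotiated key
print(privacy_code(key, "call-1234"))  # compare this over a separate channel
```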

Key management is a hard problem in both the design and implementation of cryptographic protocols. Making sure the same encryption keys are shared across multiple devices in a secure way, as well as reliably discovered in a secure way by conversation partners, is no trivial task. Other apps such as Signal require some manual user interaction to ensure the sharing of key-material across multiple devices is done in a secure way. Discord has chosen to avoid this process for the sake of usability, so that even if you do choose to enable persistent verification keys, the keys on separate devices you own will be different.

While this is an understandable trade-off, we hope Discord takes an extra step to allow users who have heightened security concerns the ability to share their persistent keys across devices. For the sake of usability, they could by default generate separate keys for each device while making sharing keys across them an extra step. This will avoid the associated risk of your conversation partners seeing you’re using the same device across multiple calls. We believe making the use of persistent keys easier and cross-device will make things safer for users as well: they will only have to verify the key for their conversation partners once, instead of for every call they make.

Discord has performed the protocol design and implementation of DAVE in a solidly transparent way: publishing the protocol whitepaper and the open-source library, commissioning an audit from well-regarded outside researchers, and expanding their bug-bounty program to reward any security researcher who reports a vulnerability in the DAVE protocol. This is the sort of transparency we feel is required when rolling out encryption like this, and we applaud this approach.

But we’re disappointed that, citing the need for content moderation, Discord has decided not to extend end-to-end encryption offerings to include private messages or group chats. In a statement to TechCrunch, they reiterated they have no further plans to roll out encryption in direct messages or group chats.

..More

Why United chose SpaceX’s Starlink to power its free Wi-Fi

Once negotiations are complete and hardware secured, United plans to get the actual retrofits done within two days for each plane

Frederic Lardinois

3:23 PM PDT • September 17, 2024

Workers replace an electrical cable on a Boeing 777-200 airplane in a United Airlines maintenance hangar at Newark Liberty International Airport in Newark, New Jersey.
Image Credits: Angus Mordant/Bloomberg / Getty Images

Late last week, United Airlines announced that it signed an agreement with Elon Musk’s SpaceX to bring its Starlink internet service to its entire fleet and — for the first time — offer free Wi-Fi to all passengers. To dig a bit deeper into why United went with Starlink, what that rollout will look like, and what it means for passengers and crew, we talked to United’s Chief Customer Officer Linda Jojo.

“If I could have done this change earlier, I certainly would have, because we’re proud of a lot of things, but we do think that our customers deserve a better Wi-Fi experience than the one they have today,” Jojo told me when I asked why the company is changing providers now.

Currently, United is using a mix of four different providers — Gogo, Thales, Panasonic and Viasat — all with different capabilities and limitations. You may find yourself on one flight that lets you stream video, for example, while your connecting flight only supports basic web surfing. While the airline has attempted to unify these systems behind a single sign-in experience, Jojo admitted that it’s not always possible to shield customers from the underlying complexity.

Meanwhile, the expectation, in part set by United’s competitors like Delta Air Lines, is that Wi-Fi on flights should be free. Yet United’s current set of providers simply didn’t have the capacity that would’ve allowed for offering free Wi-Fi to everyone on the plane, Jojo said.

United Airlines Boeing 787 Dreamliner aircraft interior.

Image Credits: Nicolas Economou/NurPhoto / Getty Images

“If we went free with what we had, we were going to enable a worse experience than what we had with the paid option, because the paid was just enough friction — $8 for a [MileagePlus] member — to say ‘I’m going to be really intentional about connecting,’” she said. “We know the architecture and the setup today is not going to be good enough.”

The search for a better solution led United to consider low Earth orbit (LEO) satellites. They are, by definition, closer to the aircraft than those in a geosynchronous orbit, and hence can offer lower latencies, more capacity, and higher speeds. And when it comes to offering satellite-based internet access with global coverage and enough bandwidth, Starlink is pretty much the only game in town.

“If we were going to try it, we were going to try it with Starlink,” Jojo said. “We first started looking at it for our regional fleet to see if we were going to try it out. And we quickly said, ‘there’s nothing to try out here. We can see that it’s going to work.’ We could see what JSX and others were doing. We could tell from where the satellites were, that the coverage was there.”

..More

Startup of the Week


Post of the Week

