Opinion | Google’s agreement with publishers shows how to tackle AI with innovation, not bureaucracy

Artificial intelligence is unlikely to end life as we know it—despite the predictions of some doomsayers—but it will almost certainly radically upend the news business. And the new way California publishers are responding to this very real existential threat—with research and innovation funding paid for by the companies that stand to profit most from it—offers valuable lessons for lawmakers seeking to avert much more distant and hypothetical threats to humanity using means ill-suited to the purpose.

Under the deal now taking shape, Google will invest about $50 million in an AI “accelerator” that will develop new tools for journalism. Google will also put tens of millions of dollars into existing California news organizations, with the state contributing $70 million. The initiative aims to strengthen democracy through better reporting, distribution and business models for struggling media. Details of the program, which will be administered in part by UC Berkeley, are still being finalized.

That is, instead of restricting technological innovation through blunt regulations, the agreement works cooperatively with companies to combat the potential harms of AI through better AI solutions – while protecting local jobs.

California’s novel approach to combating an artificial intelligence threat emerged from a very different fight over journalism funding. Through their allies in the legislature, local newsrooms had sought subsidies through two bills: Senate Bill 1327, which would have taxed internet advertising, and Assembly Bill 886, which would have taxed tech companies such as Google and Meta for displaying news snippets.

The two bills had nothing to do with AI per se; they were about publishers wanting to claw back a share of their content’s value from the big platforms that dominate online advertising. But anyone can see that generative AI is the train bearing down on the news industry, and publishers have made similar arguments (and filed lawsuits) over the way the algorithms behind Gemini and ChatGPT were trained on their articles.

Because the bills were hotly contested, publishers opted to negotiate instead. As a result, tens of millions of dollars from tech companies are now going to support local newsrooms and to develop tools that ensure the journalists working there can benefit as much as anyone from AI’s productivity gains.

The approach couldn’t be more different from that of SB 1047. Authored by state Senator Scott Wiener, that bill would require artificial intelligence models to undergo state-approved safety reviews and would impose heavy fines on developers whose tools cause harm in the hands of malicious users.

SB 1047 represents the classic temptation of government: restriction and control. Critics of the bill, such as Stanford’s Andrew Ng, argue that it would be especially devastating to scrappy startups that rely on freely available, open-source AI models and lack the expensive legal teams needed to navigate a regulatory maze.

In recent years, tech workers have shown they are willing to leave the state in droves in response to new taxes and regulations. San Francisco’s once-bustling downtown remains largely empty, and the city faces a budget crisis.

What do San Francisco residents struggling to find jobs gain from further restrictions on artificial intelligence? AI companies will simply leave the state, create new wealth elsewhere, and keep developing technologies that could threaten public safety.

What if Wiener instead took a page from those who struck a deal with AI companies to fund innovations that address the technology’s potential dangers—and ensure that innovation happens here in California? Such a deal would improve the state’s economy by making it a hub for new, sophisticated countermeasures against malicious AI.

Of course, the deal that publishers struck with Google has its critics, too. They focus primarily on the journalistic side of the agreement.

“State-funded ‘journalism.’ What could go wrong?” tweeted Mike Solana, editor of Pirate Wires, a libertarian-leaning technology and culture outlet.

Critics on the left, such as state Senator Steve Glazer, fear that the deal not only undercuts what they see as stronger bills holding Google and Meta liable for the news content they monetize, but could also benefit large investors who will keep gutting newsrooms through automation.

But these critics miss the bigger picture. The sheer number of jobs threatened by AI dwarfs even the largest fines tech companies could pay. And traditional journalism is unable to compete with the enormous amount of written content that can be created and widely distributed by small teams using sophisticated technologies. To succeed, journalists must find sustainable business models that capture the attention of paying consumers.

These tools don’t have to be particularly complex to have an impact. For example, modern journalism requires incredible writing and publishing speed to be competitive. On social media, the stories published first have a head start and are shared the most.

“I think generative AI like ChatGPT can make us faster and better,” wrote Nicholas Carlson last year when he was editor in chief of Business Insider, recalling how he used AI to go through Donald Trump’s impeachment briefs.

This is true for me too. As a former journalist who now researches psychedelics policy and other topics, I’ve used AI to quickly transcribe large volumes of bureaucratic state meetings, saving me hours of work. I’ve also written a script using OpenAI’s GPT language model that drafts background summaries shaped by my writing style and subject-matter expertise.
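A workflow like the one just described can be sketched in a few lines. This is a minimal, hypothetical illustration: the function names, the style notes, and the model choice are my assumptions, not the author’s actual script. The API call assumes the `openai` Python package and an `OPENAI_API_KEY` environment variable.

```python
# Hypothetical sketch of a transcript-summarizing script.
# Names and style notes are illustrative, not the author's real code.

STYLE_NOTES = (
    "Write plain, direct prose. Lead with the policy decision, "
    "then the vote, then any notable public comment."
)

def build_summary_prompt(transcript: str, style_notes: str = STYLE_NOTES) -> list:
    """Assemble chat messages asking a model for a background summary."""
    return [
        {"role": "system",
         "content": "You summarize government meeting transcripts. " + style_notes},
        {"role": "user",
         "content": "Summarize the key decisions in this transcript:\n\n" + transcript},
    ]

def summarize(transcript: str) -> str:
    """Send the prompt to OpenAI's chat completions API (needs network + key)."""
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=build_summary_prompt(transcript),
    )
    return response.choices[0].message.content
```

Splitting prompt assembly from the API call keeps the style-and-framing logic testable offline; only `summarize()` touches the network.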

AI takes over the tedious tasks of my job while I focus on reporting and analysis, allowing me to publish high-quality information faster.

The California journalism program is a prime example of how government can respond to AI in many areas of society. Law enforcement agencies and security firms could develop cybersecurity tools. Healthcare providers and hospitals could counter medical misinformation.

The approach echoes MIT economist Erik Brynjolfsson’s prescription for technological challenges: race with the machine through cooperation rather than against it through constraints.

Let us respond to technological progress with optimism and growth rather than fear and contraction.

Greg Ferenstein is founder of Frederick Research, a public affairs firm specializing in emerging markets.
