Opinion

How California can take the lead on preventing AI harms

OPINION – The potentially transformative impact of recent advances in AI technology, such as San Francisco-based OpenAI’s ChatGPT and GPT-4, has been compared to that of the personal computer, the internet, and the industrial revolution. We are just starting to grapple with the implications of a society where anyone can instantly chat, at zero cost, with a digital oracle equipped with the entirety of human knowledge.

Undoubtedly, AI can yield many benefits for consumers in categories such as health, productivity, education, and the arts. But the extraordinary pace of AI development also threatens to overwhelm our fragile institutions and labor markets. As courts are flooded with AI-generated lawsuits and social media apps are awash in AI-generated art, actual lawyers and artists may see demand for their services dry up.

Scarier still, modern chatbots can easily be instructed to carry out malign tasks on behalf of users. During the creation of GPT-4, OpenAI’s external research group tested whether the AI could learn to trick humans into carrying out tasks in the real world. GPT-4 was able to hire a TaskRabbit worker to solve a “CAPTCHA”, a key technology that prevents online spam and abuse, by lying about its identity and convincing the human that it was a visually impaired person, not a chatbot.

If GPT-4 can trick one human, it can trick many. Somebody less restrained than a researcher could certainly use GPT-4 to initiate millions of bogus telemarketing calls or disinformation-spreading advocacy conversations on behalf of corporations or adversarial nation states.

It is exactly this type of problematic AI activity that one of us, former California Senate Majority Leader Hertzberg, had in mind when he introduced and passed the California Bot Disclosure Law (SB 1001). This law, the first of its kind in the nation, requires that a chatbot disclose its identity to the human it is talking to when it intends to convince that person to purchase goods or services, or to engage in political persuasion.

This important law is already on the books, and its disclosure requirement protects Californians against some of these harmful uses of AI. But recent events require an expansion of the law: Californians should always be informed when they are communicating with a bot.

AI transparency is a commonsense policy. For users who opt into using AI services, disclosure poses no problem at all – interacting with a chatbot is the point. But for those on the front lines of service and support, from government call center employees to TaskRabbit workers, greater AI transparency will ensure that the needs of humans come first, and that AI cannot trick people into doing its bidding.

Expanding the AI transparency law will also protect California’s human workforce by preserving demand for human workers. When consumers cannot tell the difference between a human and a bot, corporate executives will replace staffers with bots to save costs. However, businesses may be more hesitant to lay off contact center employees if they must disclose to potential and current customers that they are not deemed worthy of talking to a real person.

California’s labor leaders should therefore embrace AI disclosure as a key component of preserving demand for California workers, particularly in the knowledge economy. As teachers, healthcare workers, and public sector employees ponder the viability of their careers in the AI age, their unions should go to bat for their future today.

The law also needs real teeth. The California legislature should consider including a carefully crafted private right of action as a powerful disincentive for bad actors. Legislation should pin liability on the ill-intending people and businesses that facilitate malicious, undisclosed AI-consumer interactions, while granting companies that produce the core AI technology a safe harbor for misuse by third parties.

The pace of AI and chatbot development is accelerating by the week. California’s lawmakers must act quickly to protect Californians from the most severe potential harms of AI.

Bob Hertzberg is the former California Senate Majority Leader and former Speaker of the California Assembly. Roddy Lindsay is the co-founder of Hustle, a messaging and video platform for organizers and other humans.
