News
California and AI: Capitol Weekly conference recap

At the latest Capitol Weekly-University of California Student and Policy Center conference this week, “California and AI,” panelists explored the policy implications of the transformative technology sweeping the globe and embraced by fields as diverse as medicine and hospitality.
“Policymakers need to understand that thoughtful, assertive regulation that allows us to experience benefits from AI while also protecting us from its harms is not only good policy but good politics,” said Jonathan Mehta Stein, the chair of the California Initiative for Technology and Democracy’s board of directors, during the first panel, “The Here and Now – A Levelset.”
The state of play
Mehta Stein was joined by Mona Pasquil Rogers, director of California public policy for Meta, the owner of Facebook and Instagram; Camille Crittenden, executive director of CITRIS and the Banatao Institute, a University of California research center focused on information technology; and Ramesh Srinivasan, a professor in the UCLA School of Education & Information Studies, in a panel moderated by Sacramento Bee opinion writer Robin Epley.
The discussion began with the intersection of government and AI, including the lack of regulation, insufficient understanding of how AI systems work and deference to Big Tech companies.
The panelists then highlighted how the legislative process of regulation often lags behind the constantly growing tech industry. While Mehta Stein and Crittenden emphasized the importance of multi-stakeholder systems to address this, Srinivasan proposed that technological innovations could also be part of the solution.
“Any algorithms that affect different publics can involve the public in designing them for the community,” said Srinivasan.
When asked about balancing innovation with civic engagement and public accountability, Pasquil Rogers said that Meta partners with different groups to help combat disinformation.
However, her fellow panelists noted that recent studies and news media investigations have shown AI models and social media platforms disseminating inaccurate information about elections. They argued that this could be addressed by watermarking information and ceding more power to other stakeholders.
California’s role in AI
In the second panel, “California – A Leader in AI Technology,” moderated by Politico AI and automation reporter Chase DiFeliciantonio, the conversation became more focused on the Golden State’s place in the world of AI.
The panelists — Sen. Jerry McNerney, D-Pleasanton; Sara Flocks, the legislative and strategic campaigns director for the California Federation of Labor Unions; Raissa D’Souza, founding co-director of the UC Davis AI Center in Engineering; and Andrea Deveau, managing partner of the lobbying firm Deveau Burr Group — began by pondering how AI should be regulated in California, proposing that the state quash monopolistic ownership of the tech industry, establish statewide regulatory standards, increase public involvement and collaborate across industries.
“From the tech side, we’re regulating deep fakes, election integrity, child protection…” Deveau said. “There are north of 50 bills dealing with AI right now, so it’s difficult to find all these bills to implement in consistency. We need to work across the board with tech, labor, education and government.”
That said, McNerney argued that California’s creation of guardrails doesn’t necessarily slow down innovation – rather, that “regulation drives innovation.”
Deveau also said that regulation should start in California due to its homegrown expertise and the difficulty of regulating from the federal level.
However, according to Flocks, statewide AI regulation is long overdue.
“We represent 3.2 million union members in the state,” said Flocks. “AI is being used in every workplace right now, and 20 percent of managers are using automated decision making for labor without any oversight. We are already behind in regulating the deployment of these tools.”
McNerney discussed SB 2419, or the “No Robot Bosses Act,” which would require human oversight in employment-related decisions and which he said had passed the U.S. Senate. He also mentioned that his SB 69, which would create an AI expertise program within the California Attorney General’s office, has passed the state Senate and moved on to the Assembly.
Flocks emphasized the importance of continued regulation to protect a struggling job market.
“The threat of job loss and market insecurity is here,” said Flocks. “Fifty-seven percent of managers have been asked if they can replace jobs with AI, so college graduates are unable to find jobs because AI is already doing it.”
Keynote: Innovation and risk
The conference’s keynote speaker, Sen. Scott Wiener, D-San Francisco, opened his address by calling AI innovation “extraordinary.”
“I’m proud that in the district I represent, in San Francisco, we are the beating heart of AI innovation globally,” he said.
Wiener listed ways in which AI models have had positive impacts on society, including faster development of medicinal drugs, increases in cultural and agricultural productivity, and more efficient transportation. These innovations, however, come with risks, he said.
Wiener noted a report from OpenAI stating that its newest AI model has a medium risk of supporting the creation of weapons, while competitor Anthropic reported that its model had displayed a capacity for deception and blackmail.
“The question for us,” Wiener said, is “how do we emphasize the benefits and try to get ahead of the risks and try to reduce those risks?”
Wiener said he believes that is where California’s Legislature steps in. Since Congress hasn’t made progress on laws regarding artificial intelligence, Wiener said, California is primed to lead the country on AI policy.
He spoke about his SB 1047 from 2024, which would have required AI labs to undergo safety evaluations of their models before releasing them. It passed the Legislature, but was vetoed by Gov. Gavin Newsom.
Following the veto, Newsom established a working group of AI experts. That working group released its report a month ago, and Wiener, along with other legislators, is incorporating parts of it into a new bill he authored, SB 53.
Toward the end of his speech, Wiener brought up a more controversial topic: the recent reforms on the California Environmental Quality Act, or CEQA. One of the reforms exempts advanced manufacturing companies from CEQA, allowing them to build facilities without going through the standard environmental review process.
“In addition to innovating here and creating great new technologies, we should also be producing here as well as manufacturing in that advanced manufacturing space,” he said.
The importance of regulation
The final panel of the day, “The Promise and the Peril: Legislative and Regulatory Policy,” was moderated by Capitol Weekly’s editor, Rich Ehisen, who was joined by Shane Gusman, a partner at the consulting firm Broad and Gusman Governmental Advocacy; Jason Elliot, president of the consulting firm Versus Solutions; Sacha Haworth, executive director of the Tech Oversight Project, a nonprofit watchdog of Big Tech; and Jon Ross, a partner at the lobbying firm KP Public Affairs.
Newsom’s working group on AI was a hot topic for the panelists, with Ehisen asking if anyone felt optimistic that something real would come out of the recent report the working group released.
Elliot, who worked on the working group’s report, said it drew on case studies of industries such as Big Oil and Big Tech to show what could happen, both positive and negative, if AI regulation fails to advance.
“That report did its best to not demonize or validate or sideline, but try and provide an evidence-based anchor for the legislature to go about business,” Elliot said.
Elliot also discussed AB 1831, a 2024 bill by Assemblyman Marc Berman, D-Menlo Park, which closed a loophole in child pornography law by criminalizing computer-generated and AI-generated images.
Elliot said other crimes involving AI, such as scamming seniors, need to be addressed in a similar manner.
“We need to make it clear through statute that those things remain illegal,” he said. “If it’s illegal before AI, it’s illegal with AI, trademark Google.”
Haworth echoed that sentiment, saying that protecting consumers from AI has to start with government regulation.
“Self-regulation has been proven to never happen,” she said.
This story was produced by Capitol Weekly interns Emily Hamill and Acsah Lemma as part of Capitol Weekly’s Public Policy Journalism Internship program. Additional reporting and editing by Brian Joseph and Rich Ehisen.