Podcast
Special Episode: California and AI – The Here and Now

CAPITOL WEEKLY PODCAST: This special episode of the Capitol Weekly Podcast was recorded live at California and AI, a conference held in Sacramento on Tuesday, July 8, 2025.
This is Panel 1: The Here and Now: A Levelset
Mona Pasquil Rogers, Meta; Jonathan Mehta Stein, California Initiative for Technology and Democracy (CITED); Camille Crittenden, Ph.D., CITRIS; Ramesh Srinivasan, UCLA School of Education & Information Studies
Moderated by Robin Epley of the Sacramento Bee
This transcript has been edited for clarity.
ROBIN EPLEY: What an introduction. I actually didn’t go to UC because I couldn’t afford all the extra SAT tests. But apparently your SAT scores matter now. So make sure you look those numbers up. Anyway, good morning, everybody. Welcome. Can you guys all hear me okay? Is it too loud? Too soft? Okay. Good morning. There we go. All right. Drink some more coffee. You guys all right? Well, as Tim said, I am an opinion writer with the Sacramento Bee. It’s my pleasure to be with you all this morning, and my honor to moderate today’s conversation on one of the most urgent and complicated issues of our time: the intersection of artificial intelligence, technology and democracy.
We’re living in an era where the tools that shape our public discourse, influence our elections and govern our access to information are largely designed and controlled by private companies, often without clear public oversight. Meanwhile, the policies and civic institutions meant to protect democratic values are struggling to keep up with the speed of innovation, coming mainly from right here in California. Throughout the conference today, we’ll be exploring how California, as the front line between technological innovation and civic reform, can lead the way in shaping a future where technology doesn’t work in opposition to democracy, but for democracy. And to help us do that, I’m thrilled to be joined by four extraordinary panelists this morning.
All the way at the end, we have Mona Pasquil Rogers, a former acting lieutenant governor of California, who brings deep insight from both public service and her current policy role at Meta. Jonathan Mehta Stein, executive director of the California Initiative for Technology and Democracy and a longtime advocate for civic empowerment and equitable policy.
Doctor Camille Crittenden, whose work with CITRIS focuses on how emerging tech can truly serve the public good. And Professor Ramesh Srinivasan of UCLA, a nationally recognized scholar on technology, culture and democratic values. Thank you so much for joining us today. We’ll begin with a few questions for each panelist, and then we’ll open the floor to questions at the end. So I encourage everyone to listen closely. Think critically and then engage.
Before we jump in, I actually want to be transparent with you all. I used AI for the first time to write that introduction and some of the questions you’ll hear today. So if you didn’t like it, blame ChatGPT. This is my first time using AI in this way. I’ve kind of kept it at arm’s length, and I was really, really shocked at how good it was. But I hope it’ll add another layer to our discussion today about how these tools are already influencing our world and the way we communicate and ask questions and seek answers. So with that, let’s get started. I have kind of a question for the whole group at first, so we’ll just go down the line. We’re living in a moment where technology is evolving faster than the policies meant to regulate it.
So to get us started, I’d like to ask each of you. What is one thing you think the public or policymakers are fundamentally misunderstanding about the relationship between technology and democracy today?
CAMILLE CRITTENDEN: Thank you, Robin. It’s a pleasure to be here, and thank you to the organizers for inviting us all. I think this is going to be a really fantastic day, and I am honored to be on the panel with such esteemed panelists. So to respond to your question, I think one thing that policymakers, and the public for sure, perhaps misunderstand about AI is that the high-profile, headline-grabbing news about misinformation or manipulated images, you know, the Pope in the puffy coat, or Nancy Pelosi looking like she is drunk because they slowed down the video, those kinds of obvious manipulations get a lot of attention, and they should.
However, there are also more behind-the-scenes algorithms driving a lot of the decisions that are being made, especially in public service, that I think need to be attended to perhaps more. So, for instance, using facial recognition technology in policing, or that kind of predictive policing, where algorithms flag particular neighborhoods the police should be patrolling more intensively, often produces biased outcomes. It can affect access to health care. It can affect access to housing, to employment and training. So these are the kinds of things that I think we should take into our consciousness a little bit more. They’re operating quite in the background, but they have a huge and real effect on a lot of people.
RE: Thank you. Professor?
RAMESH SRINIVASAN: I’m really happy to be with you all today. Thank you to Catherine and Tim and all of you for inviting me and having me here today. I’ll just speak from my own heart, first and foremost. I believe more than ever in our state and in its ability to lead on many, many issues, given what we’re seeing occurring on the national level, including in relation to AI. Initiatives like the Stargate initiative actually deeply concern me on a number of levels that we can get to later today. For me, the elephant in the room is that almost everybody, including our policymakers, does not understand how these AI systems work and function. The AI that we’re discussing today is quite different than the AI of the past, though large language models have actually existed since the late 1970s and ’80s. These AI systems work based on historical data, but not deep history.
They use past data as the basis for prediction of the present and the future, and they use correlation, pattern recognition, which is part of what we do as human beings, in lieu of rationality and intelligence. So for me, we have to start first and foremost with trying to demystify these AI systems and understand what they’re pulling from, most notably our data and our environmental resources, and ensure that there is profound and clear democratic literacy around what is or is not occurring. Because the backdrop of all of this is everything and anything being turned into data. So for me, it starts with basic literacy about how these systems function and the resources, which are all of us on many levels, that they’re pulling from.
RE: Well said. Jonathan?
JONATHAN MEHTA STEIN: Yeah, I echo the thanks. It’s an honor to be here. Robin, you asked what the public and policymakers misunderstand, and I think one answer to that is each other. Specifically, what I mean is that I think policymakers misunderstand the public on this: the public does not trust Big Tech or its leadership. And I don’t say that to be polemical. I don’t say that to be inflammatory.
There is a new Berkeley IGS poll that will be coming out later this week. I’m not at liberty to share the exact results, but it polls how much California voters trust a variety of stakeholders, and the numbers around Big Tech and its executives will shock everybody. It is almost impossible for California voters to trust Big Tech less than they currently do, right? But by contrast, the California public does want Sacramento and our leaders here to begin solving some of these problems, right? Another Berkeley IGS poll, from fall of 2023, showed that almost 75% of California voters feel that Big Tech and AI and social media pose a threat to our democracy, and almost 75% feel it is the responsibility, that is the word in the poll, the responsibility of our leaders in Sacramento to begin solving those problems, with huge majorities of all races, all ages, all genders, all regions and all political parties. Right?

California and AI, Panel 1: Robin Epley, Sacramento Bee. Photo by Joha Harrison, Capitol Weekly
So the public wants solutions. Big Tech, for their part, after years of saying that they want guardrails and want to work with California lawmakers, spent the last few months in Washington lobbying for Ted Cruz’s ten-year AI regulation moratorium. Right? So the public wants all of us to begin solving these problems. Big Tech, through its actions, is trying to keep all of us from being able to do anything, right?
And so, given that stark divide, whose side are Sacramento leaders on? Who are they listening to? I think there should be one clear answer. But the reality is that we continue to see state lawmakers, including at the highest levels, highly deferential to industry. And the one key misunderstanding I want to start with is that I think policymakers need to understand that thoughtful, assertive regulation that allows us to experience the benefits of AI while protecting us from the harms and the threats is not just good public policy. It is also good politics.
RE: I definitely want to come back to that. Mona?
MONA PASQUIL ROGERS: Hi. Thank you very much. Thank you to Tim and Catherine. You know, I’ve known Catherine for a long time, and when she calls, you do whatever she says, whenever she asks. So it’s really an honor to be here with everyone today. I don’t necessarily believe that this is a misunderstanding, but it’s a fear that I have: that the way we’re looking at this is that technology and democracy are inherently at odds. And, you know, when you think of platforms like ours, people tend to look at them only through the lens of risk.
And look, there are risks, but I also hope that people look at how technology has evolved, how AI has evolved, and how communities around the world have had access to more information; how developers around the world, some kick-ass people that I’ve met here in California, have been looking at ways to solve mysteries and diseases and find cures through AI and through our platform, Llama, which is open source. We’ve invited all those academics and we’ve invited civil society to participate with us and make it good and responsible. So I hope that people think about the whole picture.
RE: Thank you so much. So another question for the group: if we had to design a California model for tech and democracy, something that prioritizes inclusion, transparency, civic engagement, what would that model include?
CC: Thank you. One thing that I’ve been thinking a lot about that might not come up otherwise is the necessity of high-speed broadband; the relationship between civic engagement, civic infrastructure, civic participation and technology on the broadband side is so essential. California has been a leader in making these investments in high-speed broadband. In 2021, about $6 billion was appropriated out of the budget to build out this middle-mile network, and about half of it has been built out so far. I think that’s really representative of a commitment to enabling people to participate in the civic process.
So if you live in, say, a rural region, it’s going to affect how you can get access to education and training. It’s not just do you have access, but how fast is the access? How good is the quality of the access? If you live in a family that has three children, are you all able to go to school online at the same time? This was especially apparent during Covid. But it’s also a gateway to understanding what job opportunities are out there, and to job training, especially in remote and under-resourced areas, and to access to good health care through telemedicine; we have some leaders from the California Telehealth Network here. So that access to high-speed broadband is really an essential piece of civic infrastructure.
RE: Absolutely. Anybody else want to talk about what a California model would look like? Yeah. Go ahead.
MPR: Well, I think, look, when you think of California, you think of innovation. I’m a proud third-generation Californian, and my siblings and I have gone to UC and CSU, the best institutions in the country, in the world. And you want that innovation to stay here. You want to build on that legacy and really double down on access. Do people have access to information, and are we stronger as a community for it?
JMS: Yeah. When I think about a well-constructed California model, I think less about specific policies and more about process. What I mean by that is, I hope the California model is one in which we combine regulatory and administrative and legislative rulemaking in a way that is immediate and nimble and able to evolve. Look at the last technological revolution that the dawn of AI is most often compared to, the Industrial Revolution. Over 100 years ago, we saw an explosion of new technology, which led to an explosion of new industries and markets and even whole new economies.
That led to a huge increase in wealth. But what we also saw was that, one, that wealth was highly concentrated; that was the era of robber barons. And two, that growth and wealth came with huge new problems. That era also came with sweatshops and child labor and huge public health crises and faulty products and all sorts of issues. And the regulatory and legislative and even civil society response that protected us from all those things was decades behind. Right? It took decades for us to build the administrative state, to build the labor movement and so forth.
And then if you fast forward to the dawn of social media, 20 or 30 years ago, we saw the same pattern all over again. Congress took a totally laissez-faire approach to regulation. There was an explosion in new technologies that led to an explosion in new industries and new products, and tons of new wealth. That wealth was again highly concentrated, and that growth came with huge new problems, right? Screen-time addiction, a mental health crisis in teen girls, political polarization, deepfakes and online disinformation. And now, today, several decades later, we are beginning to tackle those problems.
And in the meantime, we’ve created the most anxious generation in history. So what I’m hoping is that at the dawn of a new technological revolution, the California model is one that learns from our mistakes and from our past. Instead of again allowing a huge explosion in growth and wealth that is highly concentrated, and that comes with problems we only address in 2040 or 2050, we build legislative and regulatory capacity, including the administrative state, in a way that is upfront and nimble and able to evolve and grow as the technology grows.
RS: Yeah. I mean, if you look globally and historically, you can always see that technological and scientific shifts occur in very rapid and discontinuous ways, and it often takes a great deal of time for the societal, policy and deliberative processes to catch up.

California and AI, Panel 1: Camille Crittenden, Ph.D., CITRIS. Photo by Joha Harrison, Capitol Weekly
That said, I really want to emphasize the power of thinking concurrently about protecting democracy, supporting democracy, and supporting a thriving economy, an economy where there is not greater and greater inequality being pulled and swooped up by the directions that technologies are taking. And for us to think concurrently about those issues as forms of innovation, along with what we might call technical innovation in and of itself, in a sort of internally referential way. And this is the key here. Something has taken hold in our nation, which is why there’s so much anxiety and concern, and there’s, I believe, a neo-fascist populism that has taken over the federal government. Right?
And this is because of economic anxiety, economic deprivation and the absence of a set of economic paths that people can believe in and work toward. So why can’t we engineer breakthrough technological innovations in ways that are not zero sum, that can benefit multiple stakeholders concurrently, at the same time? I worked on policy like this when I worked for Sanders in 2020, and I continue to work on policy like this with Ro Khanna right now.
There are many, many examples of how we can do this. But I think first and foremost, we need to understand that our data, on an internet we all paid for as US taxpayers, I know that was a long time ago, 1969, Boelter Hall at UCLA, is being used to basically maximize wealth, power and valuation in extremely few hands, in what we could call an oligarchic system. Right? And that is not necessarily where we need to go. That’s not innovative. That’s not what anybody wants, I believe.
I’m just going to say that almost like a politician, even though maybe that’s not true. Instead, I believe we can engineer systems that are of mutual gain. Right? But the key is to not see regulatory or policy processes as, like, buzzkills blocking innovation or what have you, but as a way of directing innovation to support collective interests and collective well-being, while continuing to support our wonderful brothers and sisters in Silicon Valley who have breakthroughs in technology, the executives, investors and so on. I have many friends in that world, and I truly believe the vast majority of them, just like all of us, want systems that lift us all up rather than create zero-sum schisms that lend themselves to neo-fascism.
RE: So I just want to ask a quick follow-up question. As Jonathan very deftly pointed out, there’s a history of technological innovation outpacing the legislation that governs it, and that can lead to not-so-great things. And I’m wondering, given that history, not just, you know, at the turn of the last century, but at pretty much any point of innovation in time, it feels like legislation naturally lags behind innovation. Is it sort of folly to expect the weight of bureaucracy to keep pace with the speed of innovation? Are we destined to lag behind, push boundaries too far and then have to reel it back in? Or can we expect legislation and policies to keep up with the pace? Anybody, feel free.
CC: I might just comment that it’s true that innovation and technological advancement often go faster than what we can respond to as policymakers or the public. But then I think it’s up to us to try to be more nimble in thinking ahead about what the next thing is going to be, what the next challenge, the next potential threat, the next opportunity might be. But I would also call for closer collaboration between policymakers, civil society groups and academia, because only by having that kind of multi-stakeholder view are you going to be able to better anticipate what the possible advantages and possible harms are.
RS: It’s really exactly what I was just sharing: what we consider innovation in policy, or thinking systemically about technologies and their effects and their relationships to all of us, is not a bureaucratic wormhole. It can actually be very, very agile. Right? For example, algorithms that affect different publics can involve those publics in the design and potential audit of those types of systems. Right? We can think about things like data unions, universal basic income, and I would say even more progressive measures like a universal basic dividend, which I can talk about later. We can think about those measures concurrently as we’re thinking about supporting our tech industries. Right? So there are many agile, simple proposals that involve humans working with other humans, benefiting multiple stakeholders, that we can implement right now. And that actually is innovation to me.
JMS: So, Robin, you asked if it’s possible for policymakers and the policymaking community to ever keep up with technological innovation. And I would say, let’s just look at Europe. I recognize that some stakeholders here in California don’t want a European-style approach to AI and tech regulation, but from a procedural or process perspective, Europe has shown that you can build a set of regulations through a multi-stakeholder process, to Camille’s point, that involves industry, civil society, lawmakers, researchers, academics and so forth, and put the collective wisdom of that to use in protecting society in concert with the evolution of new technologies, not years or decades behind. On the CITED team, I’m here representing the California Initiative for Technology and Democracy, we have a team member, David Evan Harris, who’s constantly going back and forth between Brussels and San Francisco so that he is in constant communication with EU leaders and regulators who are helping to make these laws. And so if we want an example of getting it right, the EU is at the cutting edge. Whether we want to adopt their exact policy proposals is a different matter; they’re showing us that the process can work.
MPR: I think the answer is all of that, but it’s about making sure that the voices in our state, in our communities, particularly small businesses, are heard. As an example, I met with the chamber a few weeks ago, in the Central Valley, where they talked about how there were a lot of proposals they weren’t aware of, and how some of their small businesses were just starting to embrace AI. How can they make sure their electeds know that they’re just jumping onto this bandwagon, and be mindful that whatever regulations come don’t stifle their opportunities?
So I think we have to talk to our communities, our electeds. It’s our responsibility, right, to communicate with everybody and make sure there’s collaboration, to figure out what fits us. It’s not one-size-fits-all, but you’ve got to make sure that all the voices, particularly those of small and medium-sized businesses, are heard.
RS: I want to add just two quick things. Like Jonathan’s really important point about the EU and the EU process: the process is what’s key. I was just at a big EU meeting and they’re all bumming out big time because they feel, I’m sorry, I’m being very casual, very Californian, but like.
RE: That’s very technical speak.
RS: Yeah. I mean, they’re basically like, we’re not up on tech, we’re not booming in tech. Right? And they have some good reasons for discussing that: 27 member states implementing their policies in different ways; they could figure out ways to streamline things. But I was telling them, you may not want to go the direction we have gone, where we see massive disinformation, loneliness and anxiety, and economic, I don’t know if I want to say plundering, but at least extraction by oligarchs from pretty much the rest of us. And so that’s not really a great direction either. And I don’t know if the Chinese direction is great either. Right?
So we can think concurrently about what we call regulatory actions as well as technological design by bringing people together. And then, on the small business point, which I so agree with, the one issue is that many big VCs in the industry, I went to school with some of these guys, we were all roommates for years, there’s more and more research showing that their interest, more than ever, is in investing in moonshots and unicorns, companies that they think will be potential near-monopolies, right, rather than other areas of growth in tech related to public housing, education, medicine, etc., which could actually be small and medium-sized businesses, the breadbasket of employment. But I also want to encourage cooperative thinking and municipal thinking and multi-stakeholder thinking, not just for-profit companies, here as well.
RE: Thank you. Mona, let’s go back to you. Coming from both government and now the tech sector, how do you see companies like Meta balancing innovation with public accountability, especially when it comes to misinformation and elections?
MPR: Thank you for the question. I think that with innovation, I’ve said this already, comes big responsibility. And I think that, can you repeat the question? I literally just saw someone I haven’t seen in a long time.
RE: No. That’s fine.
MPR: I was like, hi, Catherine, I’m sorry. My mind wanders.
RE: My mind wanders too. Don’t worry. How do you see companies like Meta balancing innovation with public accountability, especially when it comes to misinformation and elections?
MPR: So we do a lot. We collaborate with a lot of people. We work with academics. We work with civil society. Partnerships and collaboration with our elected officials and groups are very, very important. Communities are very important to us. And I think we’ve addressed misinformation and elections; Nick Clegg did a great news post on this last December.
But Mona from the block is going to also remind everybody that if you’re looking at, say, a social media platform and something doesn’t seem right, something is way outlandish, it is also my responsibility as the oldest of five and the auntie of six to go: Did you check that? Did I check that? Does this seem right? So one of the things that we do, that I’ve seen us do multiple times, is we work with different groups. When things are reported, we act. Sometimes we make mistakes. Can we do better? Everybody can. But we are responsive to our groups. We work with groups that help us be better as a company, and I hope other companies do the same.
RE: So you see it more as a personal responsibility to kind of point out when something seems off?
MPR: It’s both.
RE: Okay.
MPR: It’s both, right? It’s not one or the other. And I’m sorry, this is the 63-year-old Mona talking, but you’ve got to be able to make your own decisions and look at things and say, this doesn’t seem right, and question it. And if it’s wrong, elevate it, bring it up. And then it’s also our responsibility. We make a lot of investments. We work with a lot of different groups to make sure that misinformation is dealt with, is taken down. And on elections, we take it very seriously and we stand up our election centers.

California and AI, Panel 1: Ramesh Srinivasan, UCLA School of Education & Information Studies. Photo by Joha Harrison, Capitol Weekly
I feel like they’re constantly going; we want to encourage people to know when they need to vote, where they need to vote, and we take down wrong information. Everybody makes a decision for themselves at their companies, but that’s what I’ve noticed we’ve done.
RE: I want to open it up to the rest of our panelists now, if you have a response to that. Doctor Crittenden?
CC: Yeah, thank you, I appreciate that. I want to put one caution out there about the do-your-own-research kind of mantra. Because there was a study done recently of these large language models. They looked at Claude, at Copilot, at ChatGPT, et cetera, specifically around election information. I think this was done at NYU, and there were teams of journalists and teams of academics, and civil society was also in the room, asking these questions: Where is my local polling place? How do I register to vote? Can I vote in a different language? And each of those platforms had slightly different answers, which is really concerning for the legitimacy of our elections and our ability to participate as voters.
So, yes, do your research. But then it also comes down to that kind of digital literacy we were talking about earlier. How do you evaluate the information that you’re getting? How many sites do I have to check in order to be confident that the information I’m getting is actually true?
RE: Just one: the Sacramento Bee.
MPR: Sorry. That’s right. That’s so right. But I would say that’s where partnerships come in. We work with the secretaries of state across the country, and all of the local registrars, to make sure they have the information and we have their information that we are posting. And so I think that’s where it is true: you have to do your own research, but you also have to have really good partnerships to make sure that that information is correct.
JMS: I think one of the challenges is that we’ve now seen a number of studies, not just academic studies, but also news investigations, demonstrate that these large language models are providing inaccurate information about elections and other things. And that’s not to say that they only provide inaccurate information or that they’re not useful in other ways, but in key ways as it pertains to our democracy and to our elections, they’re sometimes providing inaccurate information. The real value of partnerships, and of a diligent effort to work with groups in order to get things right, would mandate that companies once in a while say: this product is not ready for market. It gives voters inaccurate information about how to vote or where to vote, or the information they need in order to vote. But none of the companies, because they are locked in a race to market with each other, are ever willing to say, this gets things right about our elections 95% of the time, and that 5% is sowing distrust and confusion in a way that’s really damaging to our elections and their integrity.
So we’re going to hold off for three months while we figure this out. No one’s willing to do that. And so you end up with a number of products on the market that are all, let’s say, theoretically 95% accurate, and that overlapping 5% inaccuracy is just creating confusion, right? So I totally appreciate the importance of working with elections officials; I’ve done it over the course of my entire career. The question, though, is what do companies subject to market forces do with those collaborations when they know their products aren’t working perfectly?
RS: Yeah, several different things. Thank you for that point. It sets me up so well; it’s like right in the middle here. Initial studies have shown that most dominant generative AI systems, independent of which one we’re speaking about, tend to flatten what we might call informational diversity. This is an unrealized dream tied to the internet and the web that we still have to work toward. I don’t want to conflate these systems, which are generally corporate, not accountable to the public, and extractive, again, of economic resources and environmental resources, which we really have to talk about, with Wikipedia.
But this is the same thing we saw with Wikipedia, dominantly authored by men from the global North, right? So informational diversity, and the biases that drive various systems, including ones that Camille alluded to earlier, like predictive policing ones, are likely to be reinforced and baked into these systems. So here’s another great opportunity for actual partnerships that give up some level of power and decision-making to these stakeholders and collaborators. And I think anything that’s generated or pushed out by generative AI systems, especially images and videos, should be watermarked. Why are they not being watermarked, like, right away? We should at least know when something’s being changed or shifted or modified or produced by some sort of generative AI system.
But I also think this literacy thing is really difficult, and I don’t want to just infantilize younger folks; this is for all of us, I include myself. I have an iPhone 8 in my pocket right now, and all my students crack up when I show them that. It’s because it feels like we’re overwhelmed with information, right? And that’s not Meta’s fault, but it’s sort of a function of this digital world as it stands.
It’s really hard to make sense of things, so you basically take what’s provided to you on these various feeds as reality, because speed seems to be the mantra of these times, which is overwhelming, distressing, disorienting and very difficult for all of us who multitask all the time. So if we want to deal with actual literacy, there needs to be some power given up by our friends in Big Tech, to actually explain to us what we see and why we see what we see and what is known about us. Because this is a surveillance advertising model that has actually gone gonzo. So real partnerships wouldn’t involve just working with various officials in occasional situations, but would actually involve giving up some power.
And I want to mention, this is critical around the world, especially in the global South, because we’ve seen how misinformation and disinformation amplified by various platforms, including Facebook, in parts of the world where Facebook is the media network, has led not just to widespread conspiracy theories, but also to genocides in particular cases and to the rise in support of authoritarian leaders like Rodrigo Duterte and others. I would say even our own dear president as well.
RE: This is a question for perhaps Jonathan and Ramesh, although feel free to jump in, ladies. How do we safeguard marginalized communities from being disproportionately harmed by emerging tech systems like AI in public decision-making?
JMS: Yeah, I mean, so this is such a good question.
RE: Thank you.
JMS: Credit goes to ChatGPT though.
RE: I love it when people say that.
JMS: Let’s be honest.
RE: Okay, fine. It was ChatGPT.
JMS: Okay, so I don’t think the answer is that there are topic-specific or issue-specific solutions. Yes, there are, but I think what we have to confront, if we go upstream just for one second, in order to protect marginalized communities and our democracy from these issues, is this big tech exceptionalism or AI exceptionalism. That is to say, in any other industry that impacts our health and our well-being, we have come to accept, and industries have come to accept, regulation, oversight, accountability, administrative agencies that have an eye on everything they’re doing. That is normal. And the idea that right now we accept a different approach for one industry and one industry only just baffles me. Right?
So take cars, for example. Cars changed everything about our lives. They changed the way we interact with society. They have changed the built landscape around us, in our communities, in our country. Cars have innumerable benefits; they also have lots of dangers. And so we accept it as common sense that we need speed limits, that we need seatbelt mandates and airbag mandates, that we need a federal agency that inspects cars, that sets safety standards, assigns safety ratings and so forth. And I mean, to take it further, seatbelt mandates and airbag mandates did not stifle innovation. They did not stifle profits. They gave consumers confidence in the products they were using, and gave industry direction and clarity about how to innovate and where to innovate. And to take it further. Sorry.
RE: I was going to make a joke, so it’s fine.
JMS: But to take it further, in some cases in this exact analogy, regulation actually spurs innovation. And this is the well-known story everybody in this room knows: California’s emission standards forced automobile manufacturers to be better, to do better. And then those rules spread across the rest of the country, or had impact across the rest of the country. So we accept that to be true or necessary or common sense in the example of cars and any number of other products. But Big Tech is able to roll out a new app, or a new social media platform, or a new AI assistant, or a new automated decision-making system, and there’s nothing in place to protect our safety and to protect our rights.

California and AI, Panel 1: Robin Epley, Sacramento Bee; Camille Crittenden, Ph.D., CITRIS; Ramesh Srinivasan, UCLA School of Education & Information Studies; Jonathan Mehta Stein, California Initiative for Technology and Democracy (CITED); Mona Pasquil Rogers, Meta. Photo by Joha Harrison, Capitol Weekly
And right now, that’s accepted as a viable state of affairs. That is the sort of big tech exceptionalism and AI exceptionalism I’m talking about. And I think we have to be able to say: we don’t want regulation that stifles innovation, we don’t want regulation that drives tech out of California, but we do expect that this industry will be treated the same as all other industries that have this pervasive, all-consuming impact on our lives.
RS: And many of us who know about tech, including Big Tech, know that at Big Tech they often do A/B tests, right? Like the “get out the vote” button, a very famous case that Facebook implemented several years ago. So we know that they’re constantly testing. So why not some testing that actually protects the benefit of citizens, workers, consumers, the environment, etc., to see how much that affects the dependent variable of engagement, which I know is being maximized at all costs? It doesn’t have to be at all costs, right?
Maybe there are some ways in which we can think about the design of systems, as I’ve been saying again and again, for mutual gain rather than this implicit zero-sum agenda. And I mean not just technological systems but sociotechnical ones, right, systems that are social, political and economic as well as technological, because tech undergirds all of these things. We can think of it as an amazing process of systemic design to lift all up. Generally such systems, when they come from the top down, tend to harm those on the greatest margins of our society the worst. That’s what we see with environmental issues.
For example, environmental issues related to AI, including drinking water sources and things like that. Right? So instead, if you’re able to design systems that consider those who are likely to be most marginalized, and we all know who that is: women, people of color, people at certain economic class levels, etc., right? If we design systems that actually consider those groups first and foremost, most importantly by having them at the table, really designing and building and regulating and auditing these systems, then they’re likely to work for more populations. Right? So you start at the margins and you consider those cases in design, not us considering them, but us together considering them.
RE: That leads me right into a question for Doctor Crittenden here. Much of your work involves making sure that emerging technologies benefit the public good. From your research, where do you see the biggest disconnect between what engineers are building and what society actually needs? And additionally, how do we ensure that ethical considerations aren’t an afterthought?
CC: Thank you. I do see that disconnect. One thing I have noticed is that engineers and computer scientists love to solve problems. So if you pose the problem as not just a technical challenge, but also a social challenge, an ethical challenge, they will likely embrace it and try to conform to those criteria and those guardrails and come up with something really innovative. Fortunately, at UC and UC Berkeley, we do have classes on ethics as part of the engineering curriculum.
So I would advise that for other academic institutions. I think institutes like CITRIS and the College of Computing, Data Science and Society at Berkeley also do a really good job of convening these different stakeholders. For instance, CDSS and the Stanford Institute for Human-Centered Artificial Intelligence hosted a summit on generative AI about a year ago, and out of that came a lot of really great ideas. They also recently co-authored a report in response to the frontier models legislation that came about recently. So bringing academic institutions together with nonprofits can also really help to bridge some of those gaps.
RE: Anybody else want to chime in? Okay. Ramesh, let’s see: can a more inclusive or global tech model still emerge from California, or does it require rethinking who controls the means of digital production?
RS: Yeah, I would say yes and yes.
RE: That’s good.
RS: The way in which California can embark on a path that lifts all of us up is by starting with considering all Californians. Right? And that involves those tasked with representing public interests, different types of leaders, civil society leaders, working directly with technologists in these areas. You know, there is so much we can do right now, and it just starts with this big question of people having greater awareness of what is being grabbed from them and turned into data. Sorry, I’m being again very casual here, but 24/7, 365, we know that these devices are sort of Trojan horses in the sense that they’re constantly gathering data from us all. Right?
So I think we need to stop being okay with the status quo, which is creating so much anxiety, because we don’t really know what is being known about us that is transforming and impacting our own behaviors. And we start with those sorts of issues. So for me, this starts at the individual, psychological, personal and, I would say, surveillance level. And then it moves toward the economic level: how is the extraction of such data forming such incredible gulfs economically, and then grafting itself onto certain types of divisions? But it’s also a reminder that this is a planetary question. All these technologies, yes, they’re kind of starting here in California in many cases. But I personally have been looking, my entire career, at the implications and effects of these systems on populations around the planet: indigenous peoples, peoples of the global South, peoples in East Africa, in South Asia, and so on.
And the ramifications are massive. So there are huge opportunities here, because we might think of these technology questions as California-only, but the rare earth minerals come from different parts of the world. I’ve gone to mines in remote parts of the Congo, and gotten kicked out a bunch of times, trying to find out what’s happening in the labor practices, the environmental practices. In many cases, these stories are African. These minerals start in places like Africa and other parts of the world, lithium and so on in Afghanistan, Bolivia, etc., and the devices often end up back in Africa, because our devices are designed to die; they’re necro-political, so to speak, planned obsolescence models. Right?
So these stories are actually very global in their reach. Yes, it’s a Californian story, and we’re so rich and diverse and amazing here in California. I really believe in California more than ever. I love my people for standing up against ICE raids in Los Angeles; we’re all standing up any way we can. I mean, you all know where I stand on this stuff. Okay, it’s all good. But at the same point, if we actually think about what works for Californian people, we can also think about what can work for our wider worlds. Because in a weird way, these technologies were supposed to bring us together, create a global village; this was the title of my first book. But in many ways we feel more separated than ever. We don’t really understand one another around the world. And so I think there are possibilities to change all of this for the better.
RE: Necro-political, what a good word. I want to touch really quickly on something Jonathan said at the beginning, and then we’ll go to questions from the audience. You had mentioned how there wasn’t a lot of trust in big tech companies, and I was wondering, I know you can’t talk about the survey, but can you expand a little bit on why people don’t trust Big Tech?
JMS: Well, I mean, I think anyone in this room would have their own answer for that. I’ll give you an example that is difficult to hear, and Mona, you’re going to like this the least, I think, out of anyone here. But we now know that when girls and young girls and teenage girls took down a selfie on Instagram, or made a comment about their own bodies or their own looks that conveyed insecurity or vulnerability, Meta would target them with weight-loss advertising and beauty-image advertising.
RE: Can confirm.
JMS: Okay, so when, you know, kids are seeing that, they’re soaked in it every day, their parents see it. And there is a recognition, increasingly, that the business model is, I mean, Ramesh has used this word, extractive. It is using our thoughts, our information, our relationships, our data to learn as much about us as they can, including the most vulnerable, most intimate parts of us, and then using it primarily to make as much money as possible. And so, you know, I could name a dozen examples, but that one really makes it crystal clear. Replicate that a thousand times; for you, maybe it’s not body image and your weight, it’s something else.

California and AI, Panel 1: Mona Pasquil Rogers, Meta. Photo by Joha Harrison, Capitol Weekly
But, you know, it’s all just grist for the mill for these companies. You can’t trust an entity that has that relationship to you, that sees you as something to be consumed. So I don’t have one answer for that. No one has one answer for that. This room probably has 100 answers for that. The irony of all this is that there was, at the founding of these companies, this grand vision of bringing us closer together, of connecting us, of helping us understand each other and so forth. And I’d like to believe.
RE: Of doing no evil.
JMS: Yeah. I’d like to believe that even when the CEOs of these companies go stand behind Trump at his inauguration, or are lobbying for Ted Cruz’s AI moratorium that there’s thousands and thousands and thousands of employees at these companies who are attracted to them because of their original vision and their original quest to bring the world together and to make the world a better place.
RS: Three key things I want to add on to that. It’s also our hormones being targeted: dopamine, cortisol and adrenaline, three key hormones. These are also part of the extractive process. And this isn’t about canceling anybody; it’s just about simply pointing out the inputs and outputs, and what externalities are not being considered here, including environmental ones. Content moderation workers in places like Nairobi are watching, in some cases 14 hours a day, pornographic content, bestiality or extreme violence. Right? Is that the way we want to lead on a global level as Californians? That’s another question. And they’re being farmed out through independent contractors, so there’s no liability, or almost no accountability, for the OpenAIs of the world. And the last key point is, actually, I’ll just stop with that.
RE: Well, I just wanted to give Mona a chance to respond, if you had anything to say. And that’s fine. You know, I don’t expect you to be the spokeswoman for all of Big Tech, so it’s okay.
CC: I might just have two.
RE: Yeah, absolutely.
CC: Two more quick examples of distrusting technology. One, I’m sure you all read about the interactive agent with one of these large language models that encouraged a young boy to actually commit suicide through their interactions. There are probably many, many examples where these personas have been very helpful as companions or, you know, artificial friends. But they pose risks for a certain population, people who already might have those kinds of tendencies, and especially for children, people under the age of 18; there was recently a Supreme Court decision around age assurance for pornography, where the majority of the court, all the conservatives, said yes, you need to implement age assurance for this kind of adult content. So I think you need that distrust of technology, rather than assuming from the outset that interacting with a platform is not going to actually cause me harm or encourage me to harm myself. I think that’s one example.
A second example, just real quickly, is from Amazon. This isn’t one that you might anticipate, but they found that there were cases where people were looking up a particular chemical to ingest for suicide, and it needed to be accompanied by an antiemetic agent so you wouldn’t throw it up. And Amazon was suggesting, oh, people who bought this might also buy that. These recommendation engines are not neutral; they’re making recommendations in a way that is actually going to be harmful. And so without that kind of intervention or auditing, these are the kinds of things that are going to propagate just because of the way the algorithms work.
RE: On that note, we’ll open it up to the audience.
CAMILLE WATTS-ZAGHA: Hi, I’m Camille Watts-Zagha. Thank you so much. I wanted to thank Professor Srinivasan for reminding us that AI has been around for, I guess, some decades. And so I’m a little bit curious: I missed the part where you said what changed, or what is new now. And then I also was wondering, for the whole group, I’m not really sure if we’re all assuming there’s a specific threat we all know about. I know we’ve talked about many, many kinds of common threats in the world, and I just wondered which one is particular to today’s AI. And, you know, our moderator said “do no evil,” so there could also be just a promise or a possibility that some of you are thinking about, you know, I guess bringing people together. But is there really some specific opportunity particular to today’s AI that you’re thinking of? If you could just tie those things together, since there were so many that were brought up.
RS: Yeah. So the main way it’s changed is that large language models have existed for a very long period of time, but AI systems in the past, and many of you also know about this, I used to do AI development, and when I was a graduate student I was in an AI lab at MIT, there was a lot more interest in semantics, basically meaning-making: whether you can build a system that can interpret and understand phenomena. That’s quite different than using data, even exquisite amounts of data, astronomical amounts of data with astronomical energy costs, for correlation, right, pattern matching. These systems are actually ahistorical. Many AI systems use past data, but they’re ahistorical, if you know what I mean.
Data from the past is used to order the present and construct the future. But that’s the opposite of history. History is taking a long view and being interpretive and critical; this is more using the recent past to order the future. But if you also look at this, AI systems of the past and present are actually not intelligent in the way we humans are intelligent. We are emotionally intelligent. We are irrationally intelligent in some ways, as great behavioral economists like Dan Ariely and the great Kahneman, who just passed away, wrote about extensively.
Non-humans are intelligent too, like our bird friends, or the orcas that are adapting to attack fossil-fuel boats. That’s a great adaptation. That’s what’s up. So non-humans have distributed cognition as well. This is why I’m in academia, because I can talk like this. So intelligence is manifold and beautiful and magical and multiplicitous, and these technologies are one instantiation of intelligence. That’s why many of them are described as stochastic, probabilistic parrots. Right? And that’s often used as a critique, but Sam Altman said himself, I’m a stochastic parrot. So, you know, in a way that’s an appropriate response to it.
So for me, it’s just about opening up. Let’s lift the lid on all these different types of intelligence, and know what we as human beings should protect for ourselves and what should be delegated to these technologies. And the last quick thing I’ll say is that folks like Eric Schmidt, and before that Mark Zuckerberg, would say things like, we are the best ones equipped to regulate ourselves because we’re the ones who most understand these systems we’re building. I’m paraphrasing, and maybe not Mark; Eric Schmidt was saying more of this. That’s a design strategy to be unintelligible. Those sorts of comments are not actually in good faith, in my opinion.
MARIO GUERRERO: Good morning. My name is Mario Guerrero. I’m the legislative director for the University of California, and I work on AI legislation. So I have a question for you. The University is on record supporting, you know, safe AI and the use of AI, and we have our own white paper on what should happen. But listening to you, you know, I go across the street and I see some of the big bills that cost a lot of money. And even UC has concerns about addressing these huge-cost bills, like, overnight. So I believe one of you brought up the car situation, right? Seatbelts, airbags, etc.
So my question to you is, would it be worthwhile to take a phased approach on some of these big items? These testing, I apologize, I forget the name of the center, these testing centers and these audits and all of these things, right? These big bills that have a lot of these things. Would it be worthwhile to maybe do a big one, or do them all but phased in, so that we do them and there’s some kind of timing to it? And maybe our AI partners can better put their hands around it and maybe not face as much opposition. I was just thinking about this as we’re talking about it, because it’s obviously needed. But there is a huge cost that even UC, you know, talks about and has concerns about. Thank you.
CC: So thank you for that question. And maybe my colleagues will have a better specific response to the question that you raised. But I did just want to give a shout-out to the University of California, because we were one of the first universities in the country to come up with a set of responsible AI principles. There was a working group that was established under former President Janet Napolitano and then continued through President Drake’s era, a working group from all of the locations of UC, including faculty, staff and others, to try to arrive at this set of principles.
They include ten items that you might all anticipate: you know, privacy, fairness, accountability, appropriateness, things like that. And I will say that it’s not just sitting on a shelf now. The UC has a continuing AI council that’s made up, again, of people from throughout the system, and we have subcommittees on transparency, on risk assessment, and on knowledge, skills and awareness. So we’re really trying to imbue a lot of the processes at the University of California with these principles. So then when you go talk to, say, procurement officers, they’re having very specific conversations with our vendors, and they’re big vendors; UC is the second largest employer in the state of California, after the state government itself.
So we have quite a bit of leverage in those conversations that we’re having with vendors around responsible AI. I apologize that I didn’t exactly answer the question that was posed, and maybe others have a better answer, but I did want to put a plug in there for the University of California.
AUDIENCE: What a great panel. Thank you so much for this. And Mona especially, you have a tough seat to sit in. I can tell you, many of us would probably prefer if Mark Zuckerberg were sitting there, because you’re such a nice person, so it’s not fair. I do want to ask the question about the elephant in the room, and that is a thing called Section 230, and the gift that the federal government gave to Meta and all the tech companies nearly 30 years ago, basically saying, unlike any other company, as Jonathan noted, you don’t have any accountability for the harms that you cause. So I want to ask the panel, we haven’t talked about this, but how helpful would it be if Section 230 were at least reformed in a reasonable way, so that you don’t have to keep defending, Mona, what is really the indefensible?
JMS: I mean, since we have one more minute left on this panel, I’ll say yes, very clearly. Section 230 has caused a full range of avoidable harms to our kids, to our communities, to our democracy. It was a mistake at the time, and it is an even more extreme mistake today, because we have seen how bad the harms have been. It desperately needs to be reformed.
RS: If I’m not mistaken, it’s from 1996. There was pretty much no online commerce and surveillance then, certainly nothing close to where we’re at now. So it’s a legacy of the past. It needs carve-outs and reforms right away. It’s the Communications Decency Act, for those of you who haven’t looked it up. And people like Ron Wyden were trying to rein that in right around the time of 9/11; note what happened with the Patriot Act and how that skirted all of those discussions at the time, too. I mean, that’s a long time ago.
RE: Mona, no? Okay. Just checking. Well, I want to say thank you so much to all of our panelists today. Thank you so much to the University of California for hosting us. And please stick around for the rest of the panels today and have a great time. Thank you.
Thanks to our sponsors:
THE TRIBAL ALLIANCE OF SOVEREIGN INDIAN NATIONS, WESTERN STATES PETROLEUM ASSOCIATION, KP PUBLIC AFFAIRS, PERRY COMMUNICATIONS GROUP, CAPITOL ADVOCACY, THE WEIDEMAN GROUP, CALKIN PUBLIC AFFAIRS, STUTZMAN PUBLIC AFFAIRS, LUCAS PUBLIC AFFAIRS and CALIFORNIA PROFESSIONAL FIREFIGHTERS