EP 85: SB 1047 passes the CA house - we discuss it and AI safety with a16z's Martin Casado.
SB 1047 just passed the California Assembly. The proposed legislation in California impacting large AI models has sparked fierce debate and criticism. You’ve probably seen multiple op-eds, tweets, and articles, and as we write this, it passed the California Assembly with the minimum votes needed just a few minutes prior.
I (Sriram) need to come clean here: I’m not a fan of SB 1047. I believe it to be harmful to startups and AI model development, with potentially vast negative consequences for the AI ecosystem in California. However, I do acknowledge that others (especially on X) disagree with me.
To cover all of this, we brought on one of the people who has really led the charge against SB 1047 and who happens to be one of the leading investors in AI - and a partner and a friend - Martin Casado, General Partner at a16z. Martin has emerged as one of the key voices in this conversation and, in my view, has made it his personal mission to try and educate people on the harms it brings. Note: this was recorded last week.
We cover a lot of ground on SB 1047 and AI safety in this conversation. I try - with my biases - to steelman some of the arguments from the “other camp” of AI safety. We talk about how former Speaker Nancy Pelosi and other prominent Democrats and Republicans have come out against the bill. We talk about some of the key individuals involved, from California State Senator Scott Wiener and Dan Hendrycks to others in the AI safety world like Max Tegmark. We talk about the specifics of the bill and also the *spirit* of it. This turned into a fascinating - and timely - conversation, not just as this bill heads to CA Gov Newsom but also as a bellwether for AI regulation around the world.
2:30 - What is SB1047?
4:10 - What is the origin of SB1047 bill?
6:14 - Should AI be regulated?
13:36 - Who is funding this bill? ‘Baptists vs bootleggers’ and Nick Bostrom’s Superintelligence book being the origin point.
16:35 - Scott Wiener’s support of the bill
19:34 - Open source benefitting software world and risks to open source due to this bill
22:10 - Are large models more dangerous?
24:10 - Is there a correlation between size of models and risk associated?
26:45 - How would Martin frame any regulations on AI? What’s a better way?
28:46 - Nancy Pelosi opposes the bill. Some famous researchers are for the bill. Who comprises the two opposing camps and what is the motivation?
33:10 - Why does Pelosi oppose the bill?
35:00 - Leopold Aschenbrenner and “Situational Awareness” paper
37:20 - Non-system people viewing systems - computer systems are not parametric but sigmoidal.
41:30 - AI is the ultimate ‘Deus ex machina’ (god from the machine)
46:00 - Anthropic’s investment in AI safety
48:15 - If you’re a AI founder, what can you do about this bill today?
50:00 - Why is this bill a personal issue for Martin?
Enjoy!
- Aarthi and Sriram
Transcript:
00:43 - Welcome Martin Casado of a16z
[Sriram Krishnan] (0:43 - 2:45)
Ladies and gentlemen, we have a very special episode for you here today. One of the most interesting and talked about topics in the technology industry, especially in Silicon Valley over the last several months, is something you might have heard of as SB1047, or to use the expanded name, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. Now, we're going to have a lot of questions in there.
So this is a draft bill that has been proposed, which honestly, you know, a lot of people, including me and many others I work with, think is going to be quite harmful. One of the people who's been leading the charge on talking about this is my partner and friend, Martin Casado. Martin leads a lot of our AI and infrastructure investing.
Honestly, one of the nicest people I know, but he's really worked up about this topic. Now, in the spirit of honesty, let me establish a bit of my Bayesian priors, if you will: I am not a fan of this act.
I think it's quite harmful, for all the reasons I think, you know, Martin and others will get into. But I want to try, as much as possible, to bring in some of the other opinions and takes, because this is a little bit of a heated discussion. So what I'm going to try and do over the next hour or so is ask Martin a bunch of stuff about this act - why he's so worked up about it, what his views are - but also try and bring in discussion from others who maybe agree with the act or have other points of view.
Let's see where this goes. Should be a fun discussion. But with that, Martin, welcome to the show, and thank you.
[Martin Casado]
Thanks for doing this. I'm so glad to be here. It's going to be so much fun.
2:30 - What is SB1047?
[Sriram Krishnan] (2:30 - 2:45)
All right. Maybe one good place to start is for our audience who may not be, you know, have been paying attention or may not be paying attention too deeply. Could you give a quick overview of what SB 1047 actually is?
[Martin Casado] (2:46 - 4:11)
Maybe the most important thing to realize is this has been such a moving target. It evolves almost daily. And it actually just changed again in the last couple of days.
So I think probably the best way to describe it is to provide a general overview of its spirit rather than the letter, because the letter is really in flux. So the spirit of the bill is that if you're working on state-of-the-art AI models - and AI itself has a very vague definition - if you're working on state-of-the-art AI models that are over a certain threshold, right now it's a hundred million dollar training run.
Then two things happen. One of them is you should do some reporting to a state agency - and where this agency sits has moved around - on what you're doing to keep it safe. And the other one is if somehow it results in some catastrophic harm, then there is some liability if you did not follow kind of best practices to keep it safe.
And there's a lot of details around this, such as, well, okay, if it's open source, then it only applies if you're fine-tuning the model over $10 million, and a bunch of details. But for the purposes of the start of this conversation: models over a certain level are audited by a state entity somewhere, and there is some liability if you don't follow best practices around safety.
4:10 - What is the origin of SB1047 bill?
[Sriram Krishnan] (4:12 - 4:32)
Maybe I think one good place to start this is, where did this even come from? So this is a California draft bill, right? And, you know, I would say in some ways, you know, it seemed to have come out of nowhere or maybe caught a lot of people by surprise.
Do you remember when you first heard of this and what your original reaction was?
[Martin Casado] (4:32 - 6:00)
So I actually, this is kind of funny, I've been trying to think about when I did first hear of it. So here's where I think it actually came from. And then maybe I'll answer the personal question.
California wants to be the EU when it comes to tech policy. We saw this with GDPR, right? So like the EU goes and does something, which so often tends to be a terrible idea.
Like GDPR was probably the best thing that could ever happen to the social networking monopolies, because all of a sudden it makes it hard for startups to compete. And so the EU was looking at passing kind of AI safety stuff, and of course California wants to follow suit. Now at the time that they did it, there was this executive order from the Biden administration that they sort of mimicked.
But since then, the federal government has really softened its stance and changed its posture on this. But California has not. So California, and Scott Wiener in particular, is uniquely pushing probably the most comprehensively anti-innovation effort, even though the federal government has softened its stance.
And maybe the only analog is the EU. But I would say even in that case, it's not working. Now, as far as when did I hear about it?
This was kind of kept pretty secret until pretty late in the process. And so I've been involved in a number of AI safety discussions. I went to the Chuck Schumer hearings in DC.
And so somewhere along the way, the fact that we had kind of the most pernicious bill in California jumped up. And then I just realized that was probably the best use of my attention and efforts.
6:14 - Should AI be regulated?
[Sriram Krishnan] (6:00 - 6:58)
Like I said in the beginning, I think this bill is a very bad idea, but I'm going to try and throw at you some of the arguments from people who may believe in this, or maybe believe in some of the risk posed by AI. I guess the first question would be, there are a lot of industries which have regulation, right? You know, the airlines industry, for example.
And, you know, why should or shouldn't AI be regulated? Because from a very 10,000-foot level - you mentioned $100 million runs, you mentioned a flops limit - this seems to be like, hey, only if some of these really, really bad things happen, and only for these really, really maybe large companies, which can maybe afford these training runs, we need some level of regulation.
Now at a very 10,000-foot level, I can see why people might think that sounds reasonable, because I see how other industries react to this.
[Martin Casado] (6:58 - 10:16)
Yeah, no, and I totally agree. In fact, AI systems are software systems, which have a very rich and robust regulatory regime that has been developed over 30 years, under which they fall - much of which has been passed in California. So I think it comes down to the following: if you want to provide new regulation on top of software or a system, I think you want two things to be true.
The first one is you want to make sure that you're in line with the existing regulatory regime, because you have to work within that, and there's a lot of lessons learned. And the second one is you want to understand the marginal shift, the marginal risk. Like, was there a paradigm shift that necessitates new regulations?
Yes or no? So in the case of AI and SB 1047, neither of these is true, right? So for example, it actually changes the doctrine for how we approach AI regulation, and it throws out 20 years of discussion on things like liability and open source and size limits.
And not only that, it actually points to things that we've just shown not to work. Like, we had size limits on compute in the late 90s, and that was such a laughably bad idea that just basically went away. I'm old enough to remember the rise of the internet and the rise of the web and what that did, right, to the policy regime.
In fact, I was at Stanford and helped create and co-teach a course on cybersecurity policy, right? I did this with William Perry, who was the Secretary of Defense for the Clinton administration. Now, in the case of the internet, the big question was, is there a paradigm shift here that necessitates new regulations?
And you can make two very strong arguments that there weren't. The first one, there was this notion of asymmetry. And the notion of asymmetry was the more you rely on this stuff, the more vulnerable you are.
So if you're in a conflict with another, say, nation state or state actor who's not reliant on it, you're more vulnerable. So that's very different than mutually assured destruction. So now we've got this big kind of risk posture difference.
The second one is, we actually had examples of risks. So we actually had examples of new types of attacks. So there's very famously a worm, the Morris worm, which took out actual computers and critical infrastructure.
And so at the time we were having that discussion, we're like, actually, you know, there is a good argument for a paradigm shift. There's a good argument for marginal risk. And then you actually point from first principles and you actually point to very specific instance.
And even in that case, the regulation that we came up with is much more even-handed and pro-innovation than what we're doing with AI. Now, in the case of AI, we have neither of these things. If you go to, you know, say, professors at Berkeley who are working on this, they'll say, it's very important we understand the marginal risk, but we don't understand it yet.
So how do you create a policy for something where you don't even understand the risk? And we're, you know, let's call it four or five years into this. And we have not seen any demonstration of, you know, new types of threats that you couldn't do with existing software systems.
And so, again, I know this is a long-winded answer. I'm just going to summarize it very quickly. A hundred percent, we should regulate, you know, industries.
A hundred percent, we should focus on safety. Software is no exception. We have that for software.
AI is covered under that existing regime. And we've not done the work to demonstrate that this requires new regulation.
10:20 - Is AI a paradigm shift?
[Sriram Krishnan] (10:16 - 11:20)
On the new paradigm shift, there could be an argument made that attention, transformers, everyone's mind being blown by ChatGPT, and, you know, all the funding - some from the place that we work at and others - and the attention going into these large language models and training them kind of constitutes a shift, right? And, you know, it's probably fair to say a lot of people, including, I would say, maybe both of us, are very optimistic about what this technology can bring. Now, I don't know that that qualifies as a paradigm shift.
It definitely seizes the world's imagination and attention. So maybe that's one argument. The other argument I would say is: while I would agree with you that we haven't demonstrated this with AI today, you could make an argument that, well, this bill is only talking about future potential catastrophic scenarios.
So in a way, why do you even care today? Right? Like this is talking about scenarios, which, you know, maybe, you know, most developers won't hit and it's only for the largest of companies.
So why do you care so much about these things?
[Martin Casado] (11:21 - 11:33)
Yeah. So let's talk to the first one. When I say paradigm shift, I mean in the risk profile, not in the technology.
We're venture capitalists. We see paradigm shifts constantly and we don't create new regulations for ourselves. I mean, this is what we do.
[Sriram Krishnan] (11:34 - 11:34)
Yeah.
[Martin Casado] (11:34 - 13:34)
What you need to argue is: does this change what you can do relative to software connected to a network? I mean, that really is the bar, and nobody's shown that. And, really, some of the premier experts - take Dawn Song, right, who is actually very much pro-safety.
You know, one of the people behind this bill is Dan Hendrycks, who is at the Center for AI Safety. His advisor will say it's too early to do policy.
His advisor, who thinks AI safety is very important, says it's too early. We don't know what the marginal risks are.
Understanding the marginal risk is still a research question. So when I talk about that, I'm talking about marginal risks, not a paradigm shift in technology. Right.
So for the second question, there's two answers. The first one is it actually does apply today. So if SB 1047 were enacted today, you could argue that Meta would not release their open source models.
We actually saw this in the EU, where they decided not to release the models because of similar legislation. And the reason for that is because their training runs are over a hundred million dollars and there's so much ambiguity around the liability, right? It is such a poorly written bill.
Nobody knows what it means, so it changes the risk profile for these large companies to release open source. I know we haven't talked about open source yet, but just, you know, for those that don't follow this stuff daily, it's very important to the startup ecosystem to have that out there. And by the way, I would say the second point to this is technology evolves very quickly.
So even though it does matter today and it does have impact today, even if that wasn't the case, at some point we catch up to it quickly. And the greatest example of this is the executive order where they had this silly flop number 10 to the 26. So they said, if the models get to this size, 10 to the 26, then they should be under this set of regulations.
And the industry caught up to that number within a year and a half. So much for the idea that this is some great future thing - even small companies caught up to this number very quickly.
So like our ability to understand how quickly technology evolves is very poor.
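As an aside for readers: training compute is often approximated with the community rule of thumb of roughly 6 × parameters × training tokens (an approximation only, not the executive order's legal definition of covered compute). A minimal sketch, using hypothetical model sizes, shows how close frontier-scale runs already sit to the 10^26 number Martin mentions:

```python
# Back-of-envelope training-compute estimate using the common
# ~6 * parameters * tokens rule of thumb. This is an approximation,
# not the executive order's legal definition of covered compute.
def training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

EO_THRESHOLD = 1e26  # the flop number cited in the executive order

# Hypothetical frontier-scale run: 400B parameters, 15T tokens
run = training_flops(4e11, 1.5e13)
print(f"estimated flops: {run:.1e}")   # estimated flops: 3.6e+25
print(f"over threshold: {run > EO_THRESHOLD}")  # over threshold: False
```

Even at these hypothetical numbers, a single run lands within a factor of three of the threshold, which is consistent with Martin's point that the industry caught up to the number within about a year and a half.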
13:36 - Who is funding this bill? ‘Baptists vs bootleggers’ and Nick Bostrom’s Superintelligence book being the origin point.
[Aarthi Ramamurthy] (13:35 - 13:51)
You had mentioned California trying to one-up the EU. What do you think is sort of the motivation for SB1047, for this entire suite of consistent AI safety regulations coming in? Who's funding this?
What do you think is really going on?
[Martin Casado] (13:52 - 16:02)
So I think this is the classic Baptist and Bootleggers that Marc likes to talk about. I think this is the case here. I mean, there are a number of people who really believe in AI existential risk.
And it was very interesting. If you track it back, a lot of this belief predates things like Transformers, Sriram, and things like ChatGPT. It's actually, a lot of it's rooted in Nick Bostrom, who's a great philosopher.
So Nick Bostrom was a philosopher in Oxford. He wrote this book called Superintelligence, which is actually great. I love the book.
But it talks about a platonic ideal of AI. It doesn't talk about any specific system. It's kind of almost like a thought experiment.
And that very famously, actually, if you read the Walter Isaacson book on Elon Musk, that very famously created this kind of whole movement to protect humanity against AI. And I think it was kind of always viewed as this almost a lark for rich, smart people. And nobody took it really seriously.
Then OpenAI started actually showing very promising results. And then somehow these two things got conflated, which is all of the concerns from Bostrom's existential platonic view of AI got wrapped into actual systems. It turns out there's a pretty large funding apparatus behind this - Eric Schmidt very famously, Dustin Moskovitz very famously, Reid Hoffman to some extent.
And so they started deploying - Jaan Tallinn as well - they started deploying a lot of money. And so again, Eric Schmidt was behind the executive order. And some of that money went to an institute called the California AI Institute.
And listen, their stated goal is to protect humanity from existential risk. You just go to the website, right? So you've got billionaire funding that comes from this kind of doomer line that goes back to Bostrom and very clearly starts prior to transformers or any of that stuff.
And I think they really believe. I think they're all tied up with effective altruism. I think they really believe.
They believe they know better than we do, and they're smarter, so they can protect us and they can come up with rules. And so there's this big funding apparatus. Now, as far as the bootleggers, I have a hard time teasing them apart from the primary person backing this bill, which is Scott Wiener, which is, he doesn't-
[Sriram Krishnan] (16:02 - 16:10)
So, by the way - this is California State Senator Scott Wiener, who maybe a lot of you have heard of; he's been at the center of this.
[Martin Casado] (16:10 - 17:01)
Yeah, that's right. Yeah. He's the one that's basically writing and sponsoring the bill.
One last thing I want to say about CAIS before I go to Scott Wiener, because these are the two kind of prime actors behind this. CAIS is run by a guy named Dan Hendrycks, who's actually a great AI researcher. And he's done a bunch of great work, and I've been a fan of his work.
16:35 - Scott Weiner’s support of the bill
Now, California Senator Scott Wiener is the one that's kind of backing this bill. And by the way, prior to all of this, I was a huge fan. He's done great work on housing.
He's done great work on kind of LGBTQ issues. I've been a huge fan. He clearly has no background on this stuff.
And he has now kind of taken it as his raison d'être in the face of tons of opposition. I think he believes AI regulation needs to exist. I don't think he has any idea why this bill is so bad.
[Sriram Krishnan] (17:01 - 17:21)
On the other side, I would say there's been an alliance of people who've been speaking up against it. I saw this tweet or post the other day which talked about Andreessen Horowitz and these tech bro billionaires. So I want to ask you two questions.
The second part of the question is: who are the people speaking up against it? And the first part is, like, Martin, when did you become a billionaire? I just did.
[Martin Casado] (17:21 - 17:22)
I know I got promoted.
[Sriram Krishnan] (17:22 - 18:37)
I think Bostrom, by the way, kind of recanted some of this, and he actually maybe regrets what he set into motion now. It may be a bit too late. I want to zoom in on some of the details, right?
Because I do think there are some set of people - for example, Zvi Mowshowitz, or some of the people who might work at Anthropic or OpenAI or some other companies - who find this thing reasonable, and it gets lost in the details. I want to focus a little bit on the details.
The first is the $100 million number, right? At a very high level, the bill basically says this bill doesn't apply to you at all unless you spend $100 million training your model or $10 million fine-tuning it, right?
And I guess the first question is, okay, if that is a case, sure, Meta may be impacted, but the vast majority of startups may not be impacted. It's only like a few big guys. So if I'm a startup founder, if I'm everyone else, why do I care?
Maybe Meta just deals with this - they have a bunch of lawyers - and OpenAI and a couple of others do too, and everyone else shouldn't be impacted. So why is that number a reasonable sort of big-guy cutoff?
[Martin Casado] (18:37 - 20:31)
Yeah. So I have one, I would say fairly biased view on this. And then one, I think is fairly neutral.
My biased view is in this game of AI, even for startups, $100 million is not a lot. I personally work with a number of companies that will be doing training runs of that size and they're private. They're not Meta.
And anybody that's paying attention to this industry realizes that this is just not that high of a bar, even for private companies. Okay. So that's a biased view clearly.
And I think people can take issue with that view and say it's not a startup, et cetera. The second view is that it's very hard to actually know how to account for that $100 million. Does that just mean one single run? Is that multiple runs?
Like these things tend to have drops along the way. Does that mean, like, one model that you've been working on for five years? And then say you've raised $130 million, and 80% of that is GPUs - that is pretty modest, that's like a Series C startup.
Like we're not talking about something very large. We're talking about a pretty modest startup. And then if you go away from the startup ecosystem and say somehow private companies are just exempted, even then one of the big problems is that so much of the tech ecosystem has benefited from the releases of open source from large companies like Google and like Meta, right?
19:34 - Open source benefitting software world and risks to open source due to this bill
I mean, you could argue that the cloud wouldn't have happened. You could definitely argue that mobile phones like Android wouldn't have happened if you didn't have these releases, right? So every set of the software stack has benefited from open source and it's been the lifeblood of innovation, private innovation.
And now, if there are liability and reporting constraints and ambiguity, these large companies are less likely to release those. And as a result, researchers aren't able to benefit, academics aren't able to benefit, and certainly startups aren't able to benefit from this. And again, we've actually seen this happen in the EU, where Meta decided not to release because of ambiguity around the laws.
[Sriram Krishnan] (20:31 - 20:48)
Maybe this cuts to something at the heart of it. By the way, I'm happy you addressed the number, because I think the number can be fixed. Like, you know, Scott Wiener can be like, well, let's make the number 200 million or 300 million.
Like, I think the number could be fixed. I think it's really about - we need open source in there - but I guess there is, we've been dancing around-
[Martin Casado] (20:48 - 21:23)
Well, sorry, let me just say one more thing. Maybe you were going to get to this, so I'll just tease it up and we can get to it later, which is: there isn't a shred of evidence, not one, that safety has anything to do with the number of flops that went into a training run. So, for example, I could probably use a million dollars to train something with a bunch of classified information to do something very dangerous - a million bucks.
And I could use a hundred million dollars to train something which is totally benign, right? These are orthogonal axes, and it's just bad policy to try and conflate the two.
[Sriram Krishnan] (21:23 - 22:27)
I guess actually this is a good segue into something I want to get at, because I think we can have this discussion on two levels. One is let's call it the implementation details, the millions of dollars, the flops number. But I guess there is a spirit of the exercise here, right?
Which is - and to sum it up, I think some folks would agree with this and you would probably disagree - if you're releasing a very, very, very large model, like a very sophisticated but large model, right? You ought to be required to make sure you have a safety plan and maybe run some checks and maybe be held to the same kind of liability that a lot of other industries are held to.
22:10 - Are large models more dangerous?
If only for the largest of players, right? Now, we can maybe debate the ambiguities and maybe imagine a world where these ambiguities are resolved or the deficits addressed, but I guess there's a spirit of the exercise here. And I guess the question to you, Martin, would be: do you agree with the core premise or the spirit of what this bill is trying to achieve?
[Martin Casado] (22:27 - 22:39)
Well, can I ask you a question? Because you have this assumption that I just don't buy, which is you say, if you're working on a large model, this should happen. Why do you think a large model is more dangerous?
[Sriram Krishnan] (22:39 - 24:10)
Well, I would say - I was going to try and get to this later - but I would say that anybody who used ChatGPT had their minds blown. That's the simplest level. And we have seen since ChatGPT, with the advancements from Claude and Llama and all these folks, that these models seem to be getting better.
Now we can kind of debate which benchmark and so on, but I would say the models we have today are better than the models we had a year ago. And I think one argument could be: if they are getting more sophisticated, how do we ensure that this sophistication doesn't go off in a direction that creates risk? For example, a direction where you're like, hey, model, jailbreak out of your sandbox environment and go do some naughty things out there on the internet, or go create a new neurotoxin or whatever.
And the idea would be that only the SOTA - the really largest models - are the most sophisticated. And given that there is debate on whether that could be a risk or not, why don't we just play it safe? And only for the largest, most sophisticated models, which take a lot of money and resources, just given the amount of GPUs and compute it takes, let's subject them to a small set of checks.
I guess that would be maybe the strong form version of the argument. Great.
24:10 - Is there a correlation between size of models and risk associated?
[Martin Casado] (24:10 - 26:36)
So let's just inspect that a little bit because I think this is a great one. So let's just say that you put this policy in place. What do you think is more dangerous?
So we actually have a lot of experience with these models. We kind of know how they work. We kind of actually know specifically how the mechanics work.
So let's say you put this in place and someone decides: you know, a hundred million dollar threshold, whatever - I'm not going to go over that, but what I'm going to do is actually build a model to create neurotoxins. That's what I'm going to do.
By the way, it turns out that these large models can't do novel things like that, because they basically do averaging. They do smoothing. I mean, this whole neurotoxin thing has been debunked for this reason.
And it's kind of very interesting- [You just dropped in-] No, no, I'm going to work my way back there, right? Early on, it was so funny, someone actually showed, like, oh, this came up with a novel neurotoxin.
And probably the best person in the world to speak to this is Vijay Pande, who's another partner of ours - like I said, he's a Stanford professor in bio and informatics, right? So he really knows this. And he looked at all the components of this, quote-unquote, novel neurotoxin, and they all had high toxicity.
So it's like, sure, if you take a bunch of toxic things and put them together, it's still toxic, right? And so he goes to me, he says, Martin, that's like building an airplane that doesn't work. It's like, you don't need AI to do that.
Clearly, you can kind of aggregate a bunch of stuff. So there's no shred of evidence, no shred that if you're just throwing in a bunch of data at scale, you get these emergent properties. On the other hand, there's extreme evidence that if you do something much smaller and much more targeted, much more focused, you could create something that creates novel weapons, for sure.
You use data that's sensitive, you do, like, AlphaFold-type stuff. Now you've lost all your generality. It's almost the anti-scale argument.
So this is a great example of where scale actually is saving you, because you're averaging out the answer, and a very focused effort would be far more dangerous - on a specific example that the AI doomers use. So this policy is just wrong-headed.
It's almost like punishing you for doing something that's more safe and allowing you to do something that's less safe. And it's one of their examples. And by the way, there's so many of these.
So I just want to say: there literally is no connection between size and risk. And the one example that you used - which allowed me to dwell on it, thank you very much for bringing it up - kind of shows that. It just turns out that as you get more targeted, you get more dangerous and you also get smaller. It's almost like a negative relationship.
26:45 - How would Martin frame any regulations on AI? What’s a better way?
[Aarthi Ramamurthy] (26:36 - 27:11)
Going back to what Sriram had said, right? Like, there's the implementation of it - is flops the right way to go look at it? Maybe not.
Then there is like, should we even do this? And maybe to set aside like the size versus like size better proportional to risk. If you had to do this, I guess, Martin, like how would you come up with any sort of like framing or regulations here?
Because I think right at the beginning, you'd said similar to say airline industry or anything else. There is validity in some sort of regulation here. So how would you think about framing it?
[Martin Casado] (27:11 - 28:43)
Yeah. So I think I fall in the same camp that Dawn Song does, who's, again, Dan Hendrycks's advisor, or Ion Stoica, or any of these professors, which is: right now is the time to really understand marginal risk. It's very, very important that we understand marginal risk.
And I do think that I would fund a lot of research efforts to understand marginal risk. So that's one. The second one is we do have a very robust regulatory regime for software.
I would make sure that AI adheres to whatever already works like that. And then, if I was to start looking at ways, if I was worried that the research wouldn't catch up in time, which may be the case, I would start focusing on applications, and I would start regulating the applications of these things. For example, deepfakes, I think, have potential for political and social consequences.
So these are the things we can absolutely study and look at and decide we're going to regulate, which, by the way, you don't need a hundred-billion-dollar model to do deepfakes. CSAM is incredibly important, right? That's child sexual abuse material, right?
So if there's any potential use for that, I want to understand that, study it, and make it totally illegal. I actually think data and access and privacy, which is totally orthogonal to scale, is also something worth looking at. Like, listen, is my personal data in these models such that someone can divulge it?
Like, this is something we've got a robust policy framework around, and we may need to extend that too. So there are these very applied adjacencies that we understand, that actually solve the problem, that I think we should a hundred percent look at. This is not what's happening with SB1047.
Well, SB1047 literally came out of nowhere from people that don't understand what they're doing. And we're going to have to live with the consequences if it passes.
28:46 - Nancy Pelosi opposes the bill. Some famous researchers are for the bill. Who comprises the two opposing camps and what is the motivation?
[Sriram Krishnan] (28:43 - 29:08)
Yeah. I want to ask you about some of the dynamics politically. By the time we're recording this, a few days before a speaker, a former speaker, Nancy Pelosi came out and basically really criticized the bill for a bunch of reasons.
So could you tell us, like, maybe what happened there? Because it doesn't seem like even the Democratic Party agrees internally, and you have, like, a former Speaker coming in and saying, like, this is not a good idea.
[Martin Casado] (29:08 - 29:25)
So here's the thing: there's very few voices in favor of this, right? I mean, there's a set of personalities who are kind of well-known for being in favor of anything that's kind of doomer, right? Anything that's regulation.
It's Geoff Hinton, you know, who is a Canadian academic, and he's a Turing Award winner.
[Sriram Krishnan] (29:26 - 30:25)
Yeah, maybe, can I interrupt you there? Because let me make the strong-form case for some of the folks on the other team, so to speak, right? And some of these, I think, we know, I'm friendly with, like, for example, some of the folks in the EA world.
Let me take a crack at it. One is, you said that there are people who are doing this who are maybe not well-informed. The counter-argument to that would be: this has support from Geoffrey Hinton, Yoshua Bengio, I would say Anthropic, and not to pick on them, but there are others working at the state of the art who maybe have issues with the bill, but are definitely sympathetic with the broad notions of AI safety.
So I guess the first question would be: yes, you're talking about people who have debunked it, but there are definitely very, very credible people, in terms of where they've worked in AI or where they are right now, who seem to be broadly sympathetic with the idea of these large models contributing to risk, even if they may have issues with this particular bill. Like, how do you think about that? Do you see multiple camps here?
[Martin Casado] (30:26 - 30:42)
Yeah, for sure. Listen, there's definitely differences of opinions, right? You've got two Canadian academics who have no experience in tech policy at all, who are weighing in, and they're not, by the way, accountable at all to whatever happens because they're in Canada.
They can have opinions without actually having to suffer the consequences.
[Sriram Krishnan] (30:42 - 30:45)
That was a Canadian audience right there. No, but it's true, right?
[Martin Casado] (30:45 - 30:53)
I mean, listen, it's all fun and games until you push regulation on somebody else that you're not accountable for, right?
[Sriram Krishnan] (30:53 - 30:58)
Well, there's a second episode where you insult a Canadian audience. There's another one too. We just launched over here, but sorry.
[Martin Casado] (30:58 - 33:01)
Listen, I grew up in Montana, and that's like the Canada of the United States. I feel like they're our poor brethren.
You know, you've got Max Tegmark. You've got Stuart Russell has been on this for a very long time.
He at least is in California. That's great. You have Larry Lessig who is at Harvard, you know, and by the way, he's not like, you know, his work is like in copyright.
Like I was at Stanford and he's doing the creative common stuff, which was great, but that's not like AI safety or risk. Like that's just not his thing, right? So those are the voices that are for it.
Notice most of them are outside of California, except for Stuart Russell. And they're the same, they're basically the same voices. And like, listen, that's a legit opinion.
You can have it. And so what do you do in these situations? Well, maybe you should stack it up against all of the voices that are accountable and absolutely as credible.
So for example, Yann LeCun, not in California, but he does work for Meta, which is a California company. That's something. He also is a Turing Award winner and he thinks this is total nonsense.
But then we actually go to people within California, like, you know, Fei-Fei Li, who's one of the most notable people in all of AI. She's called the godmother of AI. You've got Andrew Ng who's like, you know, one of the top ML researchers of all time, Stanford professor.
And so, listen, I don't think it's possible to have a bill where nobody supports it. But if you look at people who are accountable, people who have diverse experiences, broad-based, you will see that SB1047 has orders of magnitude more people against it, in all walks. It's investors.
It's founders. It's politicians, which you mentioned with Pelosi. It's academics.
It's students. It's researchers. I mean, the outcry has been enormous.
33:10 - Why does Pelosi oppose the bill?
[Sriram Krishnan] (33:02 - 33:26)
I just got to dig into a couple of things. I want to get back to former Speaker Pelosi, because why does she weigh in? And I guess the second part of it is, why is this a state-level effort versus a federal effort, which is another strain of argument which has come in.
How do you think about, one, why is Pelosi? Because it seems like there's a lot of disagreement even in the Democratic Party over this.
[Martin Casado] (33:27 - 34:02)
Listen, I think Silicon Valley is a wonder of the world. I really do. And I think that it has been for a long time.
Every major super cycle for the last 30 years was rooted in here. And bills like this will harm that. And I think Pelosi, who lives in San Francisco or has a house in San Francisco and is a native, understands that.
And I think Ana Eshoo, who also went against this, understands that. I think Ro Khanna, who's also in San Francisco, understands that. And I think Zoe Lofgren, who is in Monterey, but also represents the area, understands that.
And so I just really believe that these politicians know this is bad for a marvel and wonder of the world.
35:00 - Leopold Aschenbrenner and “Situational Awareness” paper
[Sriram Krishnan] (34:02 - 36:16)
I totally agree. You know, Martin, I agree. And I think, in some way, this is sort of the...
In H.P. Lovecraft novels, right, there is this... I have a point. I have a point.
I'll get to it. Trust me, right? Trust me, this is a metaphor.
You have this existential horror, which is basically a projection of these malevolence, which exists in a different universe, different galaxy, but kind of projecting into our world. Right? And I think of this bill as a projection of a lot of the EA versus optimism debates, right?
This is basically LessWrong, you know, some of the conversations from Eliezer Yudkowsky, or a lot of the AI safety conversations, being projected through, sort of, the instrument of a CA draft bill. So maybe, I guess, we should kind of get to the heart of it.
Martin, you might have seen this document that went around a few weeks ago from Leopold Aschenbrenner, I might butcher his name, this ex-OpenAI economist slash, I think, analyst. Very, very smart guy. I met him.
And he wrote this doc called Situational Awareness, right? By the way, if you folks haven't read it, it's an interesting read. But I want to kind of point to one thing.
In that doc, right at the beginning, he basically draws a line of GPT-1, 2, 3, 4, 4.5, 4o across time frame and capability. And he basically says, hey, do you expect this line to stop? And I guess the question is, if this line of complexity and performance and capability of these models keeps improving, one, do you agree with that?
Do you agree the line is going to keep going? And the second part, if you do agree with that, does it not behoove us to say, hey, we may not understand how these models totally work, and in the off chance that they do something bad, let's try and put some effort towards stopping it?
Maybe it's a two-part question. One, does the line keep going? Second, the line keeps going.
They're getting more complicated, more capable. Maybe let's play it safe.
37:20 - Non-system people viewing systems - computer systems are not parametric but sigmoidal.
[Martin Casado] (36:17 - 39:47)
I've got a precursor answer before the answer. The precursor answer is: don't you find it weird that the people that are most articulate on doomer scenarios decided to go work on this stuff? It's like the dissonance.
I actually don't think I've ever. I think it's kind of unique to AI. I've never understood this.
Like, oh, this stuff is terrible. I'm going to go work for the organization that's bringing it to life. I mean, he literally worked at OpenAI that pushed the frontier.
So anybody's culpable. He's culpable. And it's kind of like, I mean, and by the way, this is throughout the industry.
The people that are the most against it are literally the ones that are investing in the organizations that are bringing it to life. Like, I mean, like very materially. So for one, I just have a hard time with kind of this very weird dissonance.
If you believe that, don't go work on the number one organization that actually caused this. The second one is, listen, I think that this is what happens when non-systems computer scientists kind of view systems. And it's a very economist view.
And it's like macroeconomics, where you believe the world is parametric and the world is not parametric. And having worked in computer systems for 30 years, I remember all of these numbers when it came to like whatever, like simulation, et cetera. It just turns out most systems are sigmoidal.
So what does the word parametric mean? Parametric just means like it follows some well-defined function. Like it just goes up to the right forever.
Or maybe it follows a sigmoid. And a sigmoid just means that it tapers off in asymptotes, right? That's all it means.
So what I say, a lot of like, especially economists, like there's, I think like Leopold's, you know, it's like exactly what an economist would say, you know, which is like, oh, like, well, this line must go forever because we like these great economic models. And that's just not how like computer systems, you know, tend to work. And they tend not to be parametric in general, by the way.
They tend to like, you know, like, you know, they'll asymptote off and then go up on new fixes, et cetera. And so there's, you know, in the history of computer science, let's just go back to this. In the history, you could basically take any technology.
You can take bit rate, you can take CPU cycles, you could take memory, and you could say in the early days that this thing is going to go forever. And it always asymptotes 100% of the time, like 100% of the time. It's never been the case that it's just, you know, the only thing I know of that like continues to grow exponentially is life, because we take energy from the sun and we replicate, right?
And that's like, that's a property of humans. That's not a property of computer systems. Computer systems are very, very different.
The rational view for people that have been working, that are not economists, that have been working in computer systems for decades, like myself and many others, is that no, like this is going to asymptote like it always does. Even some of the doomers, like Gary Marcus, who like, listen, I can't interact with him on Twitter, but like he got this one right, is like, listen, you know, we're going to run out of data and then these things are going to slow down and this is going to happen. And this seems to be happening, like these things are slowing down.
So the rebuttal to that is, is systems tend to slow down. They need more advances. We have no indication that these things are getting more dangerous at all.
We've been with them for five years. We haven't seen any kind of increase in that. They do seem to be slowing down.
And so let's kind of take a look at what we believe the long-term trajectories are taking into account the actual systems that underlie them, because these platonic exercises that just like are graphs on an economist's paper, don't tell us how the future unfolds. They just don't.
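[Editor's note: Martin's parametric-versus-sigmoid distinction can be made concrete with a toy sketch. This is purely illustrative, the growth rates, ceiling, and timescale below are made-up parameters, not a forecast of any real capability curve: an exponential ("parametric") curve and a logistic (sigmoid) curve look similar early on, which is why extrapolating early data is so tempting, but far out they diverge completely.]

```python
import math

# "Parametric" view: the curve follows one well-defined function
# (here an exponential) and goes up and to the right forever.
def exponential(t, a=1.0, r=0.5):
    return a * math.exp(r * t)

# Sigmoid view: early on it looks exponential, but it asymptotes
# at a ceiling `cap` (data, compute, physics -- whatever binds first).
def logistic(t, cap=100.0, r=0.5, t0=10.0):
    return cap / (1.0 + math.exp(-r * (t - t0)))

# Early in the curve the two are in the same ballpark, which is why
# extrapolating the first few points can't distinguish them...
for t in range(4):
    print(t, round(exponential(t), 2), round(logistic(t), 2))

# ...but extrapolated far out, the exponential explodes while the
# sigmoid flattens at its ceiling.
print(round(exponential(30)))   # in the millions
print(round(logistic(30)))      # ~100, the asymptote
```

Nothing here is specific to AI; it just illustrates the structural point that early data points on a capability curve cannot tell "goes up forever" apart from "asymptotes soon."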
[Sriram Krishnan] (39:48 - 41:22)
I think, you know, if I can kind of, like, summarize this, because, you know, we've all been having millions of these arguments in different forums. I think what you just said is the fundamental schism in thought, whereas, you know, people like you and many, many others, and me and a lot of folks, believe, like, look, A, you know, we're going to asymptote, right? Maybe it's not GPT-4.5, but soon, and we can get into the data wall and other things as to why. But second, more importantly, no one has been able to demonstrate harm yet, right? Like, you know, I always tell people this challenge of, hey, you know, show me something that any model can do, Sonnet, 4o, whatever, that a college student can't do with Google. So unless you've seen harm, you know, why should we try and, you know, put this regulation in place, because it could be very harmful.
I would say the other, you know, opposing school of thought would basically, their view would be these things are getting more capable. Maybe they're slowing down and let's play it safe, right? Let's, you know, let's just kind of protect ourselves against the unknown.
Because, you know, as I don't want to say who, but one of the people you mentioned would say this, we might be creating something which could be smarter than us. Maybe we're not. But in the, say, the one or two percent probable chance we are, maybe we should try and take precautions.
Now, by the way, I've had n versions of this debate on both sides. I think, you know, both sides are very, very dug in. But I think that in some ways is the fundamental schism, you know, at the heart of all of this.
41:30 - AI is the ultimate ‘Deus ex machina’ (God in the machine)
[Martin Casado] (41:22 - 42:02)
So here's what's very hard with this discussion. I'm going to make a meta point relative to this, which is AI has become the ultimate Deus Ex Machina. Deus Ex Machina just means the god in the machine, right?
It's this thing that you can, it's this unspecified force that you can kind of move around as the argument fits. And so this discussion is actually very much a moving target. So, for example, there's all of this concern that there's actually emerging, emergent reasoning in LLMs, right?
This has been this kind of long concern. I think that's largely been debunked at this point.
[Sriram Krishnan] (42:03 - 42:11)
Is that, I mean, I'm not sure that will be brought, when we publish this episode and we make this statement, I'm not sure there'll be consensus that it has been debunked.
[Martin Casado] (42:11 - 42:16)
OK, OK, OK, that's fine. There's not strong evidence as to the case, and there's a lot of evidence that this is not the case.
[Sriram Krishnan] (42:16 - 42:44)
Fair enough. For example, I'll give you two quick responses. For example, if you listen to Dwarkesh Patel's podcast, he had John Schulman, he had a few folks from Gemini and Anthropic.
They would point to, for example, the Othello paper and a couple of other papers, which may point to certain reasoning capability, right? So anyway, I would say this is not a settled topic.
[Martin Casado] (42:45 - 42:57)
Sure, for sure, for sure. That's fair enough. There have been claims on certain aspects of LLMs, like in context learning and grokking, which they claim were generality, which were specifically disproven.
[Sriram Krishnan] (42:57 - 42:57)
Yeah.
[Martin Casado] (42:58 - 43:47)
My meta point is that this is a game of whack-a-mole, just like the intelligent design debates, which is, like, you can continue to disprove that these things are dangerous, and the other side will continue to come up with reasons why that's not the case. And so it's just this infinite game, because AI and AGI are so unspecified. So I actually think the biggest problem is that there's a moving target on what these things are and are not capable of.
And I just feel it's exhausting. But what I will say is, for many systems computer scientists, not economists, not physicists, many of these arguments just don't make sense. They're information-theoretically infeasible.
And they also just are not how systems tend to converge over time. And I think this is going to bear out. And then we're all going to look pretty silly having wasted all this time.
[Sriram Krishnan] (43:48 - 45:56)
Yeah, I think there's something here. And I figured out, because of my own commentary, is that, did you watch Game of Thrones, Martin? Do you ever watch Game of Thrones, all the seasons?
I did not. Oh, my goodness. So disappointing.
44:15 - Sriram compares AI regulations to Game of Thrones episode
But OK, so for those of you who watch Game of Thrones, there's this famous scene or famous sequence in one of the later seasons where Cersei—spoiler alert—basically frees the High Sparrow and gets these priests out, hoping that they're going to help her take her power. But ultimately, they wind up seizing all power and they turn on her. And I use this scene a lot.
This is, by the way, the famous kind of shame, shame, shame meme scene that comes from this sequence. But the meta point would be, when you set regulation in motion, it's hard to predict where it often winds up. And often my rebuttal to people who are like, hey, this is lightweight regulation is like, hey, you don't actually know where this winds up.
The folks who thought about GDPR had good intentions. They were like, hey, let's stop big social media companies from abusing privacy. Very good intentions.
But what they actually wound up doing is, anybody who ever goes to Europe is clicking on a bunch of cookie accept/decline buttons nobody ever reads. And in some ways, you actually helped the big social media companies, because it turns out they are the only ones who have the resources and lawyers to go in and comply. So I think there is a lot of naivety around what the downhill, slippery-slope version of this looks like.
I'll give you one example. The bill—sorry, I'm kind of taking over your role here—but the bill actually talks about submitting safety plans. Like the idea that you go submit these safety plans.
Now, on the face of it, they look pretty simple. Why shouldn't everyone have a safety plan? But you quickly realize that this can be very easily weaponized.
And a year or two down the line, somebody points and says, hey, look, all these open source models are not putting in as much work as these other guys. So these guys, by definition, are classed as unsafe, because of the documents they themselves submitted to us. Which is why some of these arguments, which on the surface seem like lightweight regulation, if you actually think about what happens in step two, step three, a year or two down the line, actually lead to some really, really bad consequences.
46:00 - Anthropic’s investment in AI safety
[Martin Casado] (45:57 - 46:28)
Yeah. Just a very specific thing, because you brought up Anthropic previously. So for every voice in support of this bill, of which there are not very many, there are probably ten in opposition in the same sector, and many sectors that are not represented are not for the bill.
The two prime labs in LLMs have taken positions on this. So OpenAI is against it. And Anthropic has this very lukewarm thing where they issued a response, but in their response, they're like, we're not sure.
So it's like, not really an endorsement, but kind of an endorsement.
[Sriram Krishnan] (46:29 - 46:48)
I want to highlight what he just said. So OpenAI wrote a letter, which came out the day before we were recording this, which basically said, we think this bill is a bad idea. It's going to hurt innovation.
It's going to hurt national security. This should be done at a federal level. Anthropic wrote a letter, which can be read in multiple ways, but they definitely had strong issues with it.
And this is like a week after Pelosi's statement.
[Martin Casado] (46:48 - 47:57)
And Anthropic in particular said, listen, we think on balance, it's probably a positive, but we're not sure. It's probably the most noncommittal letter. It probably cancels itself out.
But even in the Anthropic response, it's very interesting, because they had some proposals, not all of which were taken up, but their proposals basically said, you need to do best effort for safety. Now, Anthropic very famously invests a ton in safety. And it's not clear what "best effort" means.
And of course, it'll become up to the courts to determine whether this is actually followed. And so if they're the one that invests the most in safety, clearly they set the standard for like best practices and best effort. And again, this has a very GDPR like consequence for anybody that doesn't have their resources or knowledge to do that would be held liable.
And this is very explicit liability as held by courts. And so, like, again, they were kind of very mealy-mouthed about their support or lack of support, but even their proposal, you could argue, would be bad for innovation. I mean, good for Anthropic, good for Anthropic, but bad for the rest of us.
[Aarthi Ramamurthy] (47:57 - 48:33)
Yeah. The way it went was good. Yeah.
I think that's a great point. I think earlier you had said, this is not just for like really large companies or startups. This is like, you're thinking like Series B or Series C, you're going to get there.
48:15 - If you're an AI founder, what can you do about this bill today?
If you're doing things right, you're going to get there pretty quickly. So I guess question for you is, if you're a founder, what would you do in the context of what's going on right now with this bill? Like, what can you do?
I think most people, most founders I'm sure until this point are like, well, it doesn't really impact me. My startup's too early. It doesn't really matter.
Let somebody else fight this fight. So what do you think people can do today?
[Martin Casado] (48:34 - 49:24)
Yeah. So I'm so glad that you're asking the call to action, but I'm going to give you a very depressing response, which is as far as I know, 100% of founders I've spoken to are against this bill. 100%.
Everybody understands the impact. Everybody understands that open source will be limited and that it's bad for innovation. Many have spoken up, and Scott Wiener is simply not listening to them.
And so you have basically a rogue politician who has spoken directly to many of the experts and founders, like, we're talking people like, of course, Amjad and Fei-Fei and Andrew and Ion, and has ignored them. So I would say we do have to speak up.
You do have to make your voices heard. But many of us are very discouraged because it's fallen on 100% deaf ears in the face of out-of-state support from people not even accountable to the bill. So it's a very frustrating situation.
And I'm sorry to be giving that answer, but that's just, you know, the state of play today.
[Sriram Krishnan] (49:25 - 50:16)
I do think, you know, one thing which has been heartening is to see senior leaders in the government apparatus, like Pelosi herself and Ro Khanna, who was actually a former guest on our show, all speak up against this. So you're seeing politicians connected to Silicon Valley actually try and say, hey, let's hold off, this could really cause harm.
I want to ask you a bit of a personal question because I know Martin, I work with him and Martin is kind of the nicest person around. So calm, you know, calms me down. I call him and I'm kind of like angry with something and he calms me down.
50:00 - Why is this bill a personal issue for Martin?
And you've been really, really worked up about this, right? And when you, with you on Twitter, on other places, this feels so personal to you. Why is it so personal to you?
And also, what have you kind of learned in trying to kind of fight this fight in public?
[Martin Casado] (50:16 - 52:25)
Yeah, so listen, I'm a moderate centrist Democrat. I've been all my life. I'm a moderate dude.
I don't like politics. I don't get involved. I've never spoken up publicly about politics that I can recall.
Not since maybe 25 years ago, when the same thing was playing out. So we've been through this before. We did this with, you know, cryptography.
We did this with the internet. We did this with digital rights and DRM. And I was in college, an undergrad during that time.
And it was the broad consensus that this is very bad for my industry, which I loved, which is computer science, right? Like we were going to choke off software. We're going to choke off innovation.
And a lot of this was driven for the wrong reasons. And so I got very active then 25 years ago. And then I think since then, we laid a great foundation, which has been adhered to.
So the reason I've kind of had to, you know, put the old battle gear back on and, like, come out of retirement and go back in the field is, like, again, we're at this point where we've forgotten all the lessons that we learned, that there are things that are very dangerous to the discipline that I love. I really love computer science. I really do.
I really love software. And what's being pushed is very, very dangerous for the industry. And unfortunately, and this is the discouraging thing, it basically is a political gambit by one person.
I mean, it's not even based on, you know, some understanding of the issues or some first-principles thinking or any of that. And so, yeah, so hopefully this, you know, ends up going away, and senior voices take over, like, you know, Zoe Lofgren has been fantastic and Ro Khanna has been fantastic. I agree with their comments.
Pelosi's been fantastic. Like the majority of the Democrats, I very much agree. And once like things normalize, I can go back to, you know, what I like to focus on, which is investing and building.
But until then, I think I feel like I need to do my duty. And many of us, I think I would call the call to action is for those of you that are not participating, I think this is the time in your career to do it because this matters. It really matters this time.
And then, you know, like once, you know, things get back on track, we can kind of go back and focus on other things.
[Sriram Krishnan] (52:25 - 53:21)
This whole thing has been kind of a crazy, bizarre experience. And one of the lessons for me is how one sort of minor, extremist school of thought can, just because you get one person who's in a position of power, almost suddenly become law, right? And then you almost have to rally the Avengers and get all the, you know, the folks who actually understand what they're saying.
But you're kind of fighting this rearguard battle because it comes out of nowhere and it's a draft bill. And then you're trying to kind of align people. So it tells you like, you know, how quickly these things happen.
And I'll agree with Martin that this is essential. If you are dealing with AI, if you're building on top of open source AI, and trust me, a lot of you are, you know, sometimes you don't know it, right? Like this really matters.
So, you know, go read up, you know, go follow Martin, you know, go follow our partner Anjney, but not just folks at a16z, right? Not just the tech bro billionaires that we represent. But Martin, by the way, I just want to go with...
I am not a billionaire. I want to be fair. I don't know.
I think you might be a billionaire.
[Martin Casado] (53:21 - 53:37)
Can I be very clear about a few things? Nobody has told me to do this. Like Mark and Ben weren't like, Martin, nobody told me to do this.
I'm definitely not a billionaire. I'm definitely not like funding people to do this. This is 100% a personal issue that I feel very passionately about.
Are you not a billionaire?
[Sriram Krishnan] (53:37 - 54:31)
Okay, maybe not for a bit. But all right. Martin, you know, I'll say this.
Martin so personally cares about this. And so do a lot of others. And I think a lot of us are just doing it because we passionately believe in this stuff, right?
And it's not just people at the front. Like you see so many others, you know, so many other voices in there. You mentioned a few like Fei-Fei Li, who is really the godmother of this space in so many different ways.
She had this great op-ed come out a couple of weeks ago. And I think, you know, we need all the voices we can get. But Martin, you know, this is a fantastic conversation.
I think, you know, we should, you know, thank you so much. And, you know, look, we touched on so many things, but, you know, we always like a follow-up conversation on all the safety existential risk stuff, because this bill is one manifestation of this deeper argument. But thank you so much for everything you do.
And, you know, and this is such a blast.
[Martin Casado] (54:31 - 54:33)
Yeah, I really enjoyed it. Thank you so much.
[Sriram Krishnan] (54:33 - 54:35)
Thanks, folks. Thank you.