Responsible by ClearOPS: The Critical Decision about Regulation
Hello, and welcome to Responsible, by ClearOPS, a newsletter about responsible AI and other responsible business practices.
The last week of August is usually pretty slow. I hope you are on vacation somewhere relaxing. For this last week of the month, I am focusing on legislation again.
What I have for you this week:
California is not dreaming
ELVIS has re-entered the building
Caroline’s weekly musings
Chef Maggie
AI Governance
AI Bites
California has been working on a new law that would regulate AI models, particularly extremely large ones. The bill is called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, and it is receiving very little support from the industry.
I took a look through the Act and here are my highlights from reading it:
The covered AI models are extremely large, frontier models that are not even available yet.
The Act requires a verifiable risk assessment process of the covered AI models and puts that responsibility on the developer.
It also provides for audits of the risk assessment process to ensure that it has been done.
Records of the AI risk assessment process must be kept for 7 years.
Covered AI model developers must have a process for whistleblowers and cannot retaliate.
Covered AI model developers must keep records of their customers, including IP addresses, verify their customers' identities, and perform due diligence on them.
Covered AI model developers must have a “kill switch” for the covered AI models.
There are significant penalties for failure to comply.
According to the media, OpenAI opposes the bill and Anthropic supports it, with amendments. I could not find either company's name on the lists of those opposing or supporting the bill, but I surmise these reports are true. What was most interesting to me was watching some of the hearings.
The opposition reiterates that the Act would stifle innovation in AI, but they fail to explain exactly how. I think the assumption is that AI developers will simply move to countries that aren't regulating AI, so the U.S. would put itself at a competitive disadvantage.
Supporters of the Act argue that its requirements are reasonable and that they do not want to stifle innovation. I was surprised that they did not twist the knife on the problem the Act is meant to address: namely, that unchecked AI is likely to bring about the end of human autonomy on this planet.
California is not the only state getting into the AI regulation space, which is why the opposition's argument that this should be a federal, not a state, regulation makes no sense to me: it's too late!
Tennessee, my own home state, passed the ELVIS Act: the Ensuring Likeness, Voice, and Image Security Act.
Can you picture it? The legislators are all sitting in a room and someone says, "We can use the word 'voice,' which has a 'v'; maybe we can make an acronym of Elvis's name!" Then everyone gets super excited and gets to work on the rest of the acronym. It protects Elvis and markets him all at the same time. As someone who grew up in Memphis and watched thousands of fans arrive every year on his birthday and the anniversary of his death, I know how important Elvis is to the Memphis economy.
Interestingly, this legislation is really just an amendment to an older law from the 1980s that protected likeness and image. The amendment focused on adding that critical word, "voice," so that it could specifically target uses of generative AI. It, too, was opposed by industry players like OpenAI and Google, but also (according to Wikipedia) by the movie industry, which I found interesting.
It passed with unanimous bipartisan approval. Wow.