- Responsible by ClearOPS
The Legality of AI: Anything You Output Can and Will Be Used Against You
Now that the Big Beautiful Bill has been signed without the moratorium on state AI laws, let the games begin! Seriously, though, it is a big deal that the moratorium was cut from the bill, because now we are going to see a lot of state regulation. We might as well dive in.
Today’s newsletter will challenge your ethics and your morality.
You have been warned.
What I have for you this week:
Snippets About What is Going On in Responsible AI
Caroline’s weekly thoughts
Chef Maggie Recommends
Useful Links to Stuff
Back in law school, I quickly learned that when CA introduced groundbreaking laws, NY was never far behind, and vice versa. Somehow, though, the two states managed to pass similarly purposed laws with opposite effects. I honestly don’t know how to explain it better than that.

And it is happening again. Last fall, I wrote about CA SB 1047, which was ultimately vetoed by the Governor. When I read the NY RAISE Act, I noticed a similarity, so I decided to work out where the two laws were the same and where they were distinct. If you don’t know, the NY RAISE Act is the most recent AI legislation making its way to the NY Governor’s desk. Once it arrives, she has 10 days to decide whether to veto it. The bill targets frontier model makers and introduces a series of safety requirements and protocols, including annual safety audits. As a bonus, it looks like Governor Newsom is now reviving SB 1047. So I guess CA came up with the idea, NY ran with it, and now CA can’t stand to let NY be the only one with this particular law. Interesting.
Anthropic won a major court battle. The court held on summary judgment that Anthropic could use copyrighted works as training data and that such use is protected under the “fair use” doctrine. If you are a regular reader, you may remember that a few newsletters ago I tried to get AI to finish a book series for me. What struck me most was how much of the book ChatGPT already knew. Questions like “doesn’t that violate copyright?” and “how does it know the answers?” popped into my head. But the most interesting part of that experiment was that Amazon would not let me download a copy of the book I had purchased so that I could run ChatGPT against it.
Which brings me to the nuance of this court case. If a user of the free version of ChatGPT inputs a book, can ChatGPT train on that book? I think this is similar to the questions being answered in the NY Times case against OpenAI. It will be interesting to see how this issue is resolved (and whether users of ChatGPT will threaten lawsuits of their own if the NY Times sues them). In the meantime, don’t assume the ruling gives you that right. So far, the Anthropic decision is narrow: it allows model companies to buy copyrighted works and use them as training data.
Last month, I participated in a trial of a few AI legaltech tools. My takeaway? It is a race between the best UI and the best functionality; otherwise, they are all mostly the same. I sort of hate to say it, but I am left wondering: where is the creativity? Everyone thinks that business lawyers just review documents all day, and it’s true. I have joked a lot that I am a professional reader of boring documents, but I strongly believe there is a missed opportunity here. The moat is no longer the tech but the processes you enable around the tech. If I am redlining a document, I am using Word; I don’t want to upload it to your application. And I am likely downloading the redline and emailing it to the sales team or to the lawyer on the other side. How about helping me organize all those redlines or streamline that communication process?
