Responsible, by ClearOPS

Regulation Does Not Stifle Innovation

Hello, and welcome to Responsible, by ClearOPS, a newsletter about Responsible AI and other responsible business practices.

Thank you for being a subscriber. I ask only one thing of you: please share this newsletter. The more unified our voice calling for Responsible AI, the more likely the industry grows up altruistic rather than dystopian. That’s why I’m in it. What about you?

What I have for you this week:

  • Yes, you can train on my data. Wait. No, no you can’t.

  • More AI laws waiting for Governor Newsom

  • Caroline’s weekly musings

  • Chef Maggie Recommends…

  • How to build an AI Governance program

  • AI Bites - TED talks

A few years ago, I used a service called Chargebee that I decided was not right for ClearOPS, so I deleted the account. To this day, I get a daily report email from them. I cannot stop the emails because, technically, I no longer have an account, which means I have no access to my email preferences or to customer support. It is an endless loop of frustration.

If regular software has challenges deleting or correcting data, how will an AI company handle it?

As I mentioned in my post last week, much of the objection to AI regulation is that it will stifle innovation. GDPR and various US state laws give consumers the “right to correct” or the “right to delete” their data. Those laws were debated and passed before the mainstream understanding of AI: that AI takes data, learns from it, and then uses those learnings to generate an output. We now know that deleting training data means you have to delete it, re-train, or somehow get the model to “forget” the data, all of which is expensive. The same goes for correction. Does the mere fact that complying with a law makes a certain technology harder to use mean the law should never have existed in the first place?
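To make that cost concrete, here is a minimal sketch of the blunt instrument, assuming a toy scikit-learn model and made-up data (nothing here comes from a real deletion request): there is no “forget this row” button, so honoring one request means retraining on everything that remains.

```python
# A toy illustration of why the "right to delete" is expensive for AI:
# a trained model's weights blend every training example, so there is
# no model.forget(row) -- the blunt instrument is a full retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # 1,000 people's data (made up)
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # some label derived from it

original_model = LogisticRegression().fit(X, y)   # trained on everyone

# One person invokes their right to delete (say, row 42).
keep = np.ones(len(X), dtype=bool)
keep[42] = False

# The only sure way to honor the request: retrain from scratch without them.
retrained_model = LogisticRegression().fit(X[keep], y[keep])
```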

But if that training data is wrong, why shouldn’t we have the same rights?

Personally, I don’t think regulation is as stifling as everyone thinks. I mean, ChatGPT still exploded and those laws were already in place.

Last week, I focused on California’s proposed legislation on AI, SB 1047. Well, it is on Governor Newsom’s desk, so if you want to know more about it, please read last week’s newsletter.

But it was not the only AI bill that passed before the CA legislature went on break.

AB 2013 also passed and is on the Governor’s desk. This bill speaks to my discussion above on the right to delete and correct by requiring AI companies to disclose the training data used in their models to the developers building on those models. It is essentially a transparency requirement flowing from the foundation models down to the application layer.

I feel like I just used a ton of AI buzzwords.

I personally read this one as an attack on OpenAI’s decision to treat its training data as a trade secret.
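For the curious, here is a hypothetical sketch of what a machine-readable disclosure might look like. Every field name below is my own invention for illustration; the bill does not prescribe a format:

```python
# A hypothetical sketch of a training-data disclosure a foundation model
# provider might hand to downstream developers. Every field name below is
# my own invention for illustration; AB 2013 does not prescribe a format.
import json

training_data_disclosure = {
    "model": "example-model-v1",               # hypothetical model name
    "datasets": [
        {
            "name": "web-crawl-snapshot",      # illustrative source
            "date_range": "2019-2023",
            "contains_personal_information": True,
            "contains_copyrighted_material": "unknown",
        },
    ],
    "synthetic_data_used": False,
    "collection_ended": "2023-12-31",
}

print(json.dumps(training_data_disclosure, indent=2))
```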

AB 2885 is a fairly uninteresting bill that updates the definition of AI, which obviously has a broad, sweeping effect across all the other legislation. It defines AI to mean an “engineered or machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs that can influence physical or virtual environments.”

And finally, SB 942, which interestingly codifies the terms Generative AI and GenAI, requires watermarks and an AI detection tool. I am struggling to figure out whether ClearOPS is subject to SB 942. Once I do, I will know how broad the reach of this one is, and I will get back to you.
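To make “watermarks and a detection tool” less abstract, here is a toy sketch of the two ideas side by side. Real provenance watermarking operates at the media level (pixels, audio, tokens); this version just signs a manifest with a provider key so the moving parts are visible. Everything here, the key, the field names, the functions, is my own illustration, not anything SB 942 prescribes:

```python
# A toy sketch of SB 942's two ideas: a latent disclosure embedded in
# AI-generated content, plus a detection tool that can verify it.
import hashlib
import hmac
import json

PROVIDER_KEY = b"hypothetical-provider-key"   # my assumption, not the bill's

def watermark(content: str, provider: str) -> dict:
    """Attach a signed provenance manifest to a piece of generated content."""
    manifest = {"provider": provider, "ai_generated": True}
    payload = (content + json.dumps(manifest, sort_keys=True)).encode()
    manifest["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content, "manifest": manifest}

def detect(item: dict) -> bool:
    """The 'AI detection tool': check that the manifest really signs this content."""
    manifest = dict(item["manifest"])
    signature = manifest.pop("signature")
    payload = (item["content"] + json.dumps(manifest, sort_keys=True)).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

stamped = watermark("Example AI-generated text.", "example-provider")
print(detect(stamped))   # True: the content carries a verifiable disclosure
```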

One thing is clear to me: once California gets active on a topic, you can be sure other states will follow…quickly.

Caroline’s musings:

I am thinking about the biggest risks we face today using AI technologies.
