Responsible
by ClearOPS
Hello, and welcome to Responsible, by ClearOPS, a newsletter about Responsible AI and other responsible business practices.
I have done it! I have updated the ClearOPS website, and it took me three days! Check out my musings for how I used AI, and I would love to hear what you think of the new website!
What I have for you this week:
- LinkedIn did what?
- NIST AI RMF: so many frameworks, which one should you adopt?
- Caroline’s weekly musings
- Chef Maggie Recommends
- How to build an AI Governance program
- AI Bites
I subscribed to a new AI governance newsletter yesterday and it brought up the open letter from certain US-based AI companies to the EU about how regulation of AI creates uncertainty and hinders innovation. It made me think of something.
But first, I also have to recap the LinkedIn debacle from this past week. In case you didn’t experience it, LinkedIn updated its user permissions without notifying users, turning on by default a setting that gives LinkedIn the right to train AI models on your data, including personal data.
Such bad form.
But not illegal in the U.S.
But that is not the point I want to make. The point is that I have been working in and around AI for nearly a decade, and here is what is interesting to me: in business, if you want to use someone else’s data, you have to foresee a real and tangible benefit.
As a business, I can negotiate a contract for a vendor to use data. But we don’t treat individual customers the same way, because there are too many of them and it would be too hard. Perhaps it is time that changed.
I think this is one problem AI can solve. When an individual accepts terms of service from a provider, AI could make it easy for that provider to keep track of the agreement with each user, and it could even empower customization. We have talked about this in the legal world for a long time; essentially, it is a “choose your own adventure” form of contract. You provide the AI agent with the choices a user can make at certain places in the contract, store that contract, and notify the business and the user when terms are triggered.
That would be cool.
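To make that a little more concrete, here is a minimal sketch of what that agent’s bookkeeping could look like. To be clear, every name in it (ContractTemplate, ChoicePoint, the “ai-training” clause) is hypothetical and purely for illustration: the provider defines the choice points, the agent records each user’s selections, and both sides get notified when a chosen term is triggered.

```python
# A minimal sketch of the "choose your own adventure" contract idea.
# All names here are hypothetical, invented for illustration.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class ChoicePoint:
    """A spot in the contract where the user picks among provider-approved options."""
    clause_id: str
    options: list[str]


@dataclass
class ContractTemplate:
    """The provider's terms of service, with its negotiable choice points."""
    name: str
    choice_points: list[ChoicePoint]


@dataclass
class UserAgreement:
    """One user's accepted terms: the template plus that user's selections."""
    user_id: str
    template: ContractTemplate
    selections: dict[str, str] = field(default_factory=dict)

    def choose(self, clause_id: str, option: str) -> None:
        # Only provider-approved options are valid at each choice point.
        valid = {cp.clause_id: cp.options for cp in self.template.choice_points}
        if option not in valid.get(clause_id, []):
            raise ValueError(f"{option!r} is not an allowed choice for {clause_id}")
        self.selections[clause_id] = option

    def notify_if_triggered(self, event: str) -> str | None:
        """Tell both parties when an event touches a term the user opted into."""
        if event in self.selections.values():
            return f"Notify {self.user_id} and provider: term {event!r} triggered"
        return None


# Example: a user opts out of AI training at the relevant choice point.
template = ContractTemplate(
    name="Terms of Service v2",
    choice_points=[ChoicePoint("ai-training", ["opt-in", "opt-out"])],
)
agreement = UserAgreement(user_id="user-123", template=template)
agreement.choose("ai-training", "opt-out")
print(agreement.notify_if_triggered("opt-out"))
```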
It seems my posts are more popular when I actually break down laws or regulations, so today I did a little extra research on the NIST AI Risk Management Framework for your benefit.
First of all, if you are a business, should you adopt this framework? Or wait until the US passes laws that require Responsible AI?
I am not really sure. On reading it, it seems to me that one of its benefits is that it is flexible and adaptable, so it prepares a business for future regulation. I don’t know about you, but I don’t know many businesses that try to get ahead of regulation.
Except in AI, of course, because, well, dystopia is the alternative. And I am serious! But there is a little more here than a gentle prodding to do the right thing. Even President Biden, in his Executive Order on AI, directs the Secretary of Commerce, through NIST, to develop a companion resource to the AI RMF (NIST AI 100-1) targeted at generative AI, and emphasizes the privacy and security risks of doing nothing.
The momentum to regulate AI is strong, so I don’t think it would be a waste of time. So how do you get started? You need to map your AI systems and data usage. You need to measure the risks and the impact of those systems. With the mapping and measurement in place, you can manage the risks and, ideally, maximize the benefits, all while striving for that last core function: a systematic governance program.
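If you are wondering what that mapping and measuring might look like in practice, here is a minimal, hypothetical sketch of an AI system inventory and risk register, loosely organized around the RMF’s four core functions (Govern, Map, Measure, Manage). The fields and the 1–5 scoring are my own invention, not anything NIST prescribes.

```python
# A hypothetical AI system inventory and risk register, loosely organized
# around the NIST AI RMF's four core functions. The fields and scoring
# scale are illustrative only -- NIST does not prescribe this structure.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class AISystem:
    # MAP: what the system is for and what data it touches.
    name: str
    purpose: str
    data_used: list[str]
    # MEASURE: a simple 1-5 severity score per identified risk.
    risks: dict[str, int] = field(default_factory=dict)
    # MANAGE: the mitigation decided for each risk.
    mitigations: dict[str, str] = field(default_factory=dict)


def highest_risks(inventory: list[AISystem], threshold: int = 4) -> list[tuple[str, str, int]]:
    """GOVERN: surface anything scored at or above the threshold for review."""
    return [
        (system.name, risk, score)
        for system in inventory
        for risk, score in system.risks.items()
        if score >= threshold
    ]


# Example: one mapped system with measured risks and one managed mitigation.
chatbot = AISystem(
    name="support-chatbot",
    purpose="answer customer support questions",
    data_used=["support tickets", "product docs"],
    risks={"hallucinated answers": 4, "PII leakage in logs": 5},
    mitigations={"PII leakage in logs": "redact PII before logging"},
)

for name, risk, score in highest_risks([chatbot]):
    print(f"{name}: {risk} (score {score}) needs governance review")
```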
How do you actually do all that stuff I just talked about? Well, a questionnaire, of course. 😜
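And since I brought up the questionnaire, here is a hypothetical taste of what a few starter questions might look like, keyed to the same four functions. These questions are my own, not an official NIST instrument.

```python
# A hypothetical starter questionnaire keyed to the NIST AI RMF functions.
# The questions are illustrative, not an official NIST instrument.
QUESTIONNAIRE = {
    "Govern": [
        "Who is accountable for AI risk decisions in your organization?",
        "Is there a written policy covering acceptable AI use?",
    ],
    "Map": [
        "Which AI systems are in use, and what data do they touch?",
        "Which of those systems process personal data?",
    ],
    "Measure": [
        "How do you score the likelihood and impact of each AI risk?",
        "Are models tested for accuracy, bias, and security before release?",
    ],
    "Manage": [
        "Who decides whether a risk is mitigated, accepted, or escalated?",
        "Is there an incident response plan for AI failures?",
    ],
}

for function, questions in QUESTIONNAIRE.items():
    print(f"\n{function}")
    for question in questions:
        print(f"  - {question}")
```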