Responsible, by ClearOPS
And around and around we go, where we stop, no one knows
Hello, and welcome to Responsible, by ClearOPS, a newsletter about Responsible AI and other responsible business practices.
Yesterday, I was in London moderating a panel on the global AI regulatory landscape, with a great group of panelists. The event was run by GRC World Forums, whose October RISK event in London I also attended. Having now been to two of their events, I've noticed that London is a tight circle of mostly the same people. I love this, because it means the more events I go to, the more people I will know.
If you like my style, please share this newsletter. Sharing is caring!
What I have for you this week:
Same song, different dance
AI governance trends
Caroline’s weekly musings
Chef Maggie Recommends
AI Tool of the Week
AI Bites
This week, I moderated a panel on AI regulation at the AI RISK conference in London. I love London.
Attending conferences gives me invaluable perspective on the industry, and this one was no different. From a question about "what is Generative AI, anyway?" to discussions of how politics will shape AI governance, there was a lot to digest, and I want to share my takeaways.
It's not mind-blowing, but AI governance is following the exact same path as cybersecurity and privacy. By that I mean the processes are exactly the same: first, perform a gap analysis, then a risk assessment, implement controls, prepare for an audit, get audited, and enter the annual cycle. Interested in your "AI trust score"? Well, those companies are popping up too! I just found this one, but, fair warning, I am not a fan (more on this in my musings).
I guess it is obvious that the same processes would be used for AI governance considering that a majority of the people switching to AI governance roles were in privacy or information security. But I did not anticipate that the exact same process would be used.
More than 117 AI frameworks have been published this year. To put that in perspective, there were only 17, total, when we started 2024. So, I figure we are still in a bit of a mess.
The key advice given at this conference was that most organizations already have a process for risk. AI does not require a new process; it should simply be incorporated into your existing one.
It’s the same dance, just a different song.
As an attorney, I speak to a lot of companies about their legal stuff, such as terms of use and privacy policies. Recently, I have noticed a trend that seems to be expanding.