Responsible, by ClearOPS

AI Governance makes the top 25 technology trends to watch in 2025; Privacy and Security to lead the way


Hello, and welcome to Responsible, by ClearOPS, a newsletter about ResponsibleAI and other responsible business practices.

Happy Halloween! I should have saved that Michael Jackson Thriller newsletter for this week! Oh well. If you enjoy pet videos as much as my family does, this Halloween-inspired one on X has us all walking around with big smiles. Enjoy the candy!

I’m speaking at AI RISK on November 19th in London. If you are there, let’s meet up! Also, please enjoy a complimentary ticket to the conference. Unique Booking Link: https://buytickets.at/grcworldforums/1299026/r/clearopsvip. Discount Code: SPONSGUEST

What I have for you this week:

  • AI Principles, or maybe it is just principles without the AI

  • Who is leading AI governance anyway?

  • Caroline’s weekly musings

  • Chef Maggie Recommends

  • AI Tool of the Week

  • AI Bites

Okay, okay, I preach a lot about AI principles and ethics. If you run a business using AI, an AI governance program is critical, and choosing your principles is one of the first steps.

But how?

Let’s evaluate a few different kinds of AI businesses.

A few months ago, I had the privilege of working with a disease diagnostic company looking to use AI to find cures for certain ailments. They were evaluating a couple of AI companies to help build the custom models. Now, don’t you think those companies should prioritize accuracy? It seems like a pretty core principle when people’s very lives are at stake.

Now take a company like Character.ai, which is facing a lawsuit because a teenager, Sewell, began using its AI bot product and later died by suicide. In response to Sewell’s death and his mother’s lawsuit, Character.ai is now taking certain actions, such as warning users who have been on the platform for too long and changing the models served to users under 18 to remove certain suggestive content. Clearly, they should have had a principle of safety and, more specifically, content moderation. I wish they had figured that out before Sewell took the action he did.

Finally, if I think about a legal tech company, or even my own company, ClearOPS, where businesses use the product to make decisions, then transparency becomes a key principle: for example, making sure the output cites the sources it drew on to make its analysis or answer a question.

It’s not easy, but it is important.

On LinkedIn, I follow a bunch of people who post links to AI Governance jobs. These “influencers” have added thousands of followers to their LinkedIn network as well as to their newsletters.

AI Governance jobs are big.

Or are they? Last week, at the Privacy and Security Forum, I polled people who came by my booth: are you handling AI governance, and what was your role before? Now, you could argue that this is a biased group, but privacy did come out on top as the function leading AI governance programs at their companies.

But is that because it is getting the budget? That is my number one question. Are companies willing to spend more on AI governance than they traditionally have, at least in the U.S., on privacy?

Most of the people I spoke to said no. They opined that it comes out of the same budget, effectively making it smaller. Do more with the same amount.

Part of me wants to take their word for it. But considering the number of jobs popping up in AI governance, I am having a very hard time believing them.
