Responsible, by ClearOPS
A Deep Dive into Microsoft's AI Governance Framework; AI Employees and the Drama!
Hello, and welcome to Responsible, by ClearOPS, a newsletter about Responsible AI and other responsible business practices.
I gave another AI training for lawyers and compliance professionals this week. Loved it! We built a recipe app together, and I enjoyed that they wanted a vegetarian chef, so that when I asked it to make chicken, someone said, “No, wait! No meat.” The app had already responded, “let’s put the chicken aside.” That response is proof to me of how far we have come on accuracy since ChatGPT was first released; a year ago, it still would have cooked that chicken.
New for you this week, I am slightly changing the flow (so let me know if you dislike it!):
Is Microsoft’s Responsible AI program worth mimicking?
Should you be worried about OpenAI with all the executives leaving?
Caroline’s weekly musings
Chef Maggie Recommends
New AI Tool of the Week
AI Bites
About a year ago, I embarked on my first AI governance task for a client. She is actually a cybersecurity professional at a large organization, but we started talking about AI, and she mentioned that she was spearheading the company’s internal AI governance program.
So, I asked her, “Have you seen the guidance out from Microsoft?”
“No,” she said, “but I am interested.”
That week, we began building, starting with the Microsoft AI Impact Assessment and tailoring it to her company. It’s getting a bit stale, but it is located here. That is where my involvement stopped, but I did note that if her company was selling AI as a product, she might also want to consider a Code of Conduct, like Microsoft did. You should note that that Code of Conduct is specific to one piece of AI technology: text to speech.
Should Codes of Conduct be written product by product, or be comprehensive?
Microsoft also discloses what it calls a Transparency Note, which is similar to the model cards used elsewhere in the industry, such as on Hugging Face. If you continue through Microsoft’s Responsible AI program disclosure and framework, they discuss how they decided to deprecate the use of an AI emotions model, noting that the science behind it was too immature. They also note that their framework is under constant revision and improvement.
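(For the technically inclined: a model card is really just a structured document attached to a model, and you can pull one down programmatically. Here is a minimal sketch in Python using the huggingface_hub library; "bert-base-uncased" is just an example model ID, not an endorsement.)

# Minimal sketch: fetch a model card from the Hugging Face Hub.
# Requires: pip install huggingface_hub
from huggingface_hub import ModelCard

card = ModelCard.load("bert-base-uncased")  # example model ID

# Structured metadata from the card's YAML header (license, tags, etc.)
print(card.data.to_dict())

# Free-text body: intended uses, limitations, bias discussion, etc.
print(card.text[:500])

The takeaway is that a model card mixes machine-readable metadata with human-readable disclosure, which is roughly what Microsoft’s Transparency Notes do in prose form.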
They also consistently release blog posts about Responsible AI and AI Governance, with a recent focus on their approach to AI Governance in different parts of the world.
As I was writing this, I found myself beginning to pick apart the frequency of their updates and the flow of the information provided, but I stopped myself. If Microsoft has picked transparency as the single most critical principle for its AI Governance, then I have to admit, they are succeeding. And I think that is the biggest point I want to make. When you pick your AI principle or principles, the key to any governance program is sticking to them in every way, as Microsoft does both in its products and in its disclosures about its program.
I am doing a much deeper dive into their AI Governance framework, comparing it to a few others. If you would like a copy of this white paper (or whatever you want to call it), please email me.
Everyone got really excited about the departure of OpenAI’s CTO. I saw the picture below everywhere.
From LinkedIn and Generative AI
To say that the response has been Chicken Little, “the sky is falling!”, would be an understatement. Clearly, she left because they are changing from a non-profit to a for-profit entity. Clearly, the culture at OpenAI is massively shifting.
Do I care about that? No, not really. If I worked there, I would probably be leaving too, but not because I wanted the company to fail.
Clearly, some employees had a moral dilemma with this switch, and so they left. It is incredibly painful, as an executive, to see people you valued and helped hire walk out the door in a pretty unhappy frame of mind. If you have ever heard the term cancer culture, then you know that unhappiness can spread pretty quickly, and it is hard to endure.
Is Sam Altman making the right decision? Is it the only decision? Time will tell, but I am concerned that the switch from a research company to a company that prioritizes revenue will have negative repercussions, whether in the performance of the product itself or in our collective ability to avoid a dystopian future. Clearly, I am not the only one.
P.S. I also have to note that the principles of AI Governance that OpenAI originally adopted have likely changed, which raises the question: can you legitimately change your AI Governance principles? Or is that, in and of itself, against Responsible AI?