Responsible

by ClearOPS: My Contributions to Responsible AI, Playing Around with o1, and Disrupting Regulation by Rescission

Hello, and welcome to Responsible, by ClearOPS, a newsletter about Responsible AI and other responsible business practices.

When I read my own writing, I think of myself as having a wry sense of humor. I have no idea if that comes out or not because I cannot help but read it that way since I intended it that way. Maybe I should ask an LLM to tell me if it comes through or not…

What I have for you this week:

  • My comments to the regulators on their efforts in Generative AI

  • o1, everyone is talking about the latest “chain of thought” model

  • Caroline’s weekly musings

  • Chef Maggie’s Recipe of the Week!

  • How to build an AI Governance program

  • AI Bites

A few weeks ago, I mentioned that the ICO was seeking contributions on its fifth call for input on AI vendors. For you, my dear reader, I went through their process of providing feedback.

The theme of my comments is that "controller" and "processor" - those two terms loved by EU privacy enthusiasts and GDPR regulators alike - do not fit in this context and are too rigid. Do I like the binary nature of these definitions? Sure. Do they work for AI, which was not contemplated when the GDPR was drafted?

No.

I will say, I was impressed with the level of knowledge the ICO has about Generative AI. They even mention retrieval-augmented generation (RAG). But they do not know all the nuances, as I tried to point out.

For example, if you go to Simtheory, the application sponsored by one of the AI podcasts I regularly listen to, "This Day in AI Podcast," you can swap between models at will. Does that make Simtheory the processor, because I am directing which model I want to use? Or are they the controller, because they control the means by which the processing happens (i.e., which models they give users access to)? Maybe they are both, or maybe they are neither.

I guess the US approach to privacy, which is sector-specific, was the way to go after all vs. the GDPR's one-size-fits-all approach. (I am kidding! Sort of.)

Of course I cannot release this newsletter without talking about OpenAI's latest model, o1. If you haven't heard, this is OpenAI's newest model, which defaults to "chain of thought" reasoning. "Chain of thought" is a prompting method that, until now, users applied themselves - and still apply with the other models.

If you haven't taken a course on prompting yet, LinkedIn Learning has some free ones, in case that helps. In short, chain-of-thought prompting means providing detailed, step-by-step instructions in your prompt to make sure the chatbot "reasons" through the problem before answering.
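To make that concrete, here is a minimal sketch of the difference between a direct prompt and a chain-of-thought prompt. The helper names and the step list are illustrative, not from any particular course or API:

```python
# A minimal sketch of "chain of thought" prompting. The function names
# and the numbered steps are illustrative assumptions, not an official API.

def build_plain_prompt(question: str) -> str:
    """A direct prompt: just ask the question."""
    return question

def build_cot_prompt(question: str) -> str:
    """A chain-of-thought prompt: append explicit step-by-step
    instructions so the model walks through its reasoning first."""
    return (
        f"{question}\n\n"
        "Think through this step by step:\n"
        "1. Identify the relevant facts.\n"
        "2. Identify the rules or obligations that apply.\n"
        "3. Weigh each party's position against those rules.\n"
        "4. Only then state your conclusion."
    )

question = (
    "A company suffers a data breach. The CEO is panicking; in-house "
    "counsel advises following the incident response plan. Who is right?"
)

print(build_cot_prompt(question))
```

With older models you add those numbered steps yourself; the pitch for o1 is that the model performs that style of reasoning on its own, without the extra scaffolding in the prompt.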

The point of this new model is to eliminate your need to do chain-of-thought prompting, or at least, that is how I am looking at it. I tested it for this newsletter using a fact pattern where a company experiences a data breach and the CEO is freaking out, but in-house counsel is advising the CEO to follow the incident response plan to the "T." The CEO is too freaked out, like a deer in the headlights, and cannot sit still long enough to listen to counsel, let alone read a policy. So I asked this new model, "who is right?"

Interestingly, o1 definitively said the lawyer was right. 4o said that they were both sort of right, with the lawyer edging out on top. Hmmmm.

But my suspicion is that o1 is just a piece of the roadmap to true agentic AI, where AI agents talk to each other using human level chain of thought.
