Responsible, by ClearOPS

It's the final countdown!!

Hello, and welcome to Responsible, by ClearOPS, a newsletter about Responsible AI and other responsible business practices.

I did it! I finally pressed “launch” on my AI governance implementation course on Maven! Phew. That was a lot of work. I know a bunch of you out there need help launching your AI governance program, and that is why my course is focused on just that: not teaching you how to do it, but acting as a forcing function for you to build and launch it.

If you decide that you want to join, WAIT! I have a special discount code for you at the end of this email because you are my favorite reader ever.

What I have for you this week:

  • AGI in 2 years. No, not really.

  • Grok 3 will be trained on US case law!

  • Caroline’s weekly musings

  • Chef Maggie Recommends

  • AI Prompt of the Week

  • AI Bites

DALL-E dog chasing its tail

My dad called me today to tell me that Sam Altman was at the DealBook Summit. You know you have talked about AI too much when your dad calls you just because he saw Sam Altman on the news. LOL

But the interview reminded me of a conversation I had a couple of weeks ago.

Now, I often have people express massive fears and anxiety to me over AI and its dystopian potential. But this one was different. I was asked point blank, “So you don’t think there will be AGI in 2 years taking over everything?”

My response was “no,” to which my interrogator then asked, “Well, how long?” I responded that I could not put a number on it, but likely more than seven years. My reasoning was that we don’t know how the human brain actually works, and until we do, we cannot have AGI.

But I am also skeptical it will happen even that fast. After all, we have been working on AI for 60 years, and so far we have only managed to create some well-spoken chatbots.

Okay maybe that was glib, but Sam Altman also seemingly wanted to temper the predictions for AGI, blaming it on the human race not being ready.

Not the tech, no never the tech.

It’s the humans that are to blame. Funny how the humans who build the tech are the same humans who aren’t ready for it. Have I talked about going in circles before? Yes!

I haven’t talked a lot about Grok, Elon Musk’s chatbot, because I haven’t had any interest in it. Plus, I read that X now uses all tweets as training data and you cannot opt out, which conflicts with my ethics and principles.

DALL-E Judgement Day

Turns out that’s not true!

You can disable it in your settings.

Anyway, it was recently announced that Grok 3 will be trained on all publicly available court cases! When I read this, I saw immediate concern and criticism, which I don’t get. I think this is fantastic news.

As an attorney and an AI product builder, I have wondered why we haven’t trained AI on all the case law. Imagine you are a litigator and you want to know how to adjust your tone, argument style, etc. based on the judge. By reviewing all of that judge’s past rulings, you get that edge. Or even yesterday, at the CISO function, an attorney mentioned how recent case law has focused on what is considered “material” for purposes of reporting incidents, as well as what counts as a “reasonable” cybersecurity program. His advice was to go look at the case law. It would be a lot easier to ask a chatbot than to go to the law library.

I think the concern and criticism is about hallucinations, inaccuracy, and the potential for surfacing sealed judgments, which would violate people’s privacy. Those are very valid concerns, and worthy of being addressed sooner rather than later. I’m still going to try Grok 3.
