Responsible, by ClearOPS

We cannot control the future, but we can live in the present

Hello, and welcome to Responsible, by ClearOPS, a newsletter about Responsible AI and other responsible business practices.

The holidays are upon us. One of my local radio stations is playing non-stop Christmas music. I personally like to focus on Thanksgiving before I move on to Christmas. Thanksgiving is my Super Bowl for meal prep. Speaking of food, what is your favorite Thanksgiving dish? Maybe I can have Chef Maggie make some suggestions?

What I have for you this week:

  • The results are in! What a Trump administration means for the future of AI regulations

  • UK AI Management Essentials

  • Caroline’s weekly musings

  • Chef Maggie Recommends

  • AI Tool of the Week - back next week

  • AI Bites

*If you like my newsletter, please share it!

ChatGPT-generated image

I want to follow up on my promise to cover the effect a Trump administration will have on AI regulation. An overwhelming majority of you wanted me to dive in, and the topic is multi-faceted.

I will give you my conclusion up front: my opinion is different from almost everyone else's in my field. Oh well.

As I reported last week, the new administration will form a government efficiency commission led by Elon Musk. The point of this commission is to cut Federal spending. Many (including, apparently, Trump himself) have already opined that Trump will repeal most of Biden's executive orders, including EO 14110 (on AI). The theory is that if the EO is repealed and cutting budgets is prioritized, then AI regulation becomes a nice-to-have, not a need-to-have.

But hold on, that doesn't actually address anything that either Elon Musk or Trump has said about their own views on regulating AI. Musk has been an avid and vocal supporter of AI regulation since 2017. In fact, he publicly supported California's SB 1047.

And because Trump was already President once, we know something about his position from EO 13859, whose primary purpose was to say that the US needs to keep innovating in AI, but which also called for a risk-based approach to AI regulation. In fact, Congress passed the National Artificial Intelligence Initiative Act of 2020 during the first Trump term, which in turn led to the creation of the NIST AI RMF, one of the most commonly used frameworks for managing AI risk.

So far, my rabbit hole is uncovering an administration that favors regulation of AI, specifically focused on safety, removal of bias, and a risk-based approach. But wait: it also focuses on innovation, with the correlative position that regulation can curb innovation. So there is some confusion, which is why I am now going to turn to the environment.

Trump does not really believe that humans are to blame for climate change; economic growth and energy independence are more important to him than protecting the environment. Unfortunately, AI consumes a lot of energy and is having an impact on the environment. Given Trump's position on the environment, it is fairly reasonable to conclude that regulating AI through environmental bills is not going to happen. I would go one step further and argue that the US will support building more data centers so that we can support more AI innovation, potentially increasing US carbon emissions.

Contrary to my colleagues, I do think the Trump administration will favor AI regulation, and I don't think businesses can breathe a sigh of relief that they will escape scrutiny (especially if they are a Federal contractor or subcontractor). I doubt a law will be passed in Congress, but I do think Trump's EO to replace Biden's will include a call for regulation, and I think Elon Musk will be supportive. However, I don't think any AI regulation will include the environment as a risk to consider. I also think the states will be very active in passing AI regulations, with support from the Federal government.

With sector self-regulation also being encouraged, I predict that those of you in AI regulation jobs will be very, very busy.

That was a long one, so I will keep this one brief. The UK has published its AI Management Essentials, which includes a self-assessment. It is a draft at this time, but there are some interesting points about it:

  1. It combines the NIST AI RMF, ISO 42001, and the EU AI Act, making it the first assessment that crosswalks all of these regulations and frameworks. I wish it included some of the work Singapore has been doing, but this is still pretty good.

  2. It is in a comment period until the end of January, so please provide them with your feedback! Especially if you are a startup.

  3. I plan to take their assessment and put it into Excel with commentary. I will then upload it to ClearOPS, so if you are a customer, you will have access to this AI assessment!

I’ll keep saying it: self-assessments, second-party assessments, third-party assessments - they aren’t going anywhere. If anything, we are just getting more and more of them.
