Responsible, by ClearOPS: China, Treaties and Should You Ban the Use of AI?
Hello, and welcome to Responsible, by ClearOPS, a newsletter about ResponsibleAI and other responsible business practices.
The interesting thing I have learned is that most of you really want to know about AI laws, frameworks and regulations, so I guess I will keep doing that until the open rates start to fall! I will be in D.C. on Thursday night and Friday for the Privacy Law Salon. Let me know if you are going too!
What I have for you this week:
- China’s Newly Released AI Framework
- The Council of Europe and the Dataset Providers Alliance on EthicalAI
- Caroline’s weekly musings
- Chef Maggie comes to the rescue
- How to build an AI Governance program
- AI Bites
Guess who came to the #ResponsibleAI party? China
China and ResponsibleAI
Yesterday, the China Cybersecurity Standardization Technical Committee released version 1.0 of the "Artificial Intelligence Security Governance Framework" (the Framework). I cannot do it as much justice as this LinkedIn post does, so I will leave it at that.
Just kidding.
I want to focus on what the author says in that post about three notable differences in the Framework (there are others):
- "cyberspace risks" (including misinformation, which is cyberspace risk rather than a model risk)
- "cognitive risks" (e.g. information cocoons, cognitive warfare)
- "social order risks" (i.e. challenging traditional social order)
Looking at these three closely, they focus on the risks of AI influencing human behavior, presumably not in a good way. To me, that makes transparency a critical part of this framework. I also noticed that it calls for strong governance, including agile governance and shared governance. This is one framework worth keeping an eye on in case it turns into law in China.
On LinkedIn yesterday, I posted about the Council of Europe’s new treaty on AI. It is great work, but I am not going to claim that it does anything different from the other laws, frameworks and regulations already in existence or proposed.
It still calls for the familiar list of ResponsibleAI measures: human dignity, transparency, accountability, non-discrimination, privacy, reliability and safety. At this point, it is pretty safe to say that these are the common principles of AI Governance.
Interestingly, a new group has formed in the US to promote something tangible in pursuit of the ResponsibleAI mission: an opt-in requirement. Europe may be laughing so loudly that it is hard to hear what this type of standard means for AI, but I think the group is right to identify that many people are offended by the scraping, and then unapproved use, of their work for generative AI products. I wrote about how Anthropic’s default is opt-in while OpenAI’s is opt-out. Since I wrote that piece, I have been a fan of Anthropic, so I am also a fan of the Dataset Providers Alliance and its push for this idea.
It is laudable that so many trade groups, government organizations and countries are focused on ResponsibleAI. It signals great collaboration, and I think it will result in AI regulation being adopted faster than we have seen with other technologies.
At least, that is my hope, because most of this, so far, is just recommendation with very little incentive to comply.