Our "First" Newsletter on Responsible Technology
We are switching up our theme and our newsletter platform
Hello, and welcome to Responsible, by ClearOPS. We started this newsletter because we believe in responsible AI and responsible business practices. ClearOPS is a tech company using GenAI to answer hard questions about privacy and security.
What’s new this week?
Slack changes its AI terms under the radar
OpenAI disbands its Superalignment team
Caroline's Musings: Should you ask your vendors whether they use GenAI in security questionnaires?
Slack changed its terms, reserving the right to train global AI models for the benefit of all Salesforce customers. I found out about it through one of my privacy-focused Slack channels (the irony of which is not lost on me). A few things about this news rub me the wrong way. First, the change is not located in Slack's Terms of Service; it is located in the Supplemental Terms. And according to those Supplemental Terms, you are supposed to receive a notice when they change. I belong to about 10 Slack channels and I did not receive a notice. Did you?
Second, Slack is not upfront about whether it anonymizes your data for training, so at first glance this is very concerning. Third, to understand what they mean by training, you have to click the link where they say you can opt out. Clicking the link does not actually take you to an opt-out; it is just more explanation. Then they add further friction by making the opt-out available only through email, and only by contacting the workspace admin. Hey Slack admins, you've got some work to do!
So OpenAI is shaking things up again, and not in a good way. The two leaders of its "Superalignment" team left at the same time. Now, I don't know about you, but if you are responsible for the company's responsible AI practices and you coordinate your departure with the other team leader, leaving your direct reports lost without any protection, I'm not sure you are acting responsibly toward them. Clearly, it was done to take a stance and paint the picture that things are not okay over at OpenAI, but c'mon. Either you fix it from within or you pressure it from without; you don't leave in an obviously coordinated way and then say you are "confident OpenAI will build AGI in a safe and responsible way." Actions speak louder than words, and these actions tell me those leaders might not have been a good fit to lead that team, on top of the fact that OpenAI cannot figure out its own AI ethics strategy. Both are bad.
I've seen a lot of security questionnaires lately with GenAI questions in them. With the passage of the EU AI Act (made official today by the EU Council), those questions may be setting your business up for an AI assessment, and you may have to register your application with the EU.
The first step is to disclose your use of AI in your terms of service and privacy policy. Many businesses struggle with this first step because they want to leave their options open when it comes to using customer data as training data, so they don't know what to write in the terms. In addition, not many lawyers know how to draft this language yet. But it is critical; otherwise you could face FTC scrutiny.