Responsible
A newsletter on responsible AI and other business practices
Hello, and welcome back to Responsible, by ClearOPS. We started this newsletter to promote responsible AI and other responsible business practices.
Here’s what we have for you today:
- OpenAI just keeps giving us things to think about.
- How do you know if your new PDF AI Assistant is secure and private?
- Caroline Musings: is it really a shakedown?
Regarding the situation between OpenAI and Scarlett Johansson, I say, “You go girl.”
I am getting out my 🍿 for this one, because it may finally shed some light on how the legal right of publicity applies to AI.
But there is something else about the AI OpenAI just released that is getting less attention, yet raises a huge ethical issue. It has to do with the fact that the AI flirts with people, as hilariously portrayed by the Daily Show. It makes me want to repeat my favorite line about business these days: “Just because you can, doesn’t mean you should.” Flirting gives us a dopamine hit, which risks carrying the addiction we already have to our phones over to interacting with voice AI. So far, most companies (like Apple) have avoided flirtatious AI, but I suppose OpenAI proves, yet again, that ethics is not one of its core values.
Got Adobe Acrobat? I’m a lawyer, so of course I do. Lately, it has been constantly pushing its new AI feature, called AI Assistant. Being naturally suspicious, I wanted to know: are they using my data for training? What should I know before I use this? I suspect they are using retrieval-augmented generation (RAG), but I wanted to know for sure.
As I continued clicking, I noticed that they had updated these terms only last week. Hmm, okay, not great, but okay. I really don’t think anything nefarious is afoot here, but I am not 100% crazy about their onboarding process, either. When I clicked the button in Adobe to enable GenAI, I was not taken to their terms page or asked to consent. The feature was just enabled automatically, which does not follow active-consent best practices. So to answer my questions, I had to go digging, and the fact that I had to dig at all made me even more suspicious.
When I finally reviewed their online terms, they explicitly stated that your PDFs are not used to train an LLM. Neither are your inputs. That’s good, but aren’t they sharing the PDFs with a third party? Ah yes, I finally found it: Azure OpenAI. Okay, I’ve also read Microsoft’s terms and ethics disclosures.
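If you are wondering what RAG actually means for your documents, here is a minimal sketch of how a RAG-style PDF assistant typically works. The helpers `embed` and `call_llm` are hypothetical stand-ins, not Adobe’s or Azure OpenAI’s actual APIs; the point is simply that your PDF text gets retrieved and sent to the hosted model as part of the prompt at inference time, which is exactly why the terms about sharing and training matter.

```python
# Minimal, hypothetical sketch of retrieval-augmented generation (RAG) for a
# PDF assistant. `embed` and `call_llm` are placeholders, not a real
# embedding model or a real Azure OpenAI call.

from typing import List


def embed(text: str) -> List[float]:
    # Placeholder embedding; real systems use a learned embedding model.
    return [float(ord(c)) for c in text[:16]]


def similarity(a: List[float], b: List[float]) -> float:
    # Crude similarity stand-in; real systems use cosine similarity over
    # high-dimensional vectors.
    return -sum(abs(x - y) for x, y in zip(a, b))


def call_llm(prompt: str) -> str:
    # Placeholder for the hosted model call. In a RAG setup, only the prompt
    # (question + retrieved chunks) is sent at inference time; whether the
    # provider retains or trains on it is what the terms need to spell out.
    return f"[model answer based on a prompt of {len(prompt)} characters]"


def answer_question(pdf_chunks: List[str], question: str, top_k: int = 2) -> str:
    # 1. Retrieve: rank the document chunks by similarity to the question.
    q_vec = embed(question)
    ranked = sorted(pdf_chunks, key=lambda c: similarity(embed(c), q_vec), reverse=True)
    context = "\n".join(ranked[:top_k])
    # 2. Generate: send the question plus the retrieved context to the model.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)


if __name__ == "__main__":
    chunks = [
        "Section 1: definitions...",
        "Section 2: termination clause...",
        "Section 3: fees...",
    ]
    print(answer_question(chunks, "When can this contract be terminated?"))
```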
I give them a “C-” on the legal process here but an “A-” for not training on my data and having an ethics program. I think I will give it a try.
If I told you I was going to look you up online and, based on what I saw, come up with a score for how good or bad a person you are, and then offer to improve your score if you pay me, would you consider that a shakedown?
I rated Adobe above. But are my scores meaningful to you? Not without context and verifiable information. I think it is fine to adopt someone else’s scoring if it is unbiased, but in the scenario above, where I get paid to increase the score I gave you, the scoring is not unbiased: I am incentivized to score you low on purpose so that I can make money off of it. IMHO, that is a shakedown. For this reason, ClearOPS offers custom scoring and will never offer that type of black-box scoring.