Hot Dog, Not Hot Dog? Hot Diggity Dog and the AI Hype
Hello, and welcome to Responsible, by ClearOPS, a newsletter about Responsible AI and other responsible business practices.
It’s me again. Tonight, at the dinner table, we played a game that asks you life questions. The question was, “What are you more expert at than anyone in this room right now?” I said, “The intersection of AI technology and the law.” Welcome to my weekly newsletter.
What I have for you this week:
- The other 20%
- AI in hiring, what’s right and what’s wrong?
- Caroline’s weekly musings
- Chef Maggie
- How to build an AI Governance program
- AI Bites
DALL-E image generated with prompt “woman in black turtleneck holding capsule”
Last year, OpenAI promised to spend 20% of its computing power on safety. First of all, who is measuring their computing power? Seems like a basic question that no one asked. Anyway.
Recently, OpenAI and its CEO, Sam Altman, have come under scrutiny for failing to uphold this promise. If you recall, they disbanded the SuperAlignment team earlier this year, and most of that team is now either at Anthropic or at Safe Superintelligence. The critics claim that OpenAI is more interested in releasing products than safety.
You’ve got to hand it to them: they keep themselves in the news with all this controversy. But I want to address what is at the core of this criticism. It is the notion that in order for AI to be “good,” we have to figure out when it is “bad.”
This past weekend, I was listening to the AI in Business podcast featuring Juan Lavista Ferres, a Chief Data Scientist at Microsoft who recently released a book called AI for Good. In the podcast, he tells the story of AI being used to detect skin cancer. The training data included a lot of pictures of lesions that were skin cancer and pictures that were not. Most of the pictures with skin cancer also included a ruler showing the size of the lesion, because measuring a lesion is one of the ways doctors identify cancer.
The resulting model was 99.99% accurate. But then they realized that it was detecting skin cancer whenever there was a ruler in the picture. So, actually, it was really, really good at detecting rulers. Not so much with skin cancer.
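If you want to see that failure mode in miniature, here is a toy sketch of a model latching onto a spurious shortcut. This is my own illustration with made-up numbers and scikit-learn, not the actual study’s data or code: the “ruler” is just a binary feature that happens to correlate with the label in training.

```python
# Toy illustration of "shortcut learning": the classifier keys on a spurious
# feature (a ruler in the photo) instead of the signal we care about.
# Hypothetical data; not the study described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Ground truth: does this lesion turn out to be cancer? (made-up labels)
cancer = rng.integers(0, 2, n)

# Weak real signal: a noisy lesion score that only partly tracks the label.
lesion_score = 0.5 * cancer + rng.normal(0, 1, n)

# Spurious shortcut: in the training photos, a ruler appears almost
# exclusively when the lesion is cancerous.
ruler_train = np.where(rng.random(n) < 0.98, cancer, 1 - cancer)

X_train = np.column_stack([lesion_score, ruler_train])
model = LogisticRegression().fit(X_train, cancer)
print("training accuracy:", round(model.score(X_train, cancer), 3))

# In the real world, rulers show up independently of the diagnosis,
# so the shortcut evaporates and accuracy falls toward the weak signal.
ruler_real_world = rng.integers(0, 2, n)
X_real = np.column_stack([lesion_score, ruler_real_world])
print("real-world accuracy:", round(model.score(X_real, cancer), 3))
```

The model looks nearly perfect in training because the ruler flag is nearly perfect there; once rulers appear independently of the diagnosis, it has to fall back on the weak lesion signal and most of that accuracy evaporates.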
Anyone see the show Silicon Valley and the “hot dog, not hot dog” episode?
It is a very different thing to recognize when AI is doing something bad, like telling someone they have cancer when they don’t (Elizabeth Holmes and Theranos pop into my head), and to recognize when it is doing good, but in a bad way.
In case you did not know, New York City passed an AI law a few years ago restricting how employers can use AI in hiring (they must audit the software for bias and notify applicants that it is being used).
But what about the other way around? Are candidates who use AI during the interview required to disclose it? Does it seem like a form of lying if they are assisted by AI in their answers?
I definitely know people who dislike this practice and would not hire someone if they found out that the candidate used AI to help. Yes, the irony is real: employers have been using automated decision-making in screening and interviews for many years, but now candidates are turning the tables.
Check out this website and then come back and answer this one question.
Would you fire someone you hired if you found out they used AI to help them answer your questions during the hiring process?