Responsible, by ClearOPS
To censor, or not to censor, that is the question
I am shaking things up and trying out a new template for this newsletter.
Responsible is a newsletter about Responsible AI and other responsible business practices. It stems from my time as General Counsel at an AI company, where I pushed for responsibility for what we were doing to be one of our core values. Since then, I have maintained an interest in the challenges of running a business ethically.
What I have for you this week:
I hate giving Mark Zuckerberg what he wants
AI Weaponry, the real ethical dilemma
Caroline’s weekly AI Governance tips (NEW)
Chef Maggie Recommends
AI Tool of the Week
AI Bites
DALL-E generated image
As much as I hate giving an individual like Mark Zuckerberg what he wants, his recent actions deserve a post for your awareness and consideration.
Those actions include calling current society "emasculated," ending the DE&I program at Meta, and switching content moderation from third-party fact checking to a community notes process like Twitter's.
People are outraged. One post on LinkedIn has close to 600 reactions.
I believe the former Biden supporter is simply flipping his position to align with the incoming Trump administration because he has presidential ambitions, i.e., he wants to be President one day.
And the best way to do that is to get media attention. He cannot let his Silicon Valley peers, Musk and Sacks, get all the attention. He needs to grab some of it.
So the real question here is: aren't we just helping him meet this goal by giving these statements and announcements our attention? Personally, I prefer politicians who stick to their positions, not ones who flip-flop to pursue their own ambitions.
Or at least, that would be the ideal.
Next thing you know, he will be using the wildfires in Los Angeles to his advantage somehow.
P.S. If you want to donate to those in need, who have had devastating losses due to the fires, then here is a charity link.
I think the most deeply felt ethical debate in AI is AI in weapons, and that debate just got a wake-up call.
DALL-E generated image
As a bit of backstory, I worked for a startup called Clarifai that provides image recognition technology to the government. The internal agony when the company became a government subcontractor was intense. It was so culturally difficult that it is the main reason I ended up leaving the company, for the sake of my mental health.
So, I feel strongly that AI weaponry is a genuinely contested ethical dilemma. If I had to guess, it is probably as contentious as nuclear weapons, a debate we all relived in the Oppenheimer movie.
Did Oppenheimer really throw caution to the wind and risk our existence on an experiment?
I am bringing this topic up because a civilian just showed how he could attach a rifle to an AI-controlled robotic arm and have it fire based on his voice commands.
If you are like me, your thoughts may turn to James Bond movies, or even to a recent conversation I had with friends about their kid becoming a personal bodyguard to a rich kid. Do we want AI to protect us from getting shot?
The horrible reality is that there are people out there who are aggressive, and not in the Mark Zuckerberg “we need more masculine aggression in the workplace” kind of way.
But to bring this back to the internal strife at Clarifai, the debate in people's minds and hearts focuses on one key question: if we (the USA) build it first, are we using AI weaponry offensively or defensively? Or, actually, maybe it is four more key questions:
1. Can the use of AI weaponry ever be justified?
2. Are we building a future war zone of AI vs. AI?
3. Will the AI turn against people in the end?
4. Is the question of whether or not to build it moot, because if we don't build it, other countries surely will, leaving us defenseless?
I recognize that this topic can raise strong feelings, and I welcome that because I am a huge fan of debate and of the autonomy to change my mind if I want to.
So I will end this section with one last thought. If a random guy can build an AI weapon with ChatGPT, then I think all my questions are pointless. Perhaps the real point is this: what are the US AI governance principles that will guide its strategy, and what are the corresponding guardrails to enforce them?