- Responsible by ClearOPS
The AI Wasteland
I was not happy with my newsletter last week, so let’s see if I can do better this week.
This week I had an anxiety attack. It was unpleasant, for sure, but the interesting thing was that I was tempted to input my anxiety into an AI chatbot to see if it could calm me down. Apparently, this sort of emotional use of AI is not uncommon.
I guess AI has become that friend who tells you what you want to hear, and I think that is very, very dangerous. I wonder what would have happened if the AI we have today had been available during the Covid pandemic.
What I have for you this week:
AI Snippets
Caroline’s weekly AI Governance tips
Chef Maggie Recommends
For the lawyers - NEW!
AI Bites
There is a bill making its way to the President’s desk called the Take It Down Act, and it targets deepfakes. Revenge porn is a horrific experience that I have read about many times with great empathy, so seeing this bill close to passing is encouraging. Or so I thought. The Verge argues that there is a very political reason for this law and, to put it simply, that reason is to increase civilian surveillance. I don’t buy that argument, but I am glad I read it to understand the perspective, and I hope you feel the same.
I read, with a great deal of skepticism, an article about how DeepSeek is scraping all of its user data and passing it along to the Chinese government, based on a House investigation. Let me tell you why: first, it does not address the difference between the app and the open-source version of DeepSeek’s technology (the open-source version is self-hosted, so the data cannot be sent to China); second, Silicon Valley AI companies have their hooks in Washington now, and this screams self-interest; and third, I did not even know we had a Select Committee on the CCP, which really reminds me of McCarthyism and quite possibly a slippery slope.

I read an interesting article about students who used AI in their research to determine whether fingerprints truly are unique. The result? They are not. Their research began over a year ago, and so far they have met a lot of resistance to the findings. You might be thinking that fingerprints have not been used as evidence in criminal cases for a long time, because no one knows how long a fingerprint has been there, and you would be right. But the challenges this poses to the accuracy of long-held beliefs make it a really fascinating story. Plus, as one student involved in the research points out, “Many people think that AI cannot really make new discoveries – that it just regurgitates knowledge, [but this proves that it is more].”

Cursor faced some trouble last week over hallucinations. Oh, the irony of an AI company falling victim to AI. It just shows that you cannot be an expert on all types of AI. Essentially, what happened was a glitch where users could not stay logged in to the application on multiple devices. When a user asked the chatbot what was going on, it responded that a security policy change required each device login to be unique. This is not a security best practice, and the AI just made it up. A lot of people believed it, though, and cancelled their subscriptions. It makes me laugh! Maybe accuracy is not one of Cursor’s AI principles.