Responsible, by ClearOPS
Something Evil is Lurking in the Dark, You Try to Scream but...
Hello, and welcome to Responsible, by ClearOPS, a newsletter about ResponsibleAI and other responsible business practices.
Someone told me recently that AI training is “really expensive.” Really? That makes me want to launch my AI training course on Maven even more, but I am suffering from a little bit of imposter syndrome. Ugh.
What I have for you this week:
- Mwah ha ha haaa!
- Hinton vs LeCun
- Caroline’s weekly musings
- Chef Maggie Recommends
- AI Tool of the Week
- AI Bites
In London last week, I was scrolling through Twitter/X during a relatively boring moment only to find a tweet from Troy Hunt, founder of HaveIBeenPwned, about a new breach at a company named Muah.AI.
Caution: You will see some unpleasant things if you look up what happened in that breach. I will link to Troy’s X post in a bit.
Muah.AI is an AI girlfriend website. I wrote about the popularity of AI girlfriends a few posts ago. I won’t focus on the content that was exposed, but I will tell you that it is very lewd and potentially illegal. Okay, in my opinion, definitely illegal. I am not exactly sure how the breach happened, but we do know that plain text email addresses and the prompts associated with those email addresses were exposed.
And that is the focus of my newsletter today.
This is the first known breach of a Generative AI company. What makes this breach unique is that the email addresses were easily traceable back to most of the users, and their prompts/inputs were attached to those email addresses. For that to happen, there had to be a database configured to keep each email address together with that user’s prompts. If you use a GenAI tool and it has history turned on, it is safe to assume that keeping those two data points together is necessary, and apparently a vulnerability (depending on the use case).
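For illustration only, here is a minimal sketch of one way a GenAI service could reduce this particular exposure: keying stored prompt history to a salted, one-way pseudonym rather than to the plaintext email address. This is not Muah.AI’s actual schema, and the names here (`pseudonymize`, `save_prompt`, `prompt_store`) are hypothetical; it simply shows the design idea that a leak of the prompt table alone would not directly reveal who wrote which prompt.

```python
import hashlib
import os

# Hypothetical sketch, not any real product's schema: keep prompt
# history under a salted one-way pseudonym instead of a plaintext
# email address. If this table leaks by itself, the attacker gets
# prompts but not the email addresses that wrote them.

SALT = os.urandom(16)  # in practice, stored separately from the prompt table


def pseudonymize(email: str, salt: bytes = SALT) -> str:
    """Derive a stable, non-reversible ID from an email address."""
    return hashlib.sha256(salt + email.lower().encode("utf-8")).hexdigest()


# Prompt history keyed by pseudonym, not by email address.
prompt_store: dict[str, list[str]] = {}


def save_prompt(email: str, prompt: str) -> None:
    prompt_store.setdefault(pseudonymize(email), []).append(prompt)


save_prompt("alice@example.com", "hello")
# The prompt table itself contains no plaintext email addresses.
assert "alice@example.com" not in prompt_store
```

To be clear, this is pseudonymization, not anonymization: the service still needs some mapping from account to pseudonym to show users their own history, so the protection depends entirely on keeping that mapping (and the salt) out of the same breach.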
There are so many angles on this story. Is it about ResponsibleAI, ethical use, cybersecurity or privacy? How about all of the above. My thoughts are scattered.
But one thing is clear: hackers are coming after Generative AI companies, especially ones holding highly personal data, like an AI girlfriend or boyfriend service.
P.S. Malwarebytes also reported on this breach, and I found it an amusing read. Here is the Twitter/X post I read.
I have been using this space to write about AI regulation recently, so I hope I don’t disappoint you by writing about something else.
Duel
Sorry, not sorry.
Did you hear that Geoffrey Hinton won the Nobel Prize in Physics for his work in AI? Did you also hear that he warns about the potential for bad consequences of AI? In his opinion, AI will become smarter than humans.
That is where his colleague, Yann LeCun, comes in. He doesn’t think AI will ever become that intelligent.
It is becoming a fascinating debate between these two originators of modern AI innovation. In case you did not know, LeCun did his postdoc in Hinton’s lab at the University of Toronto. Another relatively well-known AI researcher, Ilya Sutskever, the former OpenAI chief scientist, also studied under Hinton.
It seems there is quite the debate in the AI academic community about whether we are risking a dystopian future, and I think it is worth paying attention to, or at least knowing about, because it is a debate about the intelligence of AI. On one side, there is fear that AI will become more intelligent than humans; on the other, the conviction that it never will.
I think I have made this point before, but who determines whether an AI system or machine is more intelligent than a human? Can humans objectively make that call? Or are we really just talking about control?