Responsible

The Good Kind of Voice AI, Be Wary of Deepfakes, and More

Hello, and welcome to Responsible, by ClearOPS, a newsletter about Responsible AI and other responsible business practices.

If you don’t know me, I am an attorney who recently earned the AI Governance certification offered by the IAPP, and I am diving deeply into the world of AI governance. I help clients build their privacy, security, and now AI programs, and I have been working in the field of AI since 2015.

What I have for you this week:

  • Voice GenAI for the blind

  • More deepfakes; AI bots getting better at deceiving our social media feeds

  • Caroline’s weekly musings

  • My Custom GPT

  • How to build an AI Governance program

  • AI Bites

My father’s girlfriend is mostly blind, a condition that has been slowly worsening for many, many years. She is an amazing person with a love for museums and culture, so you can imagine how hard losing her sight is for her.

When I built my website, she explained to me how certain color and background combinations make it particularly difficult for a blind person to read. It makes sense, so why don’t any of the website builders out there give you guidance and resources on this point?
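To make that concrete: accessibility standards already define what “enough contrast” means. The WCAG 2.1 guidelines specify a contrast ratio between text and background colors, and checking a color pair against it takes only a few lines of code. Below is a minimal illustrative sketch of that check in Python; it is not tied to any particular website builder.

```python
def relative_luminance(hex_color: str) -> float:
    """WCAG 2.1 relative luminance of an sRGB color like '#1a2b3c'."""
    channels = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    # Linearize each sRGB channel, then apply the standard luminance weights.
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4 for c in channels]
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio per WCAG 2.1; level AA requires at least 4.5:1 for normal text."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)


# Light gray text on white fails; near-black on white passes comfortably.
print(round(contrast_ratio("#999999", "#ffffff"), 2))  # ~2.85 -> fails AA
print(round(contrast_ratio("#1a1a1a", "#ffffff"), 2))  # ~17.4 -> passes AA
```

A builder could run exactly this kind of check on every theme it offers and flag failing combinations before a site ever goes live.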

Today, I was listening to one of my many GenAI podcasts, and they mentioned that ElevenLabs had published a free text-to-speech app. The reason they made the app freely available is to help blind people. While I think there is so much more potential here to make a difference for people with impaired vision, I give this some 👏🏻.

Clearly, deepfakes are a well-known malicious use of AI. I fear that we are becoming desensitized to them and starting to accept them as part of our new AI world.

Let’s not do that.

The FBI recently released a joint advisory on an AI tool used to sow disinformation on social media sites like LinkedIn. The software, known as Meliorator, creates three archetypes of fake accounts: the first has a complete profile, including a profile photo and biographical data; the second has very little information on its profile; and the third seems real because it generates a lot of activity and attracts a lot of followers. Apparently, the social media site currently being hit hardest by this is Twitter.

I have to wonder whether requiring users to pay for accounts would alleviate a lot of this. To mitigate these deepfakes, the authorities have advised social media companies to verify their users. I personally receive a push every day from LinkedIn to verify my identity via Clear (and yet they take no liability for Clear, so 🤷🏼‍♀️). I have not verified with Clear because I don’t want yet another third party holding very sensitive data on me. How do I know that Clear is secure without going through all of their documentation and calling them to confirm they handle my data appropriately?

I digress.

Deepfakes are clearly an irresponsible use of AI, and a growing problem at that. What I would love to see is companies, such as social media sites, alerting users about this type of malicious activity, because I did not learn about Meliorator from them. Users knowing that something called Meliorator exists, and knowing to be on guard, is important to minimizing its negative effect. The way I found out about it was from an email by Daniel Miessler, not from X or LinkedIn.
