Responsible, by ClearOPS

The Looming Infidelity by AI Crisis


Well, when you miss writing your regular newsletter two weeks in a row, you pay the price, and that is certainly what I am doing! However, I think you are really going to enjoy this newsletter, because I am talking about something that no one is talking about. And I mean no one.

Today’s newsletter will challenge your ethics and your morality.

You have been warned.

What I have for you this week:

  • Snippets About What is Going On in Responsible AI

  • Caroline’s weekly thoughts

  • Chef Maggie Recommends

  • Useful Links to Stuff


Question: should the use of GenAI in education be called cheating? On the one hand, AI might reduce our capacity for critical thinking, which we sort of need in order to keep feeding the AI new training data. On the other hand, banning the use of AI fails to teach a critical skill, namely how to use it, which is clearly needed to enter the workforce these days. Apparently AI detectors produce so many false positives that most teachers just rely on good old instinct and common sense to detect the use of AI. But is that where their focus ought to be? The University of Texas at Arlington has taken an interesting approach.

P.S. I have teachers in my family whom I don’t want to piss off, so I am trying to tread carefully here, but there is a definite and growing tension: nearly all students are using AI, and we cannot flunk all of them for it.

One of my favorite people, Luiza Jarovsky, opened my eyes this week to a few of the most ethically dubious companies using AI, or maybe I should say exploiting ethics to market their AI products. The first one is Cluely, which has raised $5.3M, which certainly says something about those investors. Basically, it transcribes a virtual meeting in real time and suggests your responses. They openly say that they help you cheat, a blatant marketing angle that owns their lack of ethics in using AI. The other one I will report on below.

So, in case you thought AI feeding you lines while on a call was bad, how about someone scraping the internet to find information about you and then scoring you? Gigi is the name of the company, and also of a movie from the 1950s in which a young woman is groomed into becoming a courtesan to wealthy men in 1900s Paris. The name has meaning.

Ugh! Just because you can, doesn’t mean you should and, in this case, they really shouldn’t have. And supposedly this is all in the spirit of dating.

Every now and then, I get into reading as a pastime rather than as part of my job. Usually, I like to quip that I am a professional reader by trade, so the last thing I want to do is read during downtime. But the reality is, I only like to read really good books. So a couple of weeks ago, I started a three-book series. This past weekend, I started the third book, and 20% into it, I realized that there was no way the series was about to end. So I did a search and, sure enough, the author intends to release two more books in the series within the next few years.

Egad and argh!

So I thought, well, maybe I can ask ChatGPT to finish the book series for me using the author’s basic storyline and style. It won’t be as good, but at least it will give me some closure. The filters have improved enough that it refused to actually write a speculative ending, since it won’t violate copyright, but it did propose some choose-your-own-adventure-style plots with twists and new discoveries that really satisfied my curiosity.

It isn’t a copyright violation, and I will still buy the next two books when the author finally finishes them (in about three years), but is it morally wrong that I used AI to give me closure now? How different is that from creating that closure in my head?
