Responsible, by ClearOPS
All this marketing about AI is really just super confusing
Hello, and welcome to Responsible, by ClearOPS, a newsletter about Responsible AI and other responsible business practices.
Oh my, I am so sorry. I forgot to send this last night. I was out celebrating a friend’s 50th birthday at a dinner and had this newsletter all ready to send, but once I got home, I forgot to press the send button. No matter how much automation is available, you cannot lose the human element!
I am in D.C. at the Privacy and Security Forum near GW University. If you are also at that conference, please say hi!
What I have for you this week:
Generative AI for Repurposing Medicine
AI Co-Pilot vs AI Agent
Caroline’s weekly musings
Chef Maggie Recommends
AI Tool of the Week
AI Bites
The media sometimes overly focuses on the negative.
DALL-E generated image with prompt “AI used to diagnose health conditions”
So, today, I want to focus on the positive of AI, specifically Generative AI. As someone whose life has been touched by multiple medical problems, the use of GenAI in medicine is interesting.
Last week, TechBrew reported that GenAI is being used to find treatments for rare diseases using existing drugs. The model, called TxGNN, was trained on de-identified patient data and is freely offered to other scientists for their own drug-repurposing discoveries. It has shown significant improvements in identifying repurposing candidates and is 35% better at finding contraindications, i.e., the reasons why a patient should not take a certain drug.
I remember growing up during the AIDS epidemic. It was often a scary time with lots of unknowns. Today, I rarely even hear about AIDS and it no longer seems to be a death sentence due to the treatments that have become available.
TxGNN seeks to solve the same problem for the roughly 300 million people living with one of the more than 7,000 known rare diseases. 300 million people is about the size of the population of the United States! Imagine the difference this could make in the lives of people who feel hopeless.
Now that is all around a responsible use of Generative AI.
But that isn’t the only news about AI and the medical field. I found a great resource of papers with a wide variety of research pertaining to the use of AI to detect cancer. Check it out.
The terms “co-pilot” and “AI agent” as used in the industry don’t make a lot of sense to me. Who comes up with these terms anyway?
And Microsoft literally combines these terms in its autonomous Copilot agents. Ugh.
For those of you who are similar to me, I will offer my understanding. Much like on an airplane, a co-pilot is someone sitting next to you, turning the knobs. But unlike an airplane co-pilot, it cannot take over for you. Generally, AI co-pilots have been the chat features we have all become used to at this point: they pull from sources, whether those built into an LLM or your own uploaded documents, and you ask questions and receive answers.
An AI agent is when the AI makes a series of decisions and acts on them. I think this will be particularly useful for businesses that receive a lot of alerts. So, for example, the AI agent reads each alert that comes in. It decides whether to classify a risk as requiring human review. It reaches out to the human about the risk it identified and gives the reviewer a synopsis of why it made its decision. It can write the email and send it to the human reviewer. Going even further, it could identify a high risk, make another decision to take action, and take that action before bringing in the human.
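To make the distinction concrete, here is a minimal sketch of that alert-triage decision chain in Python. Everything here is hypothetical: the `Alert` shape, the severity-based `classify_risk` rule (a real agent would call an LLM or risk model here), and the action strings are all invented for illustration, not taken from any product.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    message: str
    severity: int  # hypothetical scale: 1 (low) to 5 (high)

def classify_risk(alert: Alert) -> str:
    """Stand-in classifier; a real agent would use an LLM or risk model."""
    if alert.severity >= 4:
        return "high"
    if alert.severity >= 2:
        return "needs_review"
    return "low"

def triage(alert: Alert) -> str:
    """Walk the decision chain described above and report the action taken."""
    risk = classify_risk(alert)
    if risk == "high":
        # Act first (e.g. contain the issue), then brief the human reviewer.
        return f"auto-contained; emailed reviewer a synopsis of {alert.source}"
    if risk == "needs_review":
        # Draft and send a synopsis email so a human can decide.
        return f"emailed reviewer a synopsis of {alert.source}"
    return "logged, no action"
```

The point of the sketch is the shape, not the rules: a co-pilot would stop after answering your question about an alert, while an agent walks the whole chain, including sending the email or taking the containment action, on its own.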
So in the ad below, I think they are really selling a co-pilot, not an actual agent. This is why I find these terms confusing: companies are marketing AI agents that aren’t really available yet. I think Salesforce was the first to release agents, and only just recently (although even the descriptions of some of these agents sound like co-pilots).