- Responsible by ClearOPS
Small is Big and Big is Small? What the...
I found a tick on me this weekend. It was odd because I don’t usually feel it when they bite, but this time I did. I immediately pulled it off. Now, I live in an area with a lot of ticks so I know that it takes at least 24 hours for them to transmit any one of their nasty little diseases. However, when I found myself scratching the site of the bite a couple of days later, I had to show it to ChatGPT to calm my growing anxiety. It just confirmed what I already knew.
Is that confirmation bias or an unhealthy reliance on technology?
Today’s newsletter will challenge your ethics and your morality.
You have been warned.
What I have for you this week:
Snippets About What is Going On in Responsible AI
Caroline’s weekly thoughts
Chef Maggie Recommends
Useful Links to Stuff

OpenAI scared me to death this week. They released GPT-5 and removed some of their older models, including models that my company, ClearOPS, uses. Luckily, they reversed the move after public backlash, but the question is: why did they do this in the first place?
Do you remember the good old days, when there was only one model you could use? No choices, no meaningless version numbers. Well, that is what OpenAI was trying to clean up. Having released GPT-5, they thought it was unnecessary to keep the older models around. What the backlash tells you is that people like choosing models, even though the industry considers offering that choice a terrible user experience.
But let’s think of this a different way. Why is offering users a choice a bad thing? Who decided that? And maybe you could argue that they trained us to have choices, but I have always believed that having a choice means autonomy. It is the lack of choice, the lack of opinions and the lack of a dialogue that is dangerous.

I am not sure how it started, but there is a misconception that hosting your own data is inherently more secure. I know many of my cybersecurity colleagues would confirm that it is, but that is because they know how to do the securing. In my humble opinion, what matters is not the mere fact of the data being on your hardware or mine, but who is managing that hardware. That is why the release of GPT-oss is interesting to me: it buys into the argument that hosting your own is more secure.
So, I decided to do a little digging. GPT-oss is offered on Hugging Face under the Apache 2.0 license. I don’t know about you, but GitHub intimidates me, so trying to figure out how to launch GPT-oss from my own machine? Yeah, that’s intimidating. Which is really my point. It could be more private and secure, but I would have to hire someone else to make it so, because I don’t have those skills. And that’s not really more private or secure.
Is it?
While sitting amongst a group of friends this past Saturday night, the topic of how much energy AI consumes came up. The crowd was lamenting the costs to consumers and to our infrastructure. You know I had to chime in, right? I told them that while, yes, AI consumes a lot of energy right now, it still is not widely used, and it is evolving to be more energy efficient. So as use goes up, energy consumption per query is going down... or at least has a chance to stay consistent.
That energy consumption argument is why small language models will soon become “all the rage.” I predict you will hear about them more and more, so let me clarify one thing for you, in case you are interested. A small language model has fewer parameters, which makes it faster. Think of it as having less data to parse through before it gives you an answer. That doesn’t mean the data is any better (or worse) than in a large language model.
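If you want a feel for how big that parameter gap can be, a common rule of thumb is that a transformer has roughly 12 × layers × hidden_size² weights (ignoring embeddings). The sketch below is purely illustrative: the two configurations are hypothetical, not the specs of any real model, and the formula is an approximation, not an exact count.

```python
def approx_transformer_params(n_layers: int, d_model: int) -> int:
    # Rough rule of thumb: each transformer block holds about
    # 12 * d_model^2 weights (attention projections plus the MLP).
    # Embeddings and biases are ignored for simplicity.
    return 12 * n_layers * d_model ** 2

# Hypothetical configurations, chosen only to illustrate the gap:
small = approx_transformer_params(n_layers=24, d_model=2048)
large = approx_transformer_params(n_layers=96, d_model=12288)

print(f"small: {small / 1e9:.1f}B parameters")   # roughly 1.2B
print(f"large: {large / 1e9:.1f}B parameters")   # roughly 173.9B
print(f"ratio: {large / small:.0f}x")            # 144x fewer weights to crunch
```

Every token generated has to flow through all of those weights, which is why a model with a fraction of the parameters answers faster and burns less energy per query.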
Do you remember OpenAI’s 5 stages? I think it is interesting to take a step back and remember it, so here is a picture and I will talk about it more in my deep thoughts.

OpenAI’s 5 Stages to AGI