Responsible, by ClearOPS
Are you donating this holiday season to your favorite charity? Is it AI?
Hello, and welcome to Responsible, by ClearOPS, a newsletter about Responsible AI and other responsible business practices.
One week from now, I do not plan to be writing this newsletter. In fact, I predict that my brain will be switched off by then, watching some holiday movie. If I do end up writing it, it will be because, starting this weekend, I am focused on putting the finishing touches on my AI Governance course. If you haven’t signed up, there is still time. It is for you, after all. Don’t forget the discount for Responsible readers!
What I have for you this week:
The ethical and legal considerations of switching from a non-profit
Data and your AI models
Caroline’s weekly musings
Chef Maggie Recommends
AI Tool of the Week
AI Bites
If you had ever told me that I would side with Mark Zuckerberg, I would have quickly laughed and told you “no way.” But here we are.
Mark Zuckerberg, the CEO of Facebook (which I refuse to call Meta), wrote a plea this week to the California Attorney General to deny OpenAI’s request to convert to a for profit entity and … I agree with him.
Personally, I usually roll my eyes at two megalomaniacs battling it out, or make that three, since Elon Musk is also in this fight, but whether a company that started as a non-profit should be allowed to convert into a for-profit enterprise is an interesting legal and ethical question.
On the ethical side, if a company starts with a mission based on the greater good and later switches to prioritizing profits over that good, then I think it violates ethics. Particularly when it comes to AI, and especially for a company focused on AGI and enabling artificial sentience.
We don’t want the machines to prioritize personal gain over benefitting society.
But, legally, I also disagree with setting a precedent that non-profits can easily convert to for-profit entities, for exactly the reason Mark Zuckerberg argues. If conversion can happen so easily, then we will see a plethora of businesses starting as non-profits during the “research” phase, for tax reasons, only to convert later when they reach the “go-to-market” phase.
In the long run, I think this would put all non-profits under pressure, potentially prompting changes to the IRS code and certainly shrinking the pool of available “donors” as an explosion of new non-profits competes for them to donate to.
Personally, I also worry that, in addition to the charities that already bombard me during the holidays, I will start receiving letters from startups at Christmas asking me to spend a little of my Christmas spirit on a donation to their cause.
🤮
Since Ilya Sutskever claims that we have run out of training data, I want to explain why I think he is not quite right, and how this affects AI governance.
By now, you probably know that AI is an algorithm that is “fed” a bunch of data. That data food is called training data. Data can also be input into the model to test it, i.e., to see if it outputs the desired result.
So now we have training and testing data. There is also re-training data and fine-tuning data. Fine-tuning is a method of taking a base model and modifying it to suit your purposes. Re-training a model is what it sounds like.
Any kind of training, re-training, or fine-tuning means the data becomes a permanent part of that specific model. If it is a custom model, then access permissions keep others away from the data, and its outputs remain exclusive. Note that testing data does not stay with the model.
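The distinction above can be sketched in a few lines of code. This is a hypothetical toy "model" (not a real training pipeline), meant only to show the asymmetry: training data is absorbed into the model's internal state, while testing data merely probes the model and is then discarded.

```python
# Toy illustration: training data becomes part of the model's state;
# testing data is only used to check outputs and leaves no trace.

class ToyModel:
    def __init__(self):
        # The model's "parameters": built entirely from training data.
        self.weights = {}

    def train(self, examples):
        # Training absorbs each example into the model's state.
        for text, label in examples:
            self.weights[text] = label

    def predict(self, text):
        # Prediction reads the model's state; it never writes to it.
        return self.weights.get(text, "unknown")

train_data = [("invoice overdue", "finance"), ("server down", "it")]
test_data = [("server down", "it")]

model = ToyModel()
model.train(train_data)            # train_data now lives inside the model

for text, expected in test_data:   # test_data only evaluates the model
    assert model.predict(text) == expected

# Nothing from test_data was stored: the model holds only training examples.
assert len(model.weights) == len(train_data)
```

Fine-tuning, in this picture, would be calling `train` again on a model that already has weights; the new data merges into the same state.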
The problem with Ilya’s argument is that we do not have great data. The data used to train, re-train, or fine-tune models is (a) mostly from the last 10 years or so and (b) constantly replenishing. At ClearOPS, we have collected hundreds of millions of pieces of data. I can guarantee our compilation could be used to train a very good AI model, but it would be specific to one industry and to the tasks we use it for now.
Maybe what he is really saying is that the supply won’t continue for general purpose models.
Which is why all of this matters for AI governance: good governance has to distinguish between the type of model, the data used to train it, and who has access to it.