More thoughts on the state of AI

Red Ochsenbein (he/him) - Apr 4 '23 - Dev Community

I recently wrote an article with my thoughts on the state of AI. Little did I know what a flurry of developments the following days would bring. We are now at a point where high-profile people are asking us to pause and think. And I am sure we should pause and think. Does this mean we should stop development altogether? That's not even a real question: The cat is out of the bag - there is no stopping now. What we should do is increase the efforts in the fields of AI safety, ethics, social development, and the definition of human work... by a lot.

There have been more than enough examples in the past of voices asking for more consideration being silenced in the name of profit. We can't afford that anymore. The price is simply too high. Do I have the answers? No, and nobody does. We just don't know what we are dealing with, or what we will be.

Anyway, here are some thoughts that have been keeping my brain busy lately.

Noise-to-signal ratio

AIs help us create content at a much faster pace. Even without any training, you can put out texts, images, and other things that a few years ago would have required considerable skill. Now they are available at the click of a button. Sure, someone might argue that skills still matter and that experts who build on an AI's output will get better results. Even if that is true (I discuss this in more depth below), the big problem will be finding the 'diamonds' within the noise. Not only is there more content burying the great content, but the noise itself is also louder. In many cases, it takes an expert even to tell the good from the not-so-good stuff.

GitHub Copilot X was presented just recently, and its site states that 46% of code is already created by Copilot (I am not sure about that number, but let's just run with it). The same will probably happen with the flood of generated images and texts; bots churning out GPT-4-generated websites for SEO purposes are probably already at work. If we extrapolate this development, it's easy to see that in the not-too-distant future we will essentially be training models on 99% AI-generated content. That could lead to a whole bunch of problems down the line, such as even further bias amplification.
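As a toy illustration of that feedback loop (a sketch with made-up numbers, not a claim about any real model), consider repeatedly fitting a simple 'model' - here just a Gaussian distribution - to samples generated by the previous version of itself:

```python
import random
import statistics

# Toy sketch of the generative feedback loop described above:
# "train" a model (here: fit a Gaussian), generate new data from it,
# retrain on that output, and repeat. With a finite sample each
# round, the fitted spread drifts toward zero - a crude stand-in
# for diversity loss and bias amplification.

random.seed(0)
mean, stdev = 0.0, 1.0       # the original "human" data distribution
SAMPLES_PER_GENERATION = 10  # small on purpose, to make the drift visible

for generation in range(101):
    if generation % 20 == 0:
        print(f"gen {generation:3d}: mean={mean:+.3f} stdev={stdev:.3f}")
    data = [random.gauss(mean, stdev) for _ in range(SAMPLES_PER_GENERATION)]
    mean = statistics.fmean(data)   # refit the "model" on its own output
    stdev = statistics.stdev(data)
```

Real training pipelines are vastly more complex, but the direction of the effect - each generation narrowing around the previous generation's output - is the kind of degradation the 'training on generated content' scenario risks.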

Killing creativity and knowledge

Too often we hear the question "Should I even learn to...?", especially when it comes to learning to code. The answer is "yes" most of the time. But think about artists creating digital art, music, or texts. Why should they spend thousands of hours honing their skills if anyone can produce, with a short prompt, something most people couldn't distinguish from their work? And how does a skilled artist feel about a model taking their work and churning out thousands of images in a style they developed through years of hard work?

I think this might lead to a decline in artistic and creative work. There is simply little incentive left to keep doing it. Sure, one might argue you should do it for fun and not for money, but that is the very argument that has enabled the exploitation of artists, musicians, and many other creative workers for decades.

As we have seen, the flood of generated content will drown out the work of skilled people, AIs will increasingly be trained on generated content, and artists will have fewer incentives to hone their skills. So we may well see a slow decline in creativity and knowledge.

Privacy and international laws

On March 31st, 2023 (a few days ago), Italy banned ChatGPT, and OpenAI was forced to block users from Italy. Italy claimed that OpenAI does not comply with the GDPR, the EU's privacy and data protection law. The GDPR imposes obligations on anyone providing a service in the EU; among other things, everyone has the right to have data about them deleted or corrected.

Now, once you understand how a large language model is trained, it quickly becomes obvious that this is no small thing to do. How do you delete data from a model that is a black box? How do you correct specific data? And if the model simply hallucinates facts about you, how can those be corrected? The same questions pop up with pictures of real people, where privacy rights such as the right to one's own image apply.
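To make the contrast concrete, here is a minimal sketch (all names and data are made up, and a tiny linear model stands in for the black box) of why "delete my data" is one operation for a database but ill-defined for a trained model:

```python
import numpy as np

# In a database, a person's data is a discrete record, and erasure
# is a single operation.
records = {"alice": {"city": "Rome"}, "bob": {"city": "Milan"}}
del records["alice"]  # GDPR-style deletion: done.

# In a trained model, every training example nudges every parameter.
# (Synthetic data; a least-squares fit stands in for the black box.)
rng = np.random.default_rng(0)
X = rng.random((100, 5))
true_w = np.array([2.0, -1.0, 0.5, 0.0, 1.0])
y = X @ true_w + rng.normal(0.0, 0.1, size=100)
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Suppose rows 0-9 of X were "alice's" data. Their influence is now
# entangled with everyone else's in these five numbers; there is no
# record to delete, only a model to retrain (or approximately
# "unlearn") without her data.
print(weights)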

These questions are going to be very important. I'm not entirely convinced Italy's move was the right one, but I am certain these are things that need to be addressed.

To AGI or not to AGI

With the release of GPT-4, the discussion of AGI moved back into the spotlight. AGI is the idea of an AI displaying actual general intelligence: the ability to handle intellectual tasks across a wide range of domains and contexts without being specifically programmed or trained for each one - a so-called emergent behaviour.

Human-level AI

Some people are interested in getting to human-level AI, meaning an AI that can perform the same cognitive tasks as a human. Maybe nature has already found the most efficient way to do this: the human. It might even be that human 'flaws' - having to sleep, biases, imperfect perception - are a requirement for human-level intelligence. If that turned out to be true, what would be the point of building the same thing again? At this point, we simply don't know.

Non-human AGI

The other possibility is that AGI might not need to be human-level at all. But then the AI might be closer to an alien species than to us. If that is the case, how would we ever make sure the AI's goals are aligned with ours? This opens up a whole new set of challenges and dangers known as the alignment problem.

Consciousness

Another thing to consider is whether such models could actually become conscious. How would we determine consciousness in a model? How would we have to treat it if we merely suspected it had some form of consciousness? Wouldn't morality and even the law then have to extend to machines? (Well, to be fair, we humans are already pretty bad at treating animals well...)

Final words

Those are only a few - rather chaotic - thoughts on AI. We don't know what the future will bring, but I'm pretty sure we need to do better than we do today when it comes to AI safety, ethics, social impact, and similar fields.
