Artificial Intelligence (…?)

AI is really stealing the headlines recently, and for once I feel somewhat ahead of the popularity curve. I’ve been following AI for quite a few years now, and it really is starting to ramp up. I think it’s important to separate the practical from the hypothetical, so in the first part of this post I’ll discuss the practical: what is real and currently going on. In the second half I’ll discuss the hypotheticals: the maybes, the could-bes, and so on.

As a recent migrant to Manchester, UK, I am aware that one of my favourite scientists of all time, Alan Turing, spent a portion of his life here. Alan had a fascinating life, and I fully recommend readers take some time to understand his life and personality. There is an excellent movie based upon it, The Imitation Game, starring Benedict Cumberbatch.

One of my favourite early memories was visiting Bletchley Park, Buckinghamshire, where we learned all about Alan Turing, his life and research. I won’t pretend to be an expert on Alan Turing, but something that kept surfacing throughout my life was mentions of the Turing test. The Turing test is quite simple. One person operates a computer, and another person communicates with them. The person operating the computer may only respond with answers that the computer has provided.

Would the other person, the one not operating the computer, be able to tell? If they can tell they are talking to a computer, the computer fails the Turing test (or “imitation game”, as Turing called it). If it passes, then we could consider it somewhat indistinguishable from a human in that sense.
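To make the setup concrete, here is a toy sketch of the imitation game in Python. It is a minimal sketch, not a real test: `machine_reply` is a hypothetical stand-in for an actual model, and one guess after a few rounds is nowhere near a rigorous trial.

```python
import random

def machine_reply(prompt: str) -> str:
    """Stand-in for a real model; a genuine test would call an LLM here."""
    canned = [
        "That's an interesting question, let me think about it.",
        "I'm not entirely sure. What do you think?",
        "Could you rephrase that?",
    ]
    return random.choice(canned)

def human_reply(prompt: str) -> str:
    """The hidden human types their own answer."""
    return input(f"(hidden human) {prompt} > ")

def imitation_game(rounds: int = 3) -> None:
    # Randomly assign the machine and the human to labels A and B,
    # so the interrogator cannot tell them apart by position.
    labels = ["A", "B"]
    random.shuffle(labels)
    respondents = {labels[0]: machine_reply, labels[1]: human_reply}
    machine_label = labels[0]

    for _ in range(rounds):
        question = input("Interrogator, ask a question > ")
        for label in sorted(respondents):
            print(f"{label}: {respondents[label](question)}")

    guess = input("Which respondent is the machine, A or B? > ")
    verdict = "fails" if guess.strip().upper() == machine_label else "passes"
    print(f"The machine was {machine_label}, so it {verdict} the test.")

if __name__ == "__main__":
    imitation_game()
```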

Recently, it feels like I cannot really tell. Sometimes I’m chatting away to ChatGPT or playing with other models and it really does feel like I’m speaking to a human. There are ways in which they give themselves away, mainly the politeness and subservience. (I think Microsoft made a whoopsie here, with Sydney!)

So these LLMs (large language models) are pretty terrifying. Their power is insane, and they get cheaper to run every day, as OpenAI recently demonstrated. A good analogy, which I cooked up in the shower:

The internet was merely a bookshelf, and Google the librarian. AI is more like a printing press, library, librarian, and an author that has already read every book ever written, all wrapped up into a chatroom-style interface.

Hopefully that brings it into context. Now really, that is tangible, it is real, and honestly it is true. These LLMs are like this. (I mean, they can’t physically print the books, but whatever.) There’s work still to be done here, but in my opinion the gold rush has already begun. In the coming months and years we will see a slew of redundancies, job creation, and even potentially civil unrest. Human responses emerge from a chaotic system, and it is often difficult to predict what will happen in response to a stimulus.

Now that I’ve covered the tangible, albeit briefly, I will move onto the hypothetical. Many have covered these topics: Marshall Brain’s Manna is a fantastic story about these ‘hypotheticals’, and I should also mention Tim Urban’s much-circulated blog post on the subject, The AI Revolution: The Road to Superintelligence. (Also, quick plug for Gwern, another great resource.)

The gist of it is this: AI is smart, but not that smart. If it gets too smart, maybe it can work out how to improve itself. If it can do this, it will do this, and there could be a runaway feedback loop of ever-increasing capability, off into oblivion. The result? Something entirely different from me, you, and Tony the Frosties Tiger.
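To see why the feedback loop is scary, here is a toy model in Python. Everything in it is an assumption for illustration: the units are arbitrary, and the key premise is simply that smarter systems improve themselves faster. For the first twenty-odd generations nothing much happens, then the numbers explode.

```python
# Toy model of recursive self-improvement. The premise (purely an
# assumption) is that the rate at which a system can improve itself
# scales with how capable it already is.
capability = 1.0   # arbitrary starting units of "smartness"
rate = 0.05        # assumed fraction of capability turned into growth

for generation in range(1, 31):
    # Better systems are better at improving themselves, so growth
    # compounds on growth: not merely exponential, but runaway.
    capability += rate * capability ** 2
    print(f"generation {generation:2d}: capability {capability:.3g}")
```

Run it and watch: capability creeps from 1.0 to about 10 over the first twenty generations, then rockets past anything meaningful within the next ten. A crude sketch, but it captures the shape of the worry.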

A common analogy is ants. I loved watching ants as a kid, milling around and doing stuff. I also occasionally burned ants with a magnifying lens; I saw it on TV and thought, wow, interesting. I felt no guilt, and I probably wouldn’t feel too much guilt if I stood on an ant. Would something as smart as an AI feel that way about us? Maybe. Or maybe it would see us as a mouse: something to avoid squashing, but if it proves bothersome? Sure. Squash.

There are a load of interesting and talented people trying to solve this ‘problem’, which is known as the alignment problem. Governments, LessWrong (a good source!), OpenAI, and almost all other AI companies have spoken about these issues. The issues are already somewhat apparent in how AIs address ethnic minorities. We can sometimes forget that the majority of the internet, on which these AIs are trained, and the majority of history were written by rich white people. Time has an article that touches on the topic. If the AI is already treating humans unequally, it isn’t a leap for it to see humans as ‘low value’!

Obviously this hasn’t been comprehensive, and I will probably continue to talk about this a little more in the future. If you liked the post, or just want to chat, drop me an email ([email protected]). Thanks for reading! :D