Myth-Busting AI
Posted on Friday, August 30, 2024
Category: Book Blog
Author Neil Taylor explores some of the myths around Artificial Intelligence
My name is Neil Taylor, and I recently wrote a novel called Anticipation in which Artificial Intelligence (AI) plays a key role. This doesn't make me an expert, but it has been interesting to observe the differences between conversations in the AI community and the way AI is portrayed in the media. So when I was asked to write a short piece on the popular myths around AI, a few things sprang to mind.
Maybe we should start with a definition of AI. At a high level, AI is the ability of computers to simulate human intelligence. However, since the definition of 'intelligence' is somewhat muddy, this is not particularly helpful to the layperson.
At its core, AI has the ability to 'learn' from experience to achieve a goal. A practical way to think of it is by comparing AI to traditional computing applications which perform set routines; you push a button and for a given set of conditions it will churn out the same result (think of a spreadsheet calculation) but it has no ability to 'learn'. It will not change. It will not improve. However, with AI, you are giving the application a 'goal' and a bunch of training data, or a simulation program, and telling it to go figure out how to achieve the goal. It 'learns' to achieve the goal, rather than just repeating a set of steps predefined by a human.
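To make the contrast concrete, here is a minimal sketch in Python (the function names and the toy goal are mine, purely for illustration). The first function is a fixed routine, like a spreadsheet: the same inputs always give the same answer, and it never improves. The second is given only a goal (match the training examples) and works out the rule for itself.

```python
# A fixed routine: for a given set of inputs it always churns out
# the same result, and it has no ability to 'learn' or improve.
def spreadsheet_total(prices):
    return sum(prices)

# A 'learning' routine: start with a guess and improve it using
# training data. The goal here is to learn the multiplier w in
# y = w * x from a handful of examples.
def learn_multiplier(examples, steps=1000, lr=0.01):
    w = 0.0  # initial guess
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y    # how wrong the current guess is
            w -= lr * error * x  # nudge the guess to reduce the error
    return w

# Training data: inputs paired with the desired outputs (here y = 3x).
examples = [(1, 3), (2, 6), (3, 9)]
w = learn_multiplier(examples)
print(round(w, 2))  # converges towards 3.0
```

Nobody told `learn_multiplier` that the answer was 3; it arrived there by repeatedly adjusting its guess to better achieve the goal, which is the essence of learning from experience rather than repeating predefined steps.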
The AI 'myths' arise when 'human' comparisons start to creep in. For instance...
Confusing robots with AI:
There is a tendency for the media to merge the fields of AI and robotics. News articles about AI often involve a human-looking robot performing a task humans would normally do. When Hollywood gets hold of the subject it tends to end up with humanoid 'Terminator'-style robots running wild and leaving a wake of death and destruction in their path (after all, where would the story be in a robot that quietly makes you a cup of tea?). However, most AIs today are disembodied applications that find patterns in large data sets. These are the applications that decide if you will get a car loan or a mortgage, or what you see on social media. These invisible AI applications are making decisions about you and your future, and they are more of a concern than AI robots.
Robotics and AI are complementary, but separate, fields. Simplistically, robotics is about simulating human movement, while AI simulates human thinking. Yes, you can apply AI to robots, but only in the way you can apply AI to anything from playing chess to writing an email. Most robots follow pre-programmed routines to perform repetitive tasks. When you see assembly lines of robots putting cars together, or the cool stuff Boston Dynamics are doing, their movements are eerily human, but they are following pre-programmed routines. There is no real intelligence: no decision-making, no problem-solving, no understanding of the task being performed.
Assigning human characteristics to AI (anthropomorphising):
One of the biggest distractions in AI discussions is when people start to assign human qualities to AI, such as feelings, emotions, and consciousness. We tend to look at everything through the lens of being human and attach human traits to non-human entities. We cannot help it; we have been doing it for thousands of years. We used to think the weather was controlled by the gods, and that bad weather was the gods expressing their 'anger' (a very human emotion). We stub a toe and think the bed is a vindictive, malicious force out to get us. So, when scientists tell us AI can 'mimic human intelligence', we understandably think of our own intelligence, which comes with all the baggage of human feelings and emotions.
However, there is no reason to think AI will spontaneously develop the same feelings, emotions, and motivations that humans developed to survive on the plains of Africa 300,000 years ago. For instance, something we all take for granted is our survival instinct. This is deeply ingrained in humans and all living creatures. AI, though, has not been developed in an environment where only organisms with a strong survival instinct get to pass on their genes to the next generation, reinforcing that instinct in each subsequent generation. Why should an AI developed to answer your questions about the weather and remind you to pick up the dry cleaning develop a survival instinct? Or, for that matter, why should it develop the feelings and emotions that humans developed to live in large social groups? I worry less about AI spontaneously developing consciousness and more about the applications that 'humans' use it for.
AI does not make mistakes:
We have come to expect that computers should not make mistakes, and that if they do, it is due to a bug that should be fixed. AI, however, is more like human intelligence: it works by forming rules based on 'experience' (heuristics) rather than by hard mathematical calculation. The very nature of AI means it makes assumptions. Those assumptions may be better and more accurate than those of humans most of the time, because AI can process more data and find more accurate patterns, but there will be times when its assumptions cannot account for all the variables that occur in the real world. It is also only as good as the training data you give it: if there are mistakes and biases in the training data, those will be passed on to the AI's decision-making. We may have to accept a certain level of error or failure. For example, autonomous vehicles may well be much safer than human drivers, but we may have to accept that they will never be perfect, and a certain number of accidents are bound to occur because AI systems cannot account for every variable in the real world.
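The point about biased training data can be shown with a deliberately crude, hypothetical example (the postcodes, the loan scenario, and the 'model' are all invented for illustration). The toy model below learns its decision rule purely from historical outcomes, so whatever bias those outcomes contain is faithfully reproduced.

```python
from collections import Counter

# A toy 'model' that learns a loan-approval rule purely from history:
# approve if applicants from the same postcode were usually approved.
def train(history):
    outcomes = {}
    for postcode, approved in history:
        outcomes.setdefault(postcode, []).append(approved)
    # For each postcode, keep the most common historical outcome.
    return {pc: Counter(v).most_common(1)[0][0] for pc, v in outcomes.items()}

# Biased training data: applicants from postcode 'B' were historically
# refused, regardless of individual merit.
history = [('A', True), ('A', True), ('A', False),
           ('B', False), ('B', False), ('B', True)]

model = train(history)
print(model['A'], model['B'])  # True False -- the historical bias is learned
```

The model has made no 'mistake' in the traditional sense; it has accurately learned the pattern it was given. The error lives in the data, which is precisely why biased training data is so hard to spot and fix.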
AI will never be able to mimic human creativity:
We tend to look at the state of technology today and say 'it will never be able to do that'. When a new technology first hits the media, all sorts of possibilities and potential are discussed; then it fails to live up to those expectations in the short term, and the media dub the technology 'not working' and lose interest. But the media, used to reporting on 'events', fail to grasp that technology does not appear fully formed. It develops over time, and just because it cannot do something today does not mean it will not be able to do it tomorrow. Over the years many statements have been made that 'AI will never be able to do that better than humans'. One by one, those challenges have been conquered: chess, Go, understanding and generating natural language, and driving cars autonomously.
In conclusion, there are many valid concerns about the impact of AI on humankind and about the direction it takes. However, we need to be careful that we are worrying about the 'right' problems, and what makes a good story in the media does not necessarily draw our attention to them. Ultimately, we probably need to be more worried about the people and companies wielding AI than about AI itself.
Anticipation is published this month by Neem Tree Press (£8.99). 'You are being played. Your every move is being watched by businesses hoping to manipulate your behaviour. Every picture, every post, every like, every follow, every purchase, every search...' Read a chapter from Anticipation.