Now that we’ve concluded the introduction series, I’ll be delving into more specialist areas.
In this article, I’ll take a look at artificial intelligence, particularly machine learning: a basic overview of how it works, and the dangers of over-reliance on an algorithmic approach to analysis.
What Are Artificial Intelligence and Machine Learning?
While the term artificial intelligence gets thrown around a lot these days, particularly by marketing specialists, there are substantial misunderstandings of what it means. There are two types of artificial intelligence we talk about in computing – general AI and specialised AI.
General AI is the type that appears in science fiction rather than anything that actually exists – some experts claim it is close, but it has been ‘close’ now for several decades. This is the type that alarmists refer to when they talk about AI taking over the world and wiping out the human race. However, it’s safe to say this isn’t the most significant threat from AI, given the remote chance of it happening outside of Hollywood.
Specialised AI is the type we have today: autonomous vehicles, facial recognition, weather prediction, and any other narrowly defined task that a human can perform all fall into this category. Within this specialised category, there are broadly two different types:
- Expert systems (also known as rules engines, knowledge graphs, or symbolic AI). These systems work through coded procedures based on banks of knowledge or rules, built by taking the appropriate expert and turning their thinking process for a particular task into a flowchart.
- Machine learning systems. These systems modify themselves when exposed to more data, effectively learning from the information they are given. This is done by setting a goal for the machine, providing it with a starting algorithm or set of algorithms, and then feeding back to it on the success of its approach. Initially, errors will be more common than successes; however, as the algorithm evolves, it becomes more accurate. Programming a computer to play a game such as chess better than the programmer could is an example of this (a minimal sketch of the feedback loop appears after this list).
- Deep learning systems are not a separate category but a subset of machine learning systems. Essentially, they are multi-layered systems made up of multiple classical machine learning models, breaking inputs into pieces, supplying those pieces to the appropriate layers, and recombining the results to arrive at an answer.
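To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python. The goal, the scoring function, and the ‘try a small variation, keep it if it scores better’ strategy are all invented for illustration – real machine learning systems are far more sophisticated – but the shape of the loop is the same: attempt, measure, adjust, repeat.

```python
import random

TARGET = 42.0  # the goal the machine is scored against (hypothetical)

def score(guess: float) -> float:
    """Feedback on an attempt: lower error means a better attempt."""
    return abs(TARGET - guess)

guess = random.uniform(0, 100)  # the crude starting 'algorithm'

for _ in range(1000):
    candidate = guess + random.uniform(-1, 1)  # try a small variation
    if score(candidate) < score(guess):        # keep it only if the feedback improves
        guess = candidate

# Early attempts are mostly wrong; after enough feedback the value sits close to 42.
print(f"learned value: {guess:.2f}")
```

Early in the loop most variations are rejected or barely help; by the end, the learned value typically lands close to the target – errors common at first, accuracy later, exactly as described above.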
So far, so good – and the success of machine learning systems in automating work that would otherwise require significant human resources shows there is definite value in the approach. I wouldn’t be writing this article if there weren’t some risks to the method, though, and they come in a few forms.
The Wrong Goal
My favourite demonstration of this problem is the case of a machine learning model being trained to land an aircraft (fortunately, as you will see, in simulation rather than real life). The program was given the goal of landing the aircraft with the minimum amount of force on touchdown. Unfortunately, the simulated measurement system had a slight flaw. Instead of setting the aircraft down as gently as possible, the system learned it could achieve its goal by crashing the plane with maximum force – which caused the measured values to overflow and flip over to zero.
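As a rough illustration of how that could happen, here is a minimal, hypothetical sketch: if the simulated sensor stores the landing force in a fixed-width register, an enormous impact wraps around to zero and looks ‘perfect’ to a learner told to minimise the measured force. The register width and force values below are invented for illustration, not taken from the original simulation.

```python
REGISTER_BITS = 16                 # assumed width of the simulated force sensor's register
REGISTER_MAX = 2 ** REGISTER_BITS  # readings wrap around (overflow) modulo this value

def measured_force(true_force: int) -> int:
    """Simulated sensor reading: wraps on overflow like a fixed-width register."""
    return true_force % REGISTER_MAX

def landing_cost(true_force: int) -> int:
    """What the learner is actually optimising: the *measured* force, not the real one."""
    return measured_force(true_force)

gentle_landing = 1_200           # a plausible gentle touchdown (arbitrary units)
crash = REGISTER_MAX * 50        # an enormous impact

print(landing_cost(gentle_landing))  # 1200
print(landing_cost(crash))           # 0 -- to the learner, the crash looks like the ideal landing
```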
Any developer of a machine learning system needs to understand their goals and how they will check them. Errors like the above are amusing only when they happen in a simulation – in real life, it would be an entirely different matter.
Bias
Another, more insidious example is the bias found in facial recognition systems. Because the data fed to these systems tends to be weighted heavily towards a particular set of features (mostly those of North American Caucasians), they have trouble reliably identifying any other set of features. Anyone who does not fit into the category the machine is expecting is at much higher risk of being misidentified – given that facial recognition is widely used for security and law enforcement purposes, the problem here should be apparent.
Worse still is the case of algorithms used to assess a defendant’s likelihood of recidivism. ProPublica investigated one particular tool, comparing the predicted recidivism rates for white and black defendants against the actual rates. The tool was correct about six times in ten, but the devil is in the details. White defendants were more likely to have their risk of recidivism underestimated, while black defendants were more likely to have their risk of recidivism overestimated. This was not a minor error: white re-offenders were mistakenly labelled as low risk twice as often as black re-offenders, and black defendants were incorrectly marked as high risk twice as often as their white counterparts. The picture becomes dramatically worse when you consider that for violent recidivism (given a separate rating), the algorithm’s predictions were correct only 20 percent of the time.
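The kind of check that exposes this sort of bias is simple in principle: do not stop at overall accuracy, but break the errors down by group and by direction. Below is a minimal, hypothetical sketch of that check; the records are invented purely for illustration and are not drawn from the ProPublica data.

```python
from collections import defaultdict

# (group, predicted_high_risk, actually_reoffended) -- invented illustration data
records = [
    ("A", False, True), ("A", False, True), ("A", True, True),
    ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
    ("B", True, True), ("B", True, True),
]

stats = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
for group, predicted_high, reoffended in records:
    if reoffended:
        stats[group]["pos"] += 1
        stats[group]["fn"] += int(not predicted_high)  # risk underestimated
    else:
        stats[group]["neg"] += 1
        stats[group]["fp"] += int(predicted_high)      # risk overestimated

for group, s in sorted(stats.items()):
    fpr = s["fp"] / s["neg"]  # wrongly labelled high risk, never re-offended
    fnr = s["fn"] / s["pos"]  # wrongly labelled low risk, did re-offend
    print(f"group {group}: overestimated {fpr:.0%}, underestimated {fnr:.0%}")
```

With this invented data, both groups are classified correctly 60 percent of the time, yet group A’s errors are all underestimates and group B’s are all overestimates – the same overall accuracy, very different consequences.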
One of the problems when these biases emerge is that they are extremely difficult to detect without taking the system’s predictions and checking their validity against what actually happens. In the case of the recidivism tool, that check was the product of two years of research covering over 100,000 cases. With the speed at which we are developing and applying machine learning to solve problems, a lot of damage can be done before any systemic biases are uncovered – if they ever are.
Black Box Systems
Complicating this further is the fact that once a machine learning model is built, its decisions cannot usually be explained, even by its developer. For complicated problems, the process of learning results in what are, to all intents and purposes, black-box systems. The algorithms grow so complex that they are simply not understood by those who set the initial conditions, meaning that if an input arrives which falls outside anything the model was trained on, the output is unpredictable. We cannot analyse why a decision was made.
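A tiny, hypothetical sketch of that last point: even a model simple enough to write down is just a bundle of numbers, and the moment an input falls outside the region it was trained on, the output bears no predictable relationship to anything sensible. The data and model below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 50)
y = np.tanh(x) + rng.normal(0, 0.02, size=x.size)  # training data from a known, narrow region

coeffs = np.polyfit(x, y, deg=4)  # the 'learned' model: five opaque coefficients
model = np.poly1d(coeffs)

print(model(2.0))    # inside the training range: roughly tanh(2) ≈ 0.96
print(model(100.0))  # far outside it: an enormous value, nothing like tanh(100) ≈ 1.0
```

Nothing in those five coefficients tells a human why either answer was produced, and a genuine machine learning model has millions of them rather than five.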
The essential lesson is that a machine learning system is just another provider of expert information, and just like any information, it can be wrong. While these systems are capable of sorting more data more quickly than a human and are arguably less prone to erroneous decisions, they do still make mistakes, and treating them as a source of ultimate truth magnifies the impact of those mistakes. Be as sceptical of decisions from machine learning systems as you would be of those from any human expert, and whenever possible, validate the output.
Cybersecurity Series – Artificial Intelligence, Machine Learning, and Bias
By: James Bore
James Bore is an independent cybersecurity consultant, speaker, and author with over a decade of experience in the domain. He has worked to secure national mobile networks, financial institutions, start-ups, and one of the largest attractions companies in the world, among others. If you would like help with any of the above, you can reach him at james@bores.com.