As you might imagine, crunching through enormous datasets to extract patterns requires a lot of computer processing power. In the 1960s, researchers simply did not have machines powerful enough to do it, which is why that first boom failed. By the 1980s the computers were powerful enough, but researchers then discovered that machines only learn effectively when they are fed enough data, and they struggled to source datasets large enough to give to the machines.

Then came the internet. Not only did it solve the computing problem for good through the innovations of cloud computing – which essentially allow us to access as many processors as we need at the click of a mouse – but people on the internet are now generating more data every single day than was created in the entire prior history of the planet. The volume of data being produced on a constant basis is mind-boggling.

What this means for machine learning is significant: we now have more than enough data to really start training our machines. Think about the sheer number of photos on Facebook and you begin to see why its facial recognition technology is so accurate. There is now no major barrier (that we currently know of) preventing A.I. from achieving its potential. We are only just beginning to discover what we can do with it.

When the computers start to think for themselves

There is a famous scene from the movie 2001: A Space Odyssey where Dave, the main character, is slowly disabling the artificial intelligence mainframe (called "HAL") after the latter has malfunctioned and decided to try to kill all the humans on the spacecraft it was meant to be running. HAL, the A.I., protests Dave's actions and eerily proclaims that it is afraid of dying.

This movie illustrates one of the big fears surrounding A.I. in general: what will happen when computers start to think for themselves instead of being controlled by humans? The fear is legitimate: we are already working with machine learning constructs called neural networks, whose structures are modelled on the neurons in the human brain. With neural nets, data is fed in and processed through a vastly complex network of interconnected points that build connections between concepts in much the same way as associative human memory does. As a result, computers are slowly starting to build up a library of not just patterns but also concepts, which ultimately leads to the basic foundations of understanding rather than mere recognition.
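The "interconnected points" idea can be sketched in a few lines of code. Below is a minimal toy network – two hidden nodes feeding one output node – purely to illustrate the structure; the weights are hypothetical hand-picked numbers, not trained values, and a real neural net would have millions of such nodes and learn its weights from data.

```python
import math

def sigmoid(x):
    """Squash a signal into the range (0, 1), loosely like a neuron 'firing'."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One node: a weighted sum of its inputs passed through sigmoid."""
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def tiny_network(inputs):
    """Two hidden nodes feeding one output node (weights are illustrative)."""
    h1 = neuron(inputs, [0.5, -0.6], 0.1)   # hidden node 1
    h2 = neuron(inputs, [-0.4, 0.8], 0.0)   # hidden node 2
    return neuron([h1, h2], [1.2, -0.7], 0.2)

score = tiny_network([1.0, 0.0])
print(round(score, 3))  # a value between 0 and 1
```

Training consists of nudging those weights, over many examples, until the output scores line up with reality – that is the "learning" in machine learning.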

Imagine you are looking at a photo of someone's face. When you first see it, several things happen in your mind: first, you recognise that it is a human face. Next, you might register whether it is male or female, old or young, black or white, and so on. You will also make a quick judgement about whether you recognise the face, though sometimes recognition requires deeper thought, depending on how often you have been exposed to that particular face (the experience of recognising a person but not knowing straight away from where). All of this happens almost instantly, and computers are already able to do all of it too, at almost the same speed. Facebook, for example, can not only detect faces but can also tell you who a face belongs to, if that person is also on Facebook. Google has technology that can identify the age and other characteristics of a person based on nothing but a picture of their face. We have come a long way since the 1950s.
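A common way to frame the identification step described above is with "embeddings": each face photo is reduced to a vector of numbers, and two photos are judged to show the same person when their vectors point in nearly the same direction. The sketch below illustrates only that comparison step, with made-up toy vectors standing in for real embeddings; the names and threshold are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """How closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical gallery of known people and their (toy) face embeddings.
known_faces = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def identify(embedding, threshold=0.95):
    """Return the best-matching name, or None if no face is close enough."""
    name, score = max(
        ((n, cosine_similarity(embedding, e)) for n, e in known_faces.items()),
        key=lambda pair: pair[1],
    )
    return name if score >= threshold else None

print(identify([0.88, 0.12, 0.31]))  # prints "alice": very close to her vector
print(identify([0.0, 0.0, 1.0]))    # prints "None": no confident match
```

The hard, unsolved-in-the-1950s part is producing good embeddings from raw pixels in the first place; that is exactly what the neural networks discussed earlier learn to do.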

But true Artificial Intelligence – known as Artificial General Intelligence (AGI), in which a machine is as capable as a human brain – is still a long way off. Machines can recognise faces, but they still don't really know what a face is. You, for example, can look at a human face and infer many things drawn from a hugely complicated mesh of memories, learnings and feelings. You might look at a picture of a woman and guess that she is a mother, which might lead you to assume that she is selfless, or indeed the exact opposite, depending on your own experiences of mothers and motherhood. A man might look at the same photo and find the woman attractive, which might lead him to make positive assumptions about her personality (confirmation bias again), or conversely notice that she resembles a crazy ex-girlfriend, which will quite irrationally make him feel negatively towards her. These richly varied but often illogical thoughts and experiences are what drive humans to the various behaviours – positive and negative – that characterise our species. Desperation often leads to innovation, fear leads to aggression, and so on.

For computers to be truly dangerous, they would need some of these emotional compulsions, but that is a very rich, complex and multi-layered tapestry of concepts that is extremely hard to train a computer on, no matter how advanced neural networks may be. We will get there one day, but there is plenty of time to make sure that, when computers do achieve AGI, we will still be able to switch them off if necessary.

In the meantime, the advances currently being made are finding more and more useful applications in the human world: driverless cars, instant translations, A.I. phone assistants, websites that design themselves. All of these advancements are intended to make our lives better, and so we should not be afraid of, but rather excited about, our artificially intelligent future.

Copeland