Last October, in the twilight of President Obama’s second term, the National Science and Technology Council’s Committee on Technology released a report on the future of artificial intelligence. At around forty pages it’s maybe not everyone’s idea of a great night in - the first two pages of Google search results for the report link exclusively to university, tech, or policy-wonk sites. But I want to tell you why it excited me a lot.
It’s a broad rather than granular document. And it doesn’t shrink from potential downsides. Here’s the report’s view on the very real human impact AI may have in employment:
“Because AI has the potential to eliminate or drive down wages of some jobs, especially low- and medium-skill jobs, policy interventions will likely be needed to ensure that AI’s economic benefits are broadly shared and that inequality is diminished and not worsened as a consequence”.
That’s a pretty fair reflection on the fear most people feel around AI. But it was the report’s conclusion that really grabbed my attention:
“Developing and studying machine intelligence can help us better understand and appreciate our human intelligence. Used thoughtfully, AI can augment our intelligence, helping us chart a better and wiser path forward”.
Augment… so good I had to stop and read it again. Augment our intelligence… even better second time around! So why my excitement? Why the big deal? Well, aside from the joy at hearing the subject discussed so rationally, augment is exactly the vision that I and the Qrious team share around AI. I think mainstream media’s current party line is erroneous… AI will take your job, the machines are coming, it’s the end of work as we know it. While of course I get the appeal of those sorts of headlines, the reality is likely to be very different.
Just imagine if we didn’t think this way. What if instead we saw AI the way I believe we should? Less “artificial intelligence” and more “augmented intelligence” - because that’s where I truly believe we already are with tools like Google search, and where we’re heading: a new age in which algorithms and machines augment human ability and help unleash human intelligence.
This isn’t new territory for us humans. We’ve been at it for years. Consider dactylonomy. Or finger counting, as it’s more commonly known. Have you ever stopped to wonder why there are sixty seconds in a minute? Or sixty minutes in an hour, with days split into two sets of twelve? Today when we use our fingers to count we allocate a number to each whole finger, letting us count to ten. But it wasn’t always so. Back before the abacus and then calculators let us get a little lazy, ancient Egyptians jazzed counting up with a little augmented intelligence.
Specifically… each of your four fingers has three joints. By using the thumb of the same hand to point to each joint in turn, Egyptians could count to twelve with one hand. Then, using the five digits of the second hand to record each completed set of twelve, two hands could let you count to sixty (five sets of twelve). By 1500 BC the Egyptians were using sundials, letting them split the day into two sets of component parts. It should surprise no-one that these component parts were split into twelve (hours), and then, as the sundial became even more refined, each component hour was further split into sixty (minutes).
To the data scientist this is augmented intelligence in action. Counting to ten on two hands gives you eleven distinct values (0-10). Counting to sixty on two hands gives you sixty-one distinct values (0-60). That’s a quantifiable increase in counting capacity from the same two hands.
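Under the hood, the Egyptian scheme is just mixed-radix counting: one hand holds the units in base twelve, while the other tallies completed twelves. A minimal Python sketch of the idea (the function name is ours, purely for illustration) might look like this:

```python
def to_egyptian_hands(n):
    """Represent n (0-60) as an Egyptian two-hand finger count.

    One hand counts 1-12 by touching the thumb to each of the
    three joints on its four fingers; the other hand's five
    digits each record one completed set of twelve.
    """
    if not 0 <= n <= 60:
        raise ValueError("two hands can only count 0-60")
    sets_of_twelve, joints = divmod(n, 12)
    return sets_of_twelve, joints

# The base-10 scheme covers 11 values (0-10); the base-12 joint
# scheme covers 61 values (0-60) with the same two hands.
assert to_egyptian_hands(45) == (3, 9)   # 3 raised digits + 9 joints
assert to_egyptian_hands(60) == (5, 0)   # all 5 digits raised, no joints
```

Note that a full sixty lands neatly on five raised digits and zero joints - the scheme maxes out exactly where the minute does.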
It may be a little different these days (with binary counting and powerful modern computers taking the place of finger counting), but that’s what we mean at Qrious when we say AI is about unleashing intelligence. By applying better tools to our basic abilities, exponential leaps in understanding can be achieved.
Counting is one example of augmented human intelligence. Communicating is another. Humans owe their ability to communicate to an area of the brain called the neocortex, so named because it is the newest part of the cerebral cortex to evolve. The human neocortex is roughly a thousand times larger than a mouse’s, and even among mammals it stands out (roughly twice the size of a chimpanzee’s as a proportion of total brain matter).
So pretty powerful right? Well, yes but...
What happens if it’s augmented with a smartphone? Or to put it another way… what happens to a human’s intelligence without one? It might seem a slightly silly example, but just think about it for a minute. Without a smartphone most humans are capable of far less than they are with it. In the decade or so that mass-produced smartphones have existed, they have augmented our intelligence and created enormous extra value. The invention of the smartphone has helped unleash human intelligence.
Finger counting and smartphones may not seem like such a big deal. But the underlying concept of augmentation using AI is already delivering some truly stunning examples. LipNet is a piece of software developed by a group of scientists at the University of Oxford. It uses a combination of algorithms and deep learning to teach itself lip reading. Human lip reading is notoriously unreliable, and even good readers struggle to accurately identify 50% of words. LipNet returns accuracy results of up to 93%... a stunning result. But it’s not alone. Google has developed a system that performs even better.
Why is Google interested in this area? This technology is expected to be a key component in the successful adoption of driverless cars - letting you do things like tell the car to drive you and the kids to school while you concentrate on finishing their homework with them.
The success of these products is built on mass collection of data, enhanced by clever algorithms. Here at Qrious we are no different - our anonymous mobile phone data records three billion individual events each day, forming the cornerstone of our data play. But it’s the smarts of humans working at Qrious that delivers real value to our customers. Data and algorithms are just the tools we use to augment those smarts - that’s why we talk about augmented rather than artificial intelligence.
The idea of machines supplanting human intelligence may be a seductive one, but it’s just not the way we see it playing out - and we’re not alone. Andrew Moore, Dean of Carnegie Mellon’s School of Computer Science, reckons 98% of AI researchers are working on projects to help people make better decisions. And he’s not confident about the chances of the other 2%...
"The idea of building a robot or a software system which, like a human, has got a real notion of its goals being to just generally survive and maybe reproduce? No one has any idea how to do that. It's real science fiction. It's like asking researchers to start designing a time machine."
It seems clear the role of technology is to work for a wider public good. And that is a goal most likely to be achieved by supplementing human knowledge, not replacing it.