Artificial Intelligence Playing Go

Artificial intelligence: will it develop consciousness?

The possibilities, limits and unexplored boundaries of deep learning in its endless applications.

by Eni Staff
12 min read

Genuinely intelligent artificial intelligences?

In May 2018, Google unveiled its Duplex programme to a gobsmacked audience. This software can call a restaurant and book a table by itself, without the person on the other end realising they've been talking with an artificial intelligence. Duplex is not only able to hold a conversation, but can even imitate the pauses and interjections typical of human speech.

Duplex's abilities may have bowled people over, but they also raised quite a few concerns. After all, how can we be sure that AI won't deceive us in the future by pretending to be human? Should software be required to declare its artificial nature? Are we just at the beginning of an evolution that will see AI resemble human beings more and more, until it develops consciousness? These are all fascinating questions, but they are far removed from what Duplex has actually demonstrated so far. “Google Duplex is not the advance toward meaningful A.I. that many people seem to think,” writes Gary Marcus, professor of neural science. “If you read Google’s public statement about Google Duplex, you’ll discover that the initial scope of the project is surprisingly limited” ... “make restaurant reservations, schedule hair salon appointments.”

But why such a limited range of possibilities? The reason lies in the very nature of deep learning, the technique underpinning today's artificial intelligence. To hold a conversation, an AI has to be trained on hundreds of thousands of examples, which teach it the possible interactions in a chat between two human beings; only then can it statistically assess the most likely answer to a given question. It's a tall order, and one that can only work if the conversation is very circumscribed (as with booking a table at a restaurant). “The reason Google Duplex is so narrow in scope isn’t that it represents a small but important first step toward such goals,” Marcus continues. “The reason is that the field of A.I. doesn’t yet have a clue how to do any better.” ... “Open-ended conversation on a wide range of topics is nowhere in sight.”
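To make the mechanism concrete, here is a minimal sketch of how such a narrow, statistics-driven dialogue system picks a reply. It is not Google's actual Duplex code: the training pairs and the crude word-overlap scoring are invented stand-ins for a learned model, but they show why the agent only works inside its one small domain.

```python
# A toy retrieval-style dialogue agent: it can only answer by analogy with
# question/answer pairs seen for its single task (restaurant booking).
from collections import Counter

# Hypothetical task-specific training data.
TRAINING_PAIRS = [
    ("do you have a table for two tonight", "Yes, what time would you like?"),
    ("can I book for four people tomorrow", "Certainly, for what time?"),
    ("what time do you close", "We close at 11 pm."),
]

def similarity(a: str, b: str) -> float:
    """Crude bag-of-words overlap, standing in for a learned statistical model."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    return sum((wa & wb).values()) / max(len(a.split()), len(b.split()))

def reply(user_utterance: str) -> str:
    # Return the answer paired with the statistically closest known question:
    # pure pattern matching, with no understanding of the words involved.
    _, best_answer = max(TRAINING_PAIRS,
                         key=lambda pair: similarity(user_utterance, pair[0]))
    return best_answer

print(reply("a table for two this evening"))  # in-domain: a sensible reply
print(reply("what do you think of Kant"))     # out of domain: it still answers
                                              # about closing times
```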

One thing at a time

This is true not only for algorithms specialising in language, but for all kinds of artificial intelligence. The same goes for image recognition systems, which are tasked, among other things, with recognising animals and people in photos. “...even small changes to pictures can completely change the system’s judgment,” writes Russell Brandom on The Verge. “[An] algorithm can’t recognize an ocelot unless it’s seen thousands of pictures of an ocelot – even if it’s seen pictures of housecats and jaguars, and knows ocelots are somewhere in between.” Put concisely, AI lacks the capacity to generalise and abstract: skills fundamental to human intelligence, which let us hold conversations of any kind and recognise an ocelot even if all we know is that it sits about halfway between a domestic cat and a jaguar. In this light, artificial intelligence is not just vastly inferior to human intelligence; it has not even set off on a path that could one day overtake it.

None of this is to deny the impressive progress made by deep learning. Today artificial intelligence algorithms can diagnose cancer, help lawyers find correlations between legal documents, translate between languages with ever greater precision, identify attacks by hackers before human experts can, and do a raft of other things of great importance. What links all these algorithms is that they work statistically and represent limited artificial intelligence, able to perform just one task at a time (in specialist terminology, artificial narrow intelligence, or ANI). An algorithm designed for translation cannot also diagnose cancer, any more than a system for recognising cats can pick out a cow. Each AI algorithm can do only one task; to take on a new one, it must be retrained from scratch.

In scientific terms, artificial intelligence lacks the human ability to extend a pre-existing behavioural repertoire and to face new challenges without recourse to a trial-and-error mechanism prepared by a third party, as Oxford physicist David Deutsch has explained. Deutsch argues that, along with being unable to abstract and generalise, AI has neither creativity nor common sense, two fundamental features of general intelligence in the true sense.
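The "ocelot problem" can be illustrated in a few lines of code. This is a deliberately simplistic sketch with invented numbers, not a real vision model: the point is that a statistical classifier must answer with one of the labels it was trained on, and has no notion of "somewhere in between".

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend feature vectors for the two known classes (stand-ins for the
# embeddings a trained network would produce).
CLASS_PROTOTYPES = {
    "housecat": rng.normal(0.0, 1.0, size=8),
    "jaguar":   rng.normal(3.0, 1.0, size=8),
}

def classify(features: np.ndarray) -> str:
    """Nearest-prototype classification: a stand-in for a trained network."""
    return min(CLASS_PROTOTYPES,
               key=lambda label: np.linalg.norm(features - CLASS_PROTOTYPES[label]))

# An 'ocelot' sits between the two prototypes, but the model has no such
# label: it is forced to answer housecat or jaguar, never ocelot.
ocelot_features = (CLASS_PROTOTYPES["housecat"] + CLASS_PROTOTYPES["jaguar"]) / 2
print(classify(ocelot_features))
```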

On the creativity front, however, we might raise some objections. The software that defeated the world champion at Go (a highly complicated board game) did so using moves that looked like errors, manoeuvres a human being would never have made, which suggests that, in a way, the AI thought outside the box and displayed a certain creativity. That is not all. On 25 October 2018 the painting Edmond de Belamy sold at Christie’s for $432,500, and the artist behind the work was not human. Programmers showed an artificial intelligence vast quantities of art from the past, and the system reworked them into an original painting. This AI was not truly creative, then; it simply imitated human creativity. But then, is that not what humans do too? At the end of the day, no painter, sculptor or performer has ever created something from nothing. They have always reworked, recreated and reinvented what came before them in the history of art. The datasets the AI used to bring its work to life could be seen in the same light as the influence of past artists on their successors.


Portrait of Edmond de Belamy (Obvious).

Social media: spot the difference

"And what about common sense?" you ask. Artificial intelligence's failings in this field are demonstrated perfectly by the constant problems with automatic filters on Facebook, designed to remove undesirable content. The algorithm charged with identifying images that infringe the social network's rules cannot distinguish between a pornographic photo (which needs removing) and a nude in a work of art (allowed). This lack of common sense is what makes it impossible for an algorithm to distinguish between a racist post (banned) and one that mocks the arguments (if you can call them that) of racists. Only human beings have the abilities needed to make such subtle and important distinctions. But we should not rule out AI one day overcoming all these limitations. Indeed, this is a hope cherished by investors around the world, who in 2018 beat records by placing $9.3 billion in artificial intelligence start-ups. This figure was 72% higher than that in the previous year. It would appear, then, that at least an iota of true intelligence is shining forth. In October 2016, the company DeepMind (belonging to Google) published a study in Nature about its new AI, able to plan an ideal route on the London Underground without prior attempts. This enormous progress was made possibile by training the machine (with the classic “trial and error” method) using underground maps from other cities. As it trained itself with these maps, the neural network learnt to use its memory to imagine useful information and call it up when needed. The system was called a differentiable neural computer and had an external memory, one that reused what it had already learnt and employed it to work new things out. Although this AI was still unable to carry out more than a single task, its ability to use memory practically, and prove it could learn, looked like the first step towards real artificial intelligence, of the human kind, which learns information on general terms and recalls it to make use of it. In brief, this AI is capable of very basic reasoning.


Diagram of a differentiable neural computer (deepmind.com).

An evenly-matched fight

More recently we have seen the emergence of generative adversarial networks (GANs), systems in which two different algorithms compete against each other, spurring one another on to the best possible result. How do they work? You start by giving both algorithms data to train on (for example, hundreds of thousands of images of cats), then assign each a different task. The first algorithm is the generator, and it draws on its database to create original images. The second is the discriminator, and it must decide whether the images it is shown come from the generator or from the database. The more accurate the generator's work, the more likely it is to deceive the discriminator; the two are comparable to a forger and an art critic. Every time the discriminator correctly refuses what it is shown, realising it is the generator's work, the generator must start afresh, forced to improve if it is to hoodwink its adversary. But the discriminator is doing the very same, honing its skill at identifying the generator's output, so the contest remains statistically balanced.

This system was used to create the “people who don't exist” that went viral a few months ago, along with disturbing deepfakes and the works of art already touched on. Above all, it is seen as the next step in artificial intelligence, whose full potential is yet to be revealed but which relies on something intrinsically human (and animal): the ability to collaborate and compete to get results. It may be lacking in generalisation and common sense, but AI is making constant and often impressive progress. Truly general artificial intelligence, capable of rivalling or overtaking that of humans, is still a long way off and may never come to pass. But, given the relentless success of deep learning, it would be wise not to rule anything out.
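The forger-versus-critic loop described above translates almost directly into code. Below is a minimal toy GAN in PyTorch, fitting a one-dimensional data distribution rather than generating images; the network sizes and learning rates are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

g = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))  # the forger
d = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # the critic
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(d.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # 'real' data drawn from N(3, 0.5)
    fake = g(torch.randn(64, 4))            # forgeries generated from noise

    # Critic's turn: learn to label real samples 1 and forgeries 0.
    opt_d.zero_grad()
    loss_d = (bce(d(real), torch.ones(64, 1)) +
              bce(d(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward()
    opt_d.step()

    # Forger's turn: adjust the generator so the critic calls its output real.
    opt_g.zero_grad()
    loss_g = bce(d(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# The generator's output should have drifted towards the real mean of 3.0.
print(g(torch.randn(1000, 4)).mean().item())
```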

AI studies nature

Predicting potentially dangerous natural phenomena can mean saving human lives, limiting damage and responding immediately to emergencies. The numbers speak for themselves: in 2018 alone, natural disasters caused damage worth $160 billion around the world. Experts in various fields have been working for some time on how to predict natural phenomena with AI. For example, they use data on volcanic activity to predict eruptions; but if this data is scanty, as it is with Stromboli, it is not enough to build predictive models. With volcanoes like Etna, however, whose constant activity has given experts plenty of data, eruptions can be predicted up to an hour in advance by analysing variations in low-frequency sound waves in the air.

When it comes to seismic activity in the subsoil, the biggest problem is again the paucity of data, but AI can be particularly useful in the period after an earthquake. A deep learning system connected to satellite images can provide a fast and precise estimate of the damage, quickly counting the number of buildings toppled and roads blocked, as happened in Japan.

For extreme weather events like floods, tornadoes and high-intensity hurricanes, AI is producing helpful results, as the data on them is plentiful. NASA, for example, is working on a system that traces the progress of hurricanes hour by hour, to find out exactly where they will strike; until a few years ago, forecasts could only be updated every six hours. Google, meanwhile, has a large team working on an algorithm to predict floods, only 75% of which can currently be foreseen.

There are still limits to overcome in AI-based systems for preventing natural disasters. Some phenomena are influenced by ongoing rapid climate change: you may be able to predict a river flooding with the data you have, but you cannot build an exact predictive model if that data is being skewed by global warming. Another thing to take into account is that these systems are not always reliable: humans supply the data the AI has to process, and may ignore variables or make errors in doing so. Finally, the systems can raise false alarms, and thus lose effectiveness over time.
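The common pattern behind these early-warning systems, learning from historical monitoring data which conditions preceded a disaster, can be sketched briefly. Everything below is invented for illustration (the 'infrasound' features, the numbers, the class balance); it shows the shape of the approach, not any agency's real model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic history: each row is an hourly monitoring window described by two
# made-up features, e.g. [mean infrasound pressure, pressure variance].
quiet  = rng.normal([1.0, 0.2], 0.1, size=(500, 2))  # no eruption followed
unrest = rng.normal([1.6, 0.8], 0.1, size=(50, 2))   # eruption within an hour
X = np.vstack([quiet, unrest])
y = np.array([0] * 500 + [1] * 50)

# Eruptions are rare, so the classes are rebalanced; unchecked false alarms
# are exactly the failure mode the article warns about.
model = LogisticRegression(class_weight="balanced").fit(X, y)

# A new hourly window of sensor readings yields an alarm probability.
new_window = np.array([[1.5, 0.7]])
print(model.predict_proba(new_window)[0, 1])
```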


Etna erupting at night.