
Interview with Michèle Sebag on artificial intelligence: its past, its future, its limitations

Published on 28/01/2020
Michèle Sebag, co-leader of the TAU research project, was recently elected a member of the French Academy of Technology, a French academic association founded on 12 December 2000 whose aim is to ‘enlighten society on the best use of its technologies’, and where she sits on a commission on the ethics of computer science. An expert on artificial intelligence since the 1990s, she was also recently named by the magazine L’Usine Nouvelle as one of the “pioneers” of artificial intelligence: those “specialists in machine learning and data [who are] advancing the performance of algorithms and launching them to conquer new territories”. We meet a researcher who has been dedicated to AI for nearly 30 years…
Michèle Sebag
© Inria / Photo G. Scagnelli

The 5 major advances that have made AI what it is today

Although the first ‘thinking’ machines appeared very early in tales (the Golem) and in science fiction (Frankenstein, 19th century), it was during the last century that scientists began to devise what they then, quite improperly, called ‘electronic brains’. In the 1940s the mathematician and cryptologist Alan Turing (the subject of the 2014 film The Imitation Game) devised a universal machine, the Turing machine, and explored how it could solve any problem by manipulating symbols. This work seeded the idea that an artificial system could solve problems ‘like a human’… and so began paving the way for AI.

After an initial period of high hopes, AI experienced its first bleak period in the 1970s: too many hopes and broken promises, cuts to funding, and so on. It was not until about a decade later that interest, research and investment were rekindled. Expert systems – software able to answer questions by reasoning from facts and known rules – then became popular: “They seem childish to us today,” says Michèle Sebag, “but they demonstrate that computer science can be used to reason, in addition to calculating: it is the beginnings of artificial intelligence.” An expert system can reproduce syllogisms: ‘Socrates is a man, all men are mortal, therefore Socrates is mortal.’ But the limits are reached quickly: real-world problems are full of exceptions and the rules are difficult to acquire. Hopes then turned to the automatic construction of rules or models from the available data: and so began the age of machine learning.
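To make the idea concrete, here is a minimal sketch of rule-based forward chaining in Python, encoding just the syllogism above by hand; the representation and the tiny rule engine are illustrative inventions, not taken from any real expert system.

```python
# A toy forward-chaining "expert system": facts are (predicate, subject)
# pairs, and each rule says "whatever satisfies the premise predicate
# also satisfies the conclusion predicate".
facts = {("man", "Socrates")}          # Socrates is a man
rules = [("man", "mortal")]            # all men are mortal

def forward_chain(facts, rules):
    """Apply every rule to every matching fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))  # therefore Socrates is mortal
                    changed = True
    return derived

print(forward_chain(facts, rules))
# {('man', 'Socrates'), ('mortal', 'Socrates')}
```

Real expert systems of the 1980s worked on the same principle at a much larger scale, but the limitation noted above stands: every real-world exception needs its own hand-written rule.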

This discipline benefited from an unprecedented phase of expansion: computing power was increasing enormously, and most human activities (from supermarkets to banks, hospitals to education) came to rely on computers, which provided abundant data. Fed by these increases in raw material (data) and means (computing power), algorithms came on in leaps and bounds!

The current era is that of deep neural networks: deep learning. It must be emphasised that neural networks were invented well before the 2000s, but pioneers such as Yann Le Cun or Jürgen Schmidhuber were long prophets in the wilderness… The ‘Eureka!’ moment came in 2012, during the ImageNet Challenge image-recognition competition. The principle is simple: an ‘intelligent agent’ must recognise the elements present in a large set of images (cats, dogs, planes, lions, sunsets, etc.). Classic programs battled it out; the best of them made roughly 25% errors, give or take. But that year, the University of Toronto crushed all its competitors with an error rate of only 16%, using a learning method developed by Yann Le Cun in the 1990s: the convolutional neural network.
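As an illustration, here is a minimal convolutional network in Python with PyTorch. The layer sizes, the 32×32 input and the 10-class output are illustrative assumptions, orders of magnitude smaller than the network that won ImageNet in 2012; only the principle (stacked convolutions and pooling feeding a classifier) is the same.

```python
# A tiny convolutional neural network for image classification (sketch).
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # halve the resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):                  # x: (batch, 3, 32, 32)
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = TinyConvNet()(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 10]) -- one score per class
```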

Note that these neural networks have nothing to do with biological networks (our brains): where the human brain learns to recognise a cat from one or a handful of examples, the computer needs millions of images. An AI has no DNA, no parents, no curiosity(*); it does not move around in the environment…

After that, progress accelerated. With the famous game of Go (2016), AIs were created that surpassed the best human players. The difference compared to chess is that brute force is not enough – the number of possible combinations in Go is larger than the number of atoms in the universe! You must play, but focus on interesting situations: and what are they? How do you recognise them? You must already be a good player to know whether a situation is interesting… In other words, the AI must learn continually, generating new data as it plays: this is reinforcement learning. The work started by Sylvain Gelly and Olivier Teytaud (now at Google and Facebook respectively) in the TAO team led by Marc Schoenauer and Michèle Sebag directly inspired DeepMind’s AlphaGo software. The next challenge is to make AI frugal: to give an idea, the number of games AlphaGo played to reach its level corresponds to the number of games a human would play over several hundred years…
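The sketch below shows the reinforcement-learning loop in its simplest, tabular form: act, observe a reward, update value estimates. The corridor environment is a toy of our own making; AlphaGo adds deep networks, self-play and Monte-Carlo tree search on top of this idea, none of which is shown here.

```python
# Tabular Q-learning on a toy 5-cell corridor (sketch).
import random
from collections import defaultdict

class ChainEnv:
    """Walk right along a 5-cell corridor; reaching the end pays 1."""
    actions = ["left", "right"]
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        self.pos = max(0, self.pos + (1 if action == "right" else -1))
        done = self.pos == 4
        return self.pos, (1.0 if done else 0.0), done

def q_learning(env, episodes=300, alpha=0.1, gamma=0.9, epsilon=0.2):
    q = defaultdict(float)                 # (state, action) -> estimated value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Explore occasionally, otherwise exploit current estimates.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(q[(next_state, a)] for a in env.actions)
            # Move the estimate toward reward + discounted future value.
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q = q_learning(ChainEnv())
print(max(ChainEnv.actions, key=lambda a: q[(0, a)]))  # 'right'
```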

Another advance of recent years, generative adversarial networks (GANs), involves two intelligent agents in competition. The first (the creative agent) tries to generate realistic images; the second (the examining agent) separates real images from generated ones. The creative agent is doing well when it manages to fool the examining agent. The fields of application of GANs are innumerable: for example, in the context of autonomous vehicles, the generator could create images of difficult situations to test the vehicle, and the presence of the examining agent would ensure that these images are as close as possible to real situations.
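Here is a minimal sketch of that adversarial loop in Python with PyTorch, on one-dimensional toy data rather than images; the network sizes and the ‘real’ data distribution are illustrative assumptions.

```python
# A toy GAN: the creator G forges samples, the examiner D tells real from fake.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # creative agent
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # examining agent
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0       # "real" samples: N(2, 0.5)
    fake = G(torch.randn(64, 8))                # the creator's forgeries

    # Examiner: learn to label real as 1 and generated as 0.
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Creator: try to make the examiner label forgeries as real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())    # should drift toward 2.0
```

Note that the creator never sees the real data directly: it improves only through the examiner’s verdicts, which is exactly the competition described above.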

How far are we able to go?

In principle, we can imagine a very powerful AI drawing on all the human knowledge available on the internet, on traffic, electricity and water sensors, on ubiquitous cameras… to optimise traffic and avoid crashes or accidents… or to help Big Brother. Beyond the hopes and fears (which are immense, see below), is such an AI feasible?

There are several barriers. Firstly, AI depends on the quality of the available data and on whether this data really reflects the world. Often, though, the data is collected by human beings, and we don’t record “what is obvious”; the problem is that what is self-evident for a human being is not self-evident for an AI.

A second barrier is that, to really learn about an environment, it is often necessary to play with it: to interact with it and to learn how our actions change it. If the AI does not interact with the environment, it stays on the surface of things; if it acts, how can we limit the dangers?

Can AIs learn ethics?

Despite their current limitations, it is clear that AIs may eventually become the best and the worst of things: invaluable allies or formidable opponents. So we now have to think about safeguards. In the same way that the education, justice and police systems set rules to preserve life in society, we need to define rules for AIs and their development.

The first type of safeguard concerns data: AIs, hungry for data, must respect privacy and must not destroy the social contract. Suppose, for example, that an insurance model can conclude with confidence that a particular person is ‘high risk’; the consequence would be that this person would face high premiums (or be unable to get insured at all); conversely, a person identified as ‘safe’ would have no interest in getting insured. In other words, the solidarity and risk-sharing at the heart of insurance would disappear.

The second type of safeguard concerns criteria. A ‘do not be evil’ type of rule makes no sense in itself for an AI. What is the expected behaviour? If an autonomous car has the choice between running over six dogs, or three cats and a rabbit, what should it do? Humans (all categories of humans involved: users, engineers, lawyers, philosophers, economists, sociologists, etc.) must agree on the ‘least bad’ way to resolve critical cases.

The third type of safeguard concerns humans. Helped by an all-powerful AI, how can we avoid falling into barbarity? Helped by a benevolent AI, how do we avoid falling into laziness? Or into fatalism? What will the new challenges be? Having built good AIs, how do we become better humans?

(*) Jürgen Schmidhuber is thus one of the first scientists to have proposed a mechanism of artificial curiosity and creativity. It rests on two poles: an information-compression pole (a system that looks at images, things and facts and summarises them) and a surprise pole (a system that tries to surprise the first one).
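As a loose toy interpretation of this two-pole idea (our own construction in Python, not Schmidhuber’s actual formalism): a running predictor plays the compression pole, and the curiosity signal is its learning progress from one observation to the next.

```python
def curiosity(observations, lr=0.2):
    """Return per-step learning progress of a one-number 'compressor'."""
    prediction, prev_error, progress = 0.0, None, []
    for x in observations:
        error = (x - prediction) ** 2            # surprise: prediction failure
        if prev_error is not None:
            progress.append(prev_error - error)  # reward = error reduction
        prediction += lr * (x - prediction)      # the compressing pole adapts
        prev_error = error
    return progress

stream = [0.0] * 10 + [5.0] * 10   # a new regularity appears at step 10
print([round(p, 2) for p in curiosity(stream)])
# ~0 while the stream is boring, negative at the first surprise, then
# positive while the new pattern is being learned, fading once "compressed"
```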