Current holder of the Algorithms, Machines and Languages chair at the Collège de France, Gérard Berry marks the celebration of Inria's 50th anniversary by talking about the development of the discipline of the century.
Researcher at the École des Mines
CTO of Esterel Technologies
Inria Research Director
Algorithms, Machines and Languages permanent chair at the Collège de France
The sciences came into my life when I was a child: my mother was a teacher and I probably learned to read from chemistry books. The pictures of bench scientists in grey coats handling pipettes, retorts and the like fascinated me, and soon led me to set up my own laboratory. At the time, getting hold of real chemicals - such as concentrated sulphuric acid or fuming nitric acid - was not a problem!
At school, it was a different story. I didn't work very hard, because we didn't do much there. What interested me, deep down, was understanding what we didn't know rather than what we already knew. Because of this, it was obvious to me that I would go into research, even though I didn't yet know the word.
I tinkered with my first computer at the École Polytechnique in 1967. Serious computer science then entered my life at the Corps des Mines, in 1970, with a project named TIF (on file processing and information). Two subjects were emerging at the time: computer science and molecular biology. What guided my choice? The environment: whereas biology's was rather academic, I saw computer science as a creative environment full of original innovators. Pierre Lafitte, who was head of the École des Mines at the time, told me straight away: computer science is the future!
From the lambda calculus to Esterel
Nothing can beat a conversation between friends. At the end of 1971, on the advice of Philippe Flajolet and Jean-Marc Steyaert, old friends from Polytechnique whom I had run into in the street, I joined - on a part-time basis - what was then called IRIA, to prepare a thesis under the supervision of Maurice Nivat, who gave me the benefit of his knowledge and extraordinary interpersonal skills. Some young researchers - who would soon be among the great names of computer science - were already working there: Jean Vuillemin, Gérard Huet, Gilles Kahn, then later Jean-Marie Hullot, etc. The atmosphere was wonderful and there were many international visitors. I worked a lot: alone at first, then with Jean-Raymond Abrial and finally, for my thesis, with Jean-Jacques Lévy, on the lambda calculus - a sublime logical calculus at the heart of programming. Theory dominated at the time, since French researchers did not have access to the computers used elsewhere in the world.
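For readers who have never seen it, here is a minimal, purely illustrative sketch - in OCaml, and not drawn from any of the work mentioned here - of what the lambda calculus boils down to: terms built from variables, functions and applications, governed by a single rewrite rule, beta-reduction.

```ocaml
(* An illustrative encoding of untyped lambda-calculus terms. *)
type term =
  | Var of string                (* x *)
  | Lam of string * term         (* \x. body *)
  | App of term * term           (* function application *)

(* Naive substitution of s for free occurrences of x; it is only
   capture-safe when all bound names are distinct, which keeps the
   example short. *)
let rec subst x s = function
  | Var y -> if y = x then s else Var y
  | Lam (y, body) -> if y = x then Lam (y, body) else Lam (y, subst x s body)
  | App (f, a) -> App (subst x s f, subst x s a)

(* The single rule of the calculus, beta-reduction:
   (\x. body) arg  reduces to  body[x := arg]. *)
let beta_step = function
  | App (Lam (x, body), arg) -> Some (subst x arg body)
  | _ -> None

(* Example: (\x. x x) y  reduces in one step to  y y *)
let _ = beta_step (App (Lam ("x", App (Var "x", Var "x")), Var "y"))
```

Remarkably, everything computable can in principle be expressed with these three constructs alone, which is why the calculus sits at the heart of programming.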
I also did a lot of mountain climbing at that time. When Pierre Lafitte suggested I join a new joint automatic control/computer science laboratory at the École des Mines in Sophia Antipolis, I accepted without any hesitation and arrived there in 1977. The world of computer science research was very different from the one we know today: for the international school on the semantics of programming languages that I organised in 1977, everyone working on the subject fitted into a single lecture hall! In 1982, my interest in continuing with the lambda calculus waned: the problems were becoming very difficult, and my desire to solve them much weaker. My automatic control friends, who were taking part in an autonomous mini-car race, had new ideas about how to program such vehicles. I jumped at the chance of a new research subject: this was to be the Esterel language, which would keep me busy for over 25 years. At the time, together with Gérard Boudol, I co-managed the Meije project - a joint project between the École des Mines and Inria. Why this name? It was a tribute to our shared passion for that mountain.
We were then at the start of the upsurge of parallelism in algorithmics and programming. We had few resources, but they were decided and allocated by the researchers themselves, which gave us great freedom. This period was also marked by a spirit of collaboration: in Grenoble, Paul Caspi and Nicolas Halbwachs's team was developing the Lustre language, which would be adopted by Airbus, while we, for our part, worked with Dassault Aviation on the Rafale. I joked: why not hold a dogfight to decide between the two languages? Ultimately, the two languages were united in the 2000s.
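To give a flavour of the synchronous style that Esterel and Lustre pioneered - sketched here in OCaml rather than in either language, with hypothetical names - a reactive program is a deterministic function executed afresh at each discrete instant: it reads this instant's input signals and produces this instant's outputs. The classic "ABRO" behaviour (emit O once both A and B have occurred, reset on R) looks like this:

```ocaml
(* Input signals present or absent at one instant. *)
type inputs = { a : bool; b : bool; r : bool }

(* State carried between instants: which of A and B have occurred,
   and whether O has already been emitted, since the last reset. *)
type state = { seen_a : bool; seen_b : bool; emitted : bool }

let init = { seen_a = false; seen_b = false; emitted = false }

(* One synchronous reaction: from the current state and this instant's
   inputs, compute the next state and whether O is emitted now. *)
let react s (i : inputs) =
  if i.r then (init, false)                    (* R preempts everything *)
  else
    let seen_a = s.seen_a || i.a in
    let seen_b = s.seen_b || i.b in
    let emit_o = seen_a && seen_b && not s.emitted in
    ({ seen_a; seen_b; emitted = s.emitted || emit_o }, emit_o)

(* Three instants: A alone, then B alone (O emitted), then R (reset). *)
let s1, o1 = react init { a = true;  b = false; r = false }  (* o1 = false *)
let s2, o2 = react s1   { a = false; b = true;  r = false }  (* o2 = true  *)
let _,  o3 = react s2   { a = false; b = false; r = true  }  (* o3 = false *)
```

In actual Esterel, preemption and parallel composition are language primitives rather than hand-coded state, which is what made programming controllers such as those mini-cars so much more direct.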
It was only in 2001 that I left the world of research, for eight years. Esterel had already been industrialised, but Simulog, the company that distributed it, was about to close. Eric Bantegnie, its CEO, suggested creating a GIE (economic interest group) between his new company, Esterel Technologies, and our research bodies; however, one of them pulled out at the last minute, which led me to join Esterel Technologies. There I devoted my time to Esterel for the design of electronic circuits, a field I had initiated with Jean Vuillemin in 1990 at Digital Equipment and which met with great success from 1992 onwards. As a result, I had many contacts in the USA, which I visited regularly as a consultant for Cadence and Synopsys (two fierce competitors) but also for Intel, where I worked with Michael Kishinevsky, who taught me a lot. Together we developed Esterel v7, a much more powerful language. By then, I was no longer interested in research results but in industrial scaling-up, so industrialising Esterel v7 became my job at the start-up. It was a success until the 2008 crisis, which hit us with full force and forced us to close the circuits section - the one in which I was most involved - in 2009. The section devoted to critical software in avionics and many other fields had incorporated SCADE, the industrial version of Lustre, and was in great shape. Esterel Technologies is now part of the American company Ansys.
The explosion of computer science
Since the beginning of the century, the fundamentals of computer science in algorithmics, languages and machines have not really changed, but new domains have emerged. A good example is the extraordinary growth of the Internet, with truly new inventions: search engines, social networks, cloud computing, etc. Another is the processing of big data through machine learning, which was still in its infancy in the 20th century and is currently making remarkable breakthroughs in many fields. Similarly, sciences that were still barely computerised - unlike astronomy or physics - are today undergoing remarkable computational growth: biology, computational neuroscience, medicine with imaging and diagnosis, etc. There is no doubt that the impact on society will be great, bringing numerous benefits (in medicine, for example) but also serious dangers. The issues of software quality and IT security in particular are becoming increasingly critical.
A key point for me is that the general public's understanding of these issues is skewed by an ill-suited mental model, inherited directly from the 20th century and focused on the classic matter-energy-waves triplet (cf. my aforementioned book). Information, the raw material of computer science, is of a very different nature, and the ways of reasoning about it and acting on it are also very different. Widespread ignorance results in excitement and fears that are not necessarily well founded. We feared the replacement of books, cinemas and concert halls by at-home screens. This has not really happened: paper books are surviving very well, and going to the cinema or a concert remains a major social act. It is true, however, that CDs, DVDs and Blu-rays will disappear, replaced by downloads and streaming - the latter no doubt having the longer life expectancy, since everyone can now access all of the world's music on their phone.
Other fears are, on the other hand, justified: the impact of ultra-fast information and social networks on our behaviour is still not fully understood, and recent examples of the large-scale relaying of rumours or fake news with great potential to cause harm mean that the question must be taken seriously. The mass dissemination of personal data is also a major problem, especially since anonymising it may well be an illusion, given the power of current data cross-referencing. Many other fears fall well short of the reality, for example regarding IT security, where attacks are growing and becoming more sophisticated at great speed. They could seriously slow down the Internet of Things we are being promised: do you really want to computerise your home and an entire town if they can be thrown into chaos remotely?
My first digital memory
My first encounter with a computer was in 1967, with the Seti PB 250. It was a rather bizarre French computer, with a somewhat strange machine language, punched tapes and a magnetostrictive memory that was difficult to use effectively... I worked on this machine a lot and, by using it, I understood two essential things: that computer science really was delicate and close to logic, and that every detail mattered. As a great lover of paradoxes - much more a logician than a mathematician - I really liked that!
For certain other fields, we do not yet really know what will happen. There are major openings whose impact and limits we do not yet understand, the best current example being machine learning. However, going further, towards an artificial intelligence truly deserving of the word 'intelligence' - something, moreover, that nobody has ever been able to define properly - will be a far more difficult task, about which we still know very little apart from the fact that there will be many fundamental obstacles. Even restricting ourselves to learning: children learn very quickly and in many fields simultaneously, something automated systems do not know how to do. And the success at games such as Go, however remarkable, must also be put into perspective: given that we cannot even do multiplication in our heads - something trivial for a computer - do we really have strong reasons to be good at games? A very interesting subject, however, is designing learning machines based on principles other than current methods, in particular with the help of far more analogue computing mechanisms.
But let us not confuse exciting research with guaranteed success. I often hear phrases such as "We will be able to do that in 2030", or 2045. For me, these phrases make no sense, especially in a subject with so many unknowns. You only need to look at the peremptory projections of the 1980s: not much turned out to be correct... Conversely, nothing says that the classic computer will not disappear: we just don't yet know what could replace it.
The 20th century is over; we have to get used to it. But computer science is still like the Wild West, and we are a long way from having explored everything. The number of small start-ups that still succeed on the basis of a relatively mundane idea is proof of this: we are just at the beginning of the story.