Louis Monier, co-founder of the AltaVista search engine, has spent forty years moving between research, industry, and entrepreneurship: from Inria to MIT, from eBay to Silicon Valley start-ups, he has taken an interest in smartphones as well as the partnership between Big Data and health care. His current fascination is with artificial intelligence. His career offers a way of thinking about the digital projects of the future and their social and cultural implications.
École normale supérieure de Cachan and PhD in Mathematics and Computer Science at Paris-Sud University (Orsay)
Creation of the AltaVista search engine
Research engineer at DEC, eBay and Google
Research Science Lead at Airbnb
From Inria to Palo Alto
In 1978, I was a student at ENS Cachan and was studying for my Master of Advanced Studies (DEA) at l’IUT d’Orsay. One of my computer science teachers, Jean Vuillemin, offered me a research position at Orsay and, soon afterwards, suggested the idea of doing research at Inria. The offer interested me, as did the constant new challenges it represented, and I left ENS Cachan.
I then worked at Inria while studying for my doctoral thesis at Paris-Sud University in Orsay. One of the popular fields of study at the time was cryptography, because of the new RSA cryptosystem invented by three MIT researchers, whose security rests on an arithmetic property: it is extremely difficult to factor very large numbers. The security of new encryption methods has always been, and still is, based on this computational difficulty. For my thesis, I surveyed all known factorisation methods and presented areas for further work in this field. It’s still a work in progress!
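The connection between RSA and factoring can be shown in miniature. This is a toy sketch with textbook-sized primes (real RSA uses primes of over a thousand bits, which is precisely what makes the attack below infeasible): encryption and decryption are easy with the key, while an attacker must factor the public modulus.

```python
# Toy RSA with tiny primes -- illustrative only; real keys use
# primes of 1024+ bits, which makes factoring n impractical.
p, q = 61, 53            # secret primes
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # Euler's totient: 3120
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)   # encryption: msg^e mod n
plain = pow(cipher, d, n) # decryption: cipher^d mod n
assert plain == msg

def trial_factor(n):
    """Brute-force factoring -- only feasible because n is tiny.
    An attacker who factors n recovers p and q, hence phi and d."""
    f = 2
    while n % f:
        f += 1
    return f, n // f

assert sorted(trial_factor(n)) == [53, 61]
```

Everything the attacker needs is public (n, e); the entire secret lies in the difficulty of recovering p and q from n.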
I always wanted to work in a cutting-edge field where I could use my research for something real and provide tangible outputs for as many people as possible. It’s about explaining “how” but also finding “what” by creating a real application. I was always surprised by the taboo in France around links between research and industry. Inria was somewhat unique in this regard: enough people there had spent time in the States to develop a more balanced perspective, and they were far less dismissive of industrial contacts than researchers who had never left France, or even Paris.
I still live across the pond today and have kept in touch with Inria through contacts with people like Jean Vuillemin, who regularly visited to immerse himself in American research, and sent me his best students, such as Bertrand Serlet with whom I spent six years at Xerox PARC. He then worked for Steve Jobs for twenty years, becoming Senior Vice President of Software Engineering at Apple.
In the early 1980s, computer science research focused on mathematics and algorithms. Inria stood out in that its research centre housed a large and innovative computing facility.
In 1980, I was able to convert my military service into a cooperation posting in the United States when Carnegie Mellon (Pittsburgh) offered me a postdoctoral position. I was interested in the new and innovative subject of integrated circuits and the tools used to design them. Previously, each firm had developed its own way of creating a circuit, to the point where everyone was working in a bubble. If something changed or a researcher left, everything had to be started again from scratch - it was as if each company had its own programming language. With the encouragement of DARPA, some American researchers decided to standardise these circuits, developing the Mead and Conway courses to push for standardised tools and outputs. I spent three years working on the topic.
I then went to Xerox PARC in California, where I went one step further with a project to create basic computer circuits. This project was somewhat crazy as we designed the tools and circuits at the same time, but it was also very rewarding. There are places and moments in time that are just magic, where people come together, and it is impossible to tell whether it’s the people who make the place incredible, or if it’s the place that attracts exceptional individuals. Outstanding talent was concentrated in the same place, with people such as Bertrand Serlet whom I mentioned earlier, and Pradeep Sindhu, who founded Juniper Networks in 1996. Why such interest in integrated circuits? Undoubtedly because it was the field that handled the most data, with increasingly complex objects. At the time, circuits contained millions of transistors and wires; they now have billions. The theoretical problems raised were also real problems with market potential.
In 1989, I moved to a research laboratory at DEC (Digital Equipment Corporation) in Palo Alto, mainly to work on integrated circuit design tools. By 1994, circuit design had reached critical mass in start-ups, so I moved on to another burning field - the internet. In 1995, I launched a research project that went on to become AltaVista. The history of one of the first search engines is somewhat bittersweet. Research within a major group is not always simple. DEC manufactured old-school hardware and software. The company had invented a minicomputer the size of a refrigerator, but its power and price were reasonable, and research and industry players fought to get their hands on it in the 1980s. This computer, the VAX 11/780, made DEC both rich and famous. However, it didn’t last, as the company failed to adapt to the latest innovations, including those at Sun, which produced computers the size of a takeaway pizza box.
When I started the AltaVista project, DEC had not realised the importance of research on the internet. They tolerated me because the search engine’s success generated a lot of positive press coverage. But I never had real support, and I was unable to spin AltaVista off as a stand-alone innovation. Commercial success was not in the cards, but the technical success, in terms of social impact and growth, was enormous! Then Google took over; their researchers didn’t face the same constraints as I did. At the same time, AltaVista brought out Babel Fish, the grandfather of Google Translate and the first online translator, based on Systran technology. It took the first steps in breaking down the language barrier by giving the public approximate translations of e-mails, web pages and articles.
Later on, I realised that what I found so fascinating about the internet was being able to search for information, and also the sheer scale of the immense mass of data. I left AltaVista in 1999 and split my time between eBay, Google and over ten start-ups. I liked their attitude. Start-ups might be destined to fail, but that’s OK as long as you do it quickly. Failure is an experience in the United States. Entrepreneurs take risks and believe in their project, but if it fails, everyone considers it a valuable lesson. I was steeped in this environment for ten years. I spent a lot of time working on natural language, search and suggestion tools. In 2013, I launched a start-up that combined healthcare and Big Data. The idea was to process millions of anonymous patient health records to create a predictive tool that worked by cross-referencing data (age, pathologies, lifestyle, medicines, etc.). I hoped to anticipate some pathologies and check the effectiveness of treatments. Our trial with the VA (Veterans Affairs hospital system) was very positive. However, it was impossible to collect sufficient quantities of patient data, as American healthcare systems are afraid to reveal their data - ostensibly for patient confidentiality reasons, but also because they fear that analysis would expose inequalities or questionable practices. The idea was good but the market was too reticent. We were ahead of our time. Perhaps that will change in a few years, no doubt through pressure from the patients themselves.
After AltaVista, eBay, Google and ten years of start-ups, I now want to focus on deep learning.
I often joke that there are now only two topics that really warrant further research, both of which have the greatest potential for creating serious discontinuities - deep learning for IT and the CRISPR/Cas9 system for biology. Deep learning is finally delivering the results that artificial intelligence has been promising for 60 years, with image processing, speech recognition and much more. CRISPR is often described as gene scissors and is a new and extremely precise way of editing DNA. It opens the way to new applications that, until now, were pure science fiction, such as correcting genetic diseases in an embryo or person.
Coming back to deep learning, I believe that we have barely scratched the surface. We don’t know how far it will take us or how quickly, but progress is moving swiftly. For a long time, we had an obsolete view of artificial intelligence based on a series of rules - and I was one of its greatest critics. At the same time, a small group of pioneering researchers like Yann LeCun worked almost secretly until the late 2000s to redefine artificial intelligence on the basis of data analysis. A somewhat simplified description of deep learning: instead of telling a machine how to solve a problem, you expose it to many examples of questions and answers, so that a very simple algorithm (which is almost always the same) can build a model - an algorithm that is completely incomprehensible to us, but that will generally be able to complete the task. Deep learning creates solutions that are very different from our own, but they work well. Since 2012, we have extended its scope to what we previously considered uniquely human, such as speech, sight and, to a lesser extent, language comprehension. The subject is currently witnessing exponential growth, and the number of articles published and of researchers specialising in the field will make deep learning the leading subject of the coming decades.
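The examples-instead-of-rules principle can be sketched in a few lines. This is not a deep network - just a two-parameter linear model, with illustrative choices of data, learning rate and iteration count - but it shows the mechanism described above: a generic procedure (gradient descent) fits a model from question/answer pairs, and the rule itself is never stated to the machine.

```python
# Learning from examples rather than rules: the machine is shown
# question/answer pairs generated by the hidden rule y = 2x + 1,
# and a generic update procedure infers the rule's parameters.
examples = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0          # model parameters, initially arbitrary
lr = 0.01                # learning rate
for _ in range(2000):    # repeated exposure to the examples
    for x, y in examples:
        pred = w * x + b     # model's current answer
        err = pred - y       # how wrong it is
        w -= lr * err * x    # nudge parameters to reduce the error
        b -= lr * err

# The rule was never written into the program; it was inferred.
assert abs(w - 2) < 1e-3 and abs(b - 1) < 1e-3
```

A deep network applies the same idea with millions of parameters arranged in layers, which is what makes the resulting model so opaque to us.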
I believe that deep learning will have a serious effect on all industries and on our lives. It currently has applications in many fields, including natural language, voice recognition, image processing and generation, medical diagnosis, targeted advertising, the creation of smart electrical networks, robotics and self-driving cars. And that’s just the start - it will revolutionise our lives!
These applications will need continuous research, particularly to make physical electronic objects safe and secure. Deep learning will take software into the physical realm, for example with self-driving cars: firms such as Tesla are using this technology very effectively and very quickly, no doubt paving the way for future production models. Automation will save lives. Road accidents kill around 1.3 million people each year across the world, and self-driving cars could prevent many of those deaths. They will also change our lifestyle and modes of transport, making us more mobile, with much smarter ways of sharing driving, and will probably help redefine our cities. I was lucky enough to drive an almost self-driving car, and I can assure you that it’s great and really exciting to be living in the future!
Artificial intelligence naturally poses the question of risks. I believe in the risks caused by human error, but that is true of any tool that we use poorly. However, I do not believe in Terminator-style risks, where the machine rebels. Our goal isn’t to create human-like beings, but to automate tasks that we don’t want to do, thereby “enhancing” humans for tasks that require human qualities. This enhancement is no different from the invention of the electronic calculator in 1970 and no-one ever feared that a calculator would rebel!