Deep learning has made impressive breakthroughs in image recognition, language understanding, and decision-making. As a result, machine learning will have an ever-growing impact on our everyday tasks, which is why mastering computational and data skills will soon be essential for everyone, says Dr Marcel Salathé from the EPFL in Lausanne in this interview* with Prof Dr Reinhard Riedl from the BFH Centre Digital Society.
At the Transform Conference in September 2019 you said that computational and data skills will be basic skills for nearly everyone in the future. Does this mean that academics from the humanities and even craftsmen will have to acquire computational thinking and data processing skills?
Yes, although I would put it a bit more positively. Everyone should take the opportunity to acquire some basic technology skills, because technology can act as an amplifier of our intentions, and these skills will therefore help you reach your goals more efficiently, whatever those goals are. But truth be told, I’ve been trying to give this a positive spin for quite some time. 2020 seems like a good moment to be more blunt: without digital skills, it is hard to see how you can remain competitive in your job, especially if that job is closely tied to knowledge and services.
What does this imply for basic education in kindergarten and primary school? Do we have to start teaching computational and data skills early, or can we wait until secondary school, i.e. Sekundarschulen and Gymnasien?
This is a great question. I am not an expert in early education, but it would seem sensible to start teaching computational skills early on.
The Extension School provides courses on computational thinking and data science that do not require specific secondary school degrees as prerequisites. Can everybody get a degree from the Extension School? And what does it take to obtain such a degree?
Yes, at the EPFL Extension School, we do not ask for any background. We literally do not care who you are or what you know when you start – all that matters is that you are highly motivated to learn these skills. Our regular courses lead to certificates, which we offer at multiple levels, from absolute beginner to expert. But we also have so-called multi-course programs, which culminate in a real EPFL diploma from the EPFL Extension School.
At the Transform Conference you also stressed the fact that today artificial intelligence means machine intelligence. Does this mean that both expert systems from Old AI and the attempts to establish systems with emerging intelligence in New AI are dead?
Expert systems are by and large gone. Today’s AI is mostly deep learning, with some classical machine learning techniques mixed in for the best results. That we call this “intelligence” is simply a terminology issue – none of these systems have any real intelligence in the biological sense. They are very good at one particular task, and then fail spectacularly at others. Everyone is trying to figure out how to achieve artificial general intelligence (AGI) – that would be a real game changer.
Will machine learning have a considerable impact on our everyday tasks in the future?
Yes – machine learning is now demonstrably the strongest computational method for tackling problems that were long thought impossible for computers. Everything that could be solved without machine learning has already been solved. Thus, the breakthroughs we observe at the moment, in any domain, are based on modern deep learning. Another easy way to see that machine learning will have a major impact on everyday tasks is that deep learning has made very impressive breakthroughs in image recognition, language understanding, and decision-making. It is hard to think of everyday tasks that do not involve at least one of these areas.
Causal machine learning has made a big leap forward in the last three to five years. What will be the impact on science and industry?
That’s still at a very early stage. We currently see very few use cases for causal machine learning. At the moment, the major breakthroughs in science and industry come from deep learning, which is purely correlational.
Many people fear that the use of machine intelligence may be unethical in many practical cases. How do you perceive the risks of improper use of machine intelligence?
I think the risks are very high, but it depends on what we mean by improper use. The first risk that I see is that in being so risk-averse we close our eyes to the potential of this technology – or we say “let’s approach this very slowly” – and completely lose out in the competitive race with other countries. I think this is particularly a risk in Europe. If you look at the web, it’s obvious that it was invented in Europe, but then commercialized in the US. Most of the large tech companies that dominate our lives (and that coincidentally dominate AI) are originally web companies, such as Google, Facebook, and Amazon. And if you look at where they have sometimes gone overboard, ethically, you can’t help but wonder if the same thing would have happened if all of these companies had been European. If we are serious about standing our ground in Europe in the 21st century, we must be hugely successful in AI. The second improper use is to apply this technology for nefarious goals. One example is autonomous lethal weapons (for example, drones with lethal capabilities that fly autonomously and make decisions autonomously). But there are many other, less obvious examples, such as using decision-making algorithms that are biased and not open to public scrutiny.
I think the biggest danger is of a structural nature – to live in a country with a technologically incompetent government. That is a guaranteed recipe for disaster. Technology is moving very rapidly, and smart regulation must move along with it. Some people say that technology is moving too fast, that we can never regulate it at the same speed, but that is obviously nonsense: the speed of regulation is not a natural law, it’s a process that we control 100%, and thus we can accelerate it dramatically (which in itself will require a technologically competent government). Now if you ask me whether we live in a country with a technologically incompetent government, I would say we’re at high risk. The thing is, in Switzerland you can’t blame the politicians – we are a highly participatory country, and if something isn’t working as you think it should, you have the possibility to become politically active and fix it yourself. Which means that if something is not working, it is really your own fault for not fixing it. Thus, the risk of having a technologically incompetent government is a direct reflection of a population that is not very competent technologically. And that is something we have to be very careful about. The vast majority of people in Switzerland still think of technology as simply “another thing”, as another vertical, instead of realizing that it is now at the basis of everything, a fundamental horizontal layer. You can see this well in the schools, where technology is still treated like a third-class citizen.
How can we guarantee that results of machine intelligence are valid? Is there a specific quality management concept to scrutinize machine intelligence devices?
Quality management can be achieved with testing and benchmarking. There is a valid concern about the black-box nature of deep learning algorithms, and I suspect we may never fully understand them. They are at a level of complexity that may simply be too hard for us to grasp, and one day they may be so complex that demanding an explanation from a neural network is like demanding an explanation from a brain.
The good thing about technical networks though – as opposed to biological ones – is that we can quiz them systematically. That is, we can run a battery of tests on them. Artificial neural networks will never complain when we run a million tests on them to see how they perform. This allows us to check them systematically for bias and other problems.
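The “battery of tests” idea can be sketched as a simple automated bias audit. Everything in this sketch is hypothetical and for illustration only: `model` is a trivial stand-in for a trained network, the feature names, the `group` attribute, and the 0.1 parity threshold are all assumptions, not anything from the interview.

```python
import random

def model(features):
    # Hypothetical stand-in for a trained classifier: returns 1
    # ("positive decision") or 0. A real audit would query the
    # actual neural network here.
    score = 0.6 * features["income"] + 0.4 * features["tenure"]
    return 1 if score >= 0.5 else 0

def positive_rate(model, cases):
    # Fraction of cases the model labels positive.
    return sum(model(c) for c in cases) / len(cases)

def bias_audit(model, cases, group_key, threshold=0.1):
    # Run the model over every test case, split the results by a
    # sensitive attribute, and report the gap in positive rates
    # between the best- and worst-treated groups.
    groups = {}
    for c in cases:
        groups.setdefault(c[group_key], []).append(c)
    rates = {g: positive_rate(model, cs) for g, cs in groups.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= threshold

# Synthetic battery of test cases: two groups with identical feature
# distributions, so an unbiased model should treat them alike.
random.seed(0)
cases = [
    {"group": g, "income": random.random(), "tenure": random.random()}
    for g in ("A", "B") for _ in range(1000)
]
rates, gap, ok = bias_audit(model, cases, "group")
print(rates, round(gap, 3), ok)
```

Because the model never complains, the same audit can be re-run with millions of cases, or with cases that differ only in the sensitive attribute, to probe for bias systematically.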
How can people analyze and understand ethical aspects of the use of machine intelligence? Can we classify algorithms for machine learning in a way that tells us the key properties of the algorithm that are relevant from an ethical perspective?
I doubt that. Ethics are not set in stone, but constantly evolving. They are the result of a continuous human conversation. That said, there may be some principles that we universally agree on, like our dislike for bias. Using the benchmarking idea I mentioned before, we can address these issues, and classify algorithms accordingly.
Our last question is traditionally one looking far into the future. What will be the role of artificial intelligence in 2050? How will it have changed our business and private lives and our democratic participation in government by then?
Predicting something 30 years out is of course impossible. Who, in 1990, could have foreseen the world we live in today? Or, from the perspective of 1960, the world of 1990? That said, there are some interesting parallels. In 1960, computers existed, but they were big clunky things that filled entire halls. Yet we can now look back and see that they were clearly useful to everyone who had one. In hindsight, it was thus somewhat logical that everyone would eventually want and use one; we just couldn’t see then how they could become very small and affordable. Yet it happened, thanks to technology. With the internet, we can observe the same thing. In 1990, it was a very small network with a few hundred thousand users. But it was very useful to all of them, and in hindsight it was thus rather clear that eventually everyone would have constant access to the internet. Indeed, that was already the thinking underlying the dot-com bubble, which burst not because the idea was wrong, but because the timing was.
We can now take that same logic and observe that machine learning algorithms are extremely useful for everyone who has access to them. Thus, it is not hard to predict that our entire world will be driven by AI in 2050, and all of our interactions with machines will be interactions with AI. But that’s the easy prediction. The harder predictions are whether there will be AGI, whether we will see the first bio-machine mergers, and other, hard-to-predict events that will change the dynamic completely. I hope to be around to see all of it!
About the Person
Marcel Salathé is a biologist and programmer. He is currently Associate Professor at EPFL, where he built up and heads the Digital Epidemiology Lab. He also launched the EPFL Extension School, a school focused on digital skills. Since the outbreak of the corona crisis, Marcel Salathé has become one of the most sought-after experts in Switzerland. He is involved in the development of a contact tracing app to control the spread of the coronavirus in Switzerland.
* This interview took place shortly before the outbreak of the corona crisis and was first published here.