“We bear responsibility for our technologies” – Digitalisation from the perspective of a philosopher of technology

Is digitalisation accessible to everyone? Can we trust new technologies? And who ultimately takes responsibility for their actions? Janina Loh, philosopher of technology at the University of Vienna, deals with these exciting questions. In this interview, she talks about the impact of digital technologies on our everyday lives. She prefers to be addressed by her first name.

At the moment, you hear in various industries that the COVID-19 pandemic has triggered a real digitalisation push. How do you see that?

Digitalisation is often seen as a global and universally available “thing”. What is frequently overlooked is that digitalisation also brings with it drastic mechanisms of exclusion that affect certain social groups. It is by no means the case that all people have equal access to digitalisation processes or methods. Take schools, for example, where digitalisation is currently playing a major role. There, it is often overlooked that many families have neither an internet connection nor a laptop or computer. When people then talk about digitalisation as the global technology, I consider that highly arrogant.

As a philosopher, it is part of your job to deal with the dark sides of new technologies. But sometimes one gets the impression that philosophy lags behind technology and that important questions are only asked once technologies or products are already in development or in use.

The question of whether philosophy lags behind technology or technology lags behind philosophy is wrongly posed, because in the end everything develops side by side and is intertwined. We all work on our social coexistence, each in our own way: philosophers in theirs and engineers in theirs. As a philosopher, my point is that the question of what is technologically feasible is secondary to the question of what is morally desirable. Many empirical sciences would say that the question of feasibility comes first: can we develop an autonomous vehicle, for example? Philosophically, however, the first question is whether we want autonomous vehicles at all – regardless of whether the technology can be developed or not. Companies that develop these systems have already answered these questions of what they want and what is desirable.

From a philosophical point of view, the first thing to clarify is whether we want autonomous vehicles at all – regardless of whether a technology can be developed or not.

And how can we learn to ask these complex questions?

As a philosopher, I see it as my task to find a language for these issues so that people from different backgrounds can understand them and find an ethical way of dealing with them. For me, this starts with not just sitting in the ivory tower, but taking part in congresses and public discussions. Secondly, I see it as an obligation for schools to offer courses in media literacy, and for educational institutions that train engineers, for example, to include ethics courses in their curricula. Thirdly, companies that develop technologies and bring them to market must also offer compulsory ethics training. Fourthly, we need political institutions and bodies: councils and committees that deal with concrete technologies and the risks and opportunities they bring. So at the collective level, we have schools, education, business and institutions.

What would be needed at the individual level?

Among other things, that we talk with our children, as part of their upbringing, about what it means to be online and to share personal information there. Ultimately, we also bear individual responsibility for how we deal with everyday technologies.

What responsibility does the technology itself bear – for example, a car that drives autonomously?

The problem with the question of responsibility is that the industry often tends to shirk it. If the car does something that the technicians did not plan for or could not control, the industry rejects responsibility. I like to counter this with an example.

In the case of autonomous driving, you have to take a closer look and ask: who had how much power, influence or knowledge?

Which would be?

The upbringing of children. We cannot fully control that either. We can only try to get them to behave the way we expect them to. As parents, we bear the responsibility even if the child does not behave according to our expectations. This means that as long as children have not matured into adult, autonomous persons, the responsibility lies with their guardians. So if we take responsibility for the most complex being we know, the human being, surely we can also take responsibility for far less complex things such as autonomous cars. Ultimately, we are the creators of both our children and our technologies. That is why we bear responsibility for both – but not sole responsibility.

Can you elaborate on that a bit more?

To put it bluntly: the programmer who writes the algorithm for autonomous driving does not bear sole responsibility; the company bears it too, and ultimately so do the lawyers who have designed a legal system in which autonomous cars may be used in a certain way. Responsibility here is branched and rests on many different shoulders. Even before autonomous cars, however, we found ways and means to identify responsible parties in very complex, opaque contexts. To claim that this is no longer possible with autonomous driving is, in my opinion, premature. In this example, you have to look more closely and ask: who had how much power, influence or knowledge? At the end of the day, it must be possible to identify the responsibility of the individual participants and of the collectives.

How do you see this in relation to the public sector – does it bear more responsibility than, for example, companies?

I think it is about a different form of accountability. There is a saying, adapted from the Spider-Man comics: “With more power comes more responsibility.” Responsibility is a gradual phenomenon, not an all-or-nothing one. Certain conditions must be met before we can ascribe responsibility to someone. First, the capacity to act: a person or an institution must be able to act, which goes hand in hand with autonomy and with knowledge of certain consequences. Secondly, there must be the ability to communicate, and thirdly, one needs the power of judgement: you have to be able to assess, judge and reflect on a situation. These are essential prerequisites if we regard people in our society as accountable persons to whom we attribute the ability to bear responsibility, and to do so for everything they do. This also means that we – be it the public sector, a company or we as citizens – are responsible for the technologies we deal with in everyday life. Of course, this leads to entanglements that are difficult to disentangle in everyday life. As individuals and collectives, we have countless responsibilities, some of which may even contradict one another.

Can philosophy offer definitions or guidelines on how we should deal with technologies, be it data, AI or autonomous vehicles?

There are different approaches. The short and unsatisfactory answer: there is no single, correct ethical path we can take. There are different ethical schools, such as utilitarian ethics, virtue ethics and so on, each of which would suggest different ways of dealing with things. As a philosopher, I therefore point to four blocks of questions that everyone can apply for themselves, whether in relation to data or to other technologies.

What aspects do these blocks of questions contain?

Let’s take the example of data. The first block concerns production and design: Where does the data come from? Who makes it available to me? Under what conditions was it produced? Which people were involved in acquiring this data, and in what way? What kind of companies are behind it? The second block is about autonomy and remit: What is this data for? What is it supposed to do? What task does it have? And how autonomously can it operate within its respective task area? The third block relates to context and area of use: Where is this data used? Who benefits from it, and who could possibly be harmed by it? The fourth block is about security: How can we ensure that third parties do not have access to the data? How can misuse be prevented? Ultimately, answering these four blocks of questions is about seeing whether we can answer them in a way that corresponds to our ethical awareness. The blocks of questions can therefore be seen as a kind of toolbox for dealing with technologies.

Trust in online shopping is quite low. On the other hand, the habit of ordering online is so strong that the question of trust plays no immediate role.

After all, the issue of trust also plays an important role in whether certain technologies or products are used. How is it that we willingly make our personal data available online, while some of us are very distrustful of the public sector when it comes to digitalisation?

For me, this has less to do with trust in the state than with habit. When a service is available everywhere and at all times, using it is simply a matter of convenience. It is about the sheer availability and ubiquity of the technology – two aspects that often occur together but are not necessarily dependent on each other.

Can you give an example of this?

Let’s take online shopping. First of all, the question: whom do I trust? I don’t trust the online retailer per se, but I do trust the company to send me the products I order there. Yet that is not really trust; it has to do with expectation. Do I trust the online retailer to use and interpret its knowledge and services with social considerations in mind? No, certainly not. Whom could I trust then? The state, to rein in the company by legal means? Certainly, but I simply expect that; I assume it. Do I trust the state to ethically oblige the company to abide by certain regulations? No, I don’t trust that either, because the state also derives a certain benefit from the fact that citizens in its catchment area order from this online retailer. In other words, trust in online shopping is quite low. On the other hand, the habit of ordering online is so strong that the question of trust plays no immediate role. We would have to consciously break out of the habit. The example shows that when dealing with ubiquitous technologies, it is rarely a question of whether we trust the institutions behind them, be it industry or the public sector. Quite the contrary: we often do not trust them and yet use these technologies nevertheless.


About the person

Janina Loh is a philosopher of technology and postdoc at the University of Vienna. Loh researches questions of robot ethics, the attribution of responsibility in human-machine interaction, and how we deal with new and digital technologies. Loh’s further focal topics are judgement in the public sphere, trans- and posthumanism, and feminist approaches to the philosophy of technology. She will speak at Transform 2020.


Transform 2020

TRANSFORM 2020 will take place on 13 November at Bern City Hall and is dedicated to the transformative power of good data in the public sector. Under the title “In good Data we trust”, exciting international experts are expected. The conference is organised by the Institute Public Sector Transformation, the Institute Digital Enabling and the BFH Centre Digital Society. All information about the conference, the programme and the registration form can be found here.

