Digital & AI Literacy as a Pillar of Digital Responsibility – A Report from the Shift Conference on Digital Ethics

Digital ethics is central to the use of artificial intelligence, and it was the topic of the recent Shift conference in Zurich. The speakers came from practice and research and offered innovative perspectives on how we as a society can address the opportunities and challenges of digitalisation. The author attended the conference and shares her impressions and its key messages with us.

“Is Switzerland civilised?” With this provocative question, Karin Lange of “die Mobiliar” opened her talk at the Shift Conference on Digital Ethics, after asking the audience what “civilisation” means to them. Starting from the conventional definition of civilisation as “the development of human coexistence that should lead to the most peaceful and aggression-free coexistence possible”, she confronted those present with various questionable practices observable in the digital world: anarchic business models (GAFA), an attention economy with brutal methods, the unpunished spreading of untruths, hatred and perversions, and the unregulated use of AI. Her position is clear: the digital world has become a second living space, and it needs stronger regulation.

The Digital World: Deceptive Designs and Manipulation of Opinion Formation

The unregulated digital world was further illustrated by Markus Zimmer (ZHAW): deceptive designs (or dark patterns) are tricks used on websites and in apps to make us do things we did not intend to do, e.g. buy something or sign up. A study has shown that although such practices are noticed, they have no impact on purchase and recommendation intent. Whether due to a lack of motivation among the younger generation or a lack of skills and perceived options among the older, the people surveyed apparently resign themselves to being manipulated. And although deceptive designs sit on the borderline of legality, regulating such practices is difficult: designs that are manipulative but not outright deceptive are hard to prosecute.

Another hot topic in digital legislation is the manipulation of digital debate through bots, trolls, misinformation, polarisation and pseudonymisation. Most people do not have the skills needed to protect themselves from such manipulation techniques. Anna-Lena König reported how the Risk Dialogue Foundation has developed an interactive workshop for schools and organisations in which users can directly experience and critically question such practices. The aim is to develop strategies for a reflective approach to digital opinion-forming.

Digital self-determination

Both Markus Zimmer and Anna-Lena König emphasised the need for users to better understand the technology and to be sensitised to the corresponding risks. For example, those concerned may not be aware how critical the use of biometric data is, given that it allows a person to be uniquely identified. Laetitia Rameler presented the results of a study for TA-SWISS on three applications of voice, speech and face recognition:

  • the early detection of diseases
  • the analysis of emotions for advertising and marketing or in job application procedures
  • and authentication by voice at banks.

All these applications raise the question of the trade-off between the benefits offered and the degree of surveillance required. A further question is whether those concerned really feel free to refuse the recognition process, especially in situations where there is an asymmetry of power.

The question of digital self-determination also arises with the use of people analytics in the workplace. For the conference organiser, Cornelia Diethelm, corporate culture is central here: what is the technology meant to achieve? Amazon and Swiss Post are both active in logistics, but their approaches to people analytics differ considerably. Among other things, it should be clearly communicated for what purpose the technology will be used and who will benefit from it, but also which values and guidelines its deployment will be based on, such as the Data Innovation Alliance’s Code of Ethics for Data-Based Value Creation.

In companies, however, compliance with ethical values is not ensured by a committee but by all employees, Karin Lange emphasised. It is therefore the company’s responsibility to provide training tailored to each target group, so that employees have the understanding of the technology and its risks needed to grasp the code of ethics and ensure that it is adhered to.

Digital Responsibility of Swiss Companies

Corporate digital responsibility is also important for institutional investors, as Matthias Narr from the Ethos Foundation reported. The foundation represents 247 pension funds and charitable foundations and promotes sustainable and responsible investment. The implementation of a code of digital responsibility is one of the seven expectations it sets for companies as principles of digital responsibility, along with transparency about digital practices, high-quality data protection, ethical principles for the use of AI, the exclusion of sensitive activities in digitalisation, ensuring an equitable social transformation, and reducing the environmental footprint of digital technologies.

An analysis of these expectations at 40 companies showed that despite rather poor results (an average of 23 points out of 100), there is a trend towards improvement (in 2021 the average was 11). The aspects with the greatest potential for improvement are the ethical use of AI (13.9), the exclusion of sensitive activities (11.7) and equitable social transformation (13.4). However, some companies such as Swisscom (87), Zurich Insurance (72) and Sonova (68) already achieve relatively good overall results and serve as references. In future, institutional investors will demand ever more transparency about companies’ digital practices in order to maintain a healthy business environment that serves the interests of society as a whole.

Transparency and accountability were the keywords of the conference: people affected by technology need to understand it so that they can make informed decisions and contribute to shaping a responsible digital future.

AUTHOR: Sarah Degallier Rochat

Prof. Dr. Sarah Degallier Rochat is the Head of the strategic thematic field Humane Digital Transformation. Her research interests include the design of inclusive human-machine interfaces, the upskilling of the workforce and the impact of automation and augmentation on work conditions.
