Deepfakes and manipulated realities – a study by TA-SWISS


Artificial intelligence (AI) can generate and modify images, videos and sound recordings, and the digital world has become impossible to imagine without it. But these capabilities can also be abused for manipulation, and a mix of different measures is needed to curb such abuse. This is shown by the current TA-SWISS study on deepfakes, which BFH digital expert Prof. Dr Reinhard Riedl accompanied.

SocietyByte: The topic of the study is deepfakes. Why did you take up this topic?

Prof. Dr Reinhard Riedl: The short answer is: it is now possible to use AI to produce multimedia material that looks deceptively real. This creates previously unknown possibilities in many areas: film, museums, schools, music, art, criminology, justice and so on, but also for bullying, fraud and political attacks on democracy. It also undermines the credibility of multimedia material itself: is this sound recording, is this video real? You can even buy the live manipulation of surveillance videos online; it is sometimes offered quite openly. There is also a real danger of states attacking democratic elections in other countries with deepfakes. When such new possibilities arise, both opportunities and risks, it is time for a TA-SWISS study.

Prof. Dr Reinhard Riedl

How does TA-SWISS decide which topics to study?

Since we only have a limited budget, the question is not “Is the topic important?”, but rather “Is it more important than others?”. That is why study topics are selected in a methodical process, in cooperation between the office and the steering committee. The main addressees of TA-SWISS studies are parliament and the government, so the central question is: which technology trends are particularly important for the legislative and executive branches? Where do they need information? We are nevertheless autonomous in our choice of topics. Our task is twofold: monitoring all technological developments and analysing critical developments in detail. It is important that we remain committed to scientific objectivity in the recommendations we make. Studies such as those commissioned by German ministries, for example, in which the scientists involved are activists or have clear political preferences, would be inconceivable at TA-SWISS.

How was the collaboration organised?

The steering committee decided that the topic was important enough to fund a study. However, TA-SWISS does not carry out studies itself. It puts them out to tender, selects the best offer, organises an advisory group, decides on publication, prepares an abstract and ensures dissemination. My specific role was to chair the advisory group. Thanks to the good work of the study team and the expertise and commitment of the advisory group, this was an exciting task.

What are the biggest risks of deepfakes?

The biggest risk is that we believe we can recognise deepfakes. In experiments in which people know that they might be looking at deepfakes, their classifications are nevertheless largely random: recognising deepfakes by looking closely does not work. A group of students demonstrated this in one of my experiments as well. It is therefore necessary, and this is very difficult, to think in terms of probabilities of authenticity. The technologies for recognising deepfakes are getting better and better, of course, but so are the technologies for creating them.

How do you assess the situation in Switzerland?

The current threat situation is manageable, as the study has shown, but its development is unpredictable. It is not technology that changes society, but the use of technology. The question is not what AI can do; the question is who uses AI for what. This can only be anticipated in the short term; in the medium to long term, it is completely open. Even the development of the technology itself is difficult to predict. But I don’t want to dodge the answer with clever trivialities: Switzerland is not particularly at risk, but it should prepare itself to help victims of deepfakes and to respond to deepfake attacks from totalitarian states. The expertise built up in this way will remain useful even if criminal offences and political attacks with deepfakes remain the exception.

And in comparison to other countries or the EU?

As far as the comparison with other countries is concerned, I am speaking here as a private individual, not as a scientist, not as an employee of BFH and not as a member of the TA-SWISS steering committee: Switzerland is too small, too heterogeneous and geopolitically too unimportant to be an attractive target. It is more likely that EU elections will be attacked from Switzerland than that distant superpowers will have Swiss elections attacked. This means, however, that the police must also be prepared for offences committed from Switzerland against targets abroad. Because if that happens, the fact that the attackers were foreigners will not be enough to manage the resulting crisis in relations.

How did the study team proceed?

The research was based on five methods: a literature analysis, the team’s own technical experiments, a media analysis, expert interviews and a population survey. The processed results were presented to the advisory group, which provided feedback. The research team dealt with four perspectives in particular detail: legal aspects, deepfakes in journalism, deepfakes in politics and deepfakes in business. As the state of the art and the expected technical developments are fundamental, the technical foundations are discussed right in the introduction to the final report. This is followed by the legal aspects, specifically protection against deepfakes, deepfakes in court proceedings, public-law requirements and future regulatory options, before the areas of journalism, politics and business are analysed.

And how did the collaboration go?

In the conclusion, the cross-references between the individual areas are highlighted once again. This was not easy, because researchers are naturally conditioned by peer-review logic to work in a narrowly focussed way. But I think we succeeded. Recommendations were also drawn up on the basis of the research findings, and these were of course discussed intensively between the research team and the advisory group. This exchange was sometimes highly emotional but always constructive, and the research team took the suggestions on board. All in all, it was a challenging work process. As an advisory group, we were certainly not always easy on the research team, but the end result benefited from this approach.

What results did the research team come to?

  • Firstly, perception is determined by labelling: deepfakes are perceived negatively, whereas AI-generated content is not, and the scientific term “synthetic media” is hardly known at all. Deepfakes are seen as a threat to society, specifically to Swiss democracy, whereas the risks to individuals are barely recognised.
  • Secondly, tips on how to deal with deepfakes are of little help; familiarity with digital social media matters more. This makes clear how important the school subject of media and IT is. Realistically, however, it rarely has enough resources for a reflective examination of deepfakes.
  • Thirdly, deepfakes pose a professional and economic challenge for journalism: social platforms can ignore whether the videos they disseminate are deepfakes, while journalists are obliged to present them appropriately. In practice, however, Swiss journalists are currently confronted with this almost exclusively in foreign reporting. In principle, the deepfake phenomenon could even increase the perceived value of quality journalism.
  • Fourthly, the report outlined a wide variety of dangers. Economic espionage with the help of deepfakes is a relevant threat for Switzerland because many Swiss companies are attractive targets. In addition, the courts now have to deal with the possibility of falsified surveillance videos.
  • Fifthly, the report also describes numerous opportunities. Deepfakes have already arrived in the film industry, but many other sectors have barely tapped their theoretical potential, and this is unlikely to change in the foreseeable future.

And what measures do you recommend?

The recommendations are: take responsibility and use technological progress for defence, hold platforms accountable and strengthen victim protection, cooperate internationally in the prosecution of perpetrators and, above all, provide more information about the dangers and opportunities. We should raise awareness through various channels, including schools, of course, that it is easy to fall for deepfakes. There will probably be no solution to the deepfake problem; we will have to learn to live with it.

Finally, a look ahead: what regulations should Switzerland pursue?

There are two priority areas for action. First, the development of police and judicial expertise in Switzerland should be stepped up, even if this is only possible through international cooperation; indeed, such cooperation is actually desirable. Second, education should be strengthened. It is not a question of teachers becoming AI experts, but of enabling them to teach the critical handling of information from the internet. To do this, they need teaching materials and teaching time. If we make progress in these two areas, we will already have achieved a great deal. In addition, we should already be thinking about victim support and, where appropriate, promoting the positive use of deepfakes through competitions, for example in knowledge transfer. Beyond that, I see no great need for additional regulation at the moment.


About the study

The TA-SWISS Foundation, a centre of excellence of the Swiss Academies of Arts and Sciences, examines the opportunities and risks of new technologies. It commissioned the study from an interdisciplinary team led by Murat Karaboga of the Fraunhofer Institute for Systems and Innovation Research ISI in Karlsruhe. The research team was supported by an advisory group of experts chaired by Prof. Dr Reinhard Riedl.

The study “Deepfakes and manipulated realities” provides an overview. It shows which framework conditions already apply to deepfakes and where there is still a need for regulation. The study also analyses the extent to which citizens can be misled by fake content. With regard to the opportunities offered by deepfakes, it gives examples of areas in which synthetically generated content offers added value.

To the study and the summary.


AUTHOR: Anne-Careen Stoltze

Anne-Careen Stoltze is Editor-in-Chief of the science magazine SocietyByte and host of the podcast “Let’s Talk Business”. She works in communications at BFH Business School and is a journalist and geologist.
