Is Augmented Intelligence the AI of the future?

In the past, artificial intelligence (AI) was often portrayed as a technology that could one day replace humans. Today it is widely assumed that this will not happen in the foreseeable future, nor should it. That is why we now speak of augmented intelligence instead of artificial intelligence.

For a long time, the goal of artificial intelligence was to replace humans entirely in many tasks. For example, the field has been described as "The art of creating machines that perform functions that require intelligence when performed by people" (1990) [2] and as "The study of how to make computers do things at which, at the moment, people are better" (1991) [3]; both definitions are collected in the standard textbook by Russell and Norvig [1]. This approach aims to create computer programs that can handle not only repetitive tasks, but also tasks that demand intellectual effort from a person.

The Turing Test [4], developed by Alan Turing in 1950, provides an operational definition of artificial intelligence: a user interacts with a computer program in written form and asks questions. If, after completing the test, the user is unable to distinguish whether the answers came from a computer or a person, the Turing Test is considered passed. In recent years, however, the question has increasingly arisen as to whether such artificial intelligence is purposeful and desirable at all.

Digital ethics

There are a number of reports suggesting that the sources used for training software are not always fair. For example, it has been shown that women and men are described differently in Wikipedia articles [5] [6]. Researchers in the USA showed that an analysis programme used for years to calculate offenders' recidivism risk disadvantaged the African-American population [7]. At a large tech company, software designed to streamline the hiring of new employees turned out to be unfair to women [8]. There are many more examples, and through these numerous scandals the topic of digital ethics has reached the mainstream media.

It is therefore necessary for the digital society of the future to consider what the cooperation between software and humans should look like. Humans and computers have complementary abilities: computers are very good at processing large amounts of data in the shortest possible time and at performing calculations efficiently. In contrast, they are not capable of reflecting on or morally questioning decisions. There are simply certain activities that a computer cannot do, and therefore should not do.

When voice assistants discriminate

Because of this discrepancy in capabilities, the role of artificial intelligence needs to be rethought. We therefore often use the term augmented intelligence instead of artificial intelligence. Behind this is the idea that the computer serves as a tool that augments human intelligence, but does not replace the human being [9]. A typical example of such collaboration is the voice assistant found in most smartphones. When we ask it to suggest restaurants nearby, the voice assistant does not decide where we will eat; it provides the information we need to make that decision ourselves.

Does this free us from the problems of digital ethics? No, because it is up to us humans to make the decision and to bear the responsibility for it. It is therefore also up to us to question the data provided critically and to include this reflection in the decision. In the restaurant example, it could be that a certain restaurant, although much closer than the others, was not offered to us at all. So even in the context of augmented intelligence, we cannot avoid actively and regularly engaging with the data and decision suggestions our tools generate.
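The restaurant example can be illustrated with a minimal sketch. All restaurant names and the `is_partner` filter criterion below are invented for illustration; the point is only that a hidden selection rule can silently exclude the closest option, which the user would never notice without questioning the output.

```python
# Hypothetical sketch: a recommender whose hidden filter silently drops
# the nearest restaurant. Names and the filter criterion are invented.

restaurants = [
    {"name": "Trattoria Roma", "distance_km": 0.2, "is_partner": False},
    {"name": "Cafe Central",   "distance_km": 0.9, "is_partner": True},
    {"name": "Sushi Bar",      "distance_km": 1.4, "is_partner": True},
]

def recommend(places):
    # The assistant ranks by distance -- but only among paying partners,
    # a criterion invisible to the user.
    partners = [p for p in places if p["is_partner"]]
    return sorted(partners, key=lambda p: p["distance_km"])

for p in recommend(restaurants):
    print(p["name"], p["distance_km"])
# The closest restaurant (0.2 km away) never appears in the output.
```

Seen from the outside, the ranking looks perfectly reasonable, which is exactly why the human in the loop must keep questioning it.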

Ethics must be programmed in

The challenge of the coming years is to integrate this new form of collaboration into the processes of software development and operation, together with all the measures needed to prevent and control the associated risks of unethical behaviour and discrimination. Appropriate processes must be planned both within the project itself and at regular intervals during the operation of the software. The concrete issues to be evaluated are not uniform: they depend on the many different areas of application, the different technologies (video, audio, text, etc.) and the different types of problems (forms of discrimination, unethical decisions), and must be specified and assessed in each project, analogous to traditional risk management. In the concept of augmented intelligence, the human being takes responsibility and therefore has the active task of reflecting on and critically questioning the machine's decision proposals. Only in this way are we equipped for a successful cooperation between humans and machines in the digital society of the future.


  • [1] Russell, S. & Norvig, P., 2010. Artificial Intelligence: A Modern Approach. Upper Saddle River (New Jersey): Pearson.
  • [2] Kurzweil, R., 1990. The Age of Intelligent Machines. s.l.: MIT Press.
  • [3] Rich, E. & Knight, K., 1991. Artificial Intelligence (Second Edition). s.l.: McGraw-Hill.
  • [4] Turing, A. M., 2004. The Essential Turing. s.l.: Oxford University Press.
  • [5] Wagner, C., Graells-Garrido, E., Garcia, D. & Menczer, F., 2016. Women through the glass ceiling: gender asymmetries in Wikipedia. EPJ Data Science, 5(1).
  • [6] Jadidi, M., Strohmaier, M., Wagner, C. & Garcia, D., 2015. It's a Man's Wikipedia? Assessing Gender Inequality in an Online Encyclopedia. s.l., s.n.
  • [7] Larson, J., Mattu, S., Kirchner, L. & Angwin, J., 2016. How We Analyzed the COMPAS Recidivism Algorithm. ProPublica, May.
  • [8] Dastin, J., 2018. Amazon scraps secret AI recruiting tool that showed bias against women. San Francisco, CA: Reuters.
  • [9]

Algorithms also discriminate – as their programmers tell them to do

Companies are increasingly using artificial intelligence (AI) to make decisions or to base decisions on its suggestions. These suggestions can also be discriminatory. To prevent this, we need not only to understand program code on a technical level, but also to take human thinking and decision-making processes into account in order to detect and reduce systematic biases. Co-author Thea Gasser proposes tools and procedures for this in her bachelor's thesis [1], which was recently awarded a prize at the TDWI conference in Munich.

Recently, there has been growing concern about unfair decisions made with the help of algorithmic systems that lead to discrimination against social groups or individuals. For example, Google's advertising system has been accused of displaying high-income jobs predominantly to male users. Facebook's automatic translation algorithm also caused a stir in 2017 when it chose the wrong translation for a user post, leading to police questioning the user in question [2]. Or soap dispensers that do not work for people with dark skin [3]. In addition, there are several known cases of self-driving cars failing to recognise pedestrians or vehicles, resulting in loss of life [4].

Current research aims to map human intelligence onto AI systems. Robert J. Sternberg [5] defines human intelligence as "…mental competence, which consists of the abilities to learn from experience, adapt to new situations, understand and master abstract concepts, and use knowledge to change one's environment." To date, however, AI systems lack, for example, the human trait of self-awareness. The systems still rely on human input in the form of created models and selected training data. This implies that partially intelligent systems are heavily influenced by the views, experiences and backgrounds of humans and can thus also exhibit cognitive biases.
Bias is defined as "…the act of unfairly supporting or opposing a particular person or thing by allowing personal opinions to influence judgement" [6]. Causes of cognitive biases in human thinking and decision-making include information overload, the meaninglessness of information, the need to act quickly, and uncertainty about what needs to be remembered later and what can be forgotten [7]. As a result of cognitive biases, people can be unconsciously deceived and may not recognise the lack of objectivity in their conclusions [8].

The findings of the co-author's bachelor's thesis "Bias – A lurking danger that can convert algorithmic systems into discriminatory entities" [1] first show that biases in algorithmic systems are a source of unfair and discriminatory decisions. Furthermore, the work results in a framework that aims to contribute to AI safety by proposing measures that help to identify and mitigate biases during the development, implementation and application phases of AI systems. The framework consists of a meta-model comprising 12 essential domains (e.g. "Project Team", "Environment and Content", etc.) and covers the entire software lifecycle (see Fig. 1). A checklist is available for each of these areas, through which they can be considered and analysed in greater depth.

Figure 1: Metamodel of the Bias Identification and Mitigation Framework

As an example, the area “Project Team” is explained in more detail below (see Fig. 2). Knowledge, views and attitudes of individual team members cannot be deleted or hidden, as these are usually unconscious factors due to the different backgrounds and varied experiences of each member. The resulting bias is likely to be carried over into the algorithmic system.

Figure 2: Checklist excerpt for the “Project Team” section of the metamodel

Therefore, measures need to be taken to ensure that the system exhibits the fairness appropriate to its context. Before the system is designed, the project members must exchange their views and concerns openly, fully and transparently. Misunderstandings, conflicting ideas, excessive enthusiasm, and unconscious assumptions or invisible aspects can be uncovered in this way.

The Project Team checklist contains the following concrete measures to address these problems. All project members (1) have participated in training on ethics, (2) are aware of the bias inherent in human decision-making, (3) know that bias can be reflected in an algorithmic system, and (4) consider the same attributes and factors as most relevant in the system context. The project team (1) includes representatives from all possible end-user groups, (2) is a cross-functional team with diversity in terms of ethnicity, gender, culture, education, age and socio-economic status, and (3) consists of representatives from the public and private sectors.

The co-author's bachelor's thesis includes checklists for all the areas listed in the meta-model. The framework is intended as an initial framework that can be adapted to the specific needs of a given project context. The proposed approach takes the form of a guideline, e.g. for the members of a project team. Adaptations of the framework can be based on a defined understanding of system neutrality, which may be specific to the particular application or application domain. If the framework, adapted to the specific context, is applied as mandatory within a project, it is very likely that the developed application will better reflect the neutrality defined by the project team or company. Checking whether the framework has been applied and its requirements met helps to determine whether the system satisfies the defined neutrality criteria, or whether and where action is needed.

To adequately address bias in algorithmic systems, overarching and comprehensive governance must be in place in organisations that take AI responsibility seriously. Ideally, project members internalise the framework and consider it a binding standard.
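One way such a checklist could be made operational in a project is sketched below. The item names are paraphrases of the Project Team measures listed above, and their representation as a simple dictionary is an invented illustration, not part of the thesis framework itself.

```python
# Hypothetical sketch of the "Project Team" checklist area represented in
# code. Item names paraphrase the measures in the text; the data structure
# is invented for illustration.

project_team_checklist = {
    "ethics_training_completed": True,
    "aware_of_human_decision_bias": True,
    "know_bias_transfers_to_systems": True,
    "agree_on_relevant_attributes": False,
    "end_user_groups_represented": True,
    "team_is_diverse_cross_functional": False,
}

def open_items(checklist):
    """Return the measures that still need action before development starts."""
    return [item for item, done in checklist.items() if not done]

# Unresolved items show where action is needed before the project proceeds.
print(open_items(project_team_checklist))
```

Making the checklist machine-readable in this way would allow the "has the framework been applied?" question to be answered automatically at project gates, rather than relying on memory.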


  1. Gasser, T. (2019). Bias – A lurking danger that can convert algorithmic systems into discriminatory entities: A framework for bias identification and mitigation. Bachelor's Thesis. Degree Programme in Business Information Technology. Häme University of Applied Sciences.
  2. Cossins, D. (2018). Discriminating algorithms: 5 times AI showed prejudice. Retrieved January 17, 2019.
  3. Plenke, M. (2015). The Reason This “Racist Soap Dispenser” Doesn’t Work on Black Skin. Retrieved 20 June 2019.
  4. Levin, S., & Wong, J. C. (2018). Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. The Guardian. Retrieved February 17, 2019.
  5. Sternberg, R. J. (2017). Human intelligence. Retrieved June 20, 2019.
  6. Cambridge University Press. (2019). BIAS | meaning in the Cambridge English Dictionary. Retrieved June 20, 2019.
  7. Benson, B. (2016). You are almost definitely not living in reality because your brain doesn’t want you to. Retrieved June 20, 2019.
  8. Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, New Series, 185(4157), 1124–1131.
