Tag Archive for: Machine Learning

Is Augmented Intelligence the AI of the future?

In the past, artificial intelligence (AI) was often portrayed as a technology that could one day replace humans. Today, it is widely assumed that this will not happen in the foreseeable future, nor should it. That is why we now talk about augmented intelligence instead of artificial intelligence.

For a long time, the goal of artificial intelligence was to replace humans entirely in many tasks. The field has been described [1], for example, as "The art of creating machines that perform functions that require intelligence when performed by people" (1990) [2] and as "The study of how to make computers do things at which, at the moment, people are better" (1991) [3]. This approach aims to create computer programs that can handle not only repetitive tasks, but also tasks that demand human intellectual effort. The Turing Test [4], developed by Alan Turing in 1950, provides an operational definition of artificial intelligence: a user interacts with a computer program in written form and asks questions. If, at the end of this test, the user is unable to tell whether the answers came from a computer or a person, the Turing Test is considered passed. In recent years, however, the question has increasingly arisen as to whether such artificial intelligence is purposeful and desirable at all.

Digital ethics

A number of reports suggest that the data sources used to train software are not always fair. For example, it has been shown that women and men are described differently in Wikipedia articles [5] [6]. Researchers in the USA showed that an analysis programme used for years to calculate the recidivism risk of offenders disadvantaged the African-American population [7]. At a large tech company, software designed to support the hiring of new employees turned out to be unfair to women [8]. There are many more examples, and the topic of digital ethics has reached the mainstream media because of these numerous scandals. The digital society of the future must therefore deal with the question of what the cooperation between software and humans should look like. Humans and computers have complementary abilities: computers are very good at processing large amounts of data in a very short time and at performing calculations efficiently. In contrast, they are not capable of reflecting on decisions or questioning them morally. There are therefore simply certain activities that a computer cannot do, and therefore should not do.
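How can such discrimination be detected in practice? Below is a minimal sketch (our own illustration, not code from any of the cited studies) that applies one common heuristic, the "four-fifths rule" for disparate impact, to a set of hiring decisions. The data and the 0.8 threshold are assumptions for illustration only.

```python
# Illustrative sketch: checking automated hiring decisions for group disparity.
# All data is hypothetical; the 0.8 threshold follows the "four-fifths rule"
# commonly used as a red flag for disparate impact.

def selection_rate(decisions, group):
    """Fraction of applicants from `group` who received a positive decision."""
    members = [d for d in decisions if d["group"] == group]
    return sum(d["hired"] for d in members) / len(members)

decisions = [
    {"group": "m", "hired": True}, {"group": "m", "hired": True},
    {"group": "m", "hired": True}, {"group": "m", "hired": False},
    {"group": "f", "hired": True}, {"group": "f", "hired": False},
    {"group": "f", "hired": False}, {"group": "f", "hired": False},
]

rate_m = selection_rate(decisions, "m")   # 0.75
rate_f = selection_rate(decisions, "f")   # 0.25
ratio = rate_f / rate_m                   # 0.33 -> well below 0.8

print(f"selection rates: m={rate_m:.2f}, f={rate_f:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Possible disparate impact: review the model and its training data.")
```

Such a check says nothing about causes; it only flags that a human needs to look, which is exactly the division of labour discussed next.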

When voice assistants discriminate

Because of this discrepancy in capabilities, the role of artificial intelligence needs to be rethought. We therefore often use the term augmented intelligence instead of artificial intelligence. Behind this is the idea that the computer serves as a tool that augments human intelligence, but does not replace humans [9]. A typical example of such collaboration is the voice assistant found in most smartphones. When we ask it to suggest nearby restaurants, the voice assistant does not decide where we will eat; it provides the information we need to make that decision ourselves. Does this free us from the problems of digital ethics? No, because it is up to us humans to make the decision and to bear the responsibility for it. It is therefore also up to us to question the data provided critically and to include this reflection in the decision. In the restaurant example, it could be that a certain restaurant, although much closer than the others, was not suggested to us at all. So even in the context of augmented intelligence, we cannot avoid actively and regularly engaging with the data and decision suggestions generated by our tools.

Ethics must be programmed in

The challenge of the next few years is to integrate this new form of collaboration into the processes of software development and operation, together with all the measures needed to prevent and control the associated risks of unethical behaviour and discrimination. Appropriate processes must be planned both in the project itself and, on a regular basis, during the operation of the software. The concrete issues to be evaluated are not uniform: the areas of application, the technologies involved (video, audio, text, etc.) and the types of problem (forms of discrimination, unethical decisions) differ widely, so the issues must be specified and evaluated in each individual project, analogous to traditional risk management. In the concept of augmented intelligence, the human being takes responsibility and therefore has the active task of reflecting on and critically questioning the machine's decision proposals. Only in this way are we equipped for successful cooperation between humans and machines in the digital society of the future.


References

  • [1] Russell, S. & Norvig, P., 2010. Artificial Intelligence – A Modern Approach. Upper Saddle River (New Jersey): Pearson.
  • [2] Kurzweil, R., 1990. The Age of Intelligent Machines. s.l.: MIT Press.
  • [3] Rich, E. & Knight, K., 1991. Artificial Intelligence (Second Edition). s.l.: McGraw-Hill.
  • [4] Turing, A. M., 2004. The Essential Turing. s.l.: Oxford University Press.
  • [5] Wagner, C., Graells-Garrido, E., Garcia, D. & Menczer, F., 2016. Women through the glass ceiling: gender asymmetries in Wikipedia. EPJ Data Science, 5(1).
  • [6] Jadidi, M., Strohmaier, M., Wagner, C. & Garcia, D., 2015. It’s a man’s Wikipedia? Assessing gender inequality in an online encyclopedia. s.l.: s.n.
  • [7] Larson, J., Mattu, S., Kirchner, L. & Angwin, J., 2016. How We Analyzed the COMPAS Recidivism Algorithm. ProPublica, May.
  • [8] Dastin, J., 2018. Amazon scraps secret AI recruiting tool that showed bias against women. San Francisco, CA: Reuters.
  • [9] https://digitalreality.ieee.org/publications/what-is-augmented-intelligence

Algorithms also discriminate – as their programmers tell them to do

Companies are increasingly using artificial intelligence (AI) to make decisions or to base their decisions on its suggestions. These suggestions can also be discriminatory. To prevent this, we need not only to understand program code on a technical level, but also to examine human thinking and decision-making processes in order to detect and reduce systematic bias. Co-author Thea Gasser proposes tools and procedures for this in her bachelor’s thesis [1], which was recently awarded a prize at the TDWI conference in Munich.

Recently, there has been growing concern about unfair decisions made with the help of algorithmic systems that lead to discrimination against social groups or individuals. For example, Google’s advertising system is accused of displaying adverts for high-income jobs predominantly to male users. Facebook’s automatic translation algorithm also caused a stir in 2017 when it chose the wrong translation for a user’s post, leading to the police questioning the user in question [2]. There are soap dispensers that do not work for people with dark skin [3]. In addition, there are several known cases of self-driving cars failing to recognise pedestrians or vehicles, resulting in loss of life [4].

Current research aims to map human intelligence onto AI systems. Robert J. Sternberg [5] defines human intelligence as “…mental competence, which consists of the abilities to learn from experience, adapt to new situations, understand and master abstract concepts, and use knowledge to change one’s environment.” To date, however, AI systems lack, for example, the human trait of self-awareness. The systems still rely on human input in the form of created models and selected training data. This implies that partially intelligent systems are heavily influenced by the views, experiences and backgrounds of humans and can thus also exhibit cognitive biases.

Bias is defined as “…the act of unfairly supporting or opposing a particular person or thing by allowing personal opinions to influence judgement” [6]. Causes of cognitive distortions in human thinking and decision-making include information overload, the meaninglessness of individual pieces of information, the need to act quickly, and uncertainty about what needs to be remembered later and what can be forgotten [7]. As a result of cognitive biases, people can be unconsciously deceived and may not recognise the lack of objectivity in their conclusions [8].

The findings of the co-author’s bachelor thesis, “Bias – A lurking danger that can convert algorithmic systems into discriminatory entities” [1], first show that biases in algorithmic systems are a source of unfair and discriminatory decisions. The thesis then proposes a framework that aims to contribute to AI safety through measures that help to identify and mitigate biases during the development, implementation and application phases of AI systems. The framework consists of a meta-model that comprises 12 essential domains (e.g. “Project Team”, “Environment and Content”, etc.) and covers the entire software lifecycle (see Fig. 1). Each domain comes with a checklist that allows it to be examined and analysed in greater depth.

Figure 1: Metamodel of the Bias Identification and Mitigation Framework

As an example, the “Project Team” domain is explained in more detail below (see Fig. 2). The knowledge, views and attitudes of individual team members cannot simply be deleted or hidden: they are usually unconscious factors rooted in each member’s background and experience. The resulting bias is likely to be carried over into the algorithmic system.

Figure 2: Checklist excerpt for the “Project Team” section of the metamodel

Therefore, measures need to be taken to ensure that the system has the degree of fairness appropriate to its context. Before the system is designed, the project members need to exchange views, sharing their perspectives and concerns openly, fully and transparently. In this way, misunderstandings, conflicting ideas, excessive euphoria and unconscious assumptions or invisible aspects can be uncovered.

The Project Team checklist contains the following concrete measures to address these problems. All project members (1) have participated in training on ethics, (2) are aware of the bias inherent in human decision-making processes, (3) know that bias can be reflected in an algorithmic system, and (4) consider the same attributes and factors as most relevant in the system context. The project team (1) includes representatives from all possible end-user groups, (2) is a cross-functional team with diversity in terms of ethnicity, gender, culture, education, age and socio-economic status, and (3) consists of representatives from the public and private sectors.

The co-author’s bachelor thesis includes checklists for all the domains listed in the meta-model. The framework is intended as an initial framework that can be adapted to the specific needs of a given project context. The proposed approach takes the form of a guideline, e.g. for the members of a project team. Adaptations of the framework can be made on the basis of a defined understanding of system neutrality, which may be specific to the particular application or application domain. If the context-adapted framework is applied in a binding manner within a project, it is very likely that the developed application will better reflect the neutrality defined by the project team or company. Checking whether the framework has been applied and its requirements met helps to establish whether the system fulfils the defined neutrality criteria or whether and where action is needed. To address bias in algorithmic systems adequately, overarching and comprehensive governance must be in place in organisations that take AI responsibility seriously. Ideally, project members internalise the framework and treat it as a binding standard.
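As a rough illustration of how such a checklist could be operationalised in a project (our own sketch, not tooling from the thesis), the “Project Team” domain can be represented as a machine-checkable structure whose open items are reported automatically:

```python
# Sketch only: representing one domain of the bias framework as a
# machine-checkable checklist. Item texts are paraphrased from the
# measures quoted above; the classes are our own illustration.

from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    question: str
    satisfied: bool = False
    note: str = ""          # evidence or justification for the answer

@dataclass
class Domain:
    name: str
    items: list = field(default_factory=list)

    def open_items(self):
        return [i for i in self.items if not i.satisfied]

project_team = Domain("Project Team", items=[
    ChecklistItem("All members have participated in ethics training"),
    ChecklistItem("All members are aware of bias in human decision-making"),
    ChecklistItem("All members know that bias can enter an algorithmic system"),
    ChecklistItem("Team includes representatives of all end-user groups"),
    ChecklistItem("Team is cross-functional and diverse"),
])

project_team.items[0].satisfied = True   # e.g. after an ethics workshop
for item in project_team.open_items():
    print(f"[open] {project_team.name}: {item.question}")
```

Recording the open items per domain makes the audit step described above (“checking whether the framework has been applied”) a routine project activity rather than a one-off review.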


References

  1. Gasser, T. (2019). Bias – A lurking danger that can convert algorithmic systems into discriminatory entities: A framework for bias identification and mitigation. Bachelor’s Thesis. Degree Programme in Business Information Technology. Häme University of Applied Sciences.
  2. Cossins, D. (2018). Discriminating algorithms: 5 times AI showed prejudice. Retrieved January 17, 2019.
  3. Plenke, M. (2015). The Reason This “Racist Soap Dispenser” Doesn’t Work on Black Skin. Retrieved 20 June 2019.
  4. Levin, S., & Wong, J. C. (2018). Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. The Guardian. Retrieved February 17, 2019.
  5. Sternberg, R. J. (2017). Human intelligence. Retrieved June 20, 2019.
  6. Cambridge University Press. (2019). BIAS | meaning in the Cambridge English Dictionary. Retrieved June 20, 2019.
  7. Benson, B. (2016). You are almost definitely not living in reality because your brain doesn’t want you to. Retrieved June 20, 2019.
  8. Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, New Series, 185(4157), 1124-1131.

How art institutions can manage their metadata

Virtual assistants need descriptive metadata to work correctly, but many arts organizations are poorly positioned to provide it. To close this gap, a Canadian start-up has developed a tool for arts organizations to manage their metadata, writes our author Gregory Saumier-Finch.

Context

Discoverability is changing. We are using screens and virtual assistants, driven by AI, to plan our leisure time. In order to participate in this AI shift, events and artistic productions need descriptive metadata. Without the data, even the best algorithms will fail, and the “long tail” of the internet will disappear.

Most arts organizations are poorly placed to benefit from the surge in AI discoverability. While some large arts organizations have the technical skills to generate descriptive metadata on their websites, our research shows that these are only a handful. For the roughly 2000 non-profit arts organizations in Canada, it is not economically viable for each organization to hire a web developer with the skills needed to publish descriptive metadata. This leaves the majority of arts organizations both unaware (not realizing that their event data is missing) and vulnerable (being misrepresented by third parties who do generate descriptive metadata).

The current status quo of the AI boom has shifted control away from arts organizations and into the hands of third parties who end up controlling the descriptive metadata that appears in search engines and virtual assistants. A quick Google search for “events near me” will show event metadata sourced from meetup sites (meetup.ca), event aggregators (theatrelandltd.com, eventful.com), restaurant aggregators (restomontreal.ca), tourism sites (rove.me), and ticketing platforms (ticketmaster.ca, StubHub.com). Notably, there is an almost complete absence of authoritative metadata sourced from the arts organizations that are actually producing or presenting the events. The gap is widening between those organizations that have descriptive metadata, such as the commercial film industry, and those that don’t. Arts organizations carefully and painstakingly curate their web pages, but the result is cluttered pages with, at best, semi-structured data. And there is a trend of diminishing returns, as fewer and fewer people use the websites of arts organizations to find out “what’s happening near me”.

We are at a turning point in online discoverability where structured and linked open data is becoming the prerequisite for findable events. Linked open data provides value for both human and machine. If we can close the gap by converting arts organization websites into actionable linked open data, then arts organizations will be well positioned to benefit from the AI boom, and people will be able to ask their virtual assistant “What’s showing near me?” and get an authoritative answer.
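To make “descriptive metadata” concrete, the sketch below builds a minimal schema.org event description and serializes it as JSON-LD, the format search engines and assistants consume. All names and values are hypothetical:

```python
# Minimal sketch of descriptive event metadata as schema.org JSON-LD.
# All values are hypothetical; a real listing would also carry the
# properties Google recommends (offers, performer, images, ...).

import json

event = {
    "@context": "https://schema.org",
    "@type": "TheaterEvent",
    "name": "Example Premiere",                       # hypothetical
    "startDate": "2019-11-02T19:30:00-05:00",
    "location": {
        "@type": "PerformingArtsTheater",
        "name": "Example Theatre",                    # hypothetical
        "address": "123 Example St, Montreal, QC",
    },
    "organizer": {
        "@type": "PerformingGroup",
        "name": "Example Company",                    # hypothetical
    },
}

print(json.dumps(event, indent=2))   # machine-readable, linkable, findable
```

An event described this way can be understood by a crawler or a virtual assistant without any human reading the page, which is precisely what most arts organizations’ websites lack today.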

Footlight

Footlight is a tool developed by Culture Creates, a Canadian tech start-up specializing in the cultural sector. It is designed for arts organizations (meaning all stakeholders, from individuals to supporting arts organizations) to manage their descriptive metadata. Footlight has the following design goals:

  • A zero-setup tool designed for arts organizations with a one-hour learning curve.
  • Entity extraction from websites currently managed by arts organizations, but without having to change the website itself (technical changes to websites are often unrealistic due to lack of technical skills within the arts organization). “Entity extraction” refers to the process by which unstructured or semi-structured data is transformed into structured data.
  • Entity linking with the external knowledge graphs artsdata.ca and wikidata.org. Federated queries create a rich set of information that is presented to the user to help disambiguate extracted entities (a minimal lookup sketch follows this list).
  • Email notification and issue tracking to manage daily changes to metadata (descriptions, dates, tickets, links to people, venues, performances and performance works, etc.).
  • A community input mechanism to further enrich and interlink metadata while maintaining authority and traceability when multiple “truths” emerge.
  • An inclusive system (multiple points of view) reflecting the diversity present in the arts sector.
  • A publishing tool to push linked open data to multiple platforms including the arts organization’s own website, external knowledge graphs and traditional databases.
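To make the entity-linking goal above concrete, here is a hedged sketch of a candidate lookup against Wikidata’s public SPARQL endpoint. The query is deliberately simplified, and Footlight’s actual federated queries against artsdata.ca are not reproduced here:

```python
# Sketch of an entity-linking lookup: find Wikidata candidates for an
# extracted venue or organization name, for a human to disambiguate.
# Simplified illustration; not Culture Creates' actual queries.

import requests

ENDPOINT = "https://query.wikidata.org/sparql"

def candidate_entities(label, limit=5):
    query = f"""
    SELECT ?item ?itemLabel WHERE {{
      ?item rdfs:label "{label}"@en .
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }} LIMIT {limit}
    """
    resp = requests.get(
        ENDPOINT,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "footlight-sketch/0.1 (example)"},
    )
    resp.raise_for_status()
    bindings = resp.json()["results"]["bindings"]
    return [(b["item"]["value"], b["itemLabel"]["value"]) for b in bindings]

# A user would pick the correct entity from these candidates:
for uri, name in candidate_entities("Canadian Stage"):
    print(uri, name)
```

Linking an extracted name to a stable URI in this way is what turns scraped event pages into linked open data that other systems can cross-reference.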

Vision of Artsdata.ca Knowledge Graph

Artsdata.ca is a Canadian performing arts knowledge graph started in 2019 with the help of the Government of Canada, several arts organizations and Culture Creates. It has multiple sources of data, including existing structured data, manually entered data, and data aggregated by trusted third parties. Artsdata.ca was started in parallel with Footlight and, at the time of writing, is still in its infancy. Footlight uses data from the artsdata.ca knowledge graph extensively: the knowledge in artsdata.ca is the key component that enables Footlight to do entity detection, entity extraction and name resolution. The more complete artsdata.ca becomes, the more cross-referencing and error detection can be performed, and the more accurately Footlight can do its work. Data structured, linked and validated through Footlight is also fed back into artsdata.ca.

While the governance of artsdata.ca is still to be decided, Culture Creates proposes putting this valuable mass of metadata into an innovative model of collective ownership, involving arts organizations across Canada in the form of a platform cooperative. Culture Creates seeks to shift the existing power of closed, exclusive data access presently held by multinational tech companies to one that is open and accessible for the arts in Canada. With access to valuable metadata, the arts will be able to generate and capitalize on new opportunities. It is a proposed digital vision designed to better position the Canadian arts sector to seize opportunities, innovate, develop, amplify and, over time, transform organizational models.

2018 Pilot Project

In the summer of 2018, the first cohort of Canadian arts organizations was launched with 8 members from several provinces. Footlight was able to extract 90% of the events from the participating websites and to structure the descriptive event metadata using schema.org, with all the mandatory and recommended properties documented by Google (https://developers.google.com/search/docs/data-types/event). Footlight also added further properties, such as links from a subset of venues, people and organizations to the artsdata.ca knowledge graph and to wikidata.org. By the end of the first pilot project, Footlight was publishing linked open data to artsdata.ca and to several of the participating arts organizations’ websites.

To publish data on arts organizations’ websites, the Footlight “code snippet” was used to inject JSON-LD into the appropriate event web page. In one case, the “code snippet” was added by the digital marketing manager using Google Tag Manager (without having to touch the website HTML). In another case, it was added to the HTML header by the organization’s website provider. The pilot ended with 100% of the events being published and updated daily on artsdata.ca, but only some participants installing the “code snippet” for publishing event data on their respective websites.
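The internals of the “code snippet” are not public, but the end result it produces can be sketched: a JSON-LD script block embedded in the event page, which crawlers pick up. A minimal illustration (our own, with hypothetical values):

```python
# Sketch of what JSON-LD injection amounts to: wrapping event metadata
# in a <script type="application/ld+json"> tag for the page's HTML.
# Illustration only, not Culture Creates' implementation.

import json

def jsonld_script_tag(metadata):
    """Render metadata as an embeddable JSON-LD script tag."""
    body = json.dumps(metadata, indent=2)
    return f'<script type="application/ld+json">\n{body}\n</script>'

event = {
    "@context": "https://schema.org",
    "@type": "TheaterEvent",
    "name": "Example Premiere",          # hypothetical values throughout
    "startDate": "2019-11-02T19:30:00-05:00",
    "location": {"@type": "PerformingArtsTheater", "name": "Example Theatre"},
}

# The resulting tag can be placed in the page head directly or delivered
# through a tag manager, as the participating organizations did:
print(jsonld_script_tag(event))
```

Because the tag is self-contained, it can be delivered without modifying the site’s templates, which is why the Google Tag Manager route worked for organizations without in-house developers.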

Benefits

The benefits for the participating arts organizations fall into two areas: improved organic search engine optimization (SEO) and increased data circulation. 1. In the first area, there was an observed improvement in Google Search for those organizations using the “code snippet” to publish structured data on their web pages. The search appearance of events in Google was enhanced with new Rich results, Event listings and Event details (terminology of the Google Search Console). Illustrations 1 and 2 below show the impact on Google Search for Canadian Stage, an arts organization that participated in the pilot project and succeeded in placing the Footlight “code snippet” on its website. Footlight was able to publish event metadata that was picked up by Google to improve the Google Search appearance and Google’s Knowledge Graph.

Illustration 1

Illustration 2

2. In the second area, a third-party data client (a regional governmental agency) was able to add event listings from Footlight as a single source, without having to enter events manually from multiple arts organizations’ websites. A weekly import ensured that the data remained up to date.

Lessons learned

Lesson 1

Arts organizations found it difficult to install the Footlight “code snippet”. To address this, Culture Creates will explore ways to further simplify the installation. One hypothesis is that a “chat bot” could help: it would guide users through the installation steps by presenting different options (Google Tag Manager, CMS, contacting their web provider), provide contextual assistance depending on the option selected (e.g. composing emails to the web provider), and finally complete the installation with a system test to confirm proper operation.

Lesson 2

Listing sites, such as that of the city of Laval in Quebec, would like Footlight to be integrated into their existing calendar systems. To address this, Culture Creates is working on an integration with a local calendar software called Caligram (caligram.org). The API would enable users of the third-party calendar system to access all of Footlight’s features from within its own user interface.

Conclusion

At Culture Creates, we understand that for any digital transformation to occur – in any sector – a critical mass of structured data is needed. We developed the Footlight technology to structure and create linked open data for the arts, with a deliberately narrow focus on performance listings and descriptive event metadata. Beyond the benefits of improved findability and efficiency, once a critical mass of Canadian arts organizations adopts linked open data, the arts sector will not only become the digital authority for its own metadata, it will also generate a valuable knowledge graph of usable and connected metadata. If we are truly interested in shifting the existing power from multinational tech companies to a fairer and more accessible digital environment for stakeholders in the arts in Canada, we must start by developing solutions and tools that are easy to use and understand, that remove complexity, and that are made available to all stakeholders in the arts. This is paramount in helping the sector retain agency over its individual and collective metadata.


Join a Culture Creates Pilot Project

Culture Creates continues to develop pilot projects with arts organizations and is currently looking for new cohorts in Canada. If you are a Canadian arts organization interested in taking part, please contact tammy@culturecreates.com. Interested arts organizations will be asked to form a cohort of several members. Each cohort will have to provide and/or seek joint funding, which will be used for onboarding and supporting the cohort, as well as to cover the cost of Footlight. The lack of descriptive metadata is an international concern. Currently, Culture Creates is focused on Canada because the Canadian government has developed policies and earmarked funding to support the digital transformation of its arts and culture sector. In the future, however, we envision expanding to the international community. Want to know more? Contact tammy@culturecreates.com.
