Robo-Advisor, Robot Lawyer, and Healthbot – Barriers and Solution Strategies for the Acceptance of AI Advisors
AI-based advisory services are all the rage in the service sector. Looking at the evolution of AI-based applications, relatively simple AI solutions initially took hold and tended to play supporting roles at various interfaces: AI supported highly standardised and repetitive tasks, such as the AI-based product advisor at Vaude, or AI-based support functions were integrated into existing app offerings, such as the Digital Assistant of the Swiss Federal Railways. In the meantime, however, even highly complex, strongly individualised and interaction-based advisory services, such as those in the banking, legal or healthcare sectors, can be delivered by learning, adaptive AI solutions.
In the financial industry, AI-based advisor tools have become widely known under the name robo-advisor (Davenport/Bean 2021). Many renowned banks such as UBS or Deutsche Bank AG have introduced such tools in recent years or invested in corresponding FinTechs. Robo-advisors are meant to democratise asset management by independently developing investment plans for private clients and automating their investment decisions (Jung et al. 2018). Providers of professional services in the health and legal sectors are also increasingly starting to offer AI-based advisory services (e.g. Babylon Health or DoNotPay).
Between efficiency and reservations
Given the rapid technological progress in fintech, healthtech and legaltech, AI-based advisor tools can be expected to soon surpass their human counterparts such as financial advisors or doctors (Esteva et al. 2017; Uhl/Rohner 2018). Existing tools are already capable of developing complex recommendations based on big data processing and machine learning, and future generations could even display aspects of intuitive and empathetic intelligence when advising customers (Huang/Rust 2021). This promises huge cost savings for companies, as well as efficiency gains and scalability in service delivery. However, the resulting tension between increased efficiency on the one hand and a lack of customer acceptance on the other is often neglected. For example, despite technological maturity, many bank customers still have strong reservations about robo-advisors in private banking, and these tools have therefore not yet achieved the hoped-for broad impact. In this context, a study launched by the Federal Council on artificial intelligence identified social acceptance as a key challenge in the implementation of AI-based offerings in industry and services (SERI 2019). In the following, three possible barriers to the acceptance of AI-based advisory services are discussed in more detail, together with initial ideas for possible solutions: (1) algorithm aversion and uniqueness neglect, (2) creepiness and (3) the breach of relationship norms.
Algorithm aversion and uniqueness neglect
Algorithm aversion describes the phenomenon that clients still prefer human guidance despite the advantages and precision of algorithms (Dietvorst et al. 2015; Logg et al. 2019). One possible explanation is perceived uniqueness neglect. According to Longoni et al. (2019), this concept rests on an imbalance between two beliefs: customers have a strong perception of their individuality and see themselves as unique compared to others, whereas machines are seen as functioning only in a standardised, programmed way and therefore treating every case the same. The prospect of being advised by AI tools can thus raise concerns that personal and individual characteristics, circumstances and needs will be insufficiently taken into account. The negative effects of algorithm aversion and uniqueness neglect can be countered through the nature and design of the interaction. For example, it can help if the advisory tool explicitly addresses the uniqueness of each individual customer, by increasing the amount of information collected about the customer or by making the interaction highly personalised. Furthermore, studies in the health context have shown that patients are less averse to AI applications if human professionals retain the final say (Longoni et al. 2019); a simple sketch of such a human-in-the-loop design follows below.
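To make this mitigation concrete, the following minimal sketch (in Python, purely illustrative; all class and function names are our own assumptions and not taken from any cited tool) shows how an advisory pipeline could combine individualised client data with a human-in-the-loop step in which a professional retains the final say, in the spirit of Longoni et al. (2019):

```python
from dataclasses import dataclass, field

@dataclass
class ClientProfile:
    """Extended, individual client data (hypothetical fields)."""
    name: str
    risk_tolerance: str   # e.g. "low", "medium", "high"
    life_situation: str   # free-text individual circumstances
    preferences: dict = field(default_factory=dict)

@dataclass
class Recommendation:
    text: str
    approved_by_human: bool = False

def ai_draft_recommendation(profile: ClientProfile) -> Recommendation:
    """Stand-in for the AI advisor: drafts a recommendation that
    explicitly references the client's individual characteristics,
    countering the perception of uniqueness neglect."""
    draft = (f"Based on your {profile.risk_tolerance} risk tolerance "
             f"and your situation ('{profile.life_situation}'), "
             f"we suggest a tailored plan.")
    return Recommendation(text=draft)

def human_review(rec: Recommendation, advisor_name: str) -> Recommendation:
    """Human-in-the-loop step: a professional checks and signs off
    before the recommendation reaches the client."""
    # In a real system this would be an interactive review; here we
    # simply mark the recommendation as approved.
    rec.approved_by_human = True
    rec.text += f" (Reviewed and approved by {advisor_name}.)"
    return rec

profile = ClientProfile("A. Client", "medium", "saving for retirement")
recommendation = human_review(ai_draft_recommendation(profile), "J. Advisor")
print(recommendation.text)
```

The design choice is deliberate: the AI produces the draft, but no recommendation reaches the client without a human signature, which directly addresses the acceptance findings cited above.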
Creepiness
According to McAndrew/Koehnke (2016) and Tene/Polonetsky (2015), creepiness in interpersonal social contexts is characterised primarily by uncertainty about whether there is something to fear from the human counterpart and/or about the exact nature of any threat. In addition, a deviation from common norms that is perceived as threatening, for example in behaviour or appearance, can trigger creepiness. Applied to AI advisors, there is reason to believe that a technical counterpart can also trigger feelings of creepiness: a lack of clarity and transparency about the exact processes an advisor tool performs when handling personal data, or a deviation from common norms, can cause creepiness on the part of users (Watson/Nations 2019). It is therefore crucial for companies to thoroughly research consumers' needs before introducing AI-based advisory services in order to explore the limits of what customers find acceptable (Ostrom et al. 2019). Pilot tests and experiments can also be a good way to develop the offering further after launch in a constant feedback loop with customers. The use of hybrid design forms can be useful as well, especially in the transition phase directly after launch: a hard break is avoided and the customer can be introduced to the new application step by step. Finally, transparency regarding the underlying algorithms, processes and decisions (algorithmic transparency) can be an important factor in avoiding creepiness, especially in the area of life-critical decisions (Watson 2019; Watson/Nations 2019).
Breach of relationship norms
Previous research has shown that companies should strive to build long-term relationships with their customers, arguing that strong relationships lead to increased loyalty and referral behaviour (e.g. Hennig-Thurau et al. 2002; Mende et al. 2013). There are different types of relationships, each with specific needs. Marketing distinguishes between exchange relationships (rational, business-oriented and following a "quid pro quo" logic) and communal relationships (caring, altruistic and empathetic) (e.g. Aggarwal 2009; van Doorn et al. 2017). Before introducing AI-based advisory services, companies should therefore segment their customer base according to relationship type and offer each segment the form of advice that matches its relationship needs. For clients in a more communal relationship, it is not advisable to completely replace human advice with AI, as this could be perceived as a violation of relationship norms. For clients in an exchange relationship, however, the risk of damaging the client relationship by introducing automated AI advice is likely to be manageable; a sketch of this segmentation rule follows below.
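As a purely illustrative sketch (Python; the segment labels and the mapping are our own assumptions, not prescriptions from the cited studies), the segmentation logic described above reduces to a simple decision rule: communal-relationship clients keep a human or hybrid advisor, exchange-relationship clients are offered the automated AI advisor.

```python
from enum import Enum

class RelationshipType(Enum):
    EXCHANGE = "exchange"   # rational, quid-pro-quo oriented
    COMMUNAL = "communal"   # caring, empathetic, long-term

def advisory_mode(relationship: RelationshipType) -> str:
    """Map the relationship segment to the advisory form that matches
    its relationship norms (cf. Aggarwal 2009; van Doorn et al. 2017)."""
    if relationship is RelationshipType.COMMUNAL:
        # Full automation risks being read as a norm violation:
        # keep a human in the loop.
        return "hybrid: human advisor supported by AI"
    # For exchange relationships the efficiency logic dominates,
    # so the acceptance risk of full automation is lower.
    return "automated AI advisor"

for segment in RelationshipType:
    print(f"{segment.value}: {advisory_mode(segment)}")
```

In practice the segment assignment itself would come from CRM data or surveys; the point of the sketch is only that the service form should be a function of the relationship type, not a one-size-fits-all rollout.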
Discussion and outlook
In its guidelines on artificial intelligence, the Federal Council describes AI as a central building block of the digitalisation of the economy (SERI 2020). However, for AI to unfold its full potential, it is essential to understand the key behavioural mechanisms and challenges on the customer side, in addition to looking purely at technical feasibility. To ensure that these do not become a pitfall in digitalisation projects, companies must carefully examine the introduction of AI-based advisory solutions and, alongside economic and technical considerations, also factor in possible behavioural-psychological resistance on the customer side. Here it can be particularly helpful to initially treat AI-based offerings as an experiment: following the trial-and-error principle, companies can then find out step by step, together with the customer, what is accepted, iteratively adapt the offering and thus ensure the sustainable success of AI-based advisory services.
This is a partial excerpt from:
- Raff, S./von Walter, B./Wentzel, D. (2021): AI-based guidance services – design forms, challenges and implications, in: Bruhn, M./Hadwich, K. (eds.): Artificial Intelligence in Service Management, Forum Service Management, Springer Gabler, Wiesbaden.
References
1. Internet sources & links:
- Davenport, T. H./Bean, R. (2021): The Pursuit of AI-Driven Wealth Management, in: MIT Sloan Management Review: https://sloanreview.mit.edu/article/the-pursuit-of-ai-driven-wealth-management/, last accessed March 14, 2022.
- SERI – State Secretariat for Education, Research and Innovation (2019): Challenges of Artificial Intelligence – Report of the Interdepartmental Working Group on Artificial Intelligence to the Federal Council, Bern.
- SERI – State Secretariat for Education, Research and Innovation (2020): Guidelines on "Artificial Intelligence" for the Confederation – Orientation framework for dealing with artificial intelligence in the federal administration, Bern.
2. Bibliography
- Aggarwal, P. (2009): Using Relationship Norms to Understand Consumer Brand Interactions, in: Handbook of Brand Relationships, pp. 24-42.
- Dietvorst, B.J./Simmons, J.P./Massey, C. (2015): Algorithm aversion: people erroneously avoid algorithms after seeing them err, in: Journal of Experimental Psychology: General, Vol. 144, No. 1, pp. 114-126.
- Esteva, A./Kuprel, B./Novoa, R.A./Ko, J./Swetter, S.M./Blau, H.M./Thrun, S. (2017): Dermatologist-level classification of skin cancer with deep neural networks, in: Nature, Vol. 542, No. 7639, pp. 115-118.
- Hennig-Thurau, T./Gwinner, K.P./Gremler, D.D. (2002): Understanding Relationship Marketing Outcomes, in: Journal of Service Research, Vol. 4, No. 3, pp. 230-247.
- Huang, M.H./Rust, R.T. (2021): Engaged to a robot? The role of AI in service, in: Journal of Service Research, Vol. 24, No. 1, pp. 30-41.
- Jung, D./Dorner, V./Glaser, F./Morana, S. (2018): Robo-Advisory, in: Business & Information Systems Engineering, Vol. 60, No. 1, pp. 81-86.
- Logg, J.M./Minson, J.A./Moore, D.A. (2019): Algorithm appreciation: people prefer algorithmic to human judgment, in: Organizational Behavior and Human Decision Processes, Vol. 151, pp. 90-103.
- Longoni, C./Bonezzi, A./Morewedge, C.K. (2019): Resistance to Medical Artificial Intelligence, in: Journal of Consumer Research, Vol. 46, No. 4, pp. 629-650.
- McAndrew, F.T./Koehnke, S.S. (2016): On the nature of creepiness, in: New Ideas in Psychology, Vol. 43, pp. 10-15.
- Mende, M./Bolton, R.N./Bitner, M.J. (2013): Decoding Customer-Firm Relationships: How Attachment Styles Help Explain Customers’ Preferences for Closeness, Repurchase Intentions, and Changes in Relationship Breadth, in: Journal of Marketing Research, Vol. 50, No. 1, pp. 125-142.
- Ostrom, A.L./Fotheringham, D./Bitner, M.J. (2019): Customer Acceptance of AI in Service Encounters: Understanding Antecedents and Consequences, in: Maglio, P.P./Kieliszewski, C.A./Spohrer, J.C./Lyons, K./Patrício, L./Sawatani, Y. (eds.): Handbook of Service Science, Cham/Switzerland, pp. 77-103.
- Tene, O./Polonetsky, J. (2015): A Theory of Creepy: Technology, Privacy, and Shifting Social Norms, in: Yale Journal of Law and Technology, Vol. 16, No. 1.
- Uhl, M.W./Rohner, P. (2018): Robo-Advisors versus Traditional Investment Advisors: An Unequal Game, in: The Journal of Wealth Management, Vol. 21, No. 1, pp. 44-50.
- van Doorn, J./Mende, M./Noble, S.M./Hulland, J./Ostrom, A.L./Grewal, D./Petersen, J.A. (2017): Domo Arigato Mr. Roboto: Emergence of Automated Social Presence in Organizational Frontlines and Customers' Service Experiences, in: Journal of Service Research, Vol. 20, No. 1, pp. 43-58.
- Watson, H.J. (2019): Update Tutorial: Big Data Analytics: Concepts, Technology, and Applications, in: Communications of the Association for Information Systems, Vol. 44, No.1, pp. 364-379.
- Watson, H.J./Nations, C. (2019): Addressing the Growing Need for Algorithmic Transparency, in: Communications of the Association for Information Systems, Vol. 45, No. 26, pp. 488-510.