Digital personalisation – future opportunity or dead end (2)
Part 1 of this column showed that digital personalisation optimises conditional expectation values in order to offer tailored products, services and prices. The condition results from the existing data and information about the customers. Ideally, this brings great advantages to customers and providers alike and changes society profoundly, because the orientation towards the average becomes an orientation towards concrete people. This is substantial progress, but it depends very much on the concrete implementation. The step “out of the Procrustean bed of the average person” (often also: the average man) can quickly become a step “into the Procrustean bed of data personalisation” if personalisation is forced on customers without opt-out options. It remains crucial how seriously the individual views of customers are taken. Data and empathy must come together in customer orientation if digital personalisation is to create added value. In practice, things often turn out quite differently: customers, more rarely providers, or both suffer from digital personalisation. Part 1 pointed out three fundamental ambivalences of digital personalisation in this regard. Firstly, it is not necessarily the wish of those affected to receive digitally personalised offers. Secondly, even if it is their wish, these offers do not always work as intended. And thirdly, even if they do work, their effect can be very negative without the people concerned being aware of it.
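The core mechanism named above – optimising a conditional expectation given customer data – can be sketched in a few lines. The segments and purchase values below are invented purely for illustration; they only show how the orientation towards the average differs from the orientation towards concrete people.

```python
# Sketch: personalisation as conditional expectation (illustrative data only).
# Unpersonalised offers use the average over all customers;
# personalised offers use E[value | customer segment].

from statistics import mean

# Hypothetical purchase values observed per customer segment.
observations = {
    "frequent_buyer":   [40, 55, 60, 45],
    "occasional_buyer": [10, 15, 5, 20],
}

# Orientation towards the average person: one expectation for everyone.
overall = mean(v for vs in observations.values() for v in vs)

# Orientation towards concrete people: conditional expectation per segment.
conditional = {seg: mean(vs) for seg, vs in observations.items()}
```

With these invented numbers the overall expectation is 31.25, while the conditional expectations are 50 and 12.5 – the “condition” (the segment a customer falls into) is exactly what the existing data about the customer supplies.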
Individual, comparative and community perception
If we are faced with the choice of receiving digitally personalised offers, the answer is not always easy. Do I want to pay a fixed price in the supermarket or a personalised one? How do I feel when others get even better offers? Does it annoy me that others determine my preferences? Do I accept that an algorithm prioritises my service request instead of practising fair first-come-first-served (FCFS)? And even if I am given preference, do I want to set myself apart from the community in this way? For this last question, most economists will assume a self-evident yes, but this is not always realistic. There are often good reasons not to stand out – not to mention situations in which the individual advantage comes, supposedly or actually, at the expense of one’s immediate surroundings. Mostly, however, the subjective perception of one’s own advantage will decide whether I like digital personalisation or not. Ignorance promotes acceptance, while more knowledge and understanding tend to reduce it. This is because personalisation usually aims at a double value optimisation: on the one hand, an optimisation of the value of products, services and communication practices for the customers and, on the other, an optimisation of the value of the customers for the providers. The latter is often not at all in the customers’ interest. Besides the subjective perception of one’s absolute benefit and of the relative benefit compared to others – a relative disadvantage often weighs more heavily than an objective advantage – other aspects matter for acceptance, especially autonomy and solidarity. Personalisation by others can be not only a real nuisance but can even be subjectively perceived as a psychological injury, especially when it calls one’s own autonomy into question. Personalisation can also be interpreted as intentional discrimination. And it can be perceived as an attack on togetherness.
If we all experience the world differently as a result of different offers and prices, then this promotes individual egoism at the expense of social solidarity. Depending on the political positioning, some will see this as progress and others as a dangerous step backwards.
Escaping the Procrustean bed of the average – 7 critical aspects
But even if both sides – personalisers and personalised – are positive about it, many serious challenges remain. Fundamentally, data-based personalisation frees us from the Procrustean bed of the (usually male) average. Handled incompetently, however, it forces us anew into the Procrustean bed of preferences calculated from data, which may not match our actual intentions. The effect of a digital personalisation is determined by the fitness of the design (structure and design quality), the quality of the practical implementation (process quality) and the practical use case (result context). These 3 quality dimensions are essentially shaped by 7 aspects:
- How much data is available about an individual and how appropriate is it to formulate the conditions (for optimising the expected value) in the most meaningful way?
- How much relevant data is available on other individuals to adequately estimate conditional probabilities (for the expected value calculation)?
- How good are the algorithms for calculating the estimated conditional probabilities in relation to the specific context?
- To what extent can products, services or actions be effectively adapted to the estimated preferences/dispositions?
- How consistently, and in how customer-friendly a manner, is data-based personalisation implemented operationally?
- How seriously is the personalisation controlled with feedback mechanisms and critical scrutiny of the results (and corrected accordingly in case of deficits)?
- How extensive is the impact on those affected?
These aspects may sound harmless, but in practice huge gulfs open up, depending on how these questions are answered. Digital personalisation without a feedback mechanism is pure discrimination – even if some form of scientific data analysis lies behind it. Digital personalisation that is economically sensible as well as ethically responsible therefore requires that one examines both the WHAT and the quality of the HOW. The individual aspects should be assessed in relation to each other: the greater the impact on those affected, for example, the more demanding the quality management should be. However, it would be wrong to conclude that it is better to leave digital personalisation alone – both from an economic and an ethical point of view. For example, it is ethically questionable to forego a major benefit of personalised therapies in medicine merely because this would require the use of personal health data.
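The demand for feedback mechanisms can be made concrete: compare what the personalisation predicted with what actually happened, group by group, and flag groups where the model is systematically wrong so they can be corrected. The segment names, rates and tolerance below are invented for illustration.

```python
# Sketch: a minimal feedback check for a personalisation system.
# For each customer group, compare predicted and observed response rates;
# groups where the model is badly off are flagged for correction.

def feedback_check(predicted, observed, tolerance=0.10):
    """Return groups whose observed rate deviates from the prediction
    by more than `tolerance` (absolute difference)."""
    return {
        group: (predicted[group], observed[group])
        for group in predicted
        if abs(predicted[group] - observed[group]) > tolerance
    }

# Hypothetical predicted vs. actually observed acceptance rates.
predicted = {"segment_a": 0.30, "segment_b": 0.25, "segment_c": 0.40}
observed  = {"segment_a": 0.28, "segment_b": 0.05, "segment_c": 0.41}

flagged = feedback_check(predicted, observed)
# segment_b deviates by 0.20 > 0.10 and would be reviewed and corrected.
```

Without such a loop, a wrong conditional estimate is simply applied to people again and again – which is exactly why personalisation without feedback amounts to discrimination.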
Digital transformation of heaven and hell: the wish fulfilment machine
In the past, it was part of the occidental art of preaching to describe heaven and hell impressively; James Joyce wrote particularly impressive examples of this. Today, digital utopias and dystopias are popular instead, but remarkably only collective ones – especially in the case of dystopias. Instead of the individual, humanity as a whole is now the focus of reward or punishment. This is literarily bland and – as far as we can see – philosophically unproductive. So take some time to playfully transform the good old heaven and the good old hell digitally in your mind! You will find that digital gadgets are terrifyingly well suited for use in heaven or hell. The trick is not to experiment with them yourself – that is only suitable for heaven or hell on earth – but actually to adopt the relevant management perspective. Explicit warning: the thought experiment is not without side effects! If one wants to address the question more seriously, a digitally personalised wish-fulfilment machine proves to be a good starting point. It is programmed to give the “feeling” of optimal personalisation and quickly corrects occasional mistakes. The difference: in heaven it confronts people with new challenges that are just about manageable, while in hell it promotes inactivity and, by fulfilling other needs, blocks the inner incentive to learn anything new. This vividly illustrates the basic problem with desired and functioning digital personalisation: it influences our lives without us really noticing that influence. Depending on the providers’ target perspective, it helps us to learn and develop, or it restricts us, ties us to existing activities and makes us more immobile.
Unethical personalisation: the search for the weak
Besides ambivalent forms, there are also clearly unethical forms of digital personalisation. In the US, data on people in difficult personal situations is traded so that their weakness can be exploited commercially. These people then receive offers that would not be made to people in stable personal circumstances – for example, attempts to recruit them as students for commercially oriented universities, in the knowledge that they will probably not complete their studies and will find it hard ever to pay back the student loan. The practice outlined is not illegal and resembles the business practice of gyms, with the difference that one does not have to take out a loan for a year’s gym membership, which in most cases then goes unused because most members never even get around to struggling before giving up. The business logic of this student recruitment is ingeniously perfidious: one does not recruit exactly those who have no chance of passing their studies, but the result comes very close. At the same time, the digital data that puts you on the list of those to be recruited excludes you from many genuinely attractive offers and job opportunities.
Part 3 of this column discusses personal strategies for dealing with personalisation, as well as technical options and raising awareness in schools.