True or False? How artificial intelligence is making it increasingly difficult to discern reality
The increasing spread of artificial intelligence and digital communication poses new challenges for human rights. Divergent value and legal systems, combined with a lack of regulation, place high cognitive demands on individuals trying to distinguish truth from falsehood. A new legal standard, such as the “human right to truth”, is urgently needed.
Young people between facts and fiction
A study conducted by the Friedrich Naumann Foundation in 2025 (Friedrich-Naumann-Stiftung 2025), based on data collected at the end of 2024, revealed an alarmingly low level of knowledge among young people. For example, 23.6% of those born after 1980 agreed with the statement: “Russia is more interested in peace in Ukraine than Western Europe.” Even higher approval rates were recorded among TikTok users (36.3%) and users of Platform X (40.3%). Assuming comparable cognitive abilities across the groups, these results appear to be significantly influenced by the information conveyed by the media they use.
The distinction between truth and lies has become a cognitive challenge that the individual can no longer cope with alone.
Value pluralism, the legal order and the virtualisation of social reality in the 21st century
Conflicts arise from differences between ethical value systems that are shaped by cultural groups and internalised by individuals (DaDeppo, 2015). Various classifications, such as Scheler’s five value levels (1921/2007) or Rokeach’s 36 values (1973), illustrate the diversity of these systems. The legal order functions as an institutionalised interpretation of ethics and allows the individual to live in harmony with the group. Deviations lead to conflicts, initially between the individual and the group, later also between groups.
To minimise such conflicts, the legal order transforms ethical values into normative systems based on the three powers of the state according to Montesquieu (1748): legislative, executive and judicial. Different value systems and legal orders can coexist as long as groups maintain their autonomy. However, increasing human mobility has favoured normative and cultural tensions.
In response to the devastating consequences of conflicts of values, particularly during the Second World War, the United Nations adopted the Universal Declaration of Human Rights in 1948 (United Nations, 2025). The European Convention on Human Rights (ECHR) followed in 1950 as a regional counterpart and was ratified by 47 member states of the Council of Europe (Council of Europe, 1950). Switzerland ratified the ECHR in 1974 and Poland in 1993, although the latter entered reservations regarding collective complaints. It was not until 2020 that the European Union received a mandate to begin negotiations on accession to the ECHR, a process that European jurisprudence had been preparing since the 1970s (Jacqué, 2016; 2025).
In addition to law, communication also contributes to mediating conflicts of values. Until the invention of the telephone in 1876, the exchange of information was limited (Astinus, 2015). Today, over 90% of the population in Switzerland and over 75% in Poland use smartphones, which transmit diverse, often non-verbal and symbolic content. As a result, individuals are constantly exposed to messages from other value systems, an exposure that increases cultural tensions, disinformation and normative incoherence.
A modern challenge is the virtualisation of individuals and groups through technologies such as artificial intelligence. Recognising false, often manipulative information is becoming increasingly difficult, which creates cognitive uncertainty and impairs the ability to live in accordance with one’s own value system. This weakens individual autonomy, psychological well-being, social trust and the coherence of democratic communities.
Contemporary state measures to control information
The electronic exchange of information has been subject to various forms of surveillance in most countries around the world for over two decades. Traditional control mechanisms are based on keyword-based filters designed to identify potentially undesirable content.
However, in the age of artificial intelligence-generated messages, these standard monitoring methods are proving inadequate. Content created by advanced language models can deliberately avoid recognisable keywords while conveying hidden meanings or manipulative narratives. As a result, modern communication systems are increasingly escaping existing control mechanisms.
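A minimal sketch in Python illustrates this limitation (the blocklist and example messages are hypothetical, chosen only for illustration): a keyword filter catches explicit terms but misses a paraphrase that carries the same narrative.

```python
# Minimal sketch of a traditional keyword-based content filter.
# The blocklist and messages below are hypothetical examples.
BLOCKLIST = {"propaganda", "fake news", "deepfake"}

def contains_blocked_keyword(message: str) -> bool:
    """Return True if the message contains any blocked keyword."""
    text = message.lower()
    return any(term in text for term in BLOCKLIST)

# An explicit statement is flagged ...
print(contains_blocked_keyword("This is pure propaganda."))  # True
# ... but a paraphrased message conveying the same narrative passes.
print(contains_blocked_keyword("Many now say the official account is staged."))  # False
```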
China is a pioneer in the legal regulation of artificial intelligence, having begun developing the relevant legal framework in 2017 and completed the first phase in 2020. The regulations focus on ethical aspects and the transparency of algorithms. One of the central legal requirements is the obligation to label AI-generated content as artificial; although the fines for violations of these regulations are comparatively low, sanctions can extend to licence revocation and blacklisting (Deng, 2025).
Regulation of artificial intelligence in the United States
The lack of coherence in US presidential policy makes it difficult to create a stable and long-term legal framework for artificial intelligence. Although the US Congress has been working on a comprehensive law to regulate this area since 2013, no uniform federal law has yet been passed.
In the meantime, President Joe Biden issued a comparatively restrictive Executive Order in 2023, which included, among other things, an obligation to assess the risks of AI systems and recommendations on transparency and data protection (Biden, 2023). However, in January 2025, his successor, President Donald Trump, significantly relaxed these rules by issuing a new Executive Order (Trump, 2025).
In the absence of a uniform federal framework, some states – including California, New York and Illinois – have taken their own legislative initiatives to regulate the use of artificial intelligence in certain areas such as employment, education or advertising.
However, none of the existing legislation – neither at the federal nor the state level – contains precisely defined criminal penalties for violations of the law. As a result, enforcement of the law in this area remains reliant on individual court actions and the subjective interpretation of judges (AI Watch, 2025).
AI Act – the European response to the challenges of artificial intelligence
In 2024, the European Parliament and the Council of the European Union adopted the AI Act – a comprehensive legal framework regulating the development, deployment and use of AI systems throughout their life cycle (European Parliament and Council of the EU, 2024). The main objective of the Act is to harmonise regulations across the EU Member States while ensuring the protection of citizens’ fundamental rights.
The AI Act introduces a classification of AI systems based on risk level assessment. For systems categorised as high-risk, the following requirements, among others, are envisaged:
- Strict measures to ensure the transparency of their operation,
- An obligation to assess the impact on fundamental rights,
- Documentation demonstrating legal compliance.
Representatives of non-EU countries – including the USA, Canada and Mexico – were also involved in drafting the legislation, which influenced the formulation of certain provisions. In particular, the application of specific rules to private actors and to measures in the area of national security was waived. These exceptions are controversial, as they can lead to deviations from established human rights standards. Particularly problematic is the absence of clear provisions on accountability for AI-generated content.
In order to ensure effective enforcement of the regulations, every company offering an AI system in the EU must designate a natural or legal person established or residing in the Union who is responsible for cooperation with supervisory authorities. Compliance with the AI Act is monitored by the market surveillance authorities of the Member States.
Article 99 lays down significant sanctions for infringements:
- Fines of up to €35 million,
- or up to 7% of the company’s total worldwide annual turnover – whichever is higher (see the sketch below).
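To illustrate the “whichever is higher” rule, a minimal sketch (the turnover figures are hypothetical, not taken from any actual case):

```python
# Article 99 cap: the higher of EUR 35 million and 7% of total
# worldwide annual turnover applies (hypothetical turnover figures).
def maximum_fine(annual_worldwide_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * annual_worldwide_turnover_eur)

print(f"{maximum_fine(200_000_000):,.0f}")    # 35,000,000 -> fixed amount dominates
print(f"{maximum_fine(1_000_000_000):,.0f}")  # 70,000,000 -> 7% of turnover dominates
```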
In February 2025, the Swiss Federal Council decided to ratify the Council of Europe’s Convention on Artificial Intelligence; over the following two years, national legislation is scheduled to be harmonised with the European regulations (UVEK 2025).
Comparative table of the regulation of artificial intelligence
The following comparative table outlines the regulation of artificial intelligence in three important jurisdictions: the European Union (AI Act), China and the United States (as of 2025). It is based on five key assessment criteria.
| Criterion | European Union (AI Act) | China | USA |
| --- | --- | --- | --- |
| Classification of the risk of AI systems | Yes – four levels: unacceptable, high, limited, minimal | Yes – industry- and function-based classification | No nationwide classification – depends on the state or the authority |
| Labelling requirement for AI-generated content | Yes – mandatory for deepfakes, synthetic content and chatbots | Yes – clear labelling requirement for AI-generated content | No general labelling requirement |
| Sanctions for infringements | Up to €35 million or 7% of total worldwide annual turnover, whichever is higher | Relatively low fines enforced by local supervisory authorities; possible licence revocation and blacklisting | No established sanctions; enforcement primarily through civil litigation |
| Scope of application | Applies to all AI systems offered within the EU | Primarily covers domestic activities and Chinese companies operating abroad | Fragmented – depends on state-level and sector-specific regulations |
| Enforcement and supervision | Market surveillance by national authorities in the EU Member States | Strong state regulators and centralised control | No centralised body – enforcement fragmented and judicial |
The table shows that:
- The EU has adopted the most comprehensive and systematic approach, albeit with exceptions concerning security and companies outside the EU.
- China relies on centralised control, ethics and content labelling; however, sanctions are moderate and enforcement is selective.
- The US continues to lack uniform federal regulations, and AI oversight is based on decentralisation and judicial mechanisms, which weakens the overall level of protection.
Conclusions
A sense of happiness and life satisfaction has a direct impact on the economic success of a community (Proto and Rustichini, 2013). Therefore, social groups – just as they protect their members from armed aggression – should also protect them from informational uncertainty.
In this context, users of Chinese media theoretically have a better chance of distinguishing truth from falsehood, as artificially generated content must be clearly labelled there. By contrast, average users in Europe, the USA and Canada receive only limited support in dealing with disinformation; its detection and contestation are usually possible only through legal action – which means there is no immediate protection.
The only realistic course of action appears to be an amendment to the Protocol to the European Convention on Human Rights by introducing a new Article 7: The Human Right to Truth.
A similar solution within the framework of the UN Universal Declaration of Human Rights (Article 31?) will likely take several more years to materialise…
This text is a transcript of a lecture given by the author at the XVII International Scientific Conference on Human Rights, held at the Polish Parliament. The event took place from 31 March to 1 April 2025 in Warsaw under the patronage of the Secretary General of the Council of Europe, Alain Berset.