ChatGPT: an oracle, a doctor, or a joker? – Ethical boundaries of different uses of the tool

ChatGPT is an imperfect but effective tool for working with information. A higher level of anxiety, however, is justified by the emergence of apps (Voicemod, free AI voice cloning) that can displace real facts and open up wide prospects for manipulation. If a technology-rooted society loses to these ethical challenges, will the result be a digital dictatorship?

ChatGPT has provoked an almost explosive reaction from users around the world. Talk of a fourth technological revolution now looks more obvious and convincing. Earlier AI-powered assistants such as Siri and Alexa were never effective enough, and instead provoked a wave of memes. ChatGPT, by contrast, is a language program with a high level of AI, capable of producing logical and semantically meaningful messages. Why is the high-quality work of a language program with effective AI algorithms "already making huge waves in the technology sector"[2]? What explains this interest?

Figure 1: Image generated by DALL-E[1].

The main reason for such demand, speaking not only of ChatGPT but of AI in general, is the accelerating rate of data production. Humanity has been producing knowledge for centuries. If today the knowledge accumulated by humankind doubles every 18 months, in the near future it will double every 12 hours[3]. We therefore need a fast and efficient tool for data management.
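
To get a feel for what these doubling rates mean, here is a minimal arithmetic sketch. The one-year horizon and the 30-day months are simplifying assumptions made for this illustration; the doubling periods are the ones cited above[3]:

```python
# Compound growth under a fixed doubling period (illustration only).

def growth_factor(horizon_hours: float, doubling_hours: float) -> float:
    """How many times a stock of knowledge multiplies over the horizon."""
    return 2 ** (horizon_hours / doubling_hours)

YEAR_HOURS = 365 * 24  # one-year horizon, an assumption made for this example

# Doubling every 18 months (taken as 18 * 30 * 24 hours): about 1.6x per year.
print(growth_factor(YEAR_HOURS, 18 * 30 * 24))  # ~1.6

# Doubling every 12 hours: 2 ** 730 per year, an astronomically large factor.
print(growth_factor(YEAR_HOURS, 12))  # ~5.6e219
```

No human practice of reading, checking, and cataloguing scales like the second figure, which is why tools for managing data become indispensable.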

In this regard, many researchers have seen risks to current search engines ("the death of Google"[4]), to education[5],[6], and to creativity[7],[8]. Many teachers forbid students to use ChatGPT when preparing assignments, or even cancel tasks such as essay writing, replacing them with oral presentations. An ironic example: a student passed an exam in a course on "AI Ethics" with an essay generated by ChatGPT[9]. Yet ChatGPT can be very useful, for example, for checking text written in a non-native language[10] or for generating code. The question is how to apply this tool: by "demystifying the technology and clearly explaining its limitations"[11], or, as another commentator advises, "Use them to improve, not do, your work"[12]. A critical and analytical approach is needed to evaluate and apply content produced by ChatGPT.

ChatGPT has "a bazillion practical uses that can make our lives easier and more efficient"[13]. But at such speed of action a "moral vertigo"[14] can occur, in which the norms and rules of ethical behavior no longer seem immutable and categorical[15]. ChatGPT can thus be perceived by users as an oracle that delivers revelations on complex issues: how to reform the education system, what the future of the creative industries is[16], how to write a successful grant proposal, and so on. Or as a diagnostician who provides concentrated, systematic information about the causes of symptoms, all the more convincing given the lack of alternative presentations of the data[17]. Or as a joker provoking unethical behavior[18]. Of course, the program contains the obviously needed moral "safeguards": it is pointless to ask ChatGPT how to cheat on taxes, plan a terrorist attack, or commit other illegal actions[19]. At the same time, the program is open to manipulative or sophistical influence. Depending on the form of the request, when pranksters use informal vocabulary and an informal style of speech, these "ethical fuses" fail, and the program's responses may contain direct calls for the violation of social norms and for violence[20]. But even in ordinary use of ChatGPT there are significant ethical risks and limitations, associated with incorrect data (or even jokes) and content manipulation, privacy violations[21], copyright infringement, discrimination, and impact on vulnerable groups of people[22],[23],[24],[25].

Displacing authentic reality

But despite all these important factors, ChatGPT itself is not the most serious challenge. The principle of its operation is combinatorics over data, or "stochastic parroting"[26]. The harder question is what the goal is of producing technologies that can "displace" authentic reality, such as Voicemod[27] or free AI voice-cloning tools[28]. Such programs produce a new, "fourth order" of simulacra[29], in J. Baudrillard's terminology. Recall that Baudrillard identifies three orders of simulacra: forgery, mass production, and simulation (propaganda). A distinctive feature of simulacra at these levels is a fundamental reference to reality: it remains possible to establish and justify the authenticity or artificiality (depending on the context, the falsity) of a statement or of an interpretation of fact. Simulacra of the fourth order, by contrast, transform real life itself.
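
To make the "stochastic parroting" point above concrete, here is a deliberately crude toy sketch: a bigram model that can only recombine the word sequences it has already seen, sampled by observed frequency. It illustrates the principle of probabilistic recombination of data, not ChatGPT's actual architecture (a large neural network over subword tokens); the corpus and names are invented for the example:

```python
# A toy "stochastic parrot": a bigram model that can only recombine
# word sequences observed in its training text, sampled by frequency.
import random
from collections import defaultdict

# Invented miniature corpus, purely for illustration.
corpus = "the model repeats the patterns the data contains".split()

# Record which words were observed to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def parrot(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling an observed successor word."""
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: nothing ever followed this word
        words.append(random.choice(options))  # sample, do not "understand"
    return " ".join(words)

print(parrot("the"))  # e.g. "the patterns the data contains"
```

Even at this toy scale, the output can read as fluent while carrying no commitment to truth, which is why generated content needs the critical evaluation argued for above.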

Since ancient times, a person's sounding voice has been an act of taking responsibility for the truth of a statement; consider the Stoic "lekton", a sounding voice that brings order and regularity to the chaos of change. Current technologies can not only make an individual's voice anonymous but also fabricate words and deeds the person never committed. Such a tool can be used directly for manipulation at the political, public, and personal levels. And in the context of distance education, it casts doubt on the legitimacy of a student's oral answer in an exam.

There is a direct dependence between moral norms and the liberal organization of society: civil society resists dictatorship by virtue of a sufficient level of ethical culture, and where such internal regulators are not effective enough, severe laws step in. "Big Brother" is watching you… Soon…


References

[1] "DALL-E 2 [is a] deep learning model developed by OpenAI to generate digital images from natural language descriptions, called 'prompts'" (https://en.wikipedia.org/wiki/DALL-E). Keywords used to form the image: communication, simulacra, ChatGPT, reality, fictions.

[2] https://www.theweek.co.uk/news/technology/958787/chat-gpt-generative-ai-and-the-future-of-creative-work

[3] Jurkiewicz, C. L. (2018). Big Data, Big Concerns: Ethics in the Digital Age. Public Integrity, 20(sup1): International Colloquium on Ethical Leadership: Past, Present, and Future of Ethics Research, S46–S59. https://doi.org/10.1080/10999922.2018.1448218

[4] https://www.the-sun.com/tech/7242493/chatgpt-death-of-google-ai-analyst/

[5] https://blogs.iadb.org/educacion/en/chatgpt-education/

[6] https://www.grid.news/story/technology/2023/02/21/is-chatgpt-the-future-of-cheating-or-the-future-of-teaching/

[7] https://creative.salon/articles/features/qotw-chatgpt-creative-tool-threat-to-creativity

[8] https://www.vanityfair.com/news/2022/12/chatgpt-question-creative-human-robotos

[9] https://gizmodo.com/ai-chatgpt-ethics-class-essay-cheating-bing-google-bard-1850129519

[10] https://www.nytimes.com/2022/12/21/technology/personaltech/how-to-use-chatgpt-ethically.html

[11] https://www.scu.edu/ethics-spotlight/generative-ai-ethics/chatgpt-and-the-ethics-of-deployment-and-disclosure/

[12] https://www.nytimes.com/2022/12/21/technology/personaltech/how-to-use-chatgpt-ethically.html

[13] https://medium.com/swlh/7-ways-to-use-chat-gpt-ethically-in-your-day-to-day-9b9729c7ba5b

[14] https://theconversation.com/chatgpt-dall-e-2-and-the-collapse-of-the-creative-process-196461

[15] https://www.wsj.com/articles/is-there-anything-chatgpt-kant-do-openai-artificial-intelligence-automation-morality-immanuel-kant-philosophy-91f306ca

[16] https://theconversation.com/chatgpt-dall-e-2-and-the-collapse-of-the-creative-process-196461

[17] https://bioethicstoday.org/blog/chatgpt-ethics-temptations-of-progress/#

[18] Zhuo, T. Y., Huang, Y., Chen, C., & Xing, Z. (2023). Exploring AI Ethics of ChatGPT: A Diagnostic Analysis. arXiv. https://doi.org/10.48550/arXiv.2301.12867

[19] https://medium.com/@adilmarcoelhodantas/ethics-in-chatgpt-and-other-ais-ee31ce8e9f09

[20] https://futurism.com/amazing-jailbreak-chatgpt

[21] https://www.latimes.com/opinion/story/2022-12-21/artificial-intelligence-artists-stability-ai-digital-images

[22] https://doi.org/10.48550/arXiv.2301.12867

[23] https://incora.software/insights/chatgpt-limitations

[24] https://dataethics.eu/testing-chatgpts-ethical-readiness/

[25] https://www.europarl.europa.eu/RegData/etudes/STUD/2020/654179/EPRS_STU(2020)654179_EN.pdf

[26] Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). Association for Computing Machinery. https://dl.acm.org/doi/10.1145/3442188.3445922

[27] https://www.voicemod.net/fr/

[28] "4chan users embrace AI voice clone tool to generate celebrity hate speech" – The Verge

[29] "For Baudrillard, the simulacrum is essentially the copy of a copy, that is to say, the copy of something that is not itself an original, and is hence an utterly degraded form." (https://www.oxfordreference.com/display/10.1093/oi/authority.20110803100507502)

Creative Commons Licence

AUTHOR: Olena Yatsenko

Olena Yatsenko is a visiting researcher at the Laboratory of Virtual Reality and Robotics at the Bern University of Applied Sciences and a philosophy lecturer at the National Pedagogical Drahomanov University (Kyiv, Ukraine).
