Approach ChatGPT Like an Online Community and You're Good to Go


Reading across mainstream media and scientific magazines, it is hard not to buy into a collective frenzy proclaiming that generative AI systems like ChatGPT are bringing about a “paradigm change” in the way we work with information in general and search in particular. Some scientists go as far as comparing the looming widespread use of generative AI systems like ChatGPT to the societal changes brought about by the introduction of the typewriter or even the emergence of written language itself (LMU 2023). In instruction, ChatGPT is expected to be used regardless of individual reservations: “whether I like it or not, whether I feel safe or not, this is changing our landscape and information space” (SinhaRoy 2024).

“Be careful – you never know what you’ll get!” is crucial advice when engaging with online communities and ChatGPT-like AI alike.

ChatGPT’s performance is intriguing in that it produces, at times, responses so elaborate they look as if they were authored by skilled human writers. At other times, it produces texts that appear convincing thanks to their authoritative voice but fall apart when examined more closely, to the point that some of them are utter nonsense. The literature is full of glorious examples (Knight 2022), and counting. ChatGPT may also fabricate information outright, such as the titles of publications that do not exist. The computational processes that produce such false information are often anthropomorphized as “hallucinating,” which is unfortunate because it downplays the fact that the system is malfunctioning. As a result of fabricated citations, scientists have been approached for copies of papers they never wrote (e.g., Lemire 2023). When I queried ChatGPT about the top five publications I had written on embodied information behavior, it returned five references that looked relevant but did not exist. When I repeated the query a year later, in January 2024, ChatGPT was coy at first (“I’m sorry, but I cannot provide real-time or specific research paper listings”), but after a bit of probing it delivered two references that looked convincing; once again, neither actually exists.
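None of this means readers are helpless. One practical habit, in the spirit of treating ChatGPT’s output the way one would treat a tip from a stranger online, is to verify any citation before relying on it. The sketch below is a minimal illustration of that habit, not part of the original argument: it assumes the public Crossref REST API and the Python requests library, and the query title is hypothetical. An empty or poorly matching result is a strong hint that a reference was fabricated.

```python
import requests

CROSSREF_API = "https://api.crossref.org/works"

def crossref_lookup(title, rows=3):
    """Search Crossref for works whose bibliographic data matches `title`.

    Returns a list of (matched_title, doi) pairs; an empty list suggests
    the citation may not exist in the scholarly record.
    """
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [((item.get("title") or ["<untitled>"])[0], item.get("DOI"))
            for item in items]

# Hypothetical title, as ChatGPT might return it; verify before citing.
for matched, doi in crossref_lookup("Embodied information behavior in everyday life"):
    print(f"{matched} -> https://doi.org/{doi}")
```

Checking the top matches by eye takes seconds and would have caught every one of the nonexistent references described above.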

A tool, not a source

Making sense of complex systems like expert systems or generative AI systems like ChatGPT is important because it helps people manage expectations regarding truthfulness, accuracy, and responsibility. Crucially, people may not always be aware that they are using ChatGPT or similar generative AI products, since the technology is deeply integrated into familiar products ranging from search engines to word processors.

Wolfram (2023) provides a thorough scientific investigation of the inner workings of ChatGPT-like systems, but only very few of the countless people using ChatGPT will read the paper, let alone understand its sophisticated mathematical concepts. Smith (2023) cites the frequently promoted view that “[ChatGPT] is a tool, not a source.” However, suggesting that ChatGPT be viewed as a tool may be confusing, since we expect tools to function properly when used properly. Or, to emphasize the point I made earlier: tools don’t hallucinate.

In what follows, I offer a way to make sense of ChatGPT’s performance by comparing it to the more common experience of engaging with online communities. The comparison makes sense in many ways, not least because Large Language Models (LLMs) may actually be repurposing text patterns learned from examples scraped from online community discussions, typically without asking for permission.

How people engage with ChatGPT

In many ways, engaging with ChatGPT resembles engaging with online communities. If we look at an online community as if it were an information system (e.g., Schwabe and Prestipino 2005), information quality criteria including completeness, structure, personalization, and timeliness can be used to measure the quality of the information that online communities return in response to queries (participant questions).

When I explored how online community members respond to queries, I identified patterns of behavior that go well beyond the communities’ documented capacity as information systems (Lueg 2006). Specifically, members of online communities would often engage with information seekers’ requests in ways that resemble the work of a skilled intermediary (Lueg 2007). I introduced the terms ‘mediation’ and ‘expansion’ to describe the phenomenon that online community members often help information seekers understand, and often also reconsider, the information needs they articulated when approaching the community.

While engaging with online communities often delivers quality information in terms of completeness, structure, personalization, and timeliness, community members may also deliver information that is outdated, plain wrong, and/or deliberately misleading. For example, in the context of fandom, Lee et al. (2022) identified a wide variety of mis-/disinformation, such as evidence collages, disinformation, playful misrepresentation, and perpetuation of debunked content, all of which have also been observed in other contexts including politics, health, and disasters. While the online community these researchers explored had tactics in place to mitigate the effects of mis-/disinformation, their findings are a reminder that, depending on the context, caution is advised when considering information provided by online communities. This is particularly important when information seekers lack subject expertise of their own.

Viewing engagement with ChatGPT as similar to engagement with online communities could help people make sense of the complex computational technology and manage their expectations regarding truthfulness, accuracy, and responsibility. The latter seems especially pertinent: no one would expect an online community to accept responsibility for the information it provides, whereas anthropomorphizing ChatGPT as a trustworthy “expert” might invite exactly that misplaced trust, against better knowledge. Like expert systems before it, ChatGPT should not be seen as a reliable expert that solves one’s information problems but, at best, as a decision support system that still requires substantial knowledge on the part of the information seeker.


References

  1. Knight, W. (2022). ChatGPT’s Most Charming Trick Is Also Its Biggest Flaw. Wired, December 7, 2022. https://www.wired.com/story/openai-chatgpts-most-charming-trick-hides-its-biggest-flaw/
  2. Lee, J. H., Santero, N., Bhattacharya, A., May, E., & Spiro, E. (2022). Community-based strategies for combating misinformation: Learning from a popular culture fandom. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-105
  3. Lemire, D. (2023). Tweet posted March 6, 2023. https://twitter.com/lemire/status/1632823733617324033
  4. LMU (2023). ChatGPT: Einschätzungen von LMU-Forschenden [ChatGPT: Assessments from LMU researchers]. Blog post, February 20, 2023. https://www.lmu.de/de/newsroom/newsuebersicht/news/chatgpt-einschaetzungen-von-lmu-forschenden.html
  5. Lueg, C. (2006). Mediation, Expansion and Immediacy: How Online Communities Revolutionize Information Access in the Tourism Sector. Proc. ECIS 2006, Göteborg, Sweden. https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1106&context=ecis2006
  6. Lueg, C. (2007). Querying Information Systems or Interacting with Intermediaries? Towards Understanding the Informational Capacity of Online Communities. Proc. ASIS&T 2007, Milwaukee, WI, USA. https://asistdl.onlinelibrary.wiley.com/doi/10.1002/meet.1450440249
  7. Schwabe, G., & Prestipino, M. (2005). How Tourism Communities Can Change Travel Information Quality. Proc. ECIS 2005. https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1147&context=ecis2005
  8. SinhaRoy, S. (2024). Is ChatGPT a Liar? Survey Looks at Library Professionals’ Attitudes Toward the AI-Powered Chatbot. American Libraries, January 20, 2024. https://americanlibrariesmagazine.org/blogs/the-scoop/is-chatgpt-a-liar/
  9. Smith, C. (2023). Information Literacy for the ChatGPT Age: What Library Workers Should Know About Generative AI. American Libraries, June 25, 2023. https://americanlibrariesmagazine.org/blogs/the-scoop/information-literacy-chatgpt/
  10. Wolfram, S. (2023). What Is ChatGPT Doing … and Why Does It Work? February 14, 2023. https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

AUTHOR: Christopher P. Lueg

Christopher Lueg is a professor at the University of Illinois. Before that, he was a professor of medical informatics at BFH Technik & Informatik. He has made it his mission to rid the world of bad user interfaces. He has taught Human Centered Design and Interaction Design for more than a decade at universities in Switzerland, Australia, and the USA.
