The ChatGPT hype

For Dutch / voor Nederlands: De ChatGPT-hype.

Recently, social media blew up over the newest technological hype that will shake up our educational system: ChatGPT. Having a background in AI and years of experience working in higher education, I was surprised by the grand statements and the lack of solid, high-quality examples of how the application is used in education. Even the risk analyses that sometimes, quite often as a footnote, flit across my screen do not address the biggest issue: ChatGPT is nothing more than a very powerful bullshit generator. This explains its popularity on social media, but the fact that this technology is seen as groundbreaking makes me worry about the amount of bullshit both inside and outside our education system.

In this blog we examine what ChatGPT is and, more importantly, what it is not. I hope to add a touch of realism to the discussion about using ChatGPT in education.

ChatGPT

ChatGPT is a language model. Given a sequence of words, such as a question, it predicts which word is most likely to follow [1]. This is not new; a familiar example is the automatic word suggestions on your phone. ChatGPT, however, is many times more powerful and very good at predicting which word comes next. The application is so good that many examples of interesting conversations with the chatbot can be found (for example here: [3]).
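The core idea of next-word prediction can be illustrated with a toy sketch. This is emphatically not how ChatGPT works internally (it uses a very large neural network trained on an enormous corpus); the sketch below simply counts, in a tiny made-up corpus, which word most often follows which:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often, so: cat
```

The model has no idea what a cat is; it only knows which words tend to follow which other words. Scale this idea up enormously and you get something that sounds remarkably fluent.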

It is essential to realise that predicting words is the only thing ChatGPT can do, a fact often overlooked in the social media posts we have seen in the last few weeks. A general AI, one that can win a game of chess and make a croque monsieur, is still a dream [2]. Because ChatGPT can only predict which words will follow in a text, we ask it a different question than we think we do. If we type ‘The first person to walk on the Moon was?’ into ChatGPT, what we actually ask is: ‘Given the statistical distribution of words in the vast public corpus of (English) text, what words are most likely to follow the sequence “The first person to walk on the Moon was?”’ [12]. ChatGPT responds with ‘Neil Armstrong’, and many people interpret this as ChatGPT having actual historical knowledge: a serious misinterpretation. The error becomes even clearer if we replace that question with another, such as ‘how much is 5 + 5?’. As we educators know, it is not only important whether the answer is correct, but also how you got there. Knowing that 10 often follows 5 + 5 is different from actually being able to calculate it. It is therefore not surprising that ChatGPT makes many mistakes.
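The difference between predicting and calculating can be made concrete with a small sketch (a deliberately crude caricature, with made-up data; real language models are statistical rather than an exact lookup table, but the failure mode is the same): a model that has memorised what usually follows ‘5 + 5’ falls over on any sum it has not seen, while actual arithmetic generalises.

```python
# Caricature: memorised text statistics versus actual computation.
seen_in_text = {"5 + 5": "10", "2 + 2": "4"}  # sums observed in "training" text

def predict(question):
    """Lookup: return whatever followed this exact phrase before."""
    return seen_in_text.get(question, "30")  # unseen sums get a fluent, wrong answer

def calculate(question):
    """Arithmetic: parse both operands and actually add them."""
    a, b = (int(part) for part in question.split("+"))
    return str(a + b)

print(predict("5 + 5"))      # 10 -- looks like it can add
print(predict("17 + 25"))    # 30 -- confident, fluent, wrong
print(calculate("17 + 25"))  # 42 -- computed
```

Crucially, `predict` gives no signal that its second answer is nonsense; it answers unseen questions just as confidently as seen ones.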

ChatGPT and bullshit
So, ChatGPT is very good at predicting words, but it cannot calculate [6, 7], program [14], reference sources [8, 9, 10] or be trusted to tell us about our history [16]. ChatGPT’s success is mostly due to its strength in convincing people that it gives meaningful answers [4]. ChatGPT itself has no idea whether its answers are true or not. Many examples can be found of ChatGPT making mistakes, but making an error or giving a false answer is not the problem. The real problem is that ChatGPT does not care whether it gives you a true or a false statement [13, 11]. The American philosopher Harry G. Frankfurt describes this as “bullshitting” and sees it as a much more serious threat than a lie:

“The liar cares about the truth and attempts to hide it; the bullshitter doesn’t care if what they say is true or false, but cares only whether the listener is persuaded.” [5]

For liars, truthfulness still plays a role in the game they play; bullshitters disregard it entirely.

Frankfurt writes that people accept bullshit more readily than lies, even though bullshit is far more damaging. That the makers of ChatGPT follow this line of reasoning becomes clear when we look at how ChatGPT ‘learns’ from its mistakes. At the moment you only get a refusal when you explicitly ask for an incorrect mathematical proof, made-up references or an outright lie: clear lies that ChatGPT previously produced whether you asked for them or not (!). This does not mean that ChatGPT no longer makes these mistakes, only that the bullshit hides them better [10].

ChatGPT, bullshit and education
In the current discussion about whether ChatGPT is groundbreaking in an educational setting, we must first ask ourselves what role bullshit plays in education (and in our society). What does it say about a student’s (future) profession that a task (such as writing a social media post) can also be done by a bullshit generator? And even more importantly: what does it say about the assignments we give our students if we are afraid a bullshit generator could actually do them?

The answers to these questions do not rule out including ChatGPT in an educational setting. It is, at the very least, a good way to make students familiar with the (im)possibilities of AI. The OOK-team is available to support with questions regarding AI and education within the HAN University of Applied Sciences.

ChatGPT can be found on: https://openai.com/blog/chatgpt/
More information on AI in education within the HAN University of Applied Sciences can be found at (in Dutch):
https://www.han.nl/onderwijsondersteuning/leren-werken-met-ict/artificial-intelligence/

Translation from Dutch by: Sigrid Noordam

References:

[1] T. van Osch, “From Eliza to ChatGPT: the stormy development of language models”, surf.nl, https://communities.surf.nl/en/artificial-intelligence/article/from-eliza-to-chatgpt-the-stormy-development-of-language-models (Accessed 11 January 2023)

[2] N. Kasteleijn, “Computer verslaat grootmeester bordspel, maar kan geen tosti maken” [“Computer beats grandmaster at board game, but cannot make a toastie”], nos.nl, https://nos.nl/artikel/2175632-computer-verslaat-grootmeester-bordspel-maar-kan-geen-tosti-maken (Accessed 11 January 2023)

[3] M. J. White, “Top 10 Most Insane Things ChatGPT Has Done This Week”, springboard.com, https://www.springboard.com/blog/news/chatgpt-revolution/ (Accessed 11 January 2023)

[4] G. N. Smith, “An AI that can “write” is feeding delusions about how smart artificial intelligence really is”, salon.com, https://www.salon.com/2023/01/01/an-ai-that-can-write-is-feeding-delusions-about-how-smart-artificial-intelligence-really-is/ (Accessed 11 January 2023)

[5] H. G. Frankfurt, “On Bullshit”, Princeton University Press, 2009. (Recommended reading! A summary is available on Wikipedia: https://en.wikipedia.org/wiki/On_Bullshit )

[6] “Why is ChatGPT bad at math?”, stackexchange.com, https://ai.stackexchange.com/questions/38220/why-is-chatgpt-bad-at-math (Accessed 11 January 2023)

[7] Sekhar M, “Arguing With AI Over A Mathematics Problem — Meet ChatGPT”, medium.com, https://medium.com/mlearning-ai/arguing-with-ai-over-a-mathematics-problem-meet-chatgpt-c8c1ceb9b264 (Accessed 1 January 2023)

[8] “ChatGPT produces made-up nonexistent references”, ycombinator.com, https://news.ycombinator.com/item?id=33841672 (Accessed 11 January 2023)

[9] “ChatGPT returns incorrect academic references. Help.”, reddit.com, https://www.reddit.com/r/ChatGPT/comments/zpgayt/chatgpt_returns_incorrect_academic_references_help/ (Accessed 11 January 2023)

[10] T. Hirst, “Information Literacy and Generating Fake Citations and Abstracts With ChatGPT”, ouseful.info, https://blog.ouseful.info/2022/12/16/information-litteracy-and-generating-fake-citations-and-abstracts-with-chatgpt/ (Accessed 11 January 2023)

[11] T. Kubacka, twitter.com, https://twitter.com/paniterka_ch/status/1599893718214901760 (Accessed 1 January 2023)

[12] M. Shanahan, “Talking About Large Language Models”, arXiv preprint arXiv:2212.03551 (2022). (https://arxiv.org/pdf/2212.03551.pdf)

[13] R. Goodwins, “ChatGPT has mastered the confidence trick, and that’s a terrible look for AI”, theregister.com, https://www.theregister.com/2022/12/12/chatgpt_has_mastered_the_confidence/ (Accessed 11 January 2023)

[14] B. Vigliarolo, “Stack Overflow bans ChatGPT as ‘substantially harmful’ for coding issues”, theregister.com, https://www.theregister.com/2022/12/05/stack_overflow_bans_chatgpt/ (Accessed 11 January 2023)

[15] T. Ansari, “Freaky ChatGPT Fails That Caught Our Eyes!”, analyticsindiamag.com, https://analyticsindiamag.com/freaky-chatgpt-fails-that-caught-our-eyes/ (Accessed 11 January 2023)

[16] B. L. Turner, twitter.com, https://twitter.com/bltphd/status/1599806815146893313 (Accessed 11 January 2023)

[17] Image generated by Midjourney. Image is used under the Creative Commons Attribution-NonCommercial 4.0 International License. (https://midjourney.gitbook.io/docs/terms-of-service#4.-copyright-and-trademark)