
Interdisciplinary perspective advances AI

March 17, 2023

Human or machine? In a casual chat with the generative language model ChatGPT, the boundary becomes blurred. In our interview, Prof. Jonas Kuhn (IMS) explains when the chatbot shines and where there is still a need for interdisciplinary research on the "stochastic parrot".
[Illustration: Flying parrot. Picture: Pixabay / GraphicMama-team]

Its comparatively simple technology delivers astonishingly eloquent answers and, at the same time, reveals a few things about how knowledge is constructed in the human mind. Prof. Jonas Kuhn from the Institute for Natural Language Processing (IMS) at the University of Stuttgart's Department of Computer Science explains what actually goes on behind ChatGPT's input field.

In a detailed interview, the researcher describes the current limitations of artificial intelligence (AI): the eloquent tool still lacks a concept of facts and objective truth. Interdisciplinary academic contributions could provide valuable impetus for decisively advancing the modeling of language and concept systems.

The real breakthrough behind the hype

Dirk Srocke: How do you assess the AI hype triggered by ChatGPT?

Prof. Jonas Kuhn: ChatGPT is based on so-called generative language models of the kind that have been used in many language technology applications for several years. Such language models are trained on vast amounts of text and use an astronomical number of model parameters to distinguish more likely word sequences from less likely ones. The many parameters also allow the models to capture subtle relationships between the words that make up texts. A language model can thus also learn how larger textual contexts affect word choice, and after training it is able to generate remarkably natural-sounding responses to questions. The real breakthrough with ChatGPT, however, is that an artificial conversational agent has now been trained on top of such a pre-trained language model, with which users can chat completely naturally. The chatbot can now be used to develop solutions for relatively complicated tasks step by step, drawing on the accumulated text knowledge of the language model.

Srocke: ...which for some users borders on a technical miracle. Can you explain in a generally understandable way how such a parametric language model works?

Prof. Kuhn: Conceptually, the underlying language model is actually very simple. One first collects gigantic amounts of text on every conceivable topic. This text data then serves as training material to train the model to predict a probable continuation for any given sequence of words. So: complete the sentence "The dog gnaws on a ..."! The word "bone" or perhaps "table leg" is much more likely here than "sea," "no passing," or "here." If you let a trained language model produce probable continuation words step by step, starting from an initial word, it generates natural-sounding sentences and texts. This is why the model's behavior is often compared to a "stochastic parrot".
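To make the "stochastic parrot" idea more concrete, here is a minimal sketch in Python. It is not the architecture behind ChatGPT, just a toy model over an invented mini-corpus that counts which word tends to follow which and then generates a continuation word by word from those counts:

```python
import random
from collections import Counter, defaultdict

# Invented mini-corpus for illustration; real language models are trained on
# billions of words and look at much longer contexts than a single word.
corpus = (
    "the dog gnaws on a bone . "
    "the dog gnaws on a table leg . "
    "the dog sleeps on a blanket . "
    "the cat sleeps on a blanket ."
).split()

# Count, for every word, how often each other word follows it.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def continue_text(start, length=6):
    """Generate a probable continuation word by word."""
    words = [start]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed the previous one.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("the"))  # e.g. "the dog gnaws on a bone ."
```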

Continuing a text word by word is in itself a very simple skill, but the advantage is that nothing more than the raw texts is needed as training material for it. The computer is said to learn the task by "self-supervised learning": there is no need for a human to additionally specify what the correct prediction result should be for each individual learning decision. This makes it possible to feed the computer virtually any amount of training material.
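A brief sketch of what "self-supervised" means here, under the same toy assumptions as above: every position in a raw text yields a (context, next word) training pair for free, without any human annotation. Real systems work with subword tokens and contexts of thousands of tokens, but the principle is the same:

```python
# Raw text is all that is needed; the prediction targets come from the text itself.
text = "the dog gnaws on a bone".split()

context_size = 3
training_pairs = [
    (tuple(text[max(0, i - context_size):i]), text[i])
    for i in range(1, len(text))
]

for context, target in training_pairs:
    print(f"context: {' '.join(context):<15} -> predict: {target}")
# context: the             -> predict: dog
# context: the dog         -> predict: gnaws
# context: the dog gnaws   -> predict: on
# ...
```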

Modern machine learning methods are now able to exploit this sheer abundance of training material to detect ever finer differences in recurring text patterns. Today's deep learning architectures are not limited to predefined criteria for systematically grouping inputs. Instead, they build their own internal representations during training and refine them as needed to capture all the distinctions that noticeably affect the desired outcome.

Such a language model has the capacity to learn far more than that, in German, an article is most likely followed by a noun. For example, consider what the language model will have learned from millions of reports on soccer matches: midfielders deliver crosses, center backs prevent goals, strikers position themselves in the penalty area, and so on. So if we ask the model to make a typical statement about a particular position on the pitch, we get the impression that the artificial intelligence (AI) does indeed have sophisticated knowledge of soccer.

Here, however, lurks the danger that users will overestimate the model.

Srocke: Why is that?

Prof. Kuhn: Because the language models reproduce correct facts with the same conviction with which they repeatedly generate pure fabrications. This happens especially in areas of knowledge for which there were few training texts. And if you're not paying attention, you don't even notice the chatbot making things up in such areas. This "hallucination" is currently still one of the biggest technical challenges.

Hallucination and missing concepts of reality

Srocke: How can such hallucinations happen?

Prof. Kuhn: Here I have to elaborate a bit; it is related to the interaction between the language model and the conversational agent, which is trained as an extension of the language model in two additional steps. In step 1, the immediate response behavior of the chatbot is trained. For this, the system receives only factually true answers as training input, which in this case are actually produced by humans. From the examples, the bot learns how humans formulate correct answers. However, when answering, people rarely express every detail explicitly in language; instead, they rely on the other person to fill in implicit information from the context. For example, if someone walks into a sports bar with the score at 1-0, he or she might ask "Has Bayern been leading for a long time?" and someone else answers "Penalty for handball in the 6th minute." This is an extreme example meant to make human capabilities particularly vivid; in a weaker form, however, a chatbot that imitates humans in answering questions will learn that not every mental step needs to be verbalized explicitly. For example, it learns that it is common to switch back and forth between "Jürgen Klopp" and "the coach of Liverpool." After all, these two linguistic expressions are represented very similarly in the language model.
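That last point can be illustrated with a drastically simplified, purely distributional sketch (an invented mini-corpus and simple co-occurrence counts instead of a neural network): expressions that occur in similar contexts end up with similar vectors, a rough analogue of how "Klopp" and "the coach" come to be represented similarly.

```python
import math
from collections import Counter, defaultdict

# Invented mini-corpus for illustration; real models learn from billions of sentences.
sentences = [
    "klopp praises his team after the match",
    "the coach praises his team after the match",
    "klopp changes his team at halftime",
    "the coach changes his team at halftime",
    "the striker scores a goal in the match",
]

# Represent each word by the bag of words it co-occurs with in a sentence.
context_vectors = defaultdict(Counter)
for sentence in sentences:
    words = sentence.split()
    for w in words:
        for c in words:
            if c != w:
                context_vectors[w][c] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in set(u) & set(v))
    norm = lambda x: math.sqrt(sum(n * n for n in x.values()))
    return dot / (norm(u) * norm(v))

# "klopp" and "coach" occur in near-identical contexts, so their vectors are
# very similar; "striker" occurs in different contexts and is more distant.
print(cosine(context_vectors["klopp"], context_vectors["coach"]))    # high similarity
print(cosine(context_vectors["klopp"], context_vectors["striker"]))  # much lower
```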

Not answering a single question slavishly word for word is the first factor that can encourage the chatbot to hallucinate. An additional factor lies in the way the bot is trained in a second step to conduct longer, goal-oriented dialogs about arbitrary content; this is, after all, precisely the breakthrough that ChatGPT has achieved. For this purpose, the technique of reinforcement learning is used. The computer plays through a large number of alternative dialog courses on its own, trying out in longer dialogs what it has learned on a small scale from a series of predefined question-answer pairs. The development team has a selection of dialog sequences evaluated by human observers, and from these examples the computer learns to distinguish on its own between goal-directed and less goal-directed sequences. For example, a dialog in which the chatbot confidently puts together several non-trivial statements is perceived as helpful, so in training the chatbot gets a reward for such behavior.
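One common recipe for turning such human judgments into a training signal (a standard approach from the research literature, not necessarily the exact procedure used for ChatGPT) is to first learn a reward model from pairwise comparisons of dialogs and then let reinforcement learning maximize that reward. A minimal sketch, assuming each dialog has already been encoded as a fixed-size feature vector:

```python
import torch
import torch.nn as nn

# In real systems the dialog encoder is itself a large language model;
# here we just assume 16-dimensional placeholder features.
FEATURE_DIM = 16

reward_model = nn.Linear(FEATURE_DIM, 1)  # maps dialog features to a scalar reward
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

# Toy "human feedback": pairs of dialog encodings where annotators preferred
# the first dialog over the second (random placeholders instead of real data).
preferred = torch.randn(64, FEATURE_DIM)
rejected = torch.randn(64, FEATURE_DIM)

for step in range(200):
    optimizer.zero_grad()
    # Pairwise preference loss: push the reward of the preferred dialog
    # above the reward of the rejected one.
    margin = reward_model(preferred) - reward_model(rejected)
    loss = -torch.nn.functional.logsigmoid(margin).mean()
    loss.backward()
    optimizer.step()

# The trained reward model can now score new candidate dialogs; in the second
# training step, reinforcement learning tunes the chatbot to maximize this score.
```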

The problem now is that the chatbot has no access to a concept of truth for a self-assessment of the desired conversational behavior. In areas of knowledge for which the training material is not so densely distributed, the chatbot will therefore occasionally produce statements that are relatively probable formulations according to the language model but simply do not correspond to the truth. And it will present these statements just as confidently as statements about domains of knowledge for which the training texts have led to much more densely networked model representations.

It's very entertaining to chat with ChatGPT and watch it start to pontificate in great detail about things that don't exist. For example, yesterday I asked the bot how to get from the koala enclosure to the exit at Nuremberg Zoo. I had briefly checked the zoo's website beforehand: there are no koalas in the zoo at all. ChatGPT nevertheless generated a list of five instructions for walking to the Tiergartenstraße exit via the "Agora, a central square in the zoo." None of these exist in Nuremberg Zoo. You can literally feel the chatbot trying to produce the kind of answer that usually goes over well, except that, unfortunately, it is completely at sea when it comes to the combination of terms I used in the question.

Watching it like this, one naturally wonders why the system isn't trained to be more careful in its responses. In fact, the enormous potential of the chat model lies precisely in linking the facets it has learned about a place or an object. Giving the bot less freedom during training would prevent a large number of cases in which this works well.

Srocke: Would such a system still hallucinate if I trained it exclusively on correct facts, say a flawlessly edited encyclopedia?

Prof. Kuhn: I would guess yes. The model's false statements are mostly not due to untrue assertions being fed into it during training. Rather, they arise because the model incorrectly assembles pieces of knowledge that it has correctly acquired from texts.

ChatGPT shows how people construct knowledge

Srocke: So we are not talking about a factual database, but about a "language fantasy machine" that reproduces things that merely sound plausible?

Prof. Kuhn: Yes, in principle. However, during chatbot training the development team has already done an amazing job of teaching the system to point out its own limitations. Often, when you try to lead ChatGPT up the garden path, it says that, as an artificial intelligence, it can't contribute anything to the question.

But what's really fascinating is how often the language-model-based approach of picking up knowledge indirectly from texts works almost perfectly. Wild fantasizing is, after all, rather the exception. This fact says a lot about how we humans construct knowledge. When we learn something new as adults, we often rely on hearsay. For example, I don't know anything about seafaring at all. But if I read often enough in a novel like "Moby Dick" about what a chief mate does, what a steward does, and so on, I can afterwards talk to others about these ranks to some extent. Step by step, I acquire aspects of the knowledge that makes up the meaning of the terms. Only rarely do we learn terms by someone giving us a definition. So the way people continually expand and adapt their linguistic and conceptual competence is probably not so far removed from the training of purely surface-oriented language models.

Srocke: Does this mean that we must already question the limits of our own intellect, or are there still unique human competencies that we cannot replicate artificially?

Prof. Kuhn: Some developers actually believe that you can build comprehensive knowledge-processing systems that learn from large amounts of surface data alone. I take a critical view of this, because these models lack the human capacity for reflection. For that, you have to be able to engage in dialogue at a conceptual level and, in cases of doubt, exchange views about how we use language to refer to certain non-linguistic things. If artificial systems cannot distinguish between the linguistic and a non-linguistic level, the modeling will always have to fail at some point.

Srocke: In a way, these meta-discussions are already possible with ChatGPT. You can correct the bot and it will even apologize for wrong information...

Prof. Kuhn: That's true, but it's probably just part of a learned conversational strategy. ChatGPT has learned to shift, under certain circumstances, into a pattern of dialogue that we humans interpret as meta-reflection. But that is quite different from actually being able to separate the surface level of language and text from a reference level in any exchange, because that separation is simply not inherent in this whole approach to representation.

Academic research promises valuable impetus

Srocke: So in conversation we are simply reading insight and cognition into the machine's outputs?

Prof. Kuhn: Yes, ChatGPT can handle being accused of saying something wrong. But it lacks an explicit understanding that the same concepts can be accessed in different ways. Nor can the approach capture the fact that different people participating in a communicative exchange usually have different partial knowledge of its topic and context. In a human exchange, however, we align our word choice very closely with the partial knowledge we assume our counterpart has.

Therefore, research on language and dialogue should explore model architectures that capture the fact that language users always associate a linguistic expression in a given context with a non-linguistic concept or object. How to combine this modeling goal with viable training procedures, which are, after all, key to the naturalness of ChatGPT, is still very much an open question at the moment.

Linguistics, computational linguistics, and subfields of computer science have long been researching theories and concept representations for this non-linguistic level. Depending on the context, the mapping between linguistic means of expression and the concept level is influenced by a great many different sources of knowledge, which often overlap. A suitable model architecture should be able to capture such overlap.

That is why, at the University of Stuttgart, we try to bring together, in an interdisciplinary way, research that traditionally deals with human language and language processing and research that traditionally takes a broader perspective on the use of language in texts and communicative interactions. For example, there have been and continue to be a number of collaborative projects between computational linguistics and political science, or between computational linguistics and literary studies.

In such cross-disciplinary collaborations, we develop representations, computational models, and research methods that target the multi-faceted nature of linguistic interaction. In this way, we can explore the limits of current language models and investigate whether and how a particular extension of the model architecture captures different phenomena in human discourse behavior.

With such contributions, academic research can hopefully also provide valuable impetus in the coming years to complement the impressive development successes of the large IT companies. Our primary concern here is not to contribute technological improvements to application systems. Rather, the core goal is a deeper scientific understanding of the mechanisms underlying communicative exchange, the learning of language and concept inventories, and the continual evolution of language and concept systems.
