Despite the inherent unexplainability of LLMs, convincing attempts at explaining their brilliance are being made.
Table of contents
- I. Introduction
- II. Intentions and messages
- III. Latent space model
- IV. Inferring intention with marginal distribution of messages
- V. In-context learning
- VI. Chain-of-thought prompting
- VII. Conclusion
I. Introduction
It goes without saying that LLMs (Large Language Models) are a revolutionary technology. Not only are their science-fiction-like capabilities mesmerizing for users and promising for investors, but the lack of apparent limits to their potential has fueled a seemingly never-ending wave of ever-larger and increasingly impressive models. The underlying scaling laws remain elusive, and the ongoing discussions around terms like emergent abilities (which are essentially attempts to formalize unpredictability) strongly indicate we're not nearing any definitive conclusions.
However, researchers are tirelessly working towards the goal of understanding LLMs. Given the inherent lack of explainability in machine learning, many results revolve around empirical verification of what LLMs are capable of. Nevertheless, some take on a closer evaluation of the architecture itself (an effort hampered by the prohibitive costs of training), trying to answer some of the why questions, even if the answers can only be speculative.
One of the theories possibly explaining the scaling laws of LLMs is that they are approximators of the latent space model. In this article, we’ll take a closer look at this idea (as defined in [1]), and briefly summarize how it might relate to the crucial properties of LLMs, such as in-context learning or chain-of-thought prompting.
II. Intentions and messages
Before LLMs, many attempts at automated generation of coherent text had been made, starting in the 1960s (e.g. ELIZA). Some of them revolved heavily around the concepts of messages and intentions, which we'll also be using in this article.
A message is an instance of the simplest unit in a language sufficient to convey meaningful information (for instance, in a programming language this would be a statement, and in a spoken language this would be a sentence). An example of a message could be ‘I have a cat’.
An intention is an instance of the simplest unit of information that we would like to convey during communication. Intentions are expressed with messages, and one intention can usually be expressed with many different messages. For example, the messages 'I have a cat', 'I own a cat' and 'My pet is a cat' all describe the same intention.
In some languages, the intention can be recovered from the message with certainty. This holds by design for programming languages (the action described by a statement is always clear, though not always meaningful). Such languages are called unambiguous. Other languages (most importantly, the spoken ones) are called ambiguous, and inferring intentions from their messages usually involves some degree of uncertainty.
III. Latent space model
While the syntactic rules of natural languages are usually precise (though sometimes a bit convoluted) and can be easily translated to a computer program, capturing the semantics of human communication is much less trivial. The key obstacle is that in order to convincingly continue the communication, we have to be able to infer the underlying intention from the message itself. This is difficult, as the space of possible intentions is seemingly dependent on the world we live in.
To capture this intuition, the latent space model of stochastic language generation has been introduced.
The main purpose of the latent space model is to sample sequences of $n$ messages $x_1, x_2, \dots, x_n$, where $n$ is an integer parameter of the whole model. The probability of sampling an $n$-sequence is denoted by $\Pr[x_1, x_2, \dots, x_n]$. For $k \le n$, by $\Pr[x_1, \dots, x_k]$ we denote the probability that the sampled sequence starts with $x_1, \dots, x_k$.
Building on those distributions, we can easily define contextual sampling of continuations. The probability that a message $x_{k+1}$ follows a context of $x_1, \dots, x_k$ is defined as

$$\Pr[x_{k+1} \mid x_1, \dots, x_k] = \frac{\Pr[x_1, \dots, x_k, x_{k+1}]}{\Pr[x_1, \dots, x_k]}.$$
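To make these two definitions concrete, here is a minimal Python sketch that computes a continuation probability as a ratio of two prefix probabilities. The joint distribution over 2-message sequences, and the helper names `prefix_prob` and `continuation_prob`, are entirely made up for illustration.

```python
# A minimal sketch, assuming a toy hand-made joint distribution over
# 2-message sequences (all values are invented for illustration).
JOINT = {
    ("I have a cat", "Her name is Daisy"): 0.30,
    ("I have a cat", "His name is Rex"):   0.10,
    ("I have a dog", "His name is Rex"):   0.45,
    ("I have a dog", "Her name is Daisy"): 0.15,
}

def prefix_prob(*prefix: str) -> float:
    """Pr[x_1, ..., x_k]: total probability of sequences starting with the prefix."""
    k = len(prefix)
    return sum(p for seq, p in JOINT.items() if seq[:k] == prefix)

def continuation_prob(message: str, *context: str) -> float:
    """Pr[x_{k+1} | x_1, ..., x_k] as a ratio of two prefix probabilities."""
    return prefix_prob(*context, message) / prefix_prob(*context)

print(continuation_prob("Her name is Daisy", "I have a cat"))  # 0.30 / 0.40 = 0.75
```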
The key assumption of the latent space model is that language generation is a 2-step process:
- first, we sample intention, which intuitively is “what we want to say”,
- then, out of all messages conveying our sampled intention, we sample one to decide “how we say it”.
Formally, $n$-sequences of messages are sampled as follows:
- we sample a sequence of intentions $\theta_1, \theta_2, \dots, \theta_n$ from a distribution $\Pr_\Theta[\theta_1, \dots, \theta_n]$. This distribution is another input parameter to the latent space model.
- then, for every intention $\theta_i$, we sample one of the messages conveying this intention from the distribution $\Pr[x_i \mid \theta_i]$. Similarly, the distributions $\Pr[\cdot \mid \theta]$ are input parameters of the latent space model.
Putting everything together, the probability of sampling a sequence of messages is equivalent to

$$\Pr[x_1, \dots, x_n] = \sum_{\theta_1, \dots, \theta_n} \Pr_\Theta[\theta_1, \dots, \theta_n] \cdot \prod_{i=1}^{n} \Pr[x_i \mid \theta_i].$$
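The two-step process is easy to prototype. Below is a minimal sketch of a toy latent space model with $n = 2$; the intentions, messages, helper names (`sample_sequence`, `marginal`) and every probability value are invented purely for illustration.

```python
import random

# A toy instantiation of the latent space model with n = 2. All distributions
# below are made up; the real ones are unknown and vastly larger.
PRIOR = {                       # input parameter: Pr_Theta over intention sequences
    ("HAS_CAT", "HAS_DOG"): 0.4,
    ("HAS_CAT", "HAS_CAT"): 0.1,
    ("HAS_DOG", "HAS_CAT"): 0.3,
    ("HAS_DOG", "HAS_DOG"): 0.2,
}
MESSAGES = {                    # input parameters: Pr[x | theta] for every intention
    "HAS_CAT": {"I have a cat": 0.6, "I own a cat": 0.4},
    "HAS_DOG": {"I have a dog": 0.7, "My pet is a dog": 0.3},
}

def sample_sequence(rng: random.Random) -> list[str]:
    """Two-step generation: sample an intention sequence, then one message per intention."""
    thetas = rng.choices(list(PRIOR), weights=PRIOR.values())[0]
    return [rng.choices(list(MESSAGES[t]), weights=MESSAGES[t].values())[0] for t in thetas]

def marginal(x1: str, x2: str) -> float:
    """Pr[x1, x2]: sum over intention sequences of Pr_Theta times the product of Pr[x_i | theta_i]."""
    return sum(p * MESSAGES[t1].get(x1, 0.0) * MESSAGES[t2].get(x2, 0.0)
               for (t1, t2), p in PRIOR.items())

rng = random.Random(0)
print(sample_sequence(rng))
print(marginal("I have a cat", "I have a dog"))  # 0.4 * 0.6 * 0.7 = 0.168
```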
The ideal latent space model is achieved by using input distributions that accurately reflect real-life data, and it is the holy grail of automated natural language generation.
Over the course of the last 40 years, many attempts at implementing the ideal latent space model have been made ([2], [3]). Most of them assumed that explicit descriptions of the true-to-life distributions $\Pr_\Theta[\theta_1, \dots, \theta_n]$ and $\Pr[x \mid \theta]$ are necessary, and for this reason none of them was convincing. A precise understanding of these distributions is difficult to achieve, as the notion of intention is very abstract, and the possible nuances in long sequences of intentions are extremely difficult to capture, even if we limit ourselves to a single task. In addition, the underlying intention space is directly connected to the world we live in.
The LLM-based approach is different, and revolves around using a model to approximate the marginal distribution $\Pr[x_1, \dots, x_n]$ based on the training data. What's interesting is that despite no intermediate step devoted to intention sampling, the estimate they achieve is an approximation of the ideal latent space model. Formally, it can be proven that
Transformer-based LLMs trained on a text corpus sampled from the ideal latent space model are approximators of the distribution $\Pr[x_1, \dots, x_n]$.
The key assumption is that the training corpus is sampled from the ideal latent space model. However, it is reasonable - we train LLMs on text produced by actual humans (or at least we hope so).
Intuitively, an LLM with an infinite number of parameters and trained on an infinitely large text corpus "knows" the exact values of $\Pr[x_1, \dots, x_n]$.
State-of-the-art LLMs have trillions of parameters, and are trained on multiple trillion tokens, so it is possible that they are within a “touching distance” of infinity.
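As a sanity check of what "approximating the marginal" means in the limit, the sketch below estimates $\Pr[x_1, x_2]$ by simple frequency counting over samples drawn from a toy latent space model, never observing the intentions. This obviously is not how a transformer learns; it only illustrates that the marginal distribution is recoverable from text alone (all names and numbers are invented).

```python
import random
from collections import Counter

# Frequency-counting stand-in for an "infinite" learner: estimate Pr[x1, x2]
# from sampled text alone, never looking at the latent intentions.
PRIOR = {("CAT", "DOG"): 0.4, ("DOG", "CAT"): 0.6}
MESSAGES = {
    "CAT": {"I have a cat": 1.0},
    "DOG": {"I have a dog": 0.5, "My pet is a dog": 0.5},
}

def sample(rng: random.Random) -> tuple[str, str]:
    """Draw one 2-message sequence from the toy latent space model."""
    thetas = rng.choices(list(PRIOR), weights=PRIOR.values())[0]
    return tuple(rng.choices(list(MESSAGES[t]), weights=MESSAGES[t].values())[0] for t in thetas)

rng = random.Random(0)
counts = Counter(sample(rng) for _ in range(100_000))
# Exact marginal: Pr["I have a cat", "I have a dog"] = 0.4 * 1.0 * 0.5 = 0.2
print(counts[("I have a cat", "I have a dog")] / 100_000)  # close to 0.2
```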
In the following sections, we'll explore how this property can be used to explain many strengths of large language models.
IV. Inferring intention with marginal distribution of messages
As we mentioned before, to maintain a natural conversation it is crucial to infer intentions. Empirically, LLMs are clearly excellent at this. Now, let’s go over a sketch of a theoretical proof of this fact.
Let $\Theta_k$ be the random variable describing the intention underlying the $k$-th message in a randomly sampled sequence $x_1, \dots, x_n$ from the ideal latent space model. By $\Pr[x_1, \dots, x_n, \Theta_k = \theta]$ we denote the probability of sampling the sequence $x_1, \dots, x_n$ and that the $k$-th intention in the underlying intention sequence (which is also sampled randomly!) is equivalent to $\theta$.
We also set

$$\Pr[\Theta_k = \theta \mid x_1, \dots, x_n] = \frac{\Pr[x_1, \dots, x_n, \Theta_k = \theta]}{\Pr[x_1, \dots, x_n]}.$$
First, let's focus on the case of unambiguous languages, in which the intention can be recovered from the message with certainty. Recall that this property holds for programming languages. This analysis serves as a direct justification of the applicability of LLMs to code generation.
Let $x_1$ be a message in an unambiguous language, and let $\theta$ be its unique underlying intention. We have

$$\Pr[\Theta_1 = \theta \mid x_1] = 1.$$
Consider a "perfect" LLM, whose distribution of messages is equivalent to the marginal distribution in the ideal latent space model with $n = 2$. When prompted with the message $x_1$ that conveys the intention $\theta$, the probability of generating the continuation $x_2$ is

$$\Pr[x_2 \mid x_1] = \frac{\Pr[x_1, x_2]}{\Pr[x_1]}.$$
The probability of sampling the sequence $x_1, x_2$ can be rewritten as

$$\Pr[x_1, x_2] = \sum_{\theta'} \Pr[x_1, x_2, \Theta_1 = \theta'] = \Pr[x_1, x_2, \Theta_1 = \theta].$$
The last transition follows from $\Pr[\Theta_1 = \theta \mid x_1] = 1$, which zeroes out every term of the sum other than $\theta' = \theta$.
Altogether, we obtain

$$\Pr[x_2 \mid x_1] = \frac{\Pr[x_1, x_2, \Theta_1 = \theta]}{\Pr[x_1]} = \Pr[x_2, \Theta_1 = \theta \mid x_1] = \Pr[x_2 \mid x_1, \Theta_1 = \theta],$$
which means that the distribution of possible responses is equivalent to the situation in which the intention behind the prompt was provided explicitly (the last equality uses $\Pr[\Theta_1 = \theta \mid x_1] = 1$ once more).
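The derivation can also be checked numerically. The sketch below builds a toy unambiguous latent space model with $n = 2$ (using code-like messages, in the spirit of the code-generation remark above) and verifies that $\Pr[x_2 \mid x_1] = \Pr[x_2 \mid x_1, \Theta_1 = \theta]$; all distributions and names are invented for illustration.

```python
# A toy *unambiguous* latent space model with n = 2: every message is produced
# by exactly one intention, so the intention is recoverable with certainty.
PRIOR = {("SORT", "PRINT"): 0.5, ("SORT", "SORT"): 0.2, ("PRINT", "SORT"): 0.3}
MESSAGES = {
    "SORT":  {"xs.sort()": 0.7, "xs = sorted(xs)": 0.3},
    "PRINT": {"print(xs)": 1.0},
}
ALL_MESSAGES = [m for ms in MESSAGES.values() for m in ms]

def joint(x1, x2):
    """Pr[x1, x2], marginalized over all intention sequences."""
    return sum(p * MESSAGES[t1].get(x1, 0) * MESSAGES[t2].get(x2, 0)
               for (t1, t2), p in PRIOR.items())

def joint_with_intention(x1, x2, theta):
    """Pr[x1, x2, Theta_1 = theta]."""
    return sum(p * MESSAGES[t1].get(x1, 0) * MESSAGES[t2].get(x2, 0)
               for (t1, t2), p in PRIOR.items() if t1 == theta)

x1, x2, theta = "xs.sort()", "print(xs)", "SORT"
lhs = joint(x1, x2) / sum(joint(x1, y) for y in ALL_MESSAGES)        # Pr[x2 | x1]
rhs = joint_with_intention(x1, x2, theta) / sum(
    joint_with_intention(x1, y, theta) for y in ALL_MESSAGES)        # Pr[x2 | x1, Theta_1 = theta]
print(lhs, rhs)  # identical: conditioning on x1 already pins the intention down
```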
The notion of unambiguity isn't applicable to natural languages. However, it is often assumed that natural languages are $\varepsilon$-ambiguous; in other words, there exists $\varepsilon < 1$ such that the underlying intention can be inferred from the message with probability at least $1 - \varepsilon$ (for all messages!). This assumption is natural: while ambiguity is present in our communication, it is usually efficiently overcome with redundancy, and there are no truly meaningless or gibberish phrases in spoken languages.
To account for ambiguity, we can generalize the result on intention inference to

$$\Pr[x_2 \mid x_1] \ge \left(1 - \varepsilon(x_1)\right) \cdot \Pr[x_2 \mid x_1, \Theta_1 = \theta],$$
where $\varepsilon(x_1)$ denotes the ambiguity of the prompt. For $\varepsilon$-ambiguous languages, it is always upper-bounded by $\varepsilon$.
It can also be proven that if we provide more messages conveying the same intention as inputs, then the ambiguity decreases exponentially. This formalizes the observation that redundancy in the prompt increases the reliability of the results.
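Here is a minimal Bayesian sketch of this effect, under the simplifying (and hypothetical) assumption that each prompt message is generated independently given a single shared intention; the intentions, paraphrases and probabilities are all made up.

```python
# A minimal sketch of redundancy suppressing ambiguity: the posterior of the
# intended meaning sharpens as redundant paraphrases are added to the prompt.
PRIOR = {"BOOK_FLIGHT": 0.5, "CHECK_FLIGHT": 0.5}
LIKELIHOOD = {  # Pr[message | intention]: each paraphrase is slightly ambiguous
    "BOOK_FLIGHT":  {"I need a flight to Oslo": 0.80, "Get me on a plane to Oslo": 0.15},
    "CHECK_FLIGHT": {"I need a flight to Oslo": 0.10, "Get me on a plane to Oslo": 0.05},
}

def posterior(messages):
    """Pr[intention | messages], assuming conditionally independent messages."""
    scores = {}
    for theta, prior in PRIOR.items():
        score = prior
        for m in messages:
            score *= LIKELIHOOD[theta][m]
        scores[theta] = score
    total = sum(scores.values())
    return {theta: s / total for theta, s in scores.items()}

paraphrases = ["I need a flight to Oslo", "Get me on a plane to Oslo"]
print(posterior(paraphrases[:1])["BOOK_FLIGHT"])  # ~0.889 with one message
print(posterior(paraphrases[:2])["BOOK_FLIGHT"])  # ~0.960 with a redundant paraphrase
```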
On the other hand, in human communication, the ambiguity of a message is decreased by external factors, such as body language or shared experience and knowledge. There is no obvious way to replicate these circumstances in LLMs, so this might be considered a shortcoming.
V. In-context learning
Capitalizing on the results from the previous section, one can quickly derive a formula for the efficiency of few-shot prompting. Assume we are prompting the LLM for a completion of the input $x$ with instruction $I$, providing $k$ example input-output pairs $(x_1, y_1), \dots, (x_k, y_k)$, all of which try to convey the same intention $\theta$. For an $\varepsilon$-ambiguous language, it holds that

$$\Pr[y \mid I, x_1, y_1, \dots, x_k, y_k, x] \ge \left(1 - \varepsilon^{k+1}\right) \cdot \Pr[y \mid x, \Theta = \theta],$$
which clearly underlines the significance of the examples, while demonstrating that the point of diminishing returns with respect to their number usually arrives relatively quickly.
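A tiny numeric illustration of these diminishing returns, assuming the $(1 - \varepsilon^{k+1})$ factor reconstructed above and a purely hypothetical ambiguity level $\varepsilon = 0.3$:

```python
# Lower-bound factor as a function of the number of in-context examples k,
# assuming a hypothetical per-message ambiguity eps = 0.3.
eps = 0.3
for k in range(6):
    print(k, 1 - eps ** (k + 1))
# 0 -> 0.7, 1 -> 0.91, 2 -> 0.973, 3 -> 0.9919, ... the gain per extra example shrinks fast
```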
VI. Chain-of-thought prompting
Discussions surrounding the genuineness of the reasoning capabilities of LLMs are still ongoing, and the latent space model is somewhat orthogonal to these considerations (though it leans slightly towards the hypothesis of brute-force memorization of causal transitions). However, it does make it apparent how models are capable of capitalizing on their superior accuracy on simpler logical transitions.
By $\Pr[\theta_2 \mid \theta_1]$ we denote the probability of a logical transition from intention $\theta_1$ to intention $\theta_2$, and we define it as

$$\Pr[\theta_2 \mid \theta_1] = \Pr[\Theta_2 = \theta_2 \mid \Theta_1 = \theta_1] = \frac{\Pr_\Theta[\theta_1, \theta_2]}{\Pr_\Theta[\theta_1]}.$$
When $k > 2$, $\Pr[\theta_k \mid \theta_1, \dots, \theta_{k-1}]$ is defined analogously to $\Pr[x_k \mid x_1, \dots, x_{k-1}]$.
For simplicity, consider the few-shot chain-of-thought prompting technique and assume we're providing $k$ examples showcasing a coherent chain-of-thought $\theta_1 \to \theta_2 \to \dots \to \theta_m$. Let $x$ be the input message (conveying the premise $\theta_1$), and let $y$ be an example message reaching the correct conclusion (conveying $\theta_m$). When zero-shot prompting directly for a conclusion, the probability of the model returning the correct answer is equivalent to

$$\Pr[\theta_m \mid \theta_1] \cdot \Pr[y \mid \Theta = \theta_m].$$
Most notably, if the transition $\theta_1 \to \theta_m$ is under-represented in the training corpus, this probability is low. This wouldn't change even if we considered all of the correct messages $y$ simultaneously.
In the case of chain-of-thought prompting, this probability changes to (at least)

$$\left(1 - \varepsilon^{k+1}\right) \cdot \left(\prod_{i=1}^{m-1} \Pr[\theta_{i+1} \mid \theta_1, \dots, \theta_i]\right) \cdot \Pr[y \mid \Theta = \theta_m].$$
We control the number of examples, so we can easily make the first term close to 1. The middle term captures the soundness of the showcased chain-of-thought with respect to the training corpus of the LLM. As empirical experiments show, for many different applications finding such a chain $\theta_1, \dots, \theta_m$ is perfectly viable.
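The middle term is the interesting one, so the sketch below isolates it (under the reconstruction above): it compares an under-represented direct transition against the product of well-represented intermediate transitions in a toy table of corpus transition frequencies. The intentions and all numbers are invented, not measured from any real corpus.

```python
# A toy "corpus" transition table over intentions: how often one intention is
# immediately followed by another in training text. All values are invented.
TRANSITIONS = {
    ("PROBLEM", "ANSWER"): 0.01,  # text rarely jumps straight to the answer
    ("PROBLEM", "STEP_1"): 0.30,
    ("STEP_1",  "STEP_2"): 0.40,
    ("STEP_2",  "ANSWER"): 0.50,
}

def direct(a: str, b: str) -> float:
    """Corpus probability that intention b immediately follows intention a."""
    return TRANSITIONS.get((a, b), 0.0)

def chained(path: list[str]) -> float:
    """Product of step-by-step transition probabilities along the chain."""
    p = 1.0
    for a, b in zip(path, path[1:]):
        p *= direct(a, b)
    return p

print(direct("PROBLEM", "ANSWER"))                         # 0.01: the direct jump is rare
print(chained(["PROBLEM", "STEP_1", "STEP_2", "ANSWER"]))   # 0.3 * 0.4 * 0.5 = 0.06
```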
VII. Conclusion
While explanations of the proficiency of LLMs are still speculative, mostly due to the insurmountable volume of parameters and training corpora, our understanding of their abilities and limitations is continuously increasing. A formal proof that transformer-based LLMs capture the marginal distributions of the ideal latent space model is a tangible confirmation of a breakthrough, as this is a decades-old problem with many documented, fruitless attempts. In addition, it provides a neat explanation of multiple advantages of LLMs - one which might not be definitive, but is definitely intuitive.