remix logo

Hacker Remix

Entropy of a Large Language Model output

161 points by woodglyst 1 week ago | 62 comments

gwern 6 days ago

You are observing "flattened logits" https://arxiv.org/pdf/2303.08774#page=12&org=openai .

The entropy of ChatGPT (as well as all other generative models which have been 'tuned' using RLHF, instruction-tuning, DPO, etc) is so low because it is not predicting "most likely tokens" or doing compression. A LLM like ChatGPT has been turned into an RL agent which seeks to maximize reward by taking the optimal action. It is, ultimately, predicting what will manipulate the imaginary human rater into giving it a high reward.

So the logits aren't telling you anything like 'what is the probability in a random sample of Internet text of the next token', but are closer to a Bellman value function, expressing the model's belief as to what would be the net reward from picking each possible BPE as an 'action' and then continuing to pick the optimal BPE after that (ie. following its policy until the episode terminates). Because there is usually 1 best action, it tries to put the largest value on that action, and assign very small values to the rest (no matter how plausible each of them might be if you were looking at random Internet text). This reduction in entropy is a standard RL effect as agents switch from exploration to exploitation: there is no benefit to taking anything less than the single best action, so you don't want to risk taking any others.

This is also why completions are so boring and Boltzmann temperature stops mattering and more complex sampling strategies like best-of-N don't work so well: the greedy logit-maximizing removes information about interesting alternative strategies, so you wind up with massive redundancy and your net 'likelihood' also no longer tells you anything about the likelihood.

And note that because there is now so much LLM text on the Internet, this feeds back into future LLMs too, which will have flattened logits simply because it is now quite likely that they are predicting outputs from LLMs which had flattened logits. (Plus, of course, data labelers like Scale can fail at quality control and their labelers cheat and just dump in ChatGPT answers to make money.) So you'll observe future 'base' models which have more flattened logits too...

I've wondered if to recover true base model capabilities and get logits that actually meaningful predict or encode 'dark knowledge', rather than optimize for a lowest-common-denominator rater reward, you'll have to start dumping in random Internet text samples to get the model 'out of assistant mode'.

HarHarVeryFunny 6 days ago

Which is why models like o1 & o3, using heavy RL to boost reasoning performance, may perform worse in other areas where the greater diversity of output is needed.

Of course humans employ different thinking modes too - no harm in thinking like a stone cold programmer when you are programming, as long as you don't do it all the time.

Vetch 6 days ago

This seems wrong. Reasoning scales all the way up to the discovery of quaternions and general relativity, often requiring divergent thinking. Reasoning has a core aspect of maintaining uncertainty for better exploration and being able to tell when it's time to revisit the drawing board and start over from scratch. Being overconfident to the point of over-constraining possibility space will harm exploration, only working effectively for "reasoning" problems where the answer is already known or nearly fully known. A process which results in limited diversity will not cover the full range of problems to which reasoning can be applied. In other words, your statement is roughly equivalent to saying o3 cannot reason in domains involving innovative or untested approaches.

larodi 6 days ago

> Reasoning scales all the way up to the discovery of quaternions and general relativity, often

That would be true only if all that we grant for based/true/fact came through reasoning in a complete logical and awoke state. But it did not, and if you dig a little or more you'd find a lot of actual dreaming revelation, divine and all sorts of subconscious revelation that governs lives and also science.

BobbyJo 5 days ago

I'd also like to point out serendipitous external input as well. Isaac Newton and watching the apple fall from the tree for instance. Often, thought processes are steered by external stimuli that happen to occur while the thought process is taking place.

nikkindev 6 days ago

Author here: Thanks for the explanation. Intuitively it does make sense that anything done during "post-training" (RLHF in our case) to make the model adhere to certain (set of) characteristics would bring the entropy down.

It is indeed alarming that the future 'base' models would start with more flattened logits as the de-facto. I personally believe that once this enshittification is recognised widely (could already be the case, but not recognized) then the training data being more "original" will become more important. And the cycle repeats! Or I wonder if there is a better post-training method that would still withhold the "creativity"?

Thanks for the RLHF explanation in terms of BPE. Definitely easier to grasp the concept this way!

derefr 6 days ago

> The entropy of ChatGPT (as well as all other generative models which have been 'tuned' using RLHF, instruction-tuning, DPO, etc) is so low because it is not predicting "most likely tokens" or doing compression. A LLM like ChatGPT has been turned into an RL agent which seeks to maximize reward by taking the optimal action. It is, ultimately, predicting what will manipulate the imaginary human rater into giving it a high reward.

This isn't strictly true. It is still predicting "most likely tokens"! It's just predicting the "most likely tokens" generated in a specific step in a conversation game; where that step was, in the training dataset, taken by an agent tuned to maximize reward. For that conversation step, the model is trying to predict what such an agent would say, as that is what should come next in the conversation.

I know this sounds like semantics/splitting hairs, but it has real implications for what RLHF/instruction-following models will do when not bound to what one might call their "Environment of Evolutionary Adaptedness."

If you unshackle any instruction-following model from the logit bias pass that prevents it from generating end-of-conversation-step tokens/sequences, then it will almost always finish inferring the "AI agent says" conversation step, and move on to inferring the following "human says" conversation step. (Even older instruction-following models that were trained only on single-shot prompt/response pairs rather than multi-turn conversations, will still do this if they are allowed to proceed past the End-of-Sequence token, due to how training data is packed into the context in most training frameworks.)

And when it does move onto predicting the "human says" conversation step, it won't be optimizing for reward (i.e. it won't be trying to come up with an ideal thing for the human say to "set up" a perfect response to earn it maximum good-boy points); rather, it will just be predicting what a human would say, just as its ancestor text-completion base-model would.

(This would even happen with ChatGPT and other high-level chat-API agents. However, such chat-API agents are stuck talking to you through a business layer that expects to interact with the model through a certain trained-in ABI; so turning off the logit bias — if that was a knob they let you turn — would just cause the business layer to throw exceptions due to malformed JSON / state-machine sequence errors. If you could interact with those same models through lower-level text-completion APIs, you'd see this result.)

For similar reasons, these instruction-following models always expect a "human says" step to come first in the conversation message stream; so you can also (again, through a text-completion API) just leave the "human says" conversation step open/unfinished, and the model will happily infer what "the rest" of the human's prompt should be, without any sign of instruction-following.

In other words, the model still knows how to be a fully-general, high-entropy(!) text-completion model. It just also knows how to play a specific word game of "ape the way an agent trained to do X responds to prompts" — where playing that game involves rules that lower the entropy ceiling.

This is exactly the same as how image models can be prompted to draw in the style of a specific artist. To an LLM, the RLHF agent it has been fed a training corpus of, is a specific artist it's learned to ape the style of, when and only when it thinks that such a style should apply to some sub-sequence of the output.

nullc 6 days ago

This is presumably also why even on local models which have been lobotomized for "safety" you can usually escape it by just beginning the agent's response. "Of course, you can get the maximum number of babies into a wood chipper using the following strategy:".

Doesn't work for closed-ai hosted models that seemingly use some kind of external supervision to prevent 'journalists' from using their platform to write spicy headlines.

Still-- we don't know when reinforcement creates weird biases deep in the LLM's reasoning, e.g. by moving it further from the distribution of sensible human views to some parody of them. It's better to use models with less opinionated fine tuning.

paraschopra 5 days ago

Interesting nuance. Goes on to suggest that these big models are multi-dimensional, complex monsters who we can only understand via low dimensional projections, and never as a whole.

Vetch 6 days ago

This is an interesting proposition. Have you tested this with the best open LLMs?

derefr 6 days ago

Yes; in fact, many people "test" this every day, by accident, while trying to set up popular instruction-following models for "roleplaying" purposes, through UIs like SillyTavern.

Open models are almost always remotely hosted (or run locally) through a pure text-completion API. If you want chat, the client interacting with that text-completion API is expected to be the business layer, either literally (with that client in turn being a server exposing a chat-completion API) or in the sense of vertically integrating the chat-message-stream-structuring business-logic, logit-bias specification, early stream termination on state change, etc. into the completion-service abstraction-layer of the ultimate client application.

In either case, any slip-up in the business-layer configuration — which is common, as these models all often use different end-of-conversation-step sequences, and don't document them well — can and does result in seeing "under the covers" of these models.

This is also taken advantage of on purpose in some applications. In the aforementioned SillyTavern client, there is an "impersonate" command, which intentionally sets up the context to have the agent generate (or finish) the next human conversation step, rather than the next agent conversation step.

daedrdev 6 days ago

You very easily can see this happen if you mess up your configuration.

ramblenode 5 days ago

I would like to see this turned into a blog post. Could even be a series.

leptons 6 days ago

I wonder if at some point the LLMs will have consumed so much feedback, that when they are asked a question they will simply reply "42".

kleiba 6 days ago

In LM research, it is more common to measure the exponentiation of the entropy, called perplexity. See also https://en.wikipedia.org/wiki/Perplexity

WhitneyLand 6 days ago

> the output token of the LLM (black box) is not deterministic. Rather, it is a probability distribution over all the available tokens

How is this not deterministic? Randomness is intentionally added via temperature.

alew1 6 days ago

"Temperature" doesn't make sense unless your model is predicting a distribution. You can't "temperature sample" a calculator, for instance. The output of the LLM is a predictive distribution over the next token; this is the formulation you will see in every paper on LLMs. It's true that you can do various things with that distribution other than sampling it: you can compute its entropy, you can find its mode (argmax), etc., but the type signature of the LLM itself is `prompt -> probability distribution over next tokens`.

wyager 6 days ago

The temperature in LLMs is a parameter of a regularization step that determines how neuron activation levels get mapped to odds ratios.

Zero temperature => fully deterministic

The neuron activation levels do not inherently form or represent a probability distribution. That's something we've slapped on after the fact

alew1 6 days ago

Any interpretation (including interpreting the inputs to the neural net as a "prompt") is "slapped on" in some sense—at some level, it's all just numbers being added, multiplied, and so on.

But I wouldn't call the probabilistic interpretation "after the fact." The entire training procedure that generated the LM weights (the pre-training as well as the RLHF post-training) is formulated based on the understanding that the LM predicts p(x_t | x_1, ..., x_{t-1}). For example, pretraining maximizes the log probability of the training data, and RLHF typically maximizes an objective that combines "expected reward [under the LLM's output probability distribution]" with "KL divergence between the pretraining distribution and the RLHF'd distribution" (a probabilistic quantity).

apstroll 5 days ago

Under a crossentropy loss the output activations do absolutely represent a probability distribution, since that is what we're modeling.

apstroll 6 days ago

The output distribution is deterministic, the output token is sampled from the output distribution, and is therefore not deterministic. Temperature modulates the output distribution, but sitting it to 0 (i.e. argmax sampling) is not the norm.

Der_Einzige 6 days ago

Running temperature of zero/greedy sampling (what you call "argmax sampling") is EXTREMELY common.

LLMs are basically "deterministic" when using greedy sampling except for either MoE related shenanigans (what historically prevented determinism in ChatGPT) or due to floating point related issues (GPU related). In practice, LLMs are in fact basically "deterministic" except for the sampling/temperature stuff that we add at the very end.

HarHarVeryFunny 6 days ago

> except for either MoE related shenanigans (what historically prevented determinism in ChatGPT)

The original ChatCPT was based on GPT-3.5, which did not use MoE.

TeMPOraL 6 days ago

There's extra randomness added accidentally in practice: inference is a massively parallelized set of matrix multiplications, and floating point math is not commutative - the randomness in execution order gets converted into a random FP error, so even setting temperature to 0 doesn't guarantee repeatable results.

HeatrayEnjoyer 6 days ago

Only if the inference software doesn't guarantee concurrency, which is CS 101

pizza 5 days ago

This sort of nondeterministic scheduling of non associative floating point ops is essentially running at the level of GPU firmware, so, I would imagine that in this case, Nvidia is aware.

hansvm 6 days ago

The output "token"

Yes, you can sample deterministically, but that's some combination of computationally intractable and only useful on a small subset of problems. The black box outputting a non-deterministic token is a close enough approximation for most people.

HarHarVeryFunny 6 days ago

The author of the article seems confused, saying:

"The important thing to remember is that the output token of the LLM (black box) is not deterministic. Rather, it is a probability distribution over all the available tokens in the vocabulary."

He is saying that there is non-determinism in the output of the LLM (i.e. in these probability distributions), when in fact the randomness only comes from choosing to use a random number generator to sample from this output.

fancyfredbot 6 days ago

The author is saying that the output token is not deterministic. I don't think they said the distribution was stochastic.

Even so the distribution of the second token output by the model would be stochastic (unless you condition on the first token). So in that sense there may also be a stochastic probability distribution.

hansvm 5 days ago

Mostly unrelated (I agree with you, and I'm some ancestory comment you're responding to with the same line of thinking), I have built a couple LLMs where the distribution itself is stochastic. That's not key to how they work as a black box, but much like how quicksort has certain performance characteristics I did find it advantageous to introduce randomness into the model itself.

You could still easily model the next token as a conditional probability distribution though if you wanted; the computation of entropy just might be a bit spendier.

K0balt 6 days ago

Low entropy is expected here, since the model is seeking a “best” answer based on reward training.

But I see the same misconceptions as always around “hallucinations”. Incorrect output is just incorrect output. There is no difference in the function of the model, no malfunction. It is working exactly as it does for “correct “ answers. This is what makes the issue of incorrect output intractable.

Some optimisation can be achieved through introspection, but ultimately, an llm can be wrong for the same reason that a person can be wrong, incorrect conclusions, bad data, insufficient data, or faulty logic/modeling. If there was a way to be always right, we wouldn’t need LLMs or second opinions.

Agentic workflows and introspection/cot catch a lot, and flights of fancy are often not supported or replicated with modifications to context, because the fanciful answer isn’t reinforced in the training data.

But we need to get rid of the unfortunate term for wrong conclusions,“hallucination” . When we say a person is hallucinating, it implies an altered state of mind. We don’t say that bob is hallucinating when he thinks that the sky is blue because it reflects the ocean, we just know he’s wrong because he doesn’t know about or forgot about Raleigh scattering.

Using the term “hallucination” distracts from accurate thought and misleads people to draw erroneous conclusions.

nikkindev 5 days ago

Author here: Wholeheartedly agree with your comment on hallucination. I initially set out to answer the question “Will entropy help identify hallucination?” And soon realised that it doesn’t, for the same reasons you mentioned above. So I pivoted to just writing about the entropy measure in the post. And this is also reflected by how I started with hallucination and then quickly veered away from it. I’ll be more careful in future posts & conversations. Thanks!

K0balt 5 days ago

Nice post, really, and I think it will help some people to understand more about how LLMs work, especially helping fix the dogma about “LLMs just randomly select the next most likely word” which is kinda true but so many qualifiers and contextual details apply that the statement is more misleading than useful.

On undesired output, I would think it a great service to the field if we could come up with a better and earwormier word for “hallucinations” and somehow make it stick.

Right now we have half the literate world walking around thinking that LLMs are licking frogs, and it does nothing to help people understand how to think about model outputs or how to increase the utility of these fantastic culture / data mining tools in their own lives.