The Matrix: A Bayesian learning model for LLMs
139 points | 14 days ago | 5 comments | arxiv.org
dosinga | 14 days ago
The conclusion from the paper is:

In this paper we present a new model to explain the behavior of Large Language Models. Our frame of reference is an abstract probability matrix, which contains the multinomial probabilities for next token prediction in each row, where the row represents a specific prompt. We then demonstrate that LLM text generation is consistent with a compact representation of this abstract matrix through a combination of embeddings and Bayesian learning. Our model explains (the emergence of) In-Context learning with scale of the LLMs, as well as other phenomena like Chain of Thought reasoning and the problem with large context windows. Finally, we outline implications of our model and some directions for future exploration.
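
To make the setup concrete, here is a toy rendering of that matrix (an illustration, not code from the paper): one row per prompt, one column per vocabulary token, each row a multinomial over the next token. The vocabulary and prompts here are made up.

    import numpy as np

    # One row per prompt, one column per vocabulary token; each row is a
    # multinomial distribution over the next token.
    vocab = ["the", "cat", "sat", "mat", "."]
    prompts = ["the cat", "the cat sat on the"]

    P = np.array([
        [0.05, 0.05, 0.70, 0.10, 0.10],  # after "the cat", "sat" is likely
        [0.10, 0.05, 0.05, 0.75, 0.05],  # after "... on the", "mat" is likely
    ])
    assert np.allclose(P.sum(axis=1), 1.0)  # every row is a distribution

    # Text generation = sampling the next token from the prompt's row.
    rng = np.random.default_rng(0)
    print(vocab[rng.choice(len(vocab), p=P[0])])

On this reading, the paper's claim is that a trained LLM acts as a compact stand-in for this astronomically large matrix, with embeddings collapsing similar prompts onto nearby rows.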

Where does the "Cannot Recursively Improve" come from?

ajb | 14 days ago
Looks like someone edited the title of this thread. I assume "Cannot Recursively Improve" was in the old one?
dosinga | 14 days ago
Yeah, looks like it. This is better!
avi_vallarapu | 13 days ago
Theoretically this sounds great. I would worry about scalability issues with a practical implementation of the Bayesian learning model when dealing with the vast parameter space and data requirements of state-of-the-art models like GPT-3 and beyond.

Would love to see practical implementations on large-scale datasets and in varied contexts. I liked the use of Dirichlet distributions to approximate any prior over multinomial distributions.
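
The conjugacy that makes Dirichlet priors convenient here is easy to show: the posterior of a multinomial row under a Dirichlet prior is again a Dirichlet, so the Bayesian update is just addition of counts. A minimal sketch (the vocabulary size and counts below are made up):

    import numpy as np

    # Symmetric Dirichlet prior over one row of the matrix (5-token vocab).
    alpha = np.ones(5)

    # Hypothetical next-token counts observed for this prompt.
    counts = np.array([0, 3, 1, 0, 0])

    # Conjugacy: the posterior is Dirichlet(alpha + counts). Its mean is a
    # smoothed estimate of the row's next-token probabilities.
    posterior = alpha + counts
    print(posterior / posterior.sum())  # [0.111 0.444 0.222 0.111 0.111]

Mixtures of Dirichlets can then approximate an arbitrary prior over the simplex, which is the approximation property referred to above.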

programjames | 13 days ago
I didn't read through the paper (just the abstract), but isn't the whole point of the KL-divergence loss to get the best compression, which is equivalent to Bayesian learning? I don't really see how this is novel; I'm sure people were doing this with Markov chains back in the '90s.
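
For the compression point, the identity is easy to check numerically (an illustration, not from the paper): the expected code length when modeling data from p with q is H(p) + KL(p || q), so minimizing cross-entropy, and hence KL, is exactly minimizing the bits an ideal coder would need.

    import numpy as np

    p = np.array([0.7, 0.2, 0.1])  # "true" next-token distribution
    q = np.array([0.5, 0.3, 0.2])  # model's predicted distribution

    entropy = -np.sum(p * np.log2(p))        # H(p): best achievable bits/token
    cross_entropy = -np.sum(p * np.log2(q))  # bits/token when coding with q
    kl = np.sum(p * np.log2(p / q))          # the overhead for using q, not p

    assert np.isclose(cross_entropy, entropy + kl)
    print(entropy, cross_entropy, kl)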
jksk61 | 13 days ago
In fact, it is nothing new.
drbig | 14 days ago
The title of the paper is actually `The Matrix: A Bayesian learning model for LLMs`, and the conclusion presented in the title of this post is not to be found in the abstract... Just a heads up, y'all.
zoky | 14 days ago
I don’t really care, I just want some vague reassurance that we’re probably not on the verge of launching Skynet…
toxik | 14 days ago
Completely editorialized title. The article talks about LLMs, not transformers.