A few months ago my curiosity about how LLMs 'learn' took me down the rabbit hole of AI and psychology history, and I ended up finding a string of very interesting and related developments from the last 120 years: 1905: Harvard-graduate psychologist Edward L....
NLG and the range of tasks within it
The first 10 minutes of this Stanford CS224N lecture explain NLU, NLG, and the tasks within each, which I found helpful. Natural Language Understanding (NLU) is a subset of NLP (processing), which uses syntactic and semantic analysis of text and speech to determine the...
Language Models and GPT’s evolution
As explained in this Stanford CS50 tech talk, Language Models (LMs) are basically a probability distribution over some vocabulary. Given a sequence of words, an LM can determine the most probable word to come next. It's trained to predict the Nth word,...
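That "probability distribution over a vocabulary" idea can be sketched with a toy bigram model. This is just an illustration under my own assumptions (a tiny hand-made corpus, counting word pairs), nothing like a real LLM, but it shows what "most probable next word" means concretely:

```python
from collections import Counter, defaultdict

# Tiny hand-made corpus (an assumption for illustration;
# real language models train on billions of tokens)
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each preceding word
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_distribution(word):
    """Probability distribution over the vocabulary for the word after `word`."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# → {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

A GPT-style model does the same thing in spirit, but the distribution is computed by a neural network conditioned on the whole preceding context, not by raw counts of the previous word.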
Vector embeddings
These seemed like the core ideas, so I wanted to clarify them conceptually. "Embeddings" emphasizes the notion of representing data in a meaningful and structured way, while "vectors" refers to the numerical representation itself. 'Vector embeddings' is a way to...
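A minimal sketch of how those numerical representations get used: comparing embeddings with cosine similarity, so that "meaningfully similar" data ends up numerically close. The 3-dimensional vectors below are made up for illustration (real embeddings have hundreds or thousands of learned dimensions):

```python
import math

# Hand-made 3-dimensional "embeddings" (an assumption for illustration;
# real embeddings are learned from data, not written by hand)
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # close to 1
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # much lower
```

The point of the structure is exactly this: related concepts ("king", "queen") land near each other in the vector space, while unrelated ones ("apple") point in a different direction.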
When LLM experts say “We don’t know how”
I recently heard Jeff Bezos briefly talk about his views on LLMs here. Less than a minute into the conversation, he said something that struck a chord with me: LLMs in their current form are not inventions, they are discoveries. He followed that up with "we are...