Note: I stopped writing posts in 2017. I'm slowly getting back to it, starting in late 2024, mostly to write about AI.

Think about local minima in thousands of dimensions

When I first learned about Gradient Descent about two years ago, I pictured it in the most obvious 3D way: two input variables as the x and y axes of a plane, and the loss as the third (z) axis. In terms of 'local minima', I imagined it as the...

read more
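
To make that 3D picture concrete, here's a minimal sketch in plain NumPy: gradient descent over two input variables (x and y), with the loss playing the role of the z axis. The loss function, starting point, and step size are arbitrary illustrative choices of mine, not anything from the post; a real network does the same thing over millions of dimensions rather than two.

```python
import numpy as np

# Toy loss surface with two inputs (x, y); its value plays the role of the z axis.
# Himmelblau's function is just an illustrative choice with multiple minima.
def loss(w):
    x, y = w
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

def grad(w, eps=1e-6):
    # Numerical gradient: nudge each input slightly and see how the loss changes.
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w)
        d[i] = eps
        g[i] = (loss(w + d) - loss(w - d)) / (2 * eps)
    return g

w = np.array([0.0, 0.0])   # starting point on the surface
lr = 0.01                  # step size
for step in range(500):
    w -= lr * grad(w)      # move downhill along the negative gradient

print("ended near", w, "with loss", loss(w))
```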

Language Models and GPT’s evolution

As explained in this Stanford CS50 tech talk, Language Models (LMs) are basically a probability distribution over some vocabulary. For every word we give an LM, it can determine the most probable word to come after it. It's trained to predict the Nth word,...

read more
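
As a toy illustration of "a probability distribution over some vocabulary", here's a minimal sketch of a bigram count model over a made-up corpus (my own example, not from the CS50 talk): given a word, it assigns a probability to every word that might come next.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus, just to illustrate the idea of a language model
# as a probability distribution over a vocabulary.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Given "the", the model assigns a probability to every candidate next word.
print(next_word_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```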

Vector embeddings

These seemed like the core ideas, so I wanted to clarify them conceptually. "Embeddings" emphasizes the notion of representing data in a meaningful and structured way, while "vectors" refers to the numerical representation itself. 'Vector embeddings' is a way to...

read more
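
To keep the two terms separate, here's a minimal sketch with made-up 3-dimensional vectors standing in for learned embeddings (real ones have hundreds or thousands of dimensions and come out of a model): the numbers themselves are the "vector", and comparing them with cosine similarity is what makes the representation "meaningful".

```python
import numpy as np

# Made-up 3-dimensional vectors standing in for learned embeddings.
embeddings = {
    "cat":    np.array([0.90, 0.10, 0.20]),
    "kitten": np.array([0.85, 0.15, 0.25]),
    "car":    np.array([0.10, 0.90, 0.30]),
}

def cosine_similarity(a, b):
    # Angle-based similarity: values near 1.0 mean the vectors point the same way,
    # which is how "closeness in meaning" is measured in embedding space.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["cat"], embeddings["kitten"]))  # high (~0.99)
print(cosine_similarity(embeddings["cat"], embeddings["car"]))     # much lower (~0.27)
```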

When LLM experts say “We don’t know how”

I recently heard Jeff Bezos briefly talk about his views on LLMs here. Less than a minute into the conversation, he said something that struck a chord with me: LLMs in their current form are not inventions; they are discoveries. He followed that up with "we are...

read more