Chain Rule of Calculus

Chain Rule = how you take derivatives when a value depends on another value, which itself depends on another value (i.e., a composition of functions). Intuition: If A...
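A minimal Python sketch of the idea (the functions and test point are my own, not from the note): differentiate the composition sin(x²) with the chain rule and check the result against a finite difference.

```python
import math

def g(x):
    return x ** 2          # inner function

def f(u):
    return math.sin(u)     # outer function

def analytic_derivative(x):
    # Chain rule: d/dx f(g(x)) = f'(g(x)) * g'(x) = cos(x**2) * 2x
    return math.cos(g(x)) * (2 * x)

def numeric_derivative(x, h=1e-6):
    # Central finite difference on the composition f(g(x))
    return (f(g(x + h)) - f(g(x - h))) / (2 * h)

x = 1.3
assert abs(analytic_derivative(x) - numeric_derivative(x)) < 1e-5
```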

How LLMs Work
How Diffusion Models Power AI Videos: An Incredible Visual Explanation

I first wrapped my head around diffusion models in 2023, thanks to the MIT 6.S191 lecture on ‘Deep Learning New Frontiers’. The idea of reverse-denoising just clicked for me—it...

Build Notes
Embedding

“Embeddings” emphasizes the notion of representing data in a meaningful and structured way, while “[[Vectors]]” refers to the numerical representation itself. ‘Vector embeddings’ is a way to represent...
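A toy sketch of the distinction (the 3-d vectors below are made up for illustration, not real embeddings): the numbers are just vectors, but they become "embeddings" once similar items sit close together.

```python
import math

# Made-up 3-d vectors: "cat" and "dog" are placed near each other on purpose.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.85, 0.75, 0.2],
    "car": [0.1, 0.2, 0.95],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

# The structure, not the raw numbers, is what makes these "embeddings":
assert cosine_similarity(embeddings["cat"], embeddings["dog"]) > \
       cosine_similarity(embeddings["cat"], embeddings["car"])
```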

How LLMs Work
Three Years of Learning AI: Resources That Shaped My Intuition

This weekend I finally finished reading Why Machines Learn: The Elegant Math Behind AI (by Anil Ananthaswamy). It took me seven months—an unusually long time for a 500-page...

Build Notes
Dot Products

Conceptually, for two vectors x and y, x·y is defined as the magnitude of x multiplied by the projection of y onto x (think of it as the shadow cast by...
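A small Python sketch of that equivalence (example vectors are my own): the componentwise dot product matches |x| times the projection ("shadow") of y onto x.

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(v):
    return math.sqrt(dot(v, v))

x = [3.0, 0.0]
y = [2.0, 2.0]

# Projection of y onto x has length |y| * cos(theta); here cos(theta)
# is itself computed from the dot product, so this is an illustration
# of consistency rather than an independent derivation.
cos_theta = dot(x, y) / (norm(x) * norm(y))
projection = norm(y) * cos_theta

assert abs(dot(x, y) - norm(x) * projection) < 1e-9  # both equal 6.0
```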

How LLMs Work
Vectors

Vectors have two properties: 1. Magnitude (length), 2. Direction. From a computer science perspective, a vector is just an ordered list of numbers. Vectors can be added and multiplied (= scaled). A...
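The "ordered list of numbers" view can be sketched directly (example values are my own): addition is elementwise, and multiplying by a scalar just rescales the length.

```python
v = [1.0, 2.0, 3.0]
w = [4.0, 5.0, 6.0]

def add(a, b):
    # Elementwise addition of two vectors of the same length
    return [x + y for x, y in zip(a, b)]

def scale(c, a):
    # Scalar multiplication: stretches the magnitude, keeps the direction
    return [c * x for x in a]

assert add(v, w) == [5.0, 7.0, 9.0]
assert scale(2.0, v) == [2.0, 4.0, 6.0]
```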

How LLMs Work
Layers

Layers = groups of perceptrons (See [[Perceptron]]) stacked so that each layer’s outputs become the next layer’s inputs, letting the network learn increasingly abstract features. Stacked/layered perceptrons create...
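A minimal sketch of the stacking (weights and biases below are arbitrary illustrative numbers): each layer is a group of perceptron-style units, and one layer's outputs become the next layer's inputs.

```python
def layer(inputs, weights, biases):
    # One layer = several perceptron-style units applied to the same inputs,
    # here with a ReLU activation instead of a hard yes/no step.
    return [
        max(0.0, sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]

x = [1.0, 2.0]
h = layer(x, weights=[[0.5, -0.2], [0.1, 0.3]], biases=[0.0, 0.1])  # hidden layer
y = layer(h, weights=[[1.0, -1.0]], biases=[0.2])                   # output layer

assert len(h) == 2 and len(y) == 1
```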

How LLMs Work
Perceptron

Perceptron = the simplest artificial neuron that takes multiple inputs, multiplies them by weights, adds a bias, and turns the result into a yes/no (or score) output. Inputs:...
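That definition maps almost line-for-line to code; a minimal sketch with made-up weights (this particular choice happens to behave like a logical AND):

```python
def perceptron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias...
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ...turned into a yes/no output by a step function.
    return 1 if total > 0 else 0

# With these weights/bias, the unit fires only when both inputs are 1.
assert perceptron([1, 1], weights=[1.0, 1.0], bias=-1.5) == 1
assert perceptron([1, 0], weights=[1.0, 1.0], bias=-1.5) == 0
```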

How LLMs Work
How AI helped me talk (and listen) to >2000 pages of medical content

Earlier this year, I began researching mitral valve surgery (specifically, valve repair vs. replacement) to help someone close to me. This experience introduced me to Retrieval-Augmented Generation (RAG)...

Build Notes
Exploring the Basics: Biological vs. Artificial Neurons

Alright, OpenAI o1 is out. If you are anything like me, you first chuckled at the description that it was “designed to spend more time thinking before they...

How LLMs Work
BeekeeperAI

Website. An interesting application of federated learning to solve healthcare’s data-sharing issues. Their platform allows algorithms and data from different entities to interact securely. Like an...

Generative AI and Healthcare: An ongoing list of application areas

It’s easy to feel the immense transformational potential of Generative AI as a solution. And healthcare has no shortage of problems to solve. The real insight is in...

AI and Health
Three simple examples of LLM confabulations

Large Language Models (LLMs) like ChatGPT handle two aspects of communication very well: plausibility and fluency. Given an input context, they determine the most probable...

AI and Health
Curious historical connection between psychology and LLMs

A few months ago my curiosity around how-are-LLMs-‘learning’ took me down the rabbit hole of AI and Psychology history, and I ended up finding a string of very...

AI and Health
NLG and the range of tasks within it

The first 10 minutes of this Stanford CS224N lecture explained NLU, NLG, and the tasks within each, and I found it helpful. Natural Language Understanding (NLU) is a subset of NLP...

AI and Health
Earlier Writing (2008-2017): These are my old notes on health IT and the digital health industry before AI changed the conversation.