How AI helped me talk (and listen) to >2000 pages of medical content

Earlier this year, I began researching mitral valve surgery (specifically, valve repair vs. replacement) to help someone close to me. This experience introduced me to Retrieval-Augmented Generation (RAG) tools, which made navigating such a complex topic much easier. If you’ve ever wondered how to tackle a complicated subject and organize a large body of information, read on—I think you’ll find...

read more

Exploring the Basics: Biological vs. Artificial Neurons

Alright, OpenAI o1 is out. If you are anything like me, you first chuckled at the description that it was "designed to spend more time thinking before they respond". But once I delved deeper, it quickly became mind-blowing. (By the way, Ethan Mollick offers an excellent explanation of the power of dedicating more computational resources to “thinking.”) Developments like this deepen my admiration...

read more

Generative AI and Healthcare: An ongoing list of application areas

It's easy to sense the immense transformational capacity of Generative AI as a solution. And healthcare has no shortage of problems to solve. The real insight is in figuring out viable application areas and use cases. Things are becoming a bit clearer in that regard, and it's worthwhile to keep an ongoing list of where Gen AI application makes sense in healthcare. This post is always under...

read more

Three simple examples of LLM confabulations

Large Language Models (LLMs) like ChatGPT can handle two aspects of communication very well: plausibility and fluency. Given an input context, they determine the most probable sequence of words and string it together in a way that is superbly eloquent. That makes the output very convincing. But it's no secret that LLMs can provide entirely false outputs - they can confabulate. Not hallucinate...

read more

Curious historical connection between psychology and LLMs

A few months ago, my curiosity around how-are-LLMs-'learning' took me down the rabbit hole of AI and psychology history, and I ended up finding a string of very interesting and related developments from the last 120 years. 1905: Harvard-graduate psychologist Edward L. Thorndike published his 'Law of Effect', which basically says that animal behaviors are shaped by consequences. That is, behaviors...

read more

NLG and the range of tasks within it

The first 10 minutes of this Stanford CS224N lecture explain NLU, NLG, and the tasks within each, and I found it helpful. Natural Language Understanding (NLU) is a subset of NLP (processing) that uses syntactic and semantic analysis of text and speech to determine the meaning of a sentence. Natural Language Generation (NLG) is another subset of NLP, focusing on the process of producing a human language...

read more

Language Models and GPT’s evolution

As explained in this Stanford CS50 tech talk, Language Models (LMs) are basically a probability distribution over some vocabulary. For every word we give an LM, it can determine the most probable word to come after it. It is trained to predict the Nth word, given the previous N-1 words. If that sounds like a simple probability calculation, you are not realizing that predicting the next word...
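To make the "probability distribution over a vocabulary" idea concrete, here is a minimal toy sketch (my own illustration, not from the talk): a bigram model that counts which word follows which in a tiny corpus and predicts the most probable next word. Real LMs like GPT do the same next-word prediction, but over a vast vocabulary with a neural network instead of raw counts.

```python
from collections import Counter, defaultdict

# A hypothetical tiny corpus, just for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable word to follow `word`, with its probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # "cat" follows "the" in 2 of 4 cases -> ('cat', 0.5)
```

Of course, with only counts this model can never generalize to word pairs it hasn't seen; that gap is exactly what neural LMs address.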

read more

Vector embeddings

These seemed like the core ideas, so I wanted to clarify them conceptually. "Embeddings" emphasizes the notion of representing data in a meaningful and structured way, while "vectors" refers to the numerical representation itself. 'Vector embeddings' are a way to represent different data types (like words, sentences, articles, etc.) as points in a multidimensional space. Somewhat regrettably, both...
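A minimal sketch of the "points in a multidimensional space" idea, using hand-made 3-dimensional vectors (the values are hypothetical, purely for illustration; real embeddings come from trained models and have hundreds of dimensions):

```python
import math

# Toy 3-dimensional "embeddings" with made-up values. The point is only
# that semantically similar items should end up as nearby points.
embeddings = {
    "cat":   [0.90, 0.80, 0.10],
    "dog":   [0.85, 0.75, 0.20],
    "stock": [0.10, 0.20, 0.95],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

sim_cat_dog = cosine_similarity(embeddings["cat"], embeddings["dog"])
sim_cat_stock = cosine_similarity(embeddings["cat"], embeddings["stock"])
# "cat" sits much closer to "dog" than to "stock" in this space.
```

Cosine similarity is the usual way distance-as-meaning is measured in these spaces, which is why "nearby points" translates to "related concepts".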

read more

When LLM experts say “We don’t know how”

I recently heard Jeff Bezos briefly talk about his views on LLMs here. Less than a minute into the conversation, he said something that struck a chord with me: LLMs in their current form are not inventions, they are discoveries. He followed that up with "we are constantly surprised by their capabilities and they are not really engineered objects". That quote resonated because that we-don't-know...

read more