2.1 Perceptron
Perceptron = the simplest artificial neuron: it takes multiple inputs, multiplies them by weights, adds a bias, and turns the result into a yes/no (or score) output. Inputs: a vector of features (x1, x2, ..., xn). Parameters: one weight per input (w1, w2, ..., wn). An...
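The description above can be sketched in a few lines of Python. This is a minimal illustration of the weighted-sum-plus-bias idea, not code from the post; the hand-picked weights for an AND gate are my own example.

```python
def perceptron(x, w, b):
    """Return 1 if the weighted sum of inputs plus the bias is positive, else 0."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b  # w · x + b
    return 1 if s > 0 else 0

# Example: a 2-input perceptron acting as an AND gate,
# with weights and bias chosen by hand (illustrative values).
w, b = [1.0, 1.0], -1.5
print(perceptron([1, 1], w, b))  # fires only when both inputs are 1
print(perceptron([1, 0], w, b))
```

The step function here makes the yes/no version; swapping it for a sigmoid or identity gives the "score" variant.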
Redesigning Apprenticeship for the AI Era
I first heard Ethan Mollick on The Ezra Klein Show in April 2024 (“How Should I Be Using A.I. Right Now?”). He offered sensible, practical ways to use AI without the hype. Shortly after, I read Co-Intelligence and have followed his writing and talks since. In a recent...
Faster writing, different thinking with AI Voice Dictation
AI voice dictation is having a moment. These tools do more than transcribe—they read context, add punctuation, and learn your style. Many creators say they work two to three times faster. Two weeks ago I started using Wispr Flow Pro. Here is what I found. The Good...
Metaprompting
Dharmesh’s post made me realize there’s a name for something I’ve been doing implicitly for a while—using AI to help me write better prompts. Strictly speaking, that’s AI-assisted prompt refinement. There’s a closely related idea called metaprompting—writing prompts...
Four Weekends Building Munshi: Notes on Product Thinking and AI Development
It happens to most of us multiple times a week: someone emails asking for a good time to meet, and before you know it, you're stuck in a back-and-forth scheduling spiral. It's a mundane friction that adds up. Four weekends ago, I decided to vibe-code my way to a...
Think about local minima in thousands of dimensions
When I first learned about Gradient Descent about two years ago, I pictured it in the most obvious 3D way: two input variables as the x and y axes of a plane, with the loss as the third (z) axis. In terms of 'local minima', I imagined it as the...
How Diffusion Models Power AI Videos: An Incredible Visual Explanation
I first wrapped my head around diffusion models in 2023, thanks to the MIT 6.S191 lecture on 'Deep Learning New Frontiers'. The idea of reversing noise through iterative denoising just clicked for me—it reminded me of how our brains pick out shapes and objects in clouds or random mosaics....
Three Years of Learning AI: Resources That Shaped My Intuition
This weekend I finally finished reading Why Machines Learn: The Elegant Math Behind AI (by Anil Ananthaswamy). It took me seven months—an unusually long time for a 500-page book. But the detour was worth it: the book kept sending me down side paths, like brushing up...
How AI helped me talk (and listen) to >2000 pages of medical content
Earlier this year, I began researching mitral valve surgery (specifically, valve repair vs. replacement) to help someone close to me. This experience introduced me to Retrieval-Augmented Generation (RAG) tools, which made navigating such a complex topic much easier....