Dot Products

Conceptually, for two vectors x and y, x.y is defined as the magnitude of x multiplied by the projection of y onto x (think of it as the shadow cast by y onto x). If x and y are at right angles (orthogonal), x.y will be zero, regardless of the length of either of them. Complex stuff. The set of weights in a neuron is nothing but a vector (w1, w2, ...). That weight vector is orthogonal to the line that is...

read more
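A minimal sketch of the idea in plain Python (no libraries assumed): the dot product is the sum of pairwise products, and for orthogonal vectors it comes out to zero no matter how long either vector is.

```python
def dot(x, y):
    """Dot product: sum of the pairwise products of the components."""
    return sum(a * b for a, b in zip(x, y))

# Two perpendicular 2-D vectors: one along each axis.
x = [3.0, 0.0]
y = [0.0, 5.0]
print(dot(x, y))            # 0.0 — orthogonal, regardless of lengths
print(dot([1, 2], [3, 4]))  # 11
```

Scaling either orthogonal vector changes its magnitude but not the result: the projection (the "shadow") stays zero.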

Vectors

Vectors have two properties: 1. magnitude (length), 2. direction. From a computer science perspective, a vector is just an ordered list of numbers. Vectors can be added and multiplied by a scalar (= scaled). A key operation on vectors is the dot product. Conceptually, the dot product a.b is defined as the magnitude of vector a multiplied by the projection of vector b onto a. Projection can be thought of as the "shadow...

read more
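The two properties and the basic operations can be sketched in a few lines of plain Python (treating a vector as an ordered list of numbers, as the post describes):

```python
import math

def add(a, b):
    """Component-wise vector addition."""
    return [ai + bi for ai, bi in zip(a, b)]

def scale(a, c):
    """Multiply a vector by a scalar — this changes its magnitude, not its direction."""
    return [c * ai for ai in a]

def magnitude(a):
    """Length of the vector (Euclidean norm)."""
    return math.sqrt(sum(ai * ai for ai in a))

v = [3.0, 4.0]
print(magnitude(v))        # 5.0 (the classic 3-4-5 triangle)
print(add(v, [1.0, 1.0]))  # [4.0, 5.0]
print(scale(v, 2.0))       # [6.0, 8.0]
```

Note that scaling by 2 doubles the magnitude while leaving the direction unchanged, which is exactly the "multiplied (= scaled)" operation above.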

Layers

Layers = groups of perceptrons (see [[Perceptron]]) stacked so that each layer's outputs become the next layer's inputs, letting the network learn increasingly abstract features. Stacked/layered perceptrons create a feed-forward network (i.e. going from an input to an output layer, never backwards). The layers between input and output are called hidden layers because they are the intermediate...

read more

Perceptron

Perceptron = the simplest artificial neuron: it takes multiple inputs, multiplies them by weights, adds a bias, and turns the result into a yes/no (or score) output.

- Inputs: a vector of features (x1, x2, ..., xn)
- Parameters: one weight per input (w1, w2, ..., wn), plus an overall bias (b) → sometimes written as w0 for a neutral input
- Computation: a weighted sum, then a non-linearity (activation function) that...

read more
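The computation described above fits in a few lines of Python. The weights below are a hypothetical hand-picked example (they implement logical AND on two binary inputs), just to show the weighted-sum-plus-bias-plus-threshold recipe:

```python
def perceptron(x, w, b):
    """Weighted sum of inputs plus bias, squashed to a yes/no by a step function."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b  # weighted sum + bias
    return 1 if z > 0 else 0                      # step activation

# Hypothetical weights: with w = [1, 1] and b = -1.5, the neuron only fires
# when both inputs are 1 — i.e. it computes logical AND.
w, b = [1.0, 1.0], -1.5
print([perceptron([a, c], w, b) for a in (0, 1) for c in (0, 1)])  # [0, 0, 0, 1]
```

The bias b plays the role of w0 with a neutral input fixed at 1: it shifts how large the weighted sum must be before the neuron fires.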

Exploring the Basics: Biological vs. Artificial Neurons

Alright, OpenAI o1 is out. If you are anything like me, you first chuckled at the description that it was "designed to spend more time thinking before they respond". But once I delved deeper, it quickly became mind-blowing. (By the way, Ethan Mollick offers an excellent explanation of the power of dedicating more computational resources to “thinking.”) Developments like this deepen my admiration...

read more

Vector embeddings

These seemed like the core ideas, so I wanted to clarify them conceptually. "Embeddings" emphasizes the notion of representing data in a meaningful and structured way, while "vectors" refers to the numerical representation itself. "Vector embeddings" is a way to represent different data types (like words, sentences, articles, etc.) as points in a multidimensional space. Somewhat regrettably, both...

read more
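A minimal sketch of the "points in a multidimensional space" idea, using made-up 3-D vectors (real embeddings have hundreds of dimensions and come from a trained model): closeness between points is typically measured with cosine similarity.

```python
import math

def cosine_similarity(u, v):
    """Angle-based closeness of two embedding vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy, hand-made embeddings — purely illustrative, not from any real model.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # close to 1: similar
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # much lower: dissimilar
```

The point is only that semantically related items end up near each other in the space, so a numeric comparison of the vectors stands in for a comparison of the meanings.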