
Part 1 of 2
Lecture 1: Micrograd & Backpropagation
Build an autograd engine from scratch. Covers derivatives, computation graphs, the chain rule, and training a multi-layer perceptron with gradient descent.
February 15, 2026
Watch workshop
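To give a flavor of what the lecture builds, here is a minimal sketch of a scalar autograd engine in the spirit of micrograd: each `Value` records its inputs in a computation graph, and `backward()` walks that graph in reverse topological order applying the chain rule. This is an illustrative reduction, not the workshop's actual code.

```python
class Value:
    """A scalar that tracks how it was computed, so gradients can flow back."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(a+b)/da = d(a+b)/db = 1, scaled by upstream gradient
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# d(a*b + a)/da = b + 1 = 4, d(a*b + a)/db = a = 2
a, b = Value(2.0), Value(3.0)
c = a * b + a
c.backward()
print(a.grad, b.grad)  # 4.0 2.0
```

Training an MLP with gradient descent then amounts to repeating: forward pass, `backward()`, and nudging each parameter by `-lr * grad`.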

Build a character-level language model. Covers bigram statistics, maximum likelihood estimation, softmax, and training a neural network to predict the next character.
February 22, 2026
Watch workshop
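The bigram-statistics starting point of this lecture can be sketched in a few lines: count adjacent character pairs over a corpus, then normalize the counts to get a maximum-likelihood estimate of the next-character distribution. The tiny word list here is a made-up placeholder, not the workshop's dataset.

```python
# Hypothetical tiny corpus; '.' marks the start and end of each word.
words = ["emma", "ava", "mia"]

counts = {}
for w in words:
    chars = ['.'] + list(w) + ['.']
    for c1, c2 in zip(chars, chars[1:]):
        counts[(c1, c2)] = counts.get((c1, c2), 0) + 1

def prob(c1, c2):
    # Maximum-likelihood estimate: count(c1 -> c2) / count(c1 -> anything)
    total = sum(n for (a, _), n in counts.items() if a == c1)
    return counts.get((c1, c2), 0) / total

print(prob('.', 'a'))  # 1/3 of these words start with 'a'
```

The neural-network version of the lecture replaces this counting table with learned logits passed through a softmax, trained to maximize the same likelihood.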