An expository talk on neural network training (progressive sharpening, edge of stability, self-stabilization), based on two papers: "Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability" by Damian et al. and "Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability" by Cohen et al.