Today's AI narrative is anchored in scale, and modern models are far from textbook statistical modeling. And yet the plague of hallucinations shows that statistical concepts, such as uncertainty, still matter. I will discuss uncertainty quantification for a black-box classifier: in particular, how errors can be decomposed, connecting epistemic and calibration error, and the corresponding estimators. I will show how, by roping in a bit of decision theory, these fairly theoretical tools can be used to build better AI systems.
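To give a flavor of the calibration concepts involved, here is a minimal sketch of the standard binned expected calibration error (ECE); it illustrates the general idea of measuring the gap between a classifier's reported confidence and its empirical accuracy, and is not the specific estimator discussed in the talk.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted average gap between mean confidence
    and empirical accuracy within each confidence bin.

    Illustrative only; not the talk's specific estimator.
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    n = len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        # Assign each prediction to a confidence bin (right-inclusive).
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()        # empirical accuracy in bin
            conf = confidences[mask].mean()   # mean confidence in bin
            ece += mask.sum() / n * abs(acc - conf)
    return ece

# Toy example: an overconfident classifier reporting ~0.9 confidence
# while being right only ~60% of the time, yielding a large ECE.
rng = np.random.default_rng(0)
conf = rng.uniform(0.8, 1.0, size=1000)     # reported confidences
correct = rng.uniform(size=1000) < 0.6      # ~60% true accuracy
print(expected_calibration_error(conf, correct))
```

The toy data are hypothetical; the point is that a large confidence-accuracy gap surfaces directly as a high ECE.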