Natural language understanding models are trained on a sample of the real-world situations they may encounter. Commonsense and world knowledge, language, and reasoning skills can help them address unseen situations sensibly. In this talk I will discuss two lines of work, addressing knowledge and reasoning respectively. I will first present a method for discovering relevant knowledge that is unstated but may be required for solving a particular problem, through a process of asking information-seeking questions. I will then discuss nonmonotonic reasoning in natural language, a core human reasoning ability that has been studied in classical AI but mostly overlooked in modern NLP. I will talk about several recent papers addressing abductive reasoning (inferring plausible explanations), counterfactual reasoning (asking "what if?"), and defeasible reasoning (updating beliefs given additional information). Finally, I will discuss open problems and future directions in building NLP models with commonsense reasoning abilities.