Dialogue state tracking

• Powerful language models like GPT[1] have demonstrated impressive few-shot learning ability even without fine-tuning
• Madotto et al.[2] applied GPT-2 by priming the model with examples for the language understanding, state tracking, dialogue policy, and language generation tasks respectively
• Tasks can be described with questions: dialogue state tracking can be cast as a question answering (QA) or machine reading (MR) problem[3],[4]

[1] Language models are unsupervised multitask learners, 2019
[2] Language models as few-shot learner for task-oriented dialogue systems, 2020
[3] Zero-shot generalization in dialog state tracking through generative question answering, EACL 2021
[4] From machine reading comprehension to dialogue state tracking: Bridging the gap, ACL 2020
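To make the priming idea concrete, here is a minimal illustrative sketch (not code from the cited papers) of framing dialogue state tracking as QA: a few in-context (dialogue, question, answer) examples are concatenated into a prompt, and a language model is expected to complete the answer for the target dialogue. The function name and example dialogues are hypothetical.

```python
def build_dst_qa_prompt(examples, dialogue, slot_question):
    """Build a few-shot prompt that casts dialogue state tracking as QA.

    Each in-context example supplies a dialogue, a slot question, and the
    gold answer; the target dialogue ends with an open "A:" for the
    language model to complete.
    """
    parts = []
    for ex in examples:
        parts.append(
            f"Dialogue: {ex['dialogue']}\n"
            f"Q: {ex['question']}\n"
            f"A: {ex['answer']}\n"
        )
    parts.append(f"Dialogue: {dialogue}\nQ: {slot_question}\nA:")
    return "\n".join(parts)


# Hypothetical in-context examples for the "hotel price range" slot
examples = [
    {"dialogue": "User: I need a cheap hotel in the north.",
     "question": "What is the hotel price range?",
     "answer": "cheap"},
    {"dialogue": "User: Book an expensive restaurant downtown.",
     "question": "What is the restaurant price range?",
     "answer": "expensive"},
]

prompt = build_dst_qa_prompt(
    examples,
    "User: Find me a moderate guesthouse near the station.",
    "What is the hotel price range?",
)
print(prompt)
```

The completion the model generates after the final "A:" (e.g. "moderate") is taken as the slot value, so no task-specific fine-tuning is required.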