As states begin implementing through-year assessment models, assessment providers must consider how to create a comprehensive summative result from multiple discrete testing occasions. In this talk we describe a process for evaluating a wide range of summative scoring models (e.g., item response theory, diagnostic classification, and innovative hybrid models), based on our work on Pathways for Instructionally Embedded Assessment (PIE), a Competitive Grants for State Assessments project with the Missouri Department of Elementary and Secondary Education. The process involves not only a psychometric evaluation of candidate summative models but also careful consideration of whether the resulting scores support the assessment claims made in a theory of action and can be used to generate indicators for a state accountability model (e.g., growth indicators). We then illustrate how summative scores are produced using diagnostic classification models for the Dynamic Learning Maps Alternate Assessment System, a through-year assessment that has been operational since 2015 and fully meets federal peer review requirements. We discuss the specific assessment claims and how data collected throughout the year are used to provide both real-time instructional feedback and summative results, including an overall performance level for use in accountability systems.