Slide 1

Applied diagnostic classification modeling with the R package measr
W. Jake Thompson, Ph.D.

Slide 2

Diagnostic classification modeling

Slide 3

Benefits of diagnostic measurement

- Provides statistical classifications on a predefined set of knowledge, skills, and understandings (i.e., attributes)
- Actionable feedback on specific skills that have been acquired and ones that still need additional instruction
- Valid and reliable results with fewer items, reducing the total time needed for assessment

Slide 4

Barriers to adoption

- Lack of practical guidance for applied researchers and psychometricians
- Existing software is:
  - Expensive
  - Inaccessible
  - Focused on specific DCM subtypes that may have limited applicability
- Scarce recommendations for evaluating models once they are estimated

Slide 5

No content

Slide 6

Diagnostic modeling with measr

- Methodological innovation project funded by the Institute of Education Sciences
- R package interfacing with the Stan probabilistic programming language to provide a fully Bayesian estimation procedure
- Focus on a general DCM (the log-linear cognitive diagnosis model; LCDM) to support a variety of applications
- Other DCM subtypes are also supported

Slide 7

Installing measr

- Initial release to CRAN planned in the coming months
- Development version is stable and available on GitHub

# install.packages("remotes")
remotes::install_github("wjakethompson/measr")
library(measr)
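Once the package reaches CRAN, installation should reduce to the standard call. A minimal sketch, assuming the planned release goes ahead:

# After the CRAN release (not available at the time of this talk)
install.packages("measr")
library(measr)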

Slide 8

Example data sets

- Two example data sets included for example analyses
  - Examination for the Certificate of Proficiency in English (ECPE; Templin & Hoffman, 2013)
  - Macready & Dayton (1977) multiplication data

?ecpe
?mdm
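Each example ships as a response data set plus a matching Q-matrix. A quick way to follow along is to print the ECPE objects used on the next slides (ecpe_data and ecpe_qmatrix appear later in this deck; the mdm_* names are an assumption based on the same naming pattern):

library(measr)
ecpe_data     # item responses: one row per respondent
ecpe_qmatrix  # Q-matrix: one row per item, one column per attribute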

Slide 9

The ECPE data set

- 2,922 respondents; 28 items measuring 3 attributes

ecpe_qmatrix
#> # A tibble: 28 × 4
#>   item_id morphosyntactic cohesive lexical
#>   <chr>             <int>    <int>   <int>
#> 1 E1                    1        1       0
#> 2 E2                    0        1       0
#> 3 E3                    1        0       1
#> 4 E4                    0        0       1
#> 5 E5                    0        0       1
#> # ℹ 23 more rows
#> # ℹ Use `print(n = ...)` to see more rows

Slide 10

Model estimation

- Specify a data set and Q-matrix
- Optional arguments for refining the estimation process (reasonable defaults provided)
- For a complete list of options, see ?measr_dcm

ecpe <- measr_dcm(data = ecpe_data, qmatrix = ecpe_qmatrix,
                  resp_id = "resp_id", item_id = "item_id",
                  type = "lcdm",
                  method = "mcmc", backend = "rstan",
                  chains = 4, warmup = 1000, iter = 2000,
                  cores = 4)
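Full MCMC with four chains can take a while. A minimal sketch of a quicker first pass, assuming method = "optim" is supported alongside "mcmc" (see ?measr_dcm) for point estimates without posterior uncertainty:

# Faster point estimates via Stan's optimizer (no posterior draws)
ecpe_optim <- measr_dcm(data = ecpe_data, qmatrix = ecpe_qmatrix,
                        resp_id = "resp_id", item_id = "item_id",
                        type = "lcdm", method = "optim")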

Slide 11

Model estimation: Parameter estimates

- Structural parameters give the base rate of membership in each of the 2^3 = 8 attribute profiles (displayed as posterior mean ± standard deviation)

measr_extract(ecpe, what = "strc_param")
#> # A tibble: 8 × 2
#>   class   estimate
#>   <chr>   <rvar[1d]>
#> 1 [0,0,0] 0.2992 ± 0.0174
#> 2 [1,0,0] 0.0116 ± 0.0063
#> 3 [0,1,0] 0.0155 ± 0.0107
#> 4 [0,0,1] 0.1288 ± 0.0195
#> 5 [1,1,0] 0.0095 ± 0.0055
#> 6 [1,0,1] 0.0185 ± 0.0102
#> 7 [0,1,1] 0.1725 ± 0.0198
#> 8 [1,1,1] 0.3445 ± 0.0168
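The structural parameters are only part of the model; the item-level LCDM parameters (intercepts and attribute main/interaction effects) should be retrievable the same way. A sketch, assuming what = "item_param" is among the supported options in ?measr_extract:

measr_extract(ecpe, what = "item_param")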

Slide 12

Model evaluation

- Measures of model fit: M2 (Liu et al., 2016); PPMC (Park et al., 2015)
- Information criteria for model comparisons: LOO (Vehtari et al., 2017); WAIC (Watanabe, 2010)
- Reliability indices: classification consistency and accuracy (Johnson & Sinharay, 2018)

# Add model fit information
ecpe <- add_fit(ecpe, method = c("m2", "ppmc"))

# Add information criteria
ecpe <- add_criterion(ecpe, criterion = "waic")

# Add reliability information
ecpe <- add_reliability(ecpe)
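These add_*() functions store their results inside the model object, where measr_extract() can retrieve them later, as the next slides show. A minimal sketch, assuming the criterion name doubles as the extract option:

measr_extract(ecpe, what = "waic")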

Slide 13

Extracting model evaluation elements

- Use measr_extract() to pull out summaries of evaluation elements that have been added to the model

measr_extract(ecpe, what = "m2")
#> M2 = 513.051, df = 325, p = 0
#> RMSEA = 0.014, CI: [0.012, 0.016]
#> SRMSR = 0.032

Slide 14

Extracting model evaluation elements

- Use measr_extract() to pull out summaries of evaluation elements that have been added to the model

measr_extract(ecpe, what = "odds_ratio_flags", ppmc_interval = 0.95)
#> # A tibble: 77 × 7
#>   item_1 item_2 obs_or ppmc_mean `2.5%` `97.5%`    ppp
#>   <chr>  <chr>   <dbl>     <dbl>  <dbl>   <dbl>  <dbl>
#> 1 E1     E13      1.80      1.43   1.14    1.78 0.0195
#> 2 E1     E17      2.02      1.40   1.04    1.86 0.005
#> 3 E1     E26      1.61      1.25   1.01    1.52 0.005
#> 4 E1     E28      1.86      1.41   1.10    1.76 0.008
#> # ℹ 73 more rows
#> # ℹ Use `print(n = ...)` to see more rows
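The flagged table shows only item pairs whose observed odds ratio falls outside the 95% posterior predictive interval; the unfiltered summary for all pairs should be available under a parallel name. A sketch, assuming what = "odds_ratio" is also supported:

measr_extract(ecpe, what = "odds_ratio")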

Slide 15

Extracting model evaluation elements

- Use measr_extract() to pull out summaries of evaluation elements that have been added to the model
- For all options, see ?measr_extract

measr_extract(ecpe, what = "classification_reliability")
#> # A tibble: 3 × 3
#>   attribute       accuracy consistency
#>   <chr>              <dbl>       <dbl>
#> 1 morphosyntactic    0.896       0.835
#> 2 cohesive           0.852       0.808
#> 3 lexical            0.916       0.857
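For applied score reporting, respondent-level classifications are the end product. A sketch, assuming the add_respondent_estimates() helper and the corresponding extract option behave as in the development documentation:

# Add posterior probabilities of attribute mastery for each respondent
ecpe <- add_respondent_estimates(ecpe)
measr_extract(ecpe, what = "attribute_prob")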

Slide 16

Future development

- Current version is stable, but we continue to add new features
- Upcoming features:
  - Additional DCM subtypes
  - Tools for evaluating attribute hierarchies
  - Case studies
- Submit requests or feedback on GitHub: https://github.com/wjakethompson/measr/issues

Slide 17

Where to learn more?

- https://measr.info
- Upcoming training sessions and workshops (materials will be made available on the project website):
  - StanCon 2023 (June 20–23, 2023; St. Louis, MO)
  - Achievement & Assessment Institute's Summer Research Methods Camp (July 2023; virtual and asynchronous)

Slide 18

Thank you! Get in touch!

measr.info
[email protected]
@wjakethompson

The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305D210045 to the University of Kansas. The opinions expressed are those of the authors and do not represent the views of the Institute or the U.S. Department of Education.