
Applied diagnostic classification modeling with the R package measr


Diagnostic assessments provide reliable and actionable results with shorter test lengths. However, these methods are not often used in applied research, due in part to limited and inaccessible software. In this presentation we describe a new, free software package, measr, that can easily estimate and evaluate diagnostic models.

Jake Thompson

March 29, 2023

Transcript

  1. Applied diagnostic
    classification modeling with
    the R package measr
    W. Jake Thompson, Ph.D.


  2. Diagnostic classification
    modeling


  3. Provides statistical classifications on a predefined set of knowledge,
    skills, and understandings (i.e., attributes)
    Actionable feedback on specific skills that have been acquired and
    ones that still need additional instruction
    Valid and reliable results with fewer items, reducing the total time
    needed for assessment
    Benefits of diagnostic measurement


  4. Lack of practical guidance for applied researchers and
    psychometricians
    Existing software is:
    Expensive
    Inaccessible
    Focused on specific DCM subtypes that may have limited applicability
    Scarce recommendations for evaluating models once they are
    estimated
    Barriers to adoption


  5.

  6. Methodological innovation project funded by the Institute of
    Education Sciences
    R package interfacing with the Stan probabilistic programming
    language to provide a fully Bayesian estimation procedure
    Focus on a general DCM (the loglinear cognitive diagnostic model;
    LCDM) to support a variety of applications
    Other DCM subtypes are also supported
    Diagnostic modeling with measr


  7. Initial release to CRAN planned in the coming months
    Development version is stable and available on GitHub
    Installing measr
    # install.packages("remotes")
    remotes::install_github("wjakethompson/measr")
    library(measr)
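    Once the planned CRAN release is published, the usual install route should
    also work; a minimal sketch, assuming the package keeps the name measr on
    CRAN:
    # Install from CRAN after the initial release (not yet available)
    install.packages("measr")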


  8. Two data sets are included for example analyses
    Examination for Certification of Proficiency in English (ECPE; Templin &
    Hoffman, 2013)
    Macready & Dayton (1977) multiplication data
    Example data sets
    ?ecpe
    ?mdm


  9. 2,922 respondents; 28 items measuring 3 attributes
    The ECPE data set
    ecpe_qmatrix
    #> # A tibble: 28 × 4
    #>   item_id morphosyntactic cohesive lexical
    #> 1 E1                    1        1       0
    #> 2 E2                    0        1       0
    #> 3 E3                    1        0       1
    #> 4 E4                    0        0       1
    #> 5 E5                    0        0       1
    #> # ℹ 23 more rows
    #> # ℹ Use `print(n = ...)` to see more rows
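    Before estimating, the response data referenced on the next slide can be
    inspected with base R; a minimal sketch, assuming ecpe_data is available
    once measr is loaded (its exact shape is not shown in the deck):
    # Quick look at the ECPE response data used in the estimation call
    dim(ecpe_data)     # number of rows and columns
    head(ecpe_data)    # first few item responses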


  10. Specify a data set and Q-matrix
    Optional arguments for refining the estimation process (reasonable
    defaults provided)
    For a complete list of options, see ?measr_dcm
    Model estimation
    # Estimate the LCDM with fully Bayesian MCMC via the rstan backend
    ecpe <- measr_dcm(data = ecpe_data, qmatrix = ecpe_qmatrix,
                      resp_id = "resp_id", item_id = "item_id",
                      type = "lcdm", method = "mcmc", backend = "rstan",
                      chains = 4, warmup = 1000, iter = 2000, cores = 4)
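    Because other DCM subtypes are also supported (slide 6), a constrained
    model can presumably be requested through the type argument; a sketch,
    where the "dina" option value is an assumption:
    # Hypothetical: fit the DINA model instead of the general LCDM
    ecpe_dina <- measr_dcm(data = ecpe_data, qmatrix = ecpe_qmatrix,
                           resp_id = "resp_id", item_id = "item_id",
                           type = "dina", method = "mcmc", backend = "rstan",
                           chains = 4, warmup = 1000, iter = 2000, cores = 4)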


  11. Model estimation: Parameter estimates
    measr_extract(ecpe, what = "strc_param")
    #> # A tibble: 8 × 2
    #>   class   estimate
    #> 1 [0,0,0] 0.2992 ± 0.0174
    #> 2 [1,0,0] 0.0116 ± 0.0063
    #> 3 [0,1,0] 0.0155 ± 0.0107
    #> 4 [0,0,1] 0.1288 ± 0.0195
    #> 5 [1,1,0] 0.0095 ± 0.0055
    #> 6 [1,0,1] 0.0185 ± 0.0102
    #> 7 [0,1,1] 0.1725 ± 0.0198
    #> 8 [1,1,1] 0.3445 ± 0.0168
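    Item-level parameters can presumably be pulled the same way; a sketch,
    where the "item_param" option name is an assumption parallel to
    "strc_param" above:
    # Assumed option name for the LCDM item intercepts and effects
    measr_extract(ecpe, what = "item_param")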


  12. Model evaluation
    Measures of model fit
    M2 (Liu et al., 2016); PPMC (Park et al., 2015)
    Information criteria for model comparisons
    LOO (Vehtari et al., 2017); WAIC (Watanabe, 2010)
    Reliability indices
    Classification consistency and accuracy (Johnson & Sinharay, 2018)
    # Add model fit information
    ecpe <- add_fit(ecpe, method = c("m2", "ppmc"))
    # Add information criteria
    ecpe <- add_criterion(ecpe, criterion = "waic")
    # Add reliability information
    ecpe <- add_reliability(ecpe)
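    Because LOO is also listed as a supported criterion, both criteria could
    likely be requested together and used to compare competing models; a
    sketch, where the vector form of criterion, the loo_compare() call, and the
    ecpe_dina fit from the earlier sketch are assumptions:
    # Assumed: criterion accepts a vector of indices
    ecpe <- add_criterion(ecpe, criterion = c("loo", "waic"))
    # Assumed: fitted measr models can be compared with loo_compare()
    ecpe_dina <- add_criterion(ecpe_dina, criterion = "loo")
    loo_compare(ecpe, ecpe_dina, criterion = "loo")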


  13. Use measr_extract() to pull out summaries of evaluation
    elements that have been added to the model
    Extracting model evaluation elements
    measr_extract(ecpe, what = "m2")
    #> M2 = 513.051, df = 325, p = 0
    #> RMSEA = 0.014, CI: [0.012,0.016]
    #> SRMSR = 0.032


  14. Use measr_extract() to pull out summaries of evaluation
    elements that have been added to the model
    Extracting model evaluation elements
    measr_extract(ecpe, what = "odds_ratio_flags", ppmc_interval = 0.95)
    #> # A tibble: 77 × 7
    #>   item_1 item_2 obs_or ppmc_mean `2.5%` `97.5%`    ppp
    #> 1 E1     E13      1.80      1.43   1.14    1.78 0.0195
    #> 2 E1     E17      2.02      1.40   1.04    1.86 0.005
    #> 3 E1     E26      1.61      1.25   1.01    1.52 0.005
    #> 4 E1     E28      1.86      1.41   1.10    1.76 0.008
    #> # ℹ 73 more rows
    #> # ℹ Use `print(n = ...)` to see more rows
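    The flagged pairs above are those whose observed odds ratios fall outside
    the 95% posterior predictive interval; the full set of item pairs can
    presumably be pulled by dropping the _flags suffix (the "odds_ratio" option
    name is an assumption):
    # Assumed option name; returns all item pairs, not only flagged ones
    measr_extract(ecpe, what = "odds_ratio")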


  15. Use measr_extract() to pull out summaries of evaluation
    elements that have been added to the model
    For all options, see ?measr_extract
    Extracting model evaluation elements
    measr_extract(ecpe, what = "classification_reliability")
    #> # A tibble: 3 × 3
    #>   attribute       accuracy consistency
    #> 1 morphosyntactic    0.896       0.835
    #> 2 cohesive           0.852       0.808
    #> 3 lexical            0.916       0.857
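    Applied users typically also want respondent-level results; a predict()
    method is a reasonable way to obtain attribute mastery probabilities,
    though the call and its summary argument shown here are assumptions:
    # Assumed interface for respondent-level mastery probabilities
    ecpe_preds <- predict(ecpe, summary = TRUE)
    ecpe_preds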


  16. Current version is stable, but we continue to add new features
    Upcoming features:
    Additional DCM subtypes
    Tools for evaluating attribute hierarchies
    Case studies
    Submit requests or feedback on GitHub:
    https://github.com/wjakethompson/measr/issues
    Future development


  17. https://measr.info
    Upcoming training sessions and workshops (materials will be made
    available on the project website):
    StanCon 2023 (June 20–23, 2023; St. Louis, MO)
    Achievement & Assessment Institute's Summer Research Methods
    Camp (July 2023; virtual and asynchronous)
    Where to learn more?


  18. Thank you!
    Get in touch!
    The research reported here was supported by the Institute of
    Education Sciences, U.S. Department of Education, through
    Grant R305D210045 to the University of Kansas. The opinions
    expressed are those of the authors and do not represent the views
    of the Institute or the U.S. Department of Education.
    measr.info
    [email protected]
    @wjakethompson
    @wjakethompson
