
A checklist for reading papers critically

KRSK
July 14, 2023

A list of the things I pay attention to when reading papers, mainly in medicine, epidemiology, and the social sciences, whether peer reviewing or looking for knowledge gaps (i.e., weaknesses in prior studies).

Transcript

1. Question:
   • Is the research novel?
   • Is it important to public health and to addressing health disparities?
   • Do the study's methods actually answer the question?
   • Watch out for any disconnect between the methods and the question: this is surprisingly common.
2. Sampling:
   • Consider who is included and who is excluded (i.e., "who's on the map and who's off the map").
   • Consider the implications for the target population and for health disparities.
   • Look for selection bias.
3. Measurement:
   • Assess the concordance between the research question/hypothesis and what is actually measured.
   • Check the validity and accuracy of the measurements.
4. Causal Estimand:
   • Is the study estimating the ATE (Average Treatment Effect), the CATE (Conditional Average Treatment Effect), the LATE (Local Average Treatment Effect), or something else? (The standard definitions are sketched after this list.)
   • What is the hypothetical intervention being considered?
   • Cumulative vs. incident treatment?
   • Time-fixed vs. time-varying treatment?
   • Are these choices explicit? Does the Discussion acknowledge them?
   • Is there any disconnect between the estimand and the discussion/interpretation?
5. Identification:
   • What are the assumptions? Are they plausible and explicitly acknowledged? (The canonical assumptions are written out after this list.)
   • Confounding:
     • Measurement error and residual confounding.
     • Unobserved confounders: is the remaining bias large, conditional on the adjusted covariates?
   • Selection bias:
     • Attrition (before or after the exposure?).
     • How is missing data handled?
   • (Differential) measurement error.
   • Check for an ambiguous "time zero".
6. Estimation:
   • Watch for potential model misspecification, such as unmodeled non-linearity (a quick check is sketched after this list).
7. Interpretation:
   • Look for evidence of confounding in Table 1.
   • Look for evidence of selection bias.
   • Watch for authors misinterpreting their own results:
     • treating absence of evidence as evidence of absence;
     • turning p-values into a false dichotomy of "significant" vs. "not significant" (a numerical illustration follows the list);
     • over-generalizing the results without considering the causal estimand and potential effect heterogeneity.
   • Consider potential mechanisms and policy implications.
   • Consider the implications for health disparities.
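For item 4, one common way to write these estimands uses potential-outcomes notation; this is textbook notation added for reference, not something from the slides, and the covariate vector X and instrument Z are my own labels:

```latex
% Y(a): potential outcome under treatment level a; X: baseline covariates;
% Z: a binary instrument; D(z): treatment actually taken when Z = z.
\begin{align*}
  \text{ATE}  &= E[\,Y(1) - Y(0)\,] && \text{average effect in the target population} \\
  \text{CATE} &= E[\,Y(1) - Y(0) \mid X = x\,] && \text{effect within the stratum } X = x \\
  \text{LATE} &= E[\,Y(1) - Y(0) \mid D(1) > D(0)\,] && \text{effect among the ``compliers''}
\end{align*}
```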
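For item 5, these are the canonical point-identification assumptions for a time-fixed treatment A with measured confounders L, written in standard causal-inference notation rather than anything taken from the deck:

```latex
% A: observed treatment; Y(a): potential outcome; L: measured confounders.
\begin{align*}
  &\text{Consistency:}     && Y = Y(A) \\
  &\text{Exchangeability:} && Y(a) \perp\!\!\!\perp A \mid L \quad \text{for all } a \\
  &\text{Positivity:}      && P(A = a \mid L = l) > 0 \quad \text{whenever } P(L = l) > 0 \\[4pt]
  &\text{Under these:}     && E[Y(a)] = E\big[\, E[Y \mid A = a, L] \,\big] \quad \text{(standardization / g-formula)}
\end{align*}
```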
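For item 6, a minimal sketch of one way to probe non-linearity: fit a linear term and a flexible spline and compare fit. The simulated data and the statsmodels/patsy calls are my illustration of the general idea, not the author's method.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated exposure-outcome data with a curved dose-response (illustration only).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 0.5 * x - 0.04 * x**2 + rng.normal(0, 1, 500)
df = pd.DataFrame({"x": x, "y": y})

# Linear specification vs. a flexible cubic B-spline (patsy's bs() inside the formula).
linear = smf.ols("y ~ x", data=df).fit()
spline = smf.ols("y ~ bs(x, df=4)", data=df).fit()

# A large AIC gap suggests the linear term is misspecified.
print(f"linear AIC: {linear.aic:.1f}   spline AIC: {spline.aic:.1f}")
```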
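For item 7, a small numerical illustration (made-up trial counts, not from any real study) of why a non-significant p-value is not evidence of no effect: the confidence interval can still cover effects that matter.

```python
import numpy as np
from scipy import stats

# Hypothetical two-arm trial: 12/100 events with treatment, 18/100 with control.
events = np.array([12, 18])
n = np.array([100, 100])
risk = events / n

# Risk difference with a Wald 95% confidence interval and two-sided p-value.
rd = risk[0] - risk[1]
se = np.sqrt(np.sum(risk * (1 - risk) / n))
ci_lo, ci_hi = rd - 1.96 * se, rd + 1.96 * se
p = 2 * stats.norm.sf(abs(rd / se))

print(f"risk difference = {rd:.3f}, 95% CI ({ci_lo:.3f}, {ci_hi:.3f}), p = {p:.2f}")
# p is about 0.23, yet the interval runs from roughly a 16-point reduction to a
# 4-point increase: "not significant" does not mean "no effect".
```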
