
Cognitive diagnosis models

December 10, 2012

As its name suggests, a cognitive diagnosis model aims at “diagnosing” which skills examinees have or do not have. This class of models has become very popular in recent years because it overcomes some standard limitations of summated scale scores derived from classical test theory or item response theory.

There is a detailed overview of CDM by DiBello and colleagues in the Handbook of Statistics, vol. 26: DiBello, L.V., Roussos, L.A., and Stout, W.F. (2007). Review of cognitively diagnostic assessment and a summary of psychometric models. In C.R. Rao & S. Sinharay (Eds.), Handbook of Statistics, Volume 26: Psychometrics (pp. 979–1030). Amsterdam, Netherlands: Elsevier. Mark J. Gierl and Jacqueline P. Leighton(1) also explain how cognitive models can be used together with psychometric models to account for a hierarchical processing of information, where “problem solving is assumed to require the careful processing of complex information using a relevant sequence of operations” (p. 1103).

The definitive reference is probably the book authored by Rupp and colleagues: Rupp, A.A., Templin, J., and Henson, R.A. (2010). Diagnostic Measurement: Theory, Methods, and Applications. Guilford Press.

A well-known model is the so-called DINA model for dichotomous items, which stands for deterministic-input, noisy-AND model. It relies on the idea that a particular examinee needs to master some skills (attributes) to endorse a given set of items. This relationship between items and attributes is operationalized in a $Q$-matrix whose binary elements $q_{jk}$ indicate whether mastery of the $k$th attribute is required by the $j$th item. We also consider a vector $\alpha_i = (\alpha_{i1}, \dots, \alpha_{iK})$ of binary indicators of subject $i$’s mastery of each of the $K$ attributes. In the DINA model, each item divides the population of examinees into two latent classes: those who have all the required attributes and those who don’t.(2) Lacking one required attribute for an item is the same as lacking all the required attributes, and the model assumes equal probability of success for all attribute vectors within each latent group, a condition that is relaxed in the generalized DINA model.(3)
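For the record (this formulation is not spelled out above, but follows de la Torre’s didactic presentation(5)), the DINA item response function is usually written with a slip parameter $s_j$ and a guessing parameter $g_j$ per item:

$$\eta_{ij} = \prod_{k=1}^{K} \alpha_{ik}^{q_{jk}}, \qquad P(X_{ij} = 1 \mid \alpha_i) = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}},$$

where $\eta_{ij} = 1$ if examinee $i$ masters all attributes required by item $j$, $s_j = P(X_{ij} = 0 \mid \eta_{ij} = 1)$, and $g_j = P(X_{ij} = 1 \mid \eta_{ij} = 0)$.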

At first sight, it shares some of the ideas of clustering or, better, latent class models. The idea of considering “item bundles” is not new: think of testlets, or the linear logistic test model(9–11) (LLTM; see the reminder below), which allows item properties to be taken into account, or its generalization, the random weights LLTM.(13) Other IRT models are listed in Table 2 (p. 997) of DiBello et al.’s chapter.
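As a quick reminder (standard notation, not in the original text; I write $\tau_k$ for the basic parameters to avoid a clash with the $\eta_{ij}$ above), the LLTM keeps the Rasch item response function but constrains each item difficulty $\beta_j$ to be a weighted sum of basic parameters attached to cognitive operations, with known weights $w_{jk}$ and a normalization constant $c$:

$$P(X_{ij} = 1 \mid \theta_i) = \frac{\exp(\theta_i - \beta_j)}{1 + \exp(\theta_i - \beta_j)}, \qquad \beta_j = \sum_{k} w_{jk}\, \tau_k + c.$$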

The $Q$-matrix in the DINA model specifies the set of latent traits necessary for each item, and it is comparable to the weight matrix in the LLTM (or the loadings in factor analysis). Importantly, it permits within-item multidimensionality (items requiring more than one attribute). The LLTM, however, is essentially used with unidimensional constructs, and item parameters are linked linearly in this log-linear approach. Cognitive diagnosis models like the DINA model are a kind of mixture model where several populations and several dimensions are considered at once. Only the latter provide a basis for individual feedback about skill mastery.
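To make the $Q$-matrix mechanics concrete, here is a minimal sketch in Python (NumPy only; the $Q$-matrix, attribute profiles, and slip/guessing values are made up for illustration) that computes the DINA latent responses and success probabilities defined above:

```python
import numpy as np

# Q-matrix: 4 items x 3 attributes (Q[j, k] = 1 if item j requires attribute k)
Q = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0],
              [0, 1, 1]])

# Attribute profiles for 3 examinees (alpha[i, k] = 1 if attribute k is mastered)
alpha = np.array([[1, 1, 1],
                  [1, 0, 1],
                  [0, 1, 1]])

# Slip (s) and guessing (g) parameters, one pair per item (arbitrary values)
s = np.array([0.10, 0.20, 0.10, 0.15])
g = np.array([0.20, 0.25, 0.20, 0.10])

# Latent response: eta[i, j] = prod_k alpha[i, k] ** Q[j, k], i.e. 1 iff
# examinee i masters every attribute required by item j
eta = np.all(alpha[:, None, :] >= Q[None, :, :], axis=2).astype(int)

# DINA item response function: P(X_ij = 1) = (1 - s_j)^eta_ij * g_j^(1 - eta_ij)
p = np.where(eta == 1, 1 - s, g)

print(eta)  # the two latent classes induced by each item
print(p)    # probability of a correct response
```

Note how an examinee lacking a single required attribute (second examinee, third item) falls back to the guessing probability $g_j$, exactly as one lacking all of them.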

Regarding existing software, here are a few links:

References

  1. Gierl, M.J. and Leighton, J.P. (2007). Linking cognitively-based models and psychometric methods. In C.R. Rao & S. Sinharay (Eds.), Handbook of Statistics, Volume 26: Psychometrics. Amsterdam, Netherlands: Elsevier.
  2. Henson, R. and Douglas, J. (2005). Test construction for cognitive diagnosis. Applied Psychological Measurement, 29(4), 262–277.
  3. de la Torre, J. (2011). The generalized DINA model framework. Psychometrika, 76, 179–199.
  4. de la Torre, J., & Douglas, J. (2004). Higher-order latent trait models for cognitive diagnosis. Psychometrika, 69, 333–353.
  5. de la Torre, J. (2009). DINA model and parameter estimation: A didactic. Journal of Educational and Behavioral Statistics, 34, 115–130.
  6. Huebner, A. (2010). An overview of recent developments in cognitive diagnostic computer adaptive assessments. Practical Assessment, Research & Evaluation, 15(3).
  7. Junker, B. and Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric IRT. Applied Psychological Measurement, 25, 258–272.
  8. Templin, J. and Henson, R. (2006). Measurement of psychological disorders using cognitive diagnosis models. Psychological Methods, 11, 287–305.
  9. Fischer, G.H. and Formann, A.K. (1982). Some applications of logistic latent trait models with linear constraints on the parameters. Applied Psychological Measurement, 6(4), 397–416.
  10. De Boeck, P. and Wilson, M. (2004). Explanatory Item Response Models: A Generalized Linear and Nonlinear Approach. Springer.
  11. Poinstingl, H. (2009). The linear logistic test model (LLTM) as the methodological foundation of item generating rules for a new verbal reasoning test. Psychology Science Quarterly, 51(2), 123–134.
  12. MacDonald, G. and Kromrey, G. (2011). Linear logistic test model: Using SAS® to simulate the decomposition of item difficulty by algorithm, sample size, cognitive component and time to convergence. SESUG, Paper ST-13.
  13. Rijmen, F. and de Boeck, P. (2002). The random weights linear logistic test model. Applied Psychological Measurement, 26(3), 271–285.
  14. DeCarlo, L.T. (2012). Recognizing uncertainty in the Q-matrix via a Bayesian extension of the DINA model. Applied Psychological Measurement, 36(6), 447–468.

See Also

» Testlet response theory
» Mokken scale analysis
» Random notes
» Dimensions or categories?
» Cronbach's alpha yet again