Cognitive Diagnostic Models and Informative Assessments

Cognitive Diagnostic Models
Recently I received a posting from Assessment Systems on the topic of cognitive diagnostic models in assessment (www.assess.com/what-are-cognitive-diagnostic-models/). This is a topic that assessment builders have had on our minds for quite some time but, due to limitations in assessment technology, have not been able to implement easily. Essentially, cognitive diagnostic modeling is the idea that assessment results should not be thought of as a single score, or even a collection of sub-scores. Instead of relying on a single score or broad sub-scores for diagnostic information, we look at the profile of strengths and weaknesses the entire assessment provides.

A deeper look at the concept of diagnostic models reveals underlying statistical analyses of computerized assessments, with models built by field testing the assessment on large numbers of students to distinguish knowing from guessing. For our purposes, we take a more pragmatic approach.

To be valid, reliable, and diagnostically valuable, informative assessments must be carefully constructed. One necessary practice is to plan the assessment carefully and document the inter-relationships among items in these categories:

Skill-knowledge connections – skill progressions, which are our concern in this post
Cognitive load of items – depth of knowledge
Level of performance – taxonomy of performance (knowledge to evaluation)

Summative assessments are not designed to provide sufficient information to guide instruction. Instead, summative assessments, like the End of Grade tests, are heavily weighted toward items that represent the end of a skill progression, sit at the upper end of depth of knowledge, and are at the application level of performance or above.

Taking the concept of cognitive diagnostic models a step further, it is possible to use both the correct AND the incorrect responses to items to construct an understanding of a student’s micro-strengths and micro-weaknesses. With this level of understanding of a student’s performance, it is then possible to address the student’s needs and enhance performance going forward. Dr. James Popham, in his book Everything School Leaders Need to Know About Assessment (2010) and at the Assessment Leadership Institute at Duke University in 2016, advocated this intentional model of assessment building.

So how does this look in practice?
The steps are:  

  1. Identify the CRITICAL, high-impact skills the student must demonstrate to show mastery of the content of the course segment that has been taught.
  2. Identify the prerequisite skills necessary for mastery-level performance of each critical skill.
  3. Select assessment items that sample the critical skills identified in step 1. The items should represent a span of the course. On a benchmark assessment, it may take a group of two or more items to assess a critical, high-impact skill.
  4. Examine the foils (the incorrect answer options) for each item and write the reason a student might select each foil. This is essentially an error analysis. Some foils may need to be re-written to identify missing prerequisite skills.
  5. Create an assessment “map” or matrix listing each item, its level of mastery, and the reason for each error; a sketch of such a map appears below.
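For teams that manage assessments in software, the map from step 5 can be kept as a simple data structure. Here is a minimal sketch in Python; the item IDs, skill numbers, and interpretation text are illustrative placeholders, not drawn from any particular assessment system:

```python
# A sketch of an assessment "map": each item records its critical skill
# and, for every answer option, the progression steps the option flags
# and the interpretation written during the error analysis (step 4).
# All identifiers and text below are illustrative placeholders.
assessment_map = {
    "item_1": {
        "critical_skill": "subtraction of mixed fractions",
        "options": {
            "A": {"correct": False, "skills_flagged": [1],
                  "interpretation": "added instead of subtracting"},
            "D": {"correct": True, "skills_flagged": [],
                  "interpretation": "mastery, no conversion needed"},
        },
    },
}

# The map doubles as a lookup during scoring:
print(assessment_map["item_1"]["options"]["A"]["interpretation"])
```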

An example: subtraction of mixed fractions
Skill Progression:

  1. Identifying the minuend and subtrahend in the problem
  2. Recognizing the need to convert the mixed fraction when the minuend’s fraction is smaller than the subtrahend’s
  3. Converting mixed fractions to improper fractions
  4. Multiplying accurately (within the conversion)
  5. Reducing fractions
  6. Finding a common denominator
  7. Converting an improper fraction back to a mixed fraction
  8. Applying the computational skill to an authentic problem
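The progression maps cleanly onto code. Here is a minimal sketch using Python’s standard fractions module, with comments tying each part to the steps above; the function name and signature are illustrative:

```python
from fractions import Fraction

def subtract_mixed(whole1, frac1, whole2, frac2):
    # Steps 1-3: the first pair is the minuend, the second the
    # subtrahend; each mixed fraction becomes an improper fraction
    # (conceptually, step 4's multiplication of whole x denominator
    # happens inside this conversion).
    minuend = Fraction(whole1) + frac1
    subtrahend = Fraction(whole2) + frac2
    # Steps 5-6: Fraction arithmetic finds the common denominator
    # and reduces the result automatically.
    difference = minuend - subtrahend
    # Step 7: convert the improper fraction back to mixed form.
    whole, remainder = divmod(difference.numerator, difference.denominator)
    return whole, Fraction(remainder, difference.denominator)

# Item 1 below: 3 4/5 - 1 2/5 = 2 2/5
print(subtract_mixed(3, Fraction(4, 5), 1, Fraction(2, 5)))  # (2, Fraction(2, 5))
```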

Item 1: Easy: like denominators, no conversion of the mixed fraction
 3 4/5 – 1 2/5

      Answer options (option – interpretation related to the skill progression):

  A. 4 6/5 – Added instead of subtracting; does not differentiate subtraction from addition
  B. 12/5 – Unnecessarily converted to an improper fraction and did not convert back
  C. 2/10 – Error with the common denominator
  D. 2 2/5 – Correct response: mastery with no fraction conversion

Item 2: Moderate: like denominators, conversion from mixed to improper fraction
   3 1/5 – 1 2/5

      Answer options (option – interpretation related to the skill progression):

  A. 9/5 – Knows the conversion from mixed to improper, but did not convert the result back to a mixed fraction
  B. 2 1/5 – Subtracted the smaller numerator from the larger regardless of order; did not convert to an improper fraction
  C. 2/5 – Cannot convert a mixed fraction to an improper fraction
  D. 1 4/5 – Correct response: mastery with conversion of the mixed fraction
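A quick check of the arithmetic with Python’s standard fractions module confirms the correct response:

```python
from fractions import Fraction

# Item 2: 3 1/5 - 1 2/5  ->  16/5 - 7/5 = 9/5 = 1 4/5
result = (Fraction(3) + Fraction(1, 5)) - (Fraction(1) + Fraction(2, 5))
print(result)  # 9/5, i.e., 1 4/5
```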

 

Item 3: Difficult: conversion from mixed to improper fraction and finding a common denominator
     4 2/5 – 2 3/10

A matrix is then created in which each skill in the progression is cross-matched with each item response. An assessment may therefore include numerous progressions and matrices for error analysis. For this reason, informative assessments are time-consuming to build and valuable because of that investment.

It might look like this, with the information for item 2 added (X indicates a problem in the progression):

Skill:    1     2     3     4     5     6     7     8
1A
1B
1C
1D
2A                                            X
2B        X           X
2C                    X
2D      Correct
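The same matrix is easy to encode for automated scoring. A minimal sketch, assuming the option labels and skill numbers above; the dictionary and function names are illustrative:

```python
# Progression steps flagged by each answer option on item 2,
# mirroring the matrix above; labels are illustrative.
ERROR_MATRIX = {
    "2A": [7],     # did not convert the improper fraction back to mixed
    "2B": [1, 3],  # minuend/subtrahend confusion; no conversion to improper
    "2C": [3],     # cannot convert mixed to improper
    "2D": [],      # correct response
}

def flag_skills(selected_option):
    # Return the progression steps implicated by a selected option.
    return ERROR_MATRIX.get(selected_option, [])

print(flag_skills("2B"))  # [1, 3]
```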

Summary
In summary, to create a high-quality informative assessment, the assessment builders need a thorough knowledge of: 1. the curriculum’s critical skills; 2. the skill progressions associated with mastery-level performance of each skill; 3. how to interpret the errors; and 4. how to combine the correct and error responses into a profile for the student. This paradigm could be applied to benchmark assessments. In that case, some items would be identified as exemplary items for assessing mastery of the curriculum and used to create predictive scores. However, this could result in a longer assessment, and the return on the time investment would need to be verified.
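Point 4, folding responses into a profile, can also be sketched in code. Assuming hypothetical per-item error matrices like the one above, a student’s selections can be tallied by progression step:

```python
from collections import Counter

# Per-item error matrices; all labels and skill numbers are illustrative.
ERROR_MATRICES = {
    "item_1": {"A": [1], "B": [7], "C": [6], "D": []},
    "item_2": {"A": [7], "B": [1, 3], "C": [3], "D": []},
}

def build_profile(responses):
    # Count how often each progression step is flagged across a
    # student's responses; higher counts suggest weaker skills.
    profile = Counter()
    for item, option in responses.items():
        profile.update(ERROR_MATRICES[item].get(option, []))
    return profile

# A student who chose 1B and 2A missed step 7 (improper-to-mixed) twice:
print(build_profile({"item_1": "B", "item_2": "A"}))  # Counter({7: 2})
```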

Lewis R. Johnson, Ed.D.
Lead Consultant
Data Smart LLC