Integrating Student Performance Data – Using APEX

As I teach data management for school districts and consult on building data systems, I focus on helping districts understand the importance of integrating their data into a coherent information source. When districts administer the North Carolina Check-In assessments, the reports are distributed to schools as paper reports or electronic .pdf documents. Districts also administer other assessments such as NWEA or I-Ready, and for grades K-3 they administer DIBELS. Each of these electronic assessments has its own data portal and presents reports based on that single data source. This “siloed” arrangement, frequently managed by different administrators in the district, provides a myopic view of a student’s performance.

None of these data sources takes into consideration a student’s prior performance on assessments such as the End of Grade tests. My recommendation to school districts is to develop a single portal for all student performance data, so that the various scores a student earns during the year can be accessed and viewed alongside one another. For example, it is important to know that a student scored at the 40th percentile rank on this year’s I-Ready beginning-of-year (BOY) test and at the 20th percentile rank on the middle-of-year (MOY) test, while last year the same student earned an achievement level of 4.
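As a minimal sketch of how that side-by-side view could be produced once the data is integrated, a single query can join the score tables on the student identification number. The table and column names below are hypothetical, not an actual district schema:

-- One row per student: current-year I-Ready percentile ranks next to the prior-year EOG achievement level
select
    D.SID,
    D.SCHOOL_CODE,
    IR_BOY.PERCENTILE_RANK   as IREADY_BOY_PR,
    IR_MOY.PERCENTILE_RANK   as IREADY_MOY_PR,
    EOG.ACH_LEVEL            as PRIOR_EOG_LEVEL
from DEMOGRAPHICS D
join I_READY IR_BOY on IR_BOY.SID = D.SID and IR_BOY.TEST_WINDOW = 'BOY'
join I_READY IR_MOY on IR_MOY.SID = D.SID and IR_MOY.TEST_WINDOW = 'MOY'
join EOG            on EOG.SID = D.SID   -- prior-year End of Grade result; year filter omitted for brevity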

By integrating student performance data into one data system, with tables of test scores and demographic information that all carry the student identification number (SID), every data source can be linked on the SID. This can be accomplished by creating a database and reporting system with Oracle Application Express. For more information on how this can be done, visit Data Smart LLC in Greensboro, NC at www.Data-Smart.net.

Item Analysis Program: Iteman 4.5

Assessment Systems Corp. has released version 4.5 of Iteman, a popular item analysis program that I have used since my time as a director of testing and accountability.

The program now generates its report automatically. In my trial of the demo version, analyzing 100 subjects on an 18-item Math 1 test produced a 30-page report. The report provides item-level analysis, including p values and discrimination values. The quartile plot created by the program offers insight into response patterns that is not available in other easy-to-use programs. The 46-page User Manual is very complete, covering file setup along with some interpretive information.
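For readers new to classical item analysis, the two statistics are simple to interpret (the numbers here are invented for illustration): the p value is the proportion of examinees who answered an item correctly, so if 72 of the 100 subjects got item 1 right, p = 72/100 = 0.72. The discrimination value is typically the correlation between the item score and the total test score (a point-biserial), so values near zero or negative flag items that fail to separate high-scoring from low-scoring examinees.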

More information is available at: https://assess.com/2021/03/31/what-is-item-analysis/

Measuring the Impact of the Pandemic on Grade 2-3 Reading Performance for 2020-21

For this brief report, the impact of being out of school and doing home-based learning for grade 3 students was examined in a rural school district. The sample size is approximately 300 students.

Measures:

  1. I-Ready beginning of the year (BOY) reading test overall percentile score, converted to an NCE (normal curve equivalent) score; the conversion formula is shown after this list.
  2. Beginning of Grade 3 (BOG) ELA test percentile rank score converted to NCE score.
  3. Difference scores between each student’s I-Ready BOY NCE and the corresponding BOG NCE score. Because intra-individual score differences were used, differences in the overall average scores of the two cohorts were not a confounding factor in this analysis.
  4. BOG percent proficient was computed for each school by counting scores at achievement levels 3-5 and dividing by the total number of scores.
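For reference, the percentile-to-NCE conversion is the standard one for the NCE scale (nothing district-specific is assumed here): NCE = 50 + 21.06 × z, where z is the normal deviate corresponding to the percentile rank. For example, the 40th percentile gives z ≈ -0.25, so NCE ≈ 50 + 21.06 × (-0.25) ≈ 44.7. Because NCEs form an equal-interval scale with a mean of 50 and a standard deviation of 21.06, the difference scores described below can be meaningfully averaged across students.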

Procedure:
For fall 2019 and fall 2020 (the current year), the BOG scores were collected and matched into a single table with each student’s I-Ready BOY score from the previous year.

  1. The joined table included columns for the year, school, SID, I-Ready tier, NCE BOG, and NCE I-Ready scores.
  2. A difference score was computed for each student by subtracting the BOY I-Ready NCE score from the BOG NCE score; a SQL sketch of this step appears after the list. For example:
    2020 BOG NCE = 50 and BOY I-Ready NCE = 45, so the difference = +5
  3. For each school the average of the differences was computed. 
  4. NCE score differences were disaggregated by school and by reading tier (Tier 1, Tier 2, and At-risk for Tier 3).
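The computations in steps 2-4 can be expressed in a single short SQL statement. This is only a sketch: the table name BOG_IREADY and the column names (YEAR, SCHOOL, IR_TIER, NCE_BOG, NCE_IREADY) are illustrative stand-ins for the joined table described in step 1.

-- Average NCE difference (BOG minus prior I-Ready BOY), by year, school, and reading tier
select
    YEAR,
    SCHOOL,
    IR_TIER,
    count(SID)                            as N_STUDENTS,
    round(avg(NCE_BOG - NCE_IREADY), 1)   as AVG_NCE_DIFF
from BOG_IREADY
group by YEAR, SCHOOL, IR_TIER
order by YEAR, SCHOOL, IR_TIER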

Findings: 

  1. The average I-Ready to BOG NCE difference for the 2019-20 school year for the district was 6.5 with a range of 1.0 to 9.7.
  2. The average I-Ready to BOG NCE difference for the 2020-21 school year for the district was -19.8 with a range of -24.2 to -17.8.
  3. Differences by Tier for the two different school years were as follows:
    1. 2019-20:  Tier 1:   5.0      Tier 2:   6.7      At-Risk for Tier 3:  10.1
    2. 2020-21:  Tier 1: -26.8      Tier 2: -20.7      At-Risk for Tier 3:  -9.9
  4. The 2019 BOG district proficiency was 24.1%, which is consistent with the district’s results over the last three years. The 2020 district proficiency was 11.5%.

Conclusion:
The impact of being out of school for the spring of 2020 due to the pandemic is measurable for grade 3 students using the methodology described above. Furthermore, the results suggest that there is a negative impact on reading performance for students in this district, and that the impact cuts across all reading tier levels.

Actions:
Present this data to the schools, along with a roster of students for each school, so that school leaders and teachers can identify the students who experienced the greatest negative impact and provide interventions.

Use the I-Ready diagnostic data to determine whether there is a pattern of weak areas across the most negatively impacted students. For each student, determine which specific areas showed the greatest negative impact and provide targeted individual intervention.

Policy Impact:
While growth for grade 3 students can be determined by comparing BOG to EOG NCE scores, percent proficient for grade 3 is quite likely to be lower than the 2018 end-of-year (EOY) percent proficient. This is likely to be the case in ELA for grades 3-8, and accountability using the current targets will be problematic.

Test Score Analytics Using APEX

What is the value to a school district of having all of its student performance data in one database?

The answer is quick and accurate multi-year analytics. After the initial report is created, each year’s data is uploaded and populates all of the reports.

Example

Here is an example from a school district that is looking ahead to this year’s Beginning of Grade 3 (BOG) test data and comparing it with other test data. Using a brief SQL statement, the mean scores and standard deviations can be computed and reported for each school and each year in the APEX data system.
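A minimal sketch of that statement follows, assuming the BOG table has YEAR, SCH_CODE, and BOG_NCE columns (the same names used in the correlation query later in this post):

-- Mean and standard deviation of BOG NCE scores, by school and year
select
    YEAR,
    SCH_CODE,
    count(BOG_NCE)              as N_STUDENTS,
    round(avg(BOG_NCE), 1)      as MEAN_NCE,
    round(stddev(BOG_NCE), 1)   as SD_NCE
from BOG
group by YEAR, SCH_CODE
order by YEAR, SCH_CODE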

Then the distribution of scores can be viewed as a histogram for each of the years. Using a simple drop-down filter, the year and the individual school can be overlaid and viewed.

The question then is: what is the relationship between the BOG scores and the I-Ready reading scores when both are converted to Normal Curve Equivalent (NCE) scores? Here is the sample SQL code for that report:

-- Correlation between BOG NCE and I-Ready BOY reading NCE, by school
select
    BOG.SCH_CODE                                        as SCH_CODE,
    count(BOG.BOG_NCE)                                  as N_STUDENTS,
    round(CORR(BOG.BOG_NCE, I_READY_RD.IR_RD_NCE), 3)   as BOG_IREADY_CORR
from BOG
join I_READY_RD
  on I_READY_RD.SID = BOG.SID
where BOG.YEAR = 2020
  and I_READY_RD.TEST_DATE = 'BOY'
group by BOG.SCH_CODE
order by BOG.SCH_CODE

The next question is: what does this data look like as a scatter plot? In the scatter plot, an entire district can be shown for a testing year, or the plot can be set to highlight one school in one color while a second color shows all of the remaining schools. In conclusion, when a district has the BOG and I-Ready (or Istation) data for this year, it will be better able to understand the significance of the information and perhaps understand the impact that being out of school for months has had on its students.

Curriculum: What Is Being Taught and Learned?

Perhaps it is time to finally put informative assessment into action.

Preface
My work involves collecting, reporting, and analyzing student performance data from common assessments taken by students. With schools closed and state and local assessments suspended, there is no data to collect and use for guiding schools and teachers. This situation has given me time to revisit the topic of formative assessment and the underlying concepts of curriculum and instruction.

Facets of Curriculum
The intended curriculum refers to the curriculum documented in state and local curriculum guides. These curricula are also further refined and detailed in curriculum unpacking and pacing guide resources. These documents and resources form the outline of what is to be taught from a policy and standards perspective.

The assessed curriculum is closely linked to the intended curriculum when assessments are built to determine the extent to which the intended curriculum was learned by the student. It is understood and expected that there is alignment between the two curricula. Frequently, the assessed curriculum is documented in statements of the standards and the weight each standard carries on the assessment used to measure mastery of the curriculum.

There is a danger of creating a mismatch between the intended curriculum and the assessed curriculum when teachers create assessments of what they have taught. Differences in interpretation of the standards, poor matching of items to the standards being assessed, and a lack of rigor in the teacher’s assessment can give teachers what appear to be valid results but may not provide meaningful information about how the students’ performance relates to the intended curriculum. At the core of assessment is validity: an assessment item or task must be representative of the intended learning outcome.

To compound the problem of alignment and validity, what is taught in the classroom may not be aligned with the intended and/or assessed curriculum. The enacted curriculum is what is actually taught in the district and classroom. District differences between the intended and enacted curriculum may be due to local emphasis on some content, the availability of instructional materials, teacher preparation, and school or teacher bias. Efforts have been in place for decades to ensure alignment between the intended curriculum and the enacted curriculum at the classroom level. Principals have long had a practice of reviewing teacher lesson plans, and more recently long-term instructional plans, as a means of monitoring alignment.

The final facet of the curriculum paradigm is the learned curriculum. While this concept is closely linked to the assessed curriculum, it differs in that the learned curriculum is what is actually acquired by the student. The learned curriculum connects the enacted curriculum to student performance and requires some means of assessment to determine if there was a positive connection between the enacted and received curricula.  

Documenting Enacted Curriculum
The curriculum schools most need to be informed about is what teachers actually teach: the enacted curriculum. From this data, an evaluation can be made that compares what is going on in the classroom with what should be going on in the classroom from a curriculum perspective. Documenting the enacted curriculum has traditionally been done by looking at teachers’ lesson plans and by surveying teachers.

There are two major problems with this information: 1) planned instruction is still in the realm of the intended curriculum, and 2) teacher surveys may not accurately represent what was actually taught if the data collection is not done regularly and systematically.

In a short-term, limited study in math, I had a teacher select each day, from a list of math curriculum standards, the skills being taught that day. Additional information, such as the depth of knowledge (DOK) and the instructional methodology employed (direct instruction, for example), was also captured in the online data collection system. Over time, the system was able to provide reports of the dates, the standards taught, a count of how often each standard was taught, and the sub-skills for each standard. This information was then compared to the district pacing guide. The data collection showed that the teacher’s enacted curriculum matched the district’s intended curriculum. While this data collection could be done daily, the data was collected each time a new standard and its sub-skills were taught. The strength of the data collection system was the granularity of the data collected and the reporting.
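A reporting query over such a log can be very small. The sketch below assumes a hypothetical table named LESSON_LOG with columns TEACHER_ID, LESSON_DATE, STANDARD_CODE, and SUB_SKILL; the actual system’s names will differ:

-- How often, and when, each standard was taught by one teacher
select
    STANDARD_CODE,
    min(LESSON_DATE)            as FIRST_TAUGHT,
    max(LESSON_DATE)            as LAST_TAUGHT,
    count(*)                    as TIMES_TAUGHT,
    count(distinct SUB_SKILL)   as SUB_SKILLS_COVERED
from LESSON_LOG
where TEACHER_ID = :P1_TEACHER_ID   -- APEX bind variable for the selected teacher (name assumed)
group by STANDARD_CODE
order by STANDARD_CODE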

Documenting the Received Curriculum
While it was worthwhile to know what was being taught, a missing component was insight into the received curriculum. Essentially, this took the form of curriculum-based assessment. Phase two of the study added student performance data collection using a simple 6-point scale (0-5). For each class in which the teacher’s instruction was recorded in the system, the teacher also recorded each student’s performance, from 0 = absent to 5 = full mastery. Using this data, the system could then report a class average of performance by standard, as well as a student profile showing all of the standards and the average performance on each. Teachers could use this data as informative assessment and modify classroom instruction to improve class performance or to identify students who need extra instructional attention.
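As another hypothetical sketch, if the ratings live in a table STUDENT_RATINGS with columns CLASS_ID, SID, STANDARD_CODE, and RATING (0-5), the class summary is a single grouped query. Whether to exclude the 0 = absent ratings from the average is a design choice the study description leaves open; it is assumed here:

-- Class average rating by standard, excluding absences (rating 0)
select
    STANDARD_CODE,
    round(avg(RATING), 2)   as CLASS_AVG_RATING,
    count(*)                as STUDENTS_RATED
from STUDENT_RATINGS
where CLASS_ID = :P1_CLASS_ID   -- APEX bind variable for the selected class (name assumed)
  and RATING > 0                -- assumption: absent students excluded from the average
group by STANDARD_CODE
order by STANDARD_CODE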

The resulting information provided a summary look at the long-term enacted curriculum, data on class and individual student performance, and a student report that could be shared with the student. Student performance as recorded in this system could then be compared to the student’s benchmark or end-of-year assessment results.

Sources:
https://repository.upenn.edu/cgi/viewcontent.cgi?article=1058&context=cpre_researchreports
https://www.jstor.org/stable/10.1086/428803?seq=1