Integrating Student Performance Data – Using APEX

As I teach data management for school districts and consult on building data systems, I focus on helping districts understand the importance of integrating data into a coherent information source. When school districts administer the North Carolina Check-In assessments, the reports are distributed to schools as paper reports or electronic .pdf documents. The districts also administer other assessments such as the NWEA or I-Ready, and for grades K-3 they administer DIBELS. Each of these electronic assessments has its own data portal and presents reports based on a single data source. This “siloed” data system, frequently managed by different administrators in the district, provides a myopic view of the student’s performance.

None of these data sources takes into consideration a student’s prior performance on assessments such as the End of Grade tests. My recommendation to school districts is to develop a single portal for accessing all student performance data in one system, so that the various scores a student earns during the year can be viewed alongside one another. For example, it is important to know that a student scored at the 40th percentile rank on this year’s beginning-of-year (BOY) I-Ready test and at the 20th percentile rank at the middle of year (MOY), while last year the student earned an achievement level of 4.

By integrating the student performance data into one data system, with tables of test scores and demographic information that all include the student identification number (SID), every record can be linked using the SID. This can be accomplished by creating a database and reporting system using Oracle Application Express (APEX). For more information on how this can be accomplished, visit Data Smart LLC in Greensboro, NC at www.Data-Smart.net.
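As a rough illustration of the linkage idea, the sketch below joins hypothetical extracts from three sources on the SID using Python and pandas; the file names and column names are assumptions for illustration, and a production system built in Oracle APEX would express the same joins in SQL.

```python
import pandas as pd

# Hypothetical extracts from three assessment "silos"; file and column
# names are assumptions for illustration only.
demographics = pd.read_csv("demographics.csv")   # SID, name, grade, school
iready = pd.read_csv("iready_scores.csv")        # SID, window, percentile
eog = pd.read_csv("eog_results.csv")             # SID, subject, achievement_level

# Because every table carries the student identification number (SID),
# the records can be linked into one view of each student.
combined = (
    demographics
    .merge(iready, on="SID", how="left")
    .merge(eog, on="SID", how="left", suffixes=("", "_eog"))
)

# One row per student per assessment window, with prior EOG results alongside.
print(combined.head())
```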

Curriculum: What Is Being Taught and Learned?

Perhaps it is time to finally put informative assessment into action.

Preface
My work involves collecting, reporting, and analyzing student performance data from common assessments taken by students. With schools closed and state and local assessments suspended, there is no data to collect and use for guiding schools and teachers. This situation has given me time to revisit the topic of formative assessment and the underlying concepts of curriculum and instruction.

Facets of Curriculum
The intended curriculum refers to the curriculum documented in state and local curriculum guides. These curricula are further refined and detailed in curriculum unpacking documents and pacing guides. These documents and resources form the outline of what is to be taught from a policy and standards perspective.

The assessed curriculum is closely linked to the intended curriculum: assessments are built to determine the extent to which the intended curriculum was learned by the student, and it is understood and expected that the two curricula are aligned. Frequently, the assessed curriculum is documented as statements of the standards and the weight each standard carries on the assessment used to measure mastery of the curriculum.

There is a danger of creating a mismatch between the intended curriculum and the assessed curriculum when teachers create assessments of what they have taught. Differences in interpretation of the standards, poor matching of items to the standards, and lack of rigor in the teacher’s assessment can give teachers what appear to be valid results but may not provide meaningful information about how the students’ performance relates to the intended curriculum. At the core of assessment is validity: an assessment item or task must be representative of the intended learning outcome.

To compound the problem of alignment and validity, what is taught in the classroom may not be aligned with the intended or assessed curriculum. The enacted curriculum is what is actually taught in the district and classroom. District differences between the intended and enacted curricula may be due to local emphasis on some content, availability of instructional materials, teacher preparation, or school or teacher bias. Efforts have been in place for decades to ensure alignment between the intended curriculum and the enacted curriculum at the classroom level. Principals have long reviewed teacher lesson plans, and more recently long-term instructional plans, as a means of monitoring alignment.

The final facet of the curriculum paradigm is the learned curriculum. While this concept is closely linked to the assessed curriculum, it differs in that the learned curriculum is what is actually acquired by the student. The learned curriculum connects the enacted curriculum to student performance and requires some means of assessment to determine if there was a positive connection between the enacted and received curricula.  

Documenting Enacted Curriculum
The curriculum schools need to be informed about is what teachers actually teach: the enacted curriculum. From these data an evaluation can be made that compares what is going on in the classroom to what should be going on in the classroom from a curriculum perspective. Documenting the enacted curriculum has typically been done by reviewing teachers’ lesson plans and by surveying teachers.

There are two major problems with this information: 1) planned instruction is still in the realm of the intended curriculum, and 2) teacher surveys may not accurately represent what was actually taught if the data collection is not done regularly and systematically.

In a short-term, limited study in math, I had a teacher select each day, from a list of math curriculum standards, the skills being taught that day. Additional information, such as the depth of knowledge (DOK) and the instructional methodology employed (direct instruction, for example), was also captured in the online data collection system. Over time, the system was able to provide reports of the dates, the standards taught, a count of how often each standard was taught, and the sub-skills for each standard. This information was then compared to the district pacing guide. The data showed that the teacher’s enacted curriculum matched the district’s intended curriculum. While this data collection could be done daily, the data was collected each time a new standard and its sub-skills were taught. The strength of the data collection system was the granularity of the data collected and the reporting.
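A minimal sketch of that kind of reporting is shown below, using Python and pandas; the standards, sub-skills, and pacing guide entries are invented placeholders rather than the study’s actual data.

```python
import pandas as pd

# Hypothetical daily log from an online collection system like the one
# described above; all values are invented for illustration.
log = pd.DataFrame({
    "date":      ["2020-01-06", "2020-01-07", "2020-01-08"],
    "standard":  ["NC.4.NBT.1", "NC.4.NBT.1", "NC.4.NF.2"],
    "sub_skill": ["place value", "rounding", "compare fractions"],
    "dok":       [2, 2, 3],
    "method":    ["direct instruction", "small group", "direct instruction"],
})

# Report: first date taught, count of entries, and sub-skills per standard.
by_standard = log.groupby("standard").agg(
    times_taught=("date", "count"),
    first_taught=("date", "min"),
    sub_skills=("sub_skill", lambda s: sorted(set(s))),
)
print(by_standard)

# Comparison to a (hypothetical) pacing guide: which listed standards
# have not yet appeared in the enacted-curriculum log?
pacing_guide = {"NC.4.NBT.1", "NC.4.NF.2", "NC.4.MD.3"}
print("Not yet taught:", pacing_guide - set(log["standard"]))
```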

Documenting the Received Curriculum
While it was worthwhile to know what was being taught, a missing component was insight into the received curriculum. Essentially, this took the form of curriculum-based assessment. Phase two of the study added student performance data collection using a simple six-point scale (0-5). For each class in which the teacher’s instruction was recorded in the system, the teacher also recorded each student’s performance, from 0 (absent) to 5 (full mastery). Using this data, the system could then report a class average of performance by standard and a student profile showing all of the standards and the average performance on each. Teachers could use this data as informative assessment, modifying classroom instruction to improve class performance or identifying students who need extra instructional attention.
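The sketch below illustrates the phase-two calculations on a tiny, made-up set of ratings; the student labels, standards, scores, and the decision to exclude absences from the averages are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical ratings recorded alongside each instructional entry.
# 0 = absent, 5 = full mastery; all values are invented.
ratings = pd.DataFrame({
    "student":  ["A", "B", "C", "A", "B", "C"],
    "standard": ["NC.4.NBT.1"] * 3 + ["NC.4.NF.2"] * 3,
    "score":    [4, 2, 5, 3, 0, 4],
})

# Treat absences (0) as missing when averaging (an assumption of this sketch).
scored = ratings[ratings["score"] > 0]

# Class average of performance by standard.
print(scored.groupby("standard")["score"].mean())

# Student profile: average performance on each standard for every student.
print(scored.pivot_table(index="student", columns="standard",
                         values="score", aggfunc="mean"))

# Students who may need extra instructional attention (average below 3).
averages = scored.groupby("student")["score"].mean()
print(averages[averages < 3])
```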

The resulting information provided a summary look at the long-term enacted curriculum, data on class and individual student performance, and a student report that could be shared with the student. Student performance as recorded in this system could then be compared to the student’s benchmark or end-of-year assessment results.


Silos, Windmills and the Tower of Babel

Occasionally data analysts are asked to take existing data and put it into a system that permits some analysis. In school districts, the biggest challenge in collecting data for an in-depth analysis, such as a program evaluation, is accommodating the various data sources. To do this the analyst must: 1. identify the data sources; 2. identify what each information source is supposed to provide; and 3. ensure that the data is internally consistent within each source. Unfortunately, in the analyst’s world we might instead have: 1. data silos (sources of data not integrated with other data sources); 2. windmills (large data stores that look impressive but are not really very useful to the analyst); and 3. the Tower of Babel (tables without consistent fields to link information between them, such as a student ID, and with inconsistent data entry within each table).

Data Silos

School districts are like a farm with many silos. One silo holds student testing and accountability data. Another silo, PowerSchool, holds demographic, discipline, and grade information. Other silos hold information on subgroups, such as exceptional students, gifted students, and limited English proficient students. Each silo holds information that makes sense within that silo but cannot be shared or merged with the data in another silo. Most importantly, the people creating and updating the data do not confer with the people managing other silos, so the data cannot be easily joined. A good example is a collection of data tables, one for each school, that contain no school code, no student ID, and no consistent student names. If those tables hold critical information about instructional programs for students and the analyst needs to match them to test score data, the task of joining the data is overwhelming.

Data Windmills

At times, when data sources are identified, a closer look reveals that the information contained in these giant tables is not what it appears to be. In one school district, there were about 90 small data files that together held program information on a large special population. Basically, the data was a collection of information that could be found in other places, though a few pieces of information were available only in those files. Teachers kept the windmills spinning by adding information, but the data had no use outside of those tables, and the few people who could access the information could not use it for analysis.

The Tower of Babel

For the analyst, the Tower of Babel is the lack of consistent information that can be used to join tables, such as a student ID field. This critical information needs to come from an accurate source, such as being pre-populated in the table from an authoritative system, not hand-entered by a user who could make data-entry errors. If a field in the table needs to contain information about the amount of service time a student receives, then a suggested format for the data entry is needed, and a drop-down select list with the possible options is even better. It is essential that the design of the data table define what information goes into each field and, most importantly, what format that information must conform to in order to be consistent. In a database, “one time per week for 30 minutes”, “1 x/week 3o minutes”, and “one 30 minute session 1 time per week” are all different values and cannot be easily queried. Likewise, “Lep” and “LEP” are interpreted as two different entries in the same data table.
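A small Python illustration of the point follows, using the example strings above; the allowed-values list and the validate helper are hypothetical, not part of any district’s actual system.

```python
# Free-text entry defeats querying: a naive exact-match query misses
# variants that all mean the same thing.
service_time = [
    "one time per week for 30 minutes",
    "1 x/week 3o minutes",
    "one 30 minute session 1 time per week",
]
matches = [s for s in service_time if s == "one time per week for 30 minutes"]
print(len(matches))  # 1 of 3, even though all three describe the same service

# Constraining entry to a drop-down style controlled vocabulary keeps the
# field queryable. The values below are hypothetical examples.
ALLOWED_SERVICE_TIMES = {"30 min x1/week", "30 min x2/week", "60 min x1/week"}

def validate(entry: str) -> str:
    """Reject any value that is not in the controlled vocabulary."""
    if entry not in ALLOWED_SERVICE_TIMES:
        raise ValueError(f"Not an allowed value: {entry!r}")
    return entry

# Likewise, normalizing case keeps "Lep" and "LEP" from becoming two categories.
subgroup = ["Lep", "LEP", "lep"]
print({s.upper() for s in subgroup})  # {'LEP'}: one consistent value
```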

Conclusion

As an analyst, my advice is to have conversations with others in your organization who are likely at some point to need to share information, about what your silo needs to contain and how the information is to be entered and recorded. This practice eliminates the most challenging and unnecessary part of a data analyst’s work: rebuilding tables and doing data cleanup BEFORE any analysis can be performed. In some cases, the data cleanup is so extensive that the bulk of the cost of the data project is in the preparation, not in the analysis.

Dr. Lewis R. Johnson is the owner and lead data consultant for Data Smart LLC, a North Carolina business with a mission of assisting schools in using data analytics to improve student performance.

Cohort Reporting

Cohort Reporting is a Necessity

Most schools and school districts report summative data by grade for the current year, and may sometimes report the previous year’s data as a comparison. Unfortunately, this does not provide an accurate picture of student performance in the district. The problem of incomplete or non-comparative data is due to the following:

  1. The data are not from the same student group, and differences in the groups’ prior performance from year to year may cause variations.
  2. Percent-proficient scores vary across grades, so a grade 4 group may actually perform better than it did in grade 3, yet the percent proficient declines.
  3. The inability of the district’s data system to track students’ performance across time as a cohort.

To solve this problem, three changes in the way data are stored and coded are necessary.

Requirements

First, student scores cannot be stored in separate files, one for each year of testing.

Second, student scores need to be transformed onto a common scale that makes them comparable across time. Percentile ranks are not the answer; instead, all percentile rank scores need to be transformed to normal curve equivalent (NCE) scores, while keeping the percentile rank scores in the data file.

Third, a kindergarten entry date (or a first date of testing in grade 3) needs to be added to the data file for ease in writing queries for reports. That way, each year the student’s scores are uploaded into the data system, the same student ID number carries the same K_ENTRY date. A chart of the resulting data would show, for each student ID, the K_ENTRY date alongside the yearly percentile rank and NCE scores.
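The sketch below illustrates all three requirements in Python with pandas; the file names, column names, and K_ENTRY value are assumptions for illustration, and the conversion uses the standard relationship NCE = 50 + 21.06 * z, where z is the normal deviate corresponding to the percentile rank.

```python
import pandas as pd
from scipy.stats import norm

# 1. Combine the separate yearly files into a single table.
#    (File and column names are hypothetical.)
yearly_files = ["scores_2018.csv", "scores_2019.csv", "scores_2020.csv"]
scores = pd.concat([pd.read_csv(f) for f in yearly_files], ignore_index=True)
# Assumed columns: SID, year, subject, percentile_rank, K_ENTRY

# 2. Transform each percentile rank to a normal curve equivalent (NCE),
#    keeping the original percentile rank in the table.
scores["nce"] = 50 + 21.06 * norm.ppf(scores["percentile_rank"] / 100)

# 3. Because the same K_ENTRY date is stored on every record for a given SID,
#    a cohort query simply follows one entering class across years.
cohort = scores[scores["K_ENTRY"] == "2015-08-24"]
print(cohort.groupby(["year", "subject"])["nce"].mean())
```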

The Report 

School 000 is a school with an increasing number of students achieving proficiency and a very high NCE-difference EVAAS gain. School 001, by contrast, shows a small drop in proficiency, not enough to raise concerns, but it also has a declining EVAAS gain score in both reading (RD) and math (MA) for grade 5.