Test Score Analytics Using APEX

What is the value to a school district of having all of its student performance data in one database?

The answer is quick and accurate multi-year analytics. After the initial report is created, each year's data is uploaded and populates all of the reports.

Example

Here is an example from a school district that is looking forward to this year's Beginning of Grade 3 (BOG) test data and making comparisons to other test data. Using a brief SQL statement, the mean scores and standard deviations can be computed and reported for each school for each year in the APEX data system.
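For instance, a statement along the following lines would produce that report. This is a minimal sketch: the BOG table and its SCH_CODE, YEAR, and BOG_NCE columns are taken from the correlation query shown later in this post, and the column aliases are illustrative.

select SCH_CODE,
       YEAR,
       count(BOG_NCE)            as N_STUDENTS,
       round(avg(BOG_NCE), 1)    as MEAN_NCE,
       round(stddev(BOG_NCE), 1) as STD_DEV
  from BOG
 group by SCH_CODE, YEAR
 order by SCH_CODE, YEAR

The same query works on raw scale scores if that column is preferred over the NCE conversion.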

Then the distribution of scores can be viewed as a histogram for each of the years. Using simple drop-down filters, the year and the individual school can be selected and the distributions overlaid for comparison.
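The histogram itself can be driven by an ordinary grouped query. Here is a minimal sketch, assuming the same BOG table; :P2_YEAR and :P2_SCH_CODE are hypothetical APEX page items standing in for the drop-down filters.

select width_bucket(BOG_NCE, 0, 100, 10) as NCE_BAND,   -- ten 10-point bins
       count(*)                          as N_STUDENTS
  from BOG
 where YEAR = :P2_YEAR
   and (:P2_SCH_CODE is null or SCH_CODE = :P2_SCH_CODE)
 group by width_bucket(BOG_NCE, 0, 100, 10)
 order by NCE_BAND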

The question then is: what is the relationship between the BOG scores and the i_Ready reading scores when both are converted to Normal Curve Equivalent (NCE) scores? Here is the sample SQL code for that report:

select BOG.SCH_CODE as SCH_CODE,
       count(BOG.BOG_NCE) as N_STUDENTS,
       round(corr(BOG.BOG_NCE, I_READY_RD.IR_RD_NCE), 3) as NCE_CORR
  from BOG
  join I_READY_RD
    on I_READY_RD.SID = BOG.SID       -- match students across the two tests
 where BOG.YEAR = 2020
   and I_READY_RD.TEST_DATE = 'BOY'   -- beginning-of-year i_Ready window
 group by BOG.SCH_CODE
 order by BOG.SCH_CODE

The next question is: what does this data look like as a scatter plot? In the scatter plot, an entire district can be shown for a testing year, or the chart can highlight one school in one color while a second color shows all of the remaining schools. (A hypothetical source query for such a chart is sketched below.)

In conclusion, when a district has the BOG and I_Ready (or I_Station) data for this year, it will be better able to understand the significance of the information and perhaps understand the impact that being out of school for months has had on its students.
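For readers who want to build the scatter plot, here is a sketch of its source query. It reuses the tables from the correlation report above; :P3_SCH_CODE is an assumed page item holding the highlighted school, and the SERIES column is what separates the two colors.

select BOG.BOG_NCE,
       I_READY_RD.IR_RD_NCE,
       case when BOG.SCH_CODE = :P3_SCH_CODE
            then 'Selected school'
            else 'All other schools'
       end as SERIES
  from BOG
  join I_READY_RD on I_READY_RD.SID = BOG.SID
 where BOG.YEAR = 2020
   and I_READY_RD.TEST_DATE = 'BOY'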

Curriculum: What Is Being Taught and Learned?

Perhaps it is time to finally put informative assessment into action.

Preface
My work involves collecting, reporting, and analyzing student performance data from common assessments taken by students. With schools closed and state and local assessments suspended, there is no data to collect and use for guiding schools and teachers. This situation has given me time to revisit the topic of formative assessment and the underlying concept of curriculum and instruction.

Facets of Curriculum
The intended curriculum refers to the curriculum documented in state and local curriculum guides. These curricula are also further refined and detailed in curriculum unpacking and pacing guide resources. These documents and resources form the outline of what is to be taught from a policy and standards perspective.

The assessed curriculum is closely linked to the intended curriculum when assessments are built to determine the extent to which the intended curriculum was learned by the student. It is understood and expected that there is alignment between the two curricula. Frequently, the assessed curriculum is documented in statements of the standards and the weight each standard carries on the assessment used to measure mastery of the curriculum.

There is a danger of creating a mismatch between the intended curriculum and the assessed curriculum when teachers create assessments of what they have taught. Differences in interpretation of the standard, poor matching of items to the standards, and lack of rigor in the teacher's assessment can yield what may appear to be valid results but may not provide meaningful information about how the students' performance relates to the intended curriculum. At the core of assessment is validity: an assessment item or task must be representative of the intended learning outcome.

To compound the problem of alignment and validity, what is taught in the classroom may not be aligned with the intended and/or assessed curriculum. The enacted curriculum is what is actually taught in the district and classroom. District differences between the intended and enacted curricula may be due to local emphasis on some content, availability of instructional materials, teacher preparation, and school or teacher bias. Efforts have been in place for decades to ensure alignment between the intended curriculum and the enacted curriculum at the classroom level. Principals have long had a practice of reviewing teacher lesson plans, and more recently long-term instructional plans, as a means of monitoring alignment.

The final facet of the curriculum paradigm is the learned curriculum. While this concept is closely linked to the assessed curriculum, it differs in that the learned curriculum is what is actually acquired by the student. The learned curriculum connects the enacted curriculum to student performance and requires some means of assessment to determine if there was a positive connection between the enacted and received curricula.  

Documenting Enacted Curriculum
The curriculum schools need to be informed about is what teachers actually teach: the enacted curriculum. From this data, an evaluation can be made that compares what is going on in the classroom to what should be going on from a curriculum perspective. Documenting the enacted curriculum has traditionally been done by looking at a teacher's lesson plans and by surveying teachers.

There are two major problems with this information: 1) planned instruction is still in the realm of intended curriculum and 2) surveying teachers may not be an accurate representation of what was actually taught if the data collection is not done regularly and in a systematic way.

In a short-term, limited study in math, I had a teacher select each day, from a list of math curriculum standards, the skills being taught that day. Additional information, such as the depth of knowledge (DOK) and the instructional methodology employed (direct instruction, for example), was also captured in the online data collection system. Over time, the system was able to provide reports of the dates, the standards taught, a count of the standards taught, and the sub-skills for each standard. This information was then compared to the district pacing guide. The data collection showed that the teacher's enacted curriculum matched the district's intended curriculum. While this data collection could be done daily, the data was collected each time a new standard and its sub-skills were taught. The strength of the data collection system was the granularity of the data collected and the reporting.
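To give a sense of the reporting, a query along these lines could produce the standards-taught summary. The DAILY_LOG table and its column names are hypothetical stand-ins for the collection system's log.

select STANDARD,
       SUB_SKILL,
       min(LESSON_DATE) as FIRST_TAUGHT,
       max(LESSON_DATE) as LAST_TAUGHT,
       count(*)         as TIMES_TAUGHT
  from DAILY_LOG
 where TEACHER_ID = :P5_TEACHER_ID   -- hypothetical page item for the teacher
 group by STANDARD, SUB_SKILL
 order by STANDARD, SUB_SKILL

Comparing FIRST_TAUGHT against the pacing guide dates gives the alignment check described above.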

Documenting the Received Curriculum
While it was worthwhile to know what was being taught, a missing component was insight into the received curriculum. Essentially, this took the form of curriculum-based assessment. Phase two of the study added student performance data collection using a simple six-point scale (0-5). For each class in which the teacher's instruction was recorded in the system, the teacher also recorded each student's performance, from 0 (absent) to 5 (full mastery). Using this data, the system could then report a class average of performance by standard, and a student profile of all of the standards and the average performance on each. Teachers could use this data as informative assessment and modify classroom instruction to improve class performance or to identify students who need extra instructional attention.
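Here is a minimal sketch of the class-average report, assuming a hypothetical STUDENT_RATING table with one row per student per standard rated.

select STANDARD,
       round(avg(RATING), 2) as CLASS_AVG,
       count(*)              as N_RATED
  from STUDENT_RATING
 where CLASS_ID = :P6_CLASS_ID   -- hypothetical page item for the class
   and RATING > 0                -- leave absences (coded 0) out of the average
 group by STANDARD
 order by STANDARD

Whether absences should count against the class average is a local decision; the filter above leaves them out.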

The resulting information provided a summary look at the long-term enacted curriculum, data on class and individual student performance, and a student report that could be shared with the student. Student performance as recorded in this system could then be compared to the student's benchmark or end-of-year assessment results.

Sources:
https://repository.upenn.edu/cgi/viewcontent.cgi?article=1058&context=cpre_researchreports
https://www.jstor.org/stable/10.1086/428803?seq=1

Mid-Year Data Tasks

As you complete the mid-year testing in your district here are some “think abouts”:

You could –

  • Join a table of EVAAS projected percentiles converted to NCE scores with the actual percentile scores (also converted to NCE) and compute the difference to report growth (see the sketch after this list).
  • Join the above growth file with Check-In scores in Math 1 and identify possible skill weaknesses of students and their corresponding teachers.
  • Create a file for each spring-semester EOC teacher with each student's previous test scores and growth performance, which will save them countless hours of looking up these students in EVAAS.
  • Create a file with previous student scores and join Check-In test 1 with Check-In test 2 so that teachers can get a good picture of overall student performance and identify students who are at risk.
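For the first item, the growth computation is a single join once both scores are on the NCE scale. Here is a hypothetical sketch; EVAAS_PROJ and CHECKIN are assumed table names, both keyed by student ID (SID).

select C.SID,
       C.NCE - P.PROJ_NCE as NCE_GROWTH   -- positive = scored above projection
  from CHECKIN C
  join EVAAS_PROJ P on P.SID = C.SID
 order by NCE_GROWTH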

All of this data work can be done manually using MS Access, but sharing the data would still mean creating and sending exports.

Instead, with a data system hosted by Data Smart LLC in Greensboro, NC, all that is required is uploading a few files from Winscan, and the work is done for you.

Distribution is handled through secure logins for your administrators, who have access to summary reports, analysis reports, and teacher class rosters.

If you are interested in hearing more, please contact me by email to schedule a no-cost initial consultation and demonstration of the system.   

Dr. Lewis Johnson
Data Analysis
Data Smart LLC

Ready for EOC Testing?

Throughout North Carolina, high school students will be taking first-semester EOC assessments. Are your students ready? Did you provide teachers with the data they need to identify students who are at risk of not scoring a Level 3 or higher?
Some school districts have provided EOC and English 1 teachers with data which includes:

  • each student’s previous scores and sub-scores,
  • previous growth information,
  • EVAAS projections and projected levels, and most importantly
  • benchmark scores with sub-scores

In a data mart, teachers can log into the system, see their class data, and manipulate the data just as they would in a spreadsheet. Forward-thinking districts have put all of this information in one file and provided analytics to identify at-risk students. This approach to data reporting allows school administrators and curriculum leaders to readily analyze benchmark data and saves teachers precious time, shifting the attention from compiling the data to analysis and action.
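As a sketch of what "one file" might look like behind the scenes, the view below pulls the four bullet items together. ROSTER, HIST_SCORES, EVAAS_PROJ, and BENCHMARK are hypothetical table names keyed by student ID (SID).

select R.TEACHER,
       R.SID,
       H.PREV_SCORE,
       H.PREV_GROWTH,
       P.PROJ_NCE,
       P.PROJ_LEVEL,
       B.BM_SCORE,
       case when P.PROJ_LEVEL < 3 then 'At risk' end as FLAG
  from ROSTER R
  join HIST_SCORES H     on H.SID = R.SID
  left join EVAAS_PROJ P on P.SID = R.SID   -- left joins keep students who
  left join BENCHMARK B  on B.SID = R.SID   -- lack a projection or benchmark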

If you have investigated data systems to provide this data to school administrators and TEACHERS and found the cost to be prohibitive, consider a data delivery and analysis system by Data Smart LLC in Greensboro, NC. A basic cloud-based hosted data mart is available for about $1,500 annually.

Contact Dr. Lewis Johnson if you would like more information and an initial free consultation about your vision of a data system to meet your specific needs. Lew.Johnson@data-smart.net

This short announcement is also posted on the Data Smart Blog site, where you can read other entries on data-related topics. You may use the "subscribe" feature to keep up to date on blog entries by Data Smart, LLC. See https://data-smart.net/blog/

Developing a Growth Mindset

Comparing Proficiency

Testing scores are now compiled into reports that are shared with administrators, who examine whether the percent proficient went up or down compared to last year. Of course, this year's grade 8 students are a different group than last year's, so while we can compare the numbers, we really can't compare the groups. Unfortunately, an accountability system that focuses on proficiency leads educators to focus on the wrong metric of student and school success.

The Importance of Growth 

Going hand-in-hand with examining proficiency should be the examination of growth within each student group and, indeed, each individual student. I have spent the last week or so doing an analysis of a district's scores. I see one school that had a percent proficient below the district average and lower than last year's for that grade and subject, which on the surface looks like a concern. However, when I examined the change in NCE score from last year to this year, the average NCE difference was a positive 3.0. More importantly, the ratio of students making a gain was 2 to 1. By contrast, in another school the percent proficient increased, but the average NCE difference was -3.5, and for every student who had an increase in NCE, two students had a decrease.
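The computation behind those comparisons is straightforward. Below is a hypothetical sketch, assuming a SCORES table with one row per student (SID) per tested year and an NCE column; adding a school, prior achievement level, or teacher column to the select list and a GROUP BY yields the disaggregations discussed next.

select round(avg(C.NCE - P.NCE), 1)                   as AVG_NCE_DIFF,
       sum(case when C.NCE > P.NCE then 1 else 0 end) as N_GAINED,
       sum(case when C.NCE < P.NCE then 1 else 0 end) as N_DECLINED
  from SCORES C
  join SCORES P
    on P.SID = C.SID
   and P.YEAR = C.YEAR - 1   -- pair each student with the prior year's score
 where C.YEAR = 2019         -- the year being analyzed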

As educators, we need to stop comparing year-to-year proficiency and focus on examining growth. One characteristic of growth that is important for understanding a school's impact on students is that growth is not affected by a student's socio-economic status or ethnicity.

A close examination involves looking at growth in the context of each student's previous achievement level: "Did students at levels 1-5 have similar or different growth?" Another way of looking at growth is the distribution of growth across the range of NCE scores for a group. This can be seen in the chart below:

The next step is to disaggregate the growth by teacher. We need to know who the effective teachers are and provide feedback to these teachers. Teachers who did not "grow" students also need to know the "who and the why", so improvement can occur. The feedback NEEDS TO HAPPEN BEFORE SCHOOL STARTS.

Developing a Growth Mindset

School leaders and teachers need to focus on what can be done to have a student grow from year to year. Surprisingly, the most important factor in this growth mindset is not in the teacher, but within each student. Each student needs to know about his/her past performance and the changes over the years. Each student needs to have a means of knowing if they are making progress. Most importantly, each student needs to understand that effort outweighs raw ability.

For more information on the Growth Mindset see:

https://www.mindsetworks.com/science/