Data is at the forefront today as the Massachusetts DESE (formerly the DOE) made its annual release of our statewide assessment (MCAS) scores. It strikes me as funny that the DESE goes to the trouble of stating, in red font: NOTE: The Massachusetts Department of Elementary and Secondary Education does not rank schools or districts on the basis of MCAS results.
Yet despite this disclaimer, the Boston Globe (now better known as Boston.com) comes out almost immediately with a ranking of all the schools in the state based on the percentage of students scoring in the top two categories (Advanced and Proficient) on the statewide test. Perhaps we could add the following after the disclaimer: but we encourage the Boston Globe and other media outlets to do the ranking instead.
I guess what bothers me most is that this test was never meant to provide data for comparing groups of students or schools with one another. It is a criterion-referenced test, designed to determine whether schools are actually teaching the state-defined standards, not a norm-referenced test, which would be used to compare one group of students to another.
In any event, our school's scores are similar to last year's in terms of students in the top two categories in English, math, and science. However, the ranking (which is based only on the top two categories) does not point out that we have 10 percent more students scoring Advanced in English than we did a year ago. I could go on and on about things this ranking does not clarify.
But I guess in the end my data question is this: are we developing enough internal data sources to show how our students are doing, so that when these external data sources (rankings) come out, we can either support or refute the findings? Can we say confidently, at a number of points during the school year, how our students are doing with regard to the standards that we as a school and community feel are most relevant?