Each fall, many New Yorkers head to family physicians for an annual physical. Doctors record some standard measures, such as body temperature and blood pressure, and perhaps draw some blood to send to the lab. Doctors will also ask about changes in health over the last year. Only after considering all of this information will they make a holistic assessment and recommend an appropriate treatment plan.

This fall, the New York City Department of Education is releasing its version of the annual check-up for schools: the School Progress Reports. September brought the reports for approximately 1,000 elementary, K-8 and middle schools, with the high schools coming shortly. The progress reports assigned these schools a letter grade ranging from A to F, based mostly (60 percent) on their contribution to students' test score growth from last year to this year. The letter grades drive the "treatment plan" for the schools: schools that receive an A or a B are eligible for cash rewards, while those receiving a D or F face eventual restructuring or closure.

In doctors' offices, we count on lab tests, X-rays and other reliable measures of our health. Can we count on the progress reports in the same way? Our analyses of last year's and this year's progress reports suggest that we cannot.

We compared the scores of elementary, K-8 and middle schools on the school progress measure in last year's and this year's reports. What we found shocked us. Because some schools are consistently high performing, we expected that schools producing high growth in achievement in one year would also show good progress the next year, even allowing for the possibility that some schools would move up or down a bit. But the schools that the progress reports identified as high growth last year were just as likely to be named low growth this year.
In fact, you could do better at identifying which schools contribute the most to student growth on this year's progress reports by picking schools out of a hat at random than by using how the schools did on last year's reports as a predictor.

Why is this so? No educational test can provide a perfectly accurate reading of a student's performance. Changes in student performance within a particular school on a test from one year to the next may be due to random error, or "statistical noise," rather than genuine change. It takes much more information, either about a larger number of students or about performance across more years, to sort out real gains from illusions. The Department of Education has chosen to ignore this complexity.

This would not be so alarming if the progress reports were treated as just one of several sources of information about the well-being of a particular public school, alongside the school's status under the federal No Child Left Behind law or the annual Quality Reviews that the department conducts for each school. But the progress reports, based primarily on a highly inconsistent measure of how a school is performing, are the centerpiece of the department's accountability system. The stakes for schools, and for the educators, students and parents who serve and are served by them, are very real.

What's the solution? The department needs to head back to the drawing board. The problems with the progress reports are too vast for tweaks at the margins. We applaud the department for ending the superficial practice of labeling a school as "failing" simply because its students were relatively low performing when they entered the school, and we are optimistic about the promise of the statistical systems known as "growth models" in education. Unfortunately, the progress reports move us no closer to identifying the New York City schools that are producing more growth in students' learning than others.
Basing a treatment plan on one unreliable health indicator would be malpractice if a doctor did it. Why should we tolerate it from the Department of Education?

Aaron Pallas is Professor of Sociology and Education at Teachers College, Columbia University. Jennifer L. Jennings is a doctoral candidate in Sociology at Columbia University who blogs under the name eduwonkette.