This is the executive summary of a new policy brief by Andy Hargreaves and Henry Braun, released yesterday by the National Education Policy Center at the University of Colorado at Boulder, which examines the link between data-driven improvement and accountability in education. You can find the entire brief here. You can read about the brief on Valerie Strauss's blog, The Answer Sheet, here. Andy Hargreaves tweets here.
by Andy Hargreaves and Henry Braun
The drive to enhance learning outcomes has assumed increasing salience over the last three decades. These outcomes include both high levels of tested achievement for all students and eliminating gaps in achievement among different sub-populations (raising the bar and closing the gap). This policy brief examines policies and practices concerning the use of data to inform school improvement strategies and to provide information for accountability. We term this twin-pronged movement data-driven improvement and accountability (DDIA).
Although educational accountability is meant to contribute to improvement, there are often tensions and sometimes direct conflicts between the twin purposes of improvement and accountability. These are most likely to be resolved when there is collaborative involvement in data collection and analysis, collective responsibility for improvement, and a consensus that the indicators and metrics involved in DDIA are accurate, meaningful, fair, broad and balanced. When these conditions are absent, improvement efforts and outcomes-based accountability can work at cross-purposes, resulting in distraction from core purposes, gaming of the system and even outright corruption and cheating. This is particularly the case when test-based accountability mandates punitive consequences for failing to meet numerical targets that have been determined arbitrarily and imposed hierarchically.
Timely and useful data, in the form of feedback that enables teachers, schools and systems to act and intervene to raise performance or remedy problems, are essential to enhancing teaching effectiveness and to addressing systemic improvement at all levels. At the same time, the demands of public accountability require transparency with respect to operations and outcomes, and this calls for data that are relevant, accurate and accessible to public interpretation. Data that are not relevant skew the focus of accountability. Data that are inaccurate undermine the credibility of accountability. And data that are incomprehensible betray the intent of public accountability. Good data and good practices of data use are not only essential to ensuring improvement in the face of accountability, but also integral to the pursuit of constructive accountability.
Data-driven improvement and accountability can lead either to greater quality, equity and integrity, or to deterioration of services and distraction from core purposes. The question addressed by this brief is what factors and forces can lead DDIA to generate more positive and fewer negative outcomes in relation to both improvement and accountability.
The challenge of productively combining improvement and accountability is not confined to public education. It arises in many other sectors too. This brief reviews evidence and provides illustrative examples of data use in business and sports in order to compare practices in these sectors with data use in public education. The brief discusses research and findings related to DDIA in education within and beyond the United States, and makes particular reference to our own recent study of a system-wide educational reform strategy in the province of Ontario, Canada.
Drawing on these reviews of existing research and illustrative examples across sectors, the brief then examines five key factors that influence the success or failure of DDIA systems in public education:
1. The nature and scope of the data employed by the improvement and accountability systems, as well as the relationships and interactions among them;
2. The types of indicators (summary statistics) used to track progress or to make comparisons among schools and districts;
3. The interactions between the improvement and accountability systems;
4. The kinds of consequences attached to high and low performance and how those consequences are distributed;
5. The culture and context of data use -- the ways in which data are collected, interpreted and acted upon by communities of educators, as well as by those who direct or regulate their work.

In general, we find that over more than two decades, through accumulating statewide initiatives in DDIA and then the successive federal initiatives of the No Child Left Behind Act and Race to the Top, DDIA in the U.S. has come to exert increasingly adverse effects on public education, because high-stakes, high-threat accountability, rather than improvement alone or improvement and accountability together, has become the prime driver of educational change. This, in turn, has exerted adverse and perverse effects on attempts to secure improvement in educational quality and equity. The result is that, in the U.S., Data-Driven Improvement and Accountability has often turned out to be Data-Driven Accountability at the cost of authentic and sustainable improvement.
Contrary to the practices of countries with high performance on international assessments, and of high-performing organizations in business and sports, DDIA in the U.S. has been skewed towards accountability over improvement. Targets, indicators, and metrics have been narrow rather than broad, inaccurately defined and problematically applied. Test score data have been collected and reported over timescales too short to make them reliable for purposes of accountability, or reported long after the student populations to which they apply have moved on, so that they have little or no direct value for improvement purposes. DDIA in the U.S. has focused on what is easily measured rather than on what is educationally valued. It holds schools and districts accountable for effective delivery of results, but without holding system leaders accountable for providing the resources and conditions that are necessary to secure those results.
In the U.S., the high-stakes, high-pressure environment of educational accountability, in which arbitrary numerical targets are hierarchically imposed, has led to extensive gaming and continuing disruptions of the system, with unacceptable consequences for the learning and achievement of the most disadvantaged students. These perverse consequences include loss of learning time by repeatedly teaching to the test; narrowing of the curriculum to that which is easily tested; concentrating undue attention on “bubble” students near the threshold target of required achievement at the expense of high-needs students whose current performance falls further below the threshold; constant rotation of principals and teachers in and out of schools where students’ lives already have high instability; and criminally culpable cheating.
Lastly, when accountability is prioritized over improvement, DDIA neither helps educators make better pedagogical judgments nor enhances educators’ knowledge of and relationships with their students. Instead of being informed by the evidence, educators become driven to distraction by narrowly defined data that compel them to analyze grids, dashboards, and spreadsheets in order to bring about short-term improvements in results.
The brief concludes with twelve recommendations for establishing more effective systems and processes of Data-Driven or Evidence-Informed Improvement and Accountability:
1. Measure what is valued instead of valuing only what can easily be measured, so that the educational purposes of schools do not drift or become distorted.
2. Create a balanced scorecard of metrics and indicators that captures the full range of what the school or school system values.
3. Articulate and integrate the components of the DDIA system both internally and externally, so that improvement and accountability work together and not at cross-purposes.
4. Insist on high quality data that are valid and accurate.
5. Test prudently, not profligately, like the highest-performing countries and systems, rather than testing almost every student, on almost everything, every year.
6. Establish improvement cultures of high expectations and high support, where educators receive the support they need to improve student achievement, and where enhancing professional practice is a high priority.
7. Move from thresholds to growth, so that indicators focus on improvements that have or have not been achieved in relation to agreed starting points or baselines (a small illustrative sketch follows this list).
8. Narrow the gap to raise the bar, since raising the floor of achievement by concentrating on equity makes it easier to reach and then lift the bar of achievement over time.
9. Assign shared decision-making authority, as well as responsibility for implementation, to strong professional learning communities in which all members share collective responsibility for all students' achievement and bring to bear shared knowledge of their students, as well as all the relevant statistical data on their students' performance.
10. Establish systems of reciprocal vertical accountability, so there is transparency in determining whether a system has provided sufficient resources and supports to enable educators in districts and schools to deliver what is formally expected of them.
11. Be the drivers, not the driven, so that statistical and other kinds of formal evidence complement and inform educators' knowledge and wisdom concerning their students and their own professional practice, rather than undermining or replacing that judgment and knowledge.
12. Create a set of guiding and binding national standards for DDIA that encompass content standards for the accuracy, reliability, stability and validity of DDIA instruments, especially standardized tests in relation to system learning goals; process standards for the leadership and conduct of professional learning communities and data teams and for the management of consequences; and context standards regarding entitlements to adequate training, resources and time to participate effectively in DDIA.
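To make the contrast in recommendation 7 concrete, here is a minimal sketch in Python. It is not drawn from the brief: the student names, scores, and the proficiency cut-off of 60 are all invented for illustration. It shows how a threshold indicator (percent of students above a cut-off) can understate real improvement that a growth indicator, measured against each student's baseline, makes visible.

```python
# Illustrative sketch only: threshold vs. growth indicators.
# All numbers (scores, the cut-off of 60) are invented, not from the brief.

baseline = {"Ana": 35, "Ben": 55, "Cal": 58, "Dee": 72}   # agreed starting points
current  = {"Ana": 50, "Ben": 59, "Cal": 61, "Dee": 73}   # scores one year later
CUTOFF = 60  # hypothetical proficiency threshold

def pass_rate(scores):
    """Threshold indicator: share of students at or above the cut-off."""
    return sum(s >= CUTOFF for s in scores.values()) / len(scores)

def mean_growth(before, after):
    """Growth indicator: average gain relative to each student's baseline."""
    return sum(after[name] - before[name] for name in before) / len(before)

print(f"pass rate: {pass_rate(baseline):.0%} -> {pass_rate(current):.0%}")
print(f"mean growth: {mean_growth(baseline, current):+.1f} points")

# The pass rate moves only when a student crosses the cut-off (here, Cal),
# which rewards attention to "bubble" students near the threshold. The
# growth measure credits Ana's 15-point gain even though she remains below
# the cut-off, the kind of progress the brief warns is otherwise invisible.
```

In this toy example the pass rate rises from 25% to 50% solely because one student crossed the line, while the growth measure registers improvement for every student, including those furthest below the threshold.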