Discussion With David Osborne on Comments Made to Mike Petrilli’s Article On Reform (Please read the comment before this one first)

Discussion with David Osborne

On August 16, 2018 at 3:06 PM David Osborne <dosborne@ppionline.org> wrote:

Mr. Honig-

I’ve read your interesting dialogue with Mike Petrilli and could respond to many, many points, but there is one I feel absolutely compelled to comment on: your statement that “The latest CREDO report does find that urban charters do better than the average traditional public school but the effect size is tiny overall.”

The 2015 CREDO report on charters in 41 urban regions found that students in their fourth year or beyond in an urban charter school learned about 50% more than their matched peers (demographically similar, with similar past test scores) who stayed in district schools. (Table 10, p. 32.) This is so far beyond a “tiny” effect size that I can’t let it go unchallenged. 

I would also point out that the fastest improving cities in the country have been those that have embraced charter schools as a core strategy: New Orleans, DC, Denver, and Chicago.

There was a time in which too many states allowed failing charters to continue year after year, ignoring one of the central tenets of chartering: that failing schools are closed or replaced. That is still the case too often in California, Nevada, Michigan, and other states. But over the last decade more and more states and authorizers have begun to get serious about accountability, and charter performance has increased rather dramatically.

All best,

On Aug 19, 2018, at 10:29 AM, billhonig@comcast.net wrote:

David, thanks for the reply. 

I think our differences lie in how you characterize the CREDO findings. They report, in Table 10 which you cite, that students attending an urban charter for four years post a gain of .15 standard deviation in math and .1 in reading over their traditional public school counterparts. This effect size is characterized as small by the research community (.4-.5 is deemed a medium effect size). John Hattie, who examined hundreds of school improvement interventions across thousands of research evaluations in his Visible Learning series, found a good number of interventions near or above a full standard deviation, or 7-10 times the effect size of the CREDO findings. Most of these involved instituting an active curriculum that involves students in the learning process, such as reciprocal teaching or visible learning strategies, or collaborative team building aimed at increasing teacher engagement and efficacy. Charter schools as a reform strategy ranked low in his listing.

Given the small effect size found by CREDO, if you overlay a distribution of charter schools with a distribution of traditional public schools (TPS), the two distributions overlap almost entirely, nearly a one-for-one match school for school, with an extremely high percentage of schools in both distributions showing the same results: good, average, and poor. Thus, the small average difference CREDO found is much too thin a reed to justify claims that charters should be significantly expanded vis-a-vis TPS.
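The overlap claim can be made concrete with a minimal sketch, assuming (purely for illustration) that school-level effects in each sector are roughly normal with an equal, assumed spread; CREDO reports student-level effects rather than a school-level spread, so the spread values below are assumptions, not figures from the report.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def overlap_coefficient(mean_diff, spread):
    """Shared area under two equal-variance normal curves whose
    means differ by mean_diff; both arguments in the same SD units."""
    return 2.0 * phi(-abs(mean_diff) / (2.0 * spread))

# A .15 SD mean shift, under two assumed school-level spreads
for spread in (1.0, 0.3):
    ovl = overlap_coefficient(0.15, spread)
    print(f"assumed spread {spread}: curves share {ovl:.0%} of their area")
```

Under either assumed spread the two curves share the great majority of their area, which is the sense in which a .1-.15 mean difference leaves the school-by-school pictures looking nearly alike.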

Most importantly, there was wide variation among urban charters in the CREDO report. Some were in the high-performing .5 range, some showed substantially negative results, and most clustered around the middle. TPS showed similar distributions, resulting in a significant number of both charters and TPS in the high-performing category. We should be learning from both high-flying charters and the comparable percentages of high-performing traditional public schools. These stellar schools, districts, and CMOs should be examples of what to do. The issue shouldn’t be charters versus non-charters but what the most successful schools or districts do in either sector.

One small point: the overall effect size for charters in the CREDO report for all years of attendance (not just the four-year cohort) was almost negligible, in the .04 range. What can’t be determined is how much the four-year rate was influenced by weaker students leaving and their slots not being backfilled, resulting in a more rarefied cohort.

Finally, your contention that urban districts placing charter school expansion at the center of their reform efforts posted the highest test score gains in the country is not supported by the data. Some urban districts emphasizing charters did score high; some scored medium or low. Similarly, some traditional urban school districts scored high and some scored medium or low. For example, Washington DC did make large gains years ago but is now in the middle of the pack in growth in the NAEP Trial Urban District Assessment (TUDA). The TUDA results show no clear pattern favoring urban districts that emphasize charters. Non-TUDA districts showed similar patterns. For example, Long Beach, Garden Grove, and Sanger are urban districts with few charter schools that posted substantial gains.

Also, some districts you cite as high-performing because charter expansion was at the center of their reform strategy would not agree that charters were the reason for their gains. See the evaluations of Chicago public schools, for example, which identified curricular reform, professional development, team building, and community involvement as the main drivers of improvement.

I agree with your point that low-performing charters should be, and in some places have been, shuttered, and that more states should adopt stricter accountability.

Let’s continue the discussion. I hope we can agree to look for exemplars of high performance from both the charter school and traditional public school sectors and avoid framing the issue as a charter/TPS conflict. Bill

On August 22, 2018 at 6:31 AM David Osborne <dosborne@ppionline.org> wrote:

Bill-

Thanks for replying. You’re right, we do characterize the CREDO findings differently. I can’t imagine calling something that produced 90 extra days of learning (in math and ELA) “tiny” or “small.” That’s half a year of extra gain, every year.

I think the effect size is .1 to .15 because we are talking about the performance of public schools in 41 entire urban regions. I have John Hattie’s book, and the interventions you’re describing that have much higher effect sizes come from studies of much smaller samples, I suspect. Doug Harris’s study of New Orleans found a 0.2-0.4 effect size, and he called that the most rapid improvement of an urban district of which he was aware.

One could get larger effect sizes by studying far smaller samples, such as charters in Boston, or KIPP schools, or Yes Prep or Uncommon. Some of the 41 CREDO cities used bastardized versions of chartering in which terrible charter schools were allowed to remain open year after year. The effect size of systems that were true to the principles of chartering were no doubt larger.

As for New Orleans, DC, and Denver, if we’re looking at the last 10-12 years, there is significant evidence that they have been the most rapidly improving high-poverty cities in the country. (Chicago’s improvement leveled off after 2014.) As Doug Harris said, New Orleans clearly improved the fastest. Among NAEP TUDA cities, DC clearly improved the fastest. And Denver was close behind.

I know a bit about Long Beach, and I agree, it has experienced significant improvement. But it has had continuity of superintendent leadership for 20-25 years—two superintendents, the second of whom was the first’s deputy, if my memory is correct. That is so rare in the world of elected school boards that we can hardly rely on it as a practical strategy. 

I do agree that we should be learning from the highest performing schools in both sectors. But since we organize schools into systems, we have to pay attention to what kind of systems produce more of those high performing schools. And the data is clear: the fastest improvement has come when public schools are relatively autonomous, accountable for their performance (and closed or replaced if it falls below par for several years), allowed to create diverse learning models for diverse students, subject to choice and competition, and operated by nonprofit organizations. The latter is important, sadly, because elected school boards have proven themselves almost incapable of closing schools for performance if they are staffed by unionized district employees.

I look forward to continuing the dialogue.

All best,

On Aug 22, 2018, at 1:31 PM, billhonig@comcast.net wrote:

David, three points.

1. The research community does characterize any SD effect size gain below .2 as small, .5 as medium, and above .7 as large. So “small” is in comparison to other interventions with much higher results. Hattie lists fifteen such interventions above .7. Charters are near the bottom of Hattie’s rankings of approximately 100 interventions. You “suspect” that his high-impact studies have small sample sizes, but Hattie was aware of that problem and used meta-analysis to deal with it. According to Hattie, an SD gain of 1.0 is equivalent to 2-3 years of instruction, so the .1-.15 CREDO findings for urban charters are much less than the half-year gain you cite and multiples below other interventions.
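As a back-of-the-envelope check, the 2-3 years-per-SD rule of thumb cited above can be applied directly to the CREDO figures (the exact multiplier is itself contested, as discussed later in this exchange, so treat the output as illustrative):

```python
# Hattie's rule of thumb, as cited above: a gain of 1.0 standard
# deviation is roughly equivalent to 2-3 years of instruction.
YEARS_PER_SD = (2, 3)

def sd_to_years(effect_size):
    """Translate an effect size (in SDs) into a (low, high)
    range of years of instruction under that rule of thumb."""
    return tuple(effect_size * m for m in YEARS_PER_SD)

# CREDO's four-year urban-charter effects: .1 SD (reading), .15 SD (math)
for subject, es in (("reading", 0.10), ("math", 0.15)):
    low, high = sd_to_years(es)
    print(f"{subject}: {es:.2f} SD is roughly {low:.2f}-{high:.2f} years of instruction")
```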

2. Whatever the characterization, the relatively small distance found between urban charters and their TPS counterparts means that comparable percentages of schools in each distribution would fall in the “medium” effect range, so it is hard to argue that the charter sector deserves support as a key reform plank but TPS don’t. You claim that sclerotic urban TPS districts prevent improvement, but the large numbers of these high-flying TPS, magnets, etc. belie that contention. It is not just in a few stellar districts that high-performing schools are found; they are spread throughout the nation. In fact, even though the distributions of urban charters and urban TPS just about overlap, so that the percentages of “medium” effect schools are comparable, because there are so many fewer charters than urban TPS, the urban TPS have many more high performers in raw numbers. Maybe we should be concentrating on expanding those. In Los Angeles, for example, TPS magnet schools substantially outperform charters after adjusting for school populations and gifted schools.

You also argue that if you exclude low-performing charters, which should have been closed, and concentrate on the high-performing CMOs, then results would look better for charters. Of course, if we disregard the low-performing TPS, their results would also improve. The main point is that there is wide variation in both charters and TPS, and comparable percentages of each are high performing. It is a false dichotomy to contend that charter/non-charter policy is the determining factor in improvement.

So your claim in the last paragraph that there are “more” high-performing schools in the charter sector is not accurate. And inflating this false premise to justify expansion of charters freed from district controls as the only strategy to improve, on the grounds that unionized districts can’t perform, is not supported by the data.

3. Finally, your argument that urban districts where charters are a key element in improvement efforts have led the nation in test score gains during the past twelve years doesn’t fit the facts. As I wrote in the first response below, there is no clear-cut pattern in the TUDA results. Some of the charter-emphasizing districts did score high; others did not. Some districts that did not emphasize charters scored high and some didn’t. What is worth noting is that almost all urban districts taking TUDA significantly out-gained the rest of the nation.

(As I said before, I don’t agree with your using Chicago, which posted substantial gains, as an example of a district making charter expansion a central part of its improvement efforts. I doubt whether Chicago would claim that its increases were primarily driven by charters.)

Here are the TUDA results using the 12-year period you suggested.

Eighth Grade Reading, 2005-2017

Los Angeles +15

Atlanta +15

San Diego +11

Chicago +10

Boston +8

Large City +8

Washington DC +7

New York City +7

Austin +6

Nation +5

Houston +1

Charlotte-Mecklenburg +1

Cleveland -2

Fourth Grade Reading, 2005-2017

DC +22

San Diego +14

Atlanta +13 

Chicago +13

Los Angeles +12 

Boston +10

Large City +8

Nation +4

Charlotte-Mecklenburg +3

New York City +1

Austin —

Cleveland -1

Houston -5

Eighth Grade Math, 2005-2017

Atlanta +20

Chicago +18

DC +17

Los Angeles +16

San Diego +12

Boston +10

Large City +9

New York City +8

Cleveland +8

Charlotte-Mecklenburg +7

Houston +6

Nation +4

Austin +3

Fourth Grade Math, 2005-2017

DC +20

Chicago +18

Atlanta +10

San Diego +5

Boston +4

Large City +4

Los Angeles +3

Houston +2 

Nation +2 

Austin +1

Charlotte-Mecklenburg -1

New York City -1

Cleveland -6

I’m glad we can agree that we should be using high-performing charters and TPS as exemplars and support an in-depth investigation and implementation of the processes and support structures to grow their number. Bill

Reply from David Osborne

The half-year gain in learning in urban charters was CREDO’s conclusion, not my own. And I’ve found that statistics gurus disagree about whether a citywide effect size of .1 or .15 is small or not. 

As for my point about closing low performers, it is a fact that this happens more often in the charter sector than the district sector. California may be an exception, because districts authorize most charters there and many of them don’t hold those charters accountable. But in much of the country, low-performing charters are closed far more often than low-performing district schools. That’s a key part of my argument: Why would we promote a system that does not hold failing schools accountable, when we have an alternative approach that does? We need education systems that do this, whether their schools are called “charters” or not. And experience has proven that elected school boards find it much easier to close a school run by a nonprofit than a school staffed by their own employees, particularly if they are unionized.

As for the NAEP TUDA data you cite, it proves my point. There is a pattern. If you average the four scores, DC was the fastest improving of these cities—and that was just the district. The charter sector in DC, which educated 47% of the public school students last year, improved even faster.

So among TUDA cities, DC has clearly been the fastest improving.

Among all cities, according to Doug Harris’s very thorough analysis at ERA, my analysis in Reinventing America’s Schools, and many others’, New Orleans has been the fastest improving city since the conversion to charters got serious in 2006. New Orleans is the only high-poverty city I know that has outperformed its state on high school graduation rates and college-going rates. New Orleans doesn’t participate in TUDA, but if you look at improvement in test scores, graduation rates, and college-going rates, no other city can touch it.

That leaves Denver, which unfortunately also does not participate in TUDA. Sean Reardon’s data shows that Denver had the second-highest growth of any district with more than 25,000 students between 2009-10 and 2012-13 (after Lincoln, NE, which is not a high-poverty city), according to ERS (“Denver Public Schools: Leveraging System Transformation to Improve Student Results,” ERS, March 2017). And since the 2015 switch to PARCC tests in Colorado, Denver has outperformed its state in middle school scores, a rare feat for a high-poverty city. Graduation rates have risen by nearly 30 percentage points since 2005, and college-going rates have increased by more than 80% over the last decade.

I challenge you to find a high-poverty city that has improved anywhere near as fast as these three cities, all of which used chartering in a major way. 

As for Chicago, from 2005 to 2010, under Renaissance 2010, the district closed 60 low performing schools and replaced them with 92 new schools, mostly charters or other nonprofit operators. During that era, this was a major district strategy. Was that a factor in the surge in 3rd through 8th grade performance between 2009 and 2014 that Sean Reardon documented? I find it hard to believe that it wasn’t, particularly because the mayor essentially agreed to freeze chartering in 2013, and academic growth leveled off after 2014. For a thorough discussion of Chicago, see my article at https://www.the74million.org/article/is-chicago-really-the-fastest-improving-urban-district-in-the-country-such-claims-made-by-the-new-york-times-and-others-are-misleading/.

I believe, as I argued in Reinventing America’s Schools, that the data shows that this combination of system elements creates the most rapid improvement in urban public schools: school autonomy, accountability (including closure or replacement), diversity of learning models, choice, operation by nonprofit organizations, and a concerted strategy to recruit and develop talent.

Chartering does not have to be part of this. In Indianapolis Public Schools, they call their sector built on this model Innovation Network Schools. Atlanta and Tulsa call them Partnership Schools. Some cities call them Renaissance Schools. In Chicago some of them are called contract schools, some are AUSL schools. 

Whatever we call them, it’s important that we recognize what type of system works best in urban public education, then embrace it. If we continue to pretend that the top-down bureaucratic model we developed in the 20th century can do the job, we are simply fooling ourselves—and sacrificing the futures of many, many children.

With respect,

David

David Osborne
Progressive Policy Institute
4 Hovey St.
Gloucester, MA 01930
978 865 3917
Cell: 978 273 5397

@OsborneDavid

www.progressivepolicy.org

www.reinventgov.com

www.amazon.com/author/davidosborne

Final comment

David, thanks again for responding. I think our major disagreement is over the efficacy of closing low-performing schools as the indispensable ingredient in improving schools.

Preliminarily, I would caution you not to rely on CREDO’s extrapolation of a .1 or .15 effect size to a half year’s extra growth for students who were enrolled in a charter for four years. (The effect size was much smaller for all students surveyed by CREDO, .04 in reading and .06 in math, but let’s use the higher figure.) CREDO itself warned about the arbitrariness of “days added,” and a growing number of researchers such as Cohen, Pogrow, Glass, and Berliner have found no evidence for the conversion of effect size to “days added.” Most importantly, these researchers have found that CREDO significantly overstated, by several multiples, the actual magnitude of growth for the effect size reported. Most researchers characterize any effect size below .2 as “difficult to detect” and too small to make claims about differences in results. See Stanley Pogrow, https://epaa.asu.edu/ojs/article/view/2517, for a discussion of these points.

So I stand by my argument that the differences between urban charters and urban TPS are too small to be meaningful and certainly not large enough to support the claim that only charters and charter systems can significantly improve. Instead, let’s use both high-performing charters and TPS as exemplars for what all schools and districts should do to get better.

You never did refute the argument that, assuming comparable spreads, when you overlay the distributions of performance of charters and TPS, a .1 or .15 difference in the average has almost no effect on the percentage of schools scoring above the .5, or medium-effect, level in the right tail of either distribution. This means that comparable percentages of TPS and charters attain this level. Since there are many more TPS than charters, the number of high-performing TPS schools is significantly higher than the number of high-performing charters. This reality belies your claim that only charters and charter systems can effectively deliver high performance.

I think you arrived at these erroneous conclusions because of some basic flawed assumptions. 

You state: “…in much of the country, low-performing charters are closed far more often than low-performing district schools. That’s a key part of my argument: Why would we promote a system that does not hold failing schools accountable, when we have an alternative approach that does?”

1. You seem to think that the only legitimate strategy for holding schools accountable and improving performance is to close low performers. You go so far as to assert that there can be no meaningful improvement efforts in the absence of such closures and that, since TPS districts have a difficult time closing their low-performing schools, they can’t significantly improve. What evidence do you have for this extreme claim? As stated above, that contention doesn’t square with the existence of large numbers of high-performing TPS.

Moreover, while closing a school is certainly one strategy for improving low-performing schools, it is not the only one, not necessarily the best one, and it comes with severe collateral damage. Many of the non-charter-driven districts or high-performing TPS schools described above got better by pursuing a coherent “build and support” strategy of strong curriculum and materials, investing in focused professional development, building capacity and teamwork around improving instruction, good site leadership, district support, etc., as I outlined in my original discussion with Mike Petrilli. (Indeed, many of the charter-driven districts such as Washington, DC also pursued a similar instructional improvement strategy, so what caused improvements in these districts: the fact of closures or the impact of curriculum-driven support initiatives?)

These positive approaches avoid the collateral damage caused by closing schools as part of a high-stakes test-and-punish accountability system. Students are not commodities, and when you rely on closing low-performing charter schools as a crucial element in improving the quality of overall charter performance, students who attended those schools suffer. You seem to be enamored of market-based solutions, but what works in selling toothpaste doesn’t necessarily work in a public service where children are involved. If a toothpaste company goes out of business in Schumpeterian creative destruction, the customer can shift brands. A student in a closed charter school, especially if it happens during the year, suffers significant hardship. The number of students stranded or harmed by closures has become substantial. From 2001 to 2016, approximately 2,500 charters failed or closed, involving almost 300,000 students. There are currently about 7,000 charters, so those which have closed are a significant proportion of the total. Quite a price to pay for improving the performance of the charter sector.

Alternatively, closing a TPS and replacing it with a charter also causes severe collateral damage: destroying a long-standing community institution and resource, disrupting family and student relationships, or forcing youngsters to take long bus rides or walk through hostile territory. Is this level of disruption and hardship worth the small or nonexistent performance benefit of shifting to charters when there are effective, less destructive alternatives available?

2. Another major deficiency of a “close the low performers” accountability strategy is that the penalty is based primarily on test scores and fear of consequences, which not only encourages gaming the system but too often results in narrowing the curriculum and diminishing learning. Too many charters have become test-prep factories, and, unfortunately, many TPS have also succumbed to the corrupting influence of high-stakes, test-based accountability. See Daniel Koretz’s The Testing Charade for a well-argued case against tests driving instruction instead of informing instruction.

Your stated premise is that the following combination of system elements creates the most rapid improvement in urban public schools: school autonomy, accountability (including closure or replacement), diversity of learning models, choice, operation by nonprofit organizations, and a concerted strategy to recruit and develop talent. I don’t believe that premise withstands scrutiny as the only or even the best method of improving schools. I made this argument in the discussion with Mike Petrilli on the perils of high-stakes test-based accountability and the more effective alternative of a “build and support” approach.

For a fuller elucidation of these ideas on charters with supporting research visit 

TUDA

I’m not sure which of us is suffering from confirmation bias. I look at the TUDA results and see a mixed bag. To me you are cherry-picking charter-driven districts that scored high in NAEP growth and ignoring both charter-driven districts not scoring high and districts with limited charters that did well. It is true that if you average DC’s four scores they are at the top in growth (though currently stalled), with the caveat that they started extremely low and so had more room to grow. But although DC scored very high in 4th grade reading growth, by 8th grade, which is more indicative of eventual student performance, it is in the middle of the pack. In math it was at the top of the heap in 4th grade and third in 8th grade. Even counting 8th and 4th grade equally and adding all four growth scores for the twelve years, as you did, Atlanta, San Diego, and Boston, with limited charters and starting from much higher performance levels than DC, were not that far from DC’s growth scores: DC +69; Atlanta +58; San Diego +42; Boston +32, all significantly higher than the national growth gain of +15.
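The four-subject totals can be tallied directly from the tables above; a minimal arithmetic sketch, with the gain values copied from the tables exactly as listed:

```python
# Twelve-year NAEP TUDA gains from the four tables above
# (order: 8th reading, 4th reading, 8th math, 4th math; 2005-2017)
gains = {
    "DC":        [7, 22, 17, 20],
    "Atlanta":   [15, 13, 20, 10],
    "San Diego": [11, 14, 12, 5],
    "Boston":    [8, 10, 10, 4],
    "Nation":    [5, 4, 4, 2],
}

# Sum each city's four gains and list cities from largest to smallest total
totals = {city: sum(scores) for city, scores in gains.items()}
for city, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{city}: +{total}")
```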

Some charter-driven districts did not score well in the TUDA results. Some, such as Milwaukee, which does not take TUDA but has emphasized charters for years, have also done poorly, which offsets the claimed high performance of Denver. Charter emphasis does not seem to be the determining factor between high and low growth. Conversely, there are a number of high-performing and high-growth districts not enrolled in TUDA that did as well as the districts you cite, such as Long Beach, Garden Grove, and Sanger. Again, charter/non-charter doesn’t seem to make the difference; what counts is the successful support and improvement strategies the high-flyers from both sectors pursued.

New Orleans

I agree with you that test scores in New Orleans grew substantially in the past decade, although they have flattened out recently. But there is a counter-narrative. The city schools started from an extremely low base and pre-Katrina were corrupt and mismanaged. However, even after the growth spurts they are still among the lowest-scoring schools in the country. Moreover, the gains came at substantial cost: disrupting communities, resegregation, growth in inequality, and destruction of a main component of the black middle class when all teachers were fired and many replaced by short-termers. The growth has been skewed to the non-poverty schools, large numbers of schools received Ds and Fs from the charter-friendly state, and ACT scores have been disastrously low and dropping. How much growth was attributable to charters, to a change in the socioeconomic mix of students after Katrina, or to the substantial increase in per-pupil funding is an open issue. A basic question is whether New Orleans needed to adopt such harmful measures to improve or would have achieved comparable growth with less stringent alternatives. For a more extended discussion of this counter-narrative on New Orleans, written two years ago, see the following excerpt from my website www.betterschools.com. Much recent scholarship supports these findings.

Lessons from New Orleans

In some extreme instances, states have privatized entire districts, converting all public schools to charter schools. A decade ago, in the aftermath of Hurricane Katrina, Louisiana forced New Orleans to follow this path. What ensued was the wholesale elimination of the public schools that were the center of many communities, the firing of most teachers, and the creation of nonaccountable institutions under the umbrella of the state-run New Orleans Recovery School District (RSD). Unquestionably, prior to Katrina the district was severely dysfunctional and one of the lowest-scoring in the country. But the drastic measures taken in the name of reform created new problems. This is tragic given that better, less disruptive alternatives could have been pursued.

The New Orleans experience has been hyped by reform advocates as an extraordinary success story and, until recently, uncritically covered by the media. Adam Johnson wrote an excellent critique of the fawning media coverage. More objective analyses of the RSD have questioned the purported gains and detailed significant collateral damage: hours-long bus rides and other hardships foisted on children, substantial resegregation, and unaccountable schools, as well as community erosion and alienation.

Failing Grades

According to blogger and education activist Mercedes Schneider, one decade later most New Orleans Recovery School District (RSD) charter schools received Ds or Fs from a charter-friendly state education department. Out of 57 schools, 15 received Fs or were so low as to be in turnaround status; 17 received Ds; only 7 received Bs; and none earned an A. The RSD schools still rank among the lowest-scoring schools in the country. Schneider also cites a recent report showing that only an embarrassing 12% of the high school students in the district who took the ACT college preparation test scored high enough under the state’s regent requirement to qualify for a Louisiana four-year college. Schneider has also debunked claims of better-than-average graduation rates.

Other people have documented the continued extremely low performance of the RSD despite a decade’s worth of effort. Among them are Julian Vasquez Heilig and Andrea Gabor, the latter of whom raised potent questions about the viability of the New Orleans model for reform when she wrote a response to the defenders of the district in The New York Times. See also “The Uncounted,” Owen Davis’s blog post that raises the possibility that the New Orleans reform effort harmed the city’s most vulnerable children:

A decade after Hurricane Katrina spurred New Orleans to undertake a historic school reform experiment—a shift to a virtually all-charter district with unfettered parent choice—evidence of broader progress is shot through with signs that the district’s most vulnerable students were rebuffed, expelled, pushed out or lost altogether.

For another negative report on the supposed success of the RSD, see Ten Years after Katrina, New Orleans’ All-Charter School System Has Proven a Failure. Finally, an editorial in The New Orleans Tribune, a major African-American newspaper, decried the reform efforts in New Orleans and their meager results.

In 2015, Frank Adamson, Channa Cook-Harvey, and Linda Darling-Hammond produced the most comprehensive and exhaustive examination of the New Orleans experiment in districtwide charters. Whose Choice? Student Experiences and Outcomes in the New Orleans School Marketplace is their 72-page report developed for the Stanford Center on Opportunity Policy in Education (SCOPE). The authors came to conclusions similar to those I have previously discussed. The New Orleans experiment led to the creation of a stratified system, which more often than not produced low-quality education and was highly detrimental to large numbers of vulnerable students and their communities. They demonstrated that claims of increased performance for the RSD were not warranted and that schools in the RSD still scored extremely low on measures using accurate data.

Limited Gains and Unnecessary Damage

Even reports that found some progress demonstrate that, in light of the extremely low starting point, the gains in New Orleans have been minimal. After 10 years, the effect size ranges from only 0.2 to 0.4 SD, still leaving the district one of the lowest-scoring in the nation, with one of the country’s highest levels of economic and educational disparity by race.

The alleged gains could just as easily be attributed to the substantial increases in funding that occurred over the last decade or to changes in demographics, since large numbers of low-achieving students left New Orleans after Katrina. Clearly, these small increases were hardly worth the major disruptions caused by closing just about every local school and firing 7,000 teachers, most of whom formed the backbone of the African-American middle class in the city. For a heart-wrenching account of the callous treatment of New Orleans teachers, see “Death of My Career: What Happened to New Orleans’ Veteran Black Teachers?” in Education Week and the extensive quotations in the SCOPE report cited above. For a forum with differing points of view on the New Orleans experience, see the Albert Shanker Institute’s series of conversations “Ten Years After the Deluge: The State of Public Education in New Orleans.” Finally, Charter Schools, Race, and Urban Space: Where the Market Meets Grassroots Resistance, by Kristen Buras (2014), provides a devastating look at the harm caused in New Orleans by the abandonment of public schools.

Unquestionably, some excellent charter schools have been created in New Orleans, and many dedicated teachers and principals are making heroic efforts to improve instruction. Yet better schools and outcomes could have been produced without such drastic measures. Even researchers who supported the reforms have declared that New Orleans should not be held up as a model for the nation.

David, again I hope we can agree on using what the most successful schools from both sectors have done to improve performance and avoid an unfruitful debate over whether the combination of charters, high-stakes accountability based on test scores, and radical decentralization is the only or best way to improve public education. Bill

