'Bad science' should not be used to justify NHS reforms

Following months of argument, negative press and sheer panic around the future of the NHS comes this article from the Guardian, questioning the accuracy of the reports published by several academics who support the reforms:

The drip-feed of pro-competition arguments from economists Julian Le Grand and Zack Cooper at the London School of Economics raises serious questions about the independence and academic rigour of research by academics seeking to reassure government of the benefits of market competition in healthcare.
Last July, Cooper and several colleagues released an unpublished paper to coincide with the prime minister's announcement that he was setting up a forum in response to concerns about his health bill. The authors were sufficiently persuasive for David Cameron to declare: "Put simply: competition is one way we can make things work better for patients. This isn't ideological theory. A study published by the London School of Economics found hospitals in areas with more choice had lower death rates."
The study in question claimed that competition in the NHS saved lives. The authors claimed that, taking heart attack mortality rates as an indicator of quality, mortality fell more quickly, and therefore quality improved for patients, after competition between hospitals was introduced to the NHS in their area. But if you examine the evidence it is clear that competition had nothing to do with it. The intervention that the authors claimed reduced deaths from heart attacks was patient choice – a proxy for competition. In 2006, patients were given a choice of hospitals, including private providers, for some selected treatments, mainly non-emergency surgery. Yet there is no biological mechanism to explain why having a choice of providers for cataract, hip and knee operations could affect the overall survival rate from heart attacks. These are emergencies in which patients do not exercise choice over where they are treated and are usually treated in the NHS.
As the government's own cardiac tsar Roger Boyle explains: "Patients can't choose where to have their heart attack or where to be treated. It is bizarre to choose a condition where choice by consumer can have virtually no effect. Patients suffering severe pain in emergencies clouded by strong analgesia don't make choices. It's the ambulance driver who follows the protocol and drives to the nearest heart attack centre."
So, among the numerous problems with this study, the authors have made the cardinal error of confusing minor statistical associations with causation. Deaths from acute heart attacks are not a measure of the quality of hospital care as a whole, as they claim, but rather a measure of access to and quality of cardiology care. Gwyn Bevan, professor of management science at the London School of Economics, who carried out a review of patient choice and competition in the BMJ, commented on the paper's shortcomings. He went on to say that he was "perplexed" by Andrew Lansley's emphasis on the role of choice and competition because "the evidence is very weak and contested".
"In fact, I would argue that we don't have any strong evidence of that effect. To my mind, the jury is at best still out on whether choice and competition will improve quality of care in the NHS." 
Cooper and colleagues were at it again in February, press-releasing another as yet unpublished paper, once again coinciding with an important NHS event – Cameron's summit on the NHS bill. This time the authors claimed that length of stay fell more rapidly in NHS hospitals experiencing greater competition, but they appeared to be unaware that lengths of stay differ between the four conditions they chose to examine. These were elective hip replacements, knee replacements, hernia repairs and arthroscopies (keyhole examination and sometimes surgery to repair joint damage), for which lengths of hospital stay vary widely. Arthroscopy may be done as an outpatient or day case procedure and therefore may not be recorded in statistics derived from admissions to hospital. Hernia repair usually involves admission as a day case, although this varies according to the type of procedure, and median lengths of stay range between one and two days. In contrast, for hip and knee replacements the median lengths of postoperative stay are four or five days, depending on the procedure.
So, if providers switch to doing more arthroscopies and hernia repairs and fewer hip and knee replacements, they will appear to have shortened their pre-operative and post-operative length of stay to less than a day. Length of stay should also take account of other factors, such as whether patients are fit for discharge, especially if they live alone, and the need to avoid readmissions due to complications or premature discharge. So if hospitals switch to operating on patients who are well and healthy, or to easier procedures, they will also appear to have shortened their length of stay.
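To see how a shift in case mix alone can flatter the figures, here is a minimal sketch in Python. The procedure counts and day values are purely illustrative assumptions, not figures from the study or from NHS data:

```python
# Hypothetical lengths of stay in days -- illustrative values only, not real NHS data.
los = {"arthroscopy": 0, "hernia repair": 1,
       "hip replacement": 5, "knee replacement": 4}

def mean_length_of_stay(case_counts):
    """Average stay across all cases, weighted by how many of each procedure were done."""
    total_days = sum(los[proc] * n for proc, n in case_counts.items())
    return total_days / sum(case_counts.values())

# Year 1: an even mix of the four procedures.
year_1 = {"arthroscopy": 100, "hernia repair": 100,
          "hip replacement": 100, "knee replacement": 100}

# Year 2: per-procedure stays are unchanged, but the mix shifts towards
# day cases and away from joint replacements.
year_2 = {"arthroscopy": 200, "hernia repair": 150,
          "hip replacement": 50, "knee replacement": 50}

print(f"Year 1 mean stay: {mean_length_of_stay(year_1):.2f} days")  # 2.50
print(f"Year 2 mean stay: {mean_length_of_stay(year_2):.2f} days")  # 1.33
```

The average falls from 2.5 days to about 1.3 days even though no procedure is done any faster; the change in case mix does all the work.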
Equally, the authors did not look at how clinical coding changed following the introduction of the "payment by results" tariff in 2006, which was modelled on the payment system used in the US. Gaming, upcoding and diagnostic drift are widely recognised in US research, where providers seek to increase their payments through fraudulent billing and accounting: claiming for work that hasn't been done, or making out that patients were sicker, more complicated and more expensive to treat than they really were.
Even without fraud, in the NHS an arthroscopy that may previously have been coded as outpatient activity, or not at all (ie it would not have been counted as an admission), may now be recorded separately as a day case inpatient procedure. Similarly, patients undergoing simple surgical hip replacements might be billed as more complex.
These changes in coding distort measures of productivity: providers appear more efficient because they seem to be doing both more cases and more complex operations and procedures in the same period.
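A similarly minimal sketch, again using invented tariff prices and case counts rather than real NHS figures, shows how an identical clinical workload can generate more recorded admissions and more income once the coding changes:

```python
# Hypothetical tariff prices per coded episode (pounds) -- illustrative only.
tariff = {"arthroscopy (day case)": 900,
          "hip replacement (simple)": 5000,
          "hip replacement (complex)": 6500}

def recorded_activity(admissions):
    """Total recorded admissions and tariff income for a set of coded episodes."""
    cases = sum(admissions.values())
    income = sum(tariff[code] * n for code, n in admissions.items())
    return cases, income

# Year 1: 200 arthroscopies are done as outpatient work (never coded as
# admissions), and all 100 hip replacements are coded as simple.
year_1 = {"hip replacement (simple)": 100}

# Year 2: identical clinical workload, but the arthroscopies are now coded
# as day-case admissions and 30 hip replacements are upcoded to "complex".
year_2 = {"arthroscopy (day case)": 200,
          "hip replacement (simple)": 70,
          "hip replacement (complex)": 30}

for label, coded in (("Year 1", year_1), ("Year 2", year_2)):
    cases, income = recorded_activity(coded)
    print(f"{label}: {cases} recorded admissions, £{income:,} in tariff income")
# Year 1: 100 recorded admissions, £500,000 in tariff income
# Year 2: 300 recorded admissions, £725,000 in tariff income
```

Recorded admissions triple and income rises by nearly half, even though exactly the same operations were performed; the extra comes entirely from counting the arthroscopies as admissions and from the upcoding.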
Le Grand and Cooper call themselves "empiricists" and all those who disagree with them "intuitivists". Yet unlike scientists, they do not appear to have carried out real-life observational work in general practice or on the wards, nor have they thought through how financial incentives can change the data. Neither do they appear to have tested their theories with experiments, or adapted their models to see whether the results are equally compatible with different explanations among the many that could be derived from historical data. While their data dredging has generated weak statistical associations, they have made the cardinal error of assuming these associations were causal. Bad science makes bad policy, bad policy leads to careless talk and careless talk costs lives.

http://www.guardian.co.uk
