Part one of this two-part discussion of the problem of healthcare data looked at the use of data for administrative purposes. This article looks at the use of data in evidence-based practice and attempts to draw some conclusions about how to use data while honouring the principle of primum non nocere.
In the first part of Does Healthcare Treat Patients or Parameters? I raised some concerns about how data can be misused in healthcare administration, to the detriment of healthcare’s over-arching goal of helping sick people. I linked this adverse effect to a doctrinaire approach to performance management, which perverts the valid contribution that performance management can indeed make to system governance.
Closely intertwined with data zealotry in healthcare administration is data zealotry in healthcare practice. Although healthcare has throughout history inferred cause-effect relationships from observed phenomena, scientific developments during the 19th and 20th centuries led to healthcare becoming strongly empirically driven. This gave rise to the Evidence-Based Medicine movement in the last quarter of the 20th century, which has striven to base all healthcare practice on empirical research findings. Beginning in medical practice, this dogma has infiltrated most disciplines within healthcare and, in general discussions of healthcare, is perhaps better named Evidence-Based Practice (EBP).
The data of EBP has given healthcare practitioners a comforting promise of certainty and precision when faced with their awesome responsibilities in the midst of baffling ambiguity and complexity. It also gives them a defensive position (both emotionally and medicolegally) when things go wrong – ‘the surgery was a complete success but unfortunately the patient died’.
But data’s seductive quality has the potential to harm clinical practice in much the same way as it is harming healthcare administration. Rather than science informing healthcare practice, it seems to be distorting the healthcare provider’s view of the patient, to the detriment of practice quality.
While researching this article, I discovered that much of what I wanted to say had already been written – and far more cogently than I could ever have managed. Dr Trisha Greenhalgh, Professor of Primary Health Care and Co-Director of the Global Health, Policy and Innovation Unit at the Centre for Primary Care and Public Health in London, incisively unpacks the problems of data zealotry in healthcare practice in her 2012 editorial in the Journal of Primary Health Care, which I strongly recommend you read.
She parses the development of EBP using insights from the philosophy and history of science —
“The pre-paradigmatic research of off-road break-away groups is typically slow, messy and characterised by wrong turnings and periodic pile-ups. But eventually some tracks are laid and a clear direction of travel is pointed out.” (p.92);
a deep understanding of the methodological underpinnings of EBP —
“ . . . the language of [EBP] . . . converts the unique individual narrative into abstracted population categories and Bayesian probabilities . . . Most medical cases, especially in primary care, fit the clean, efficient, probabilistic language of [EBP] remarkably poorly.” (p.94);
and practical examples of how these issues play out in everyday practice —
“In . . . context, and taking account of intuitive cues built from 25 years of listening to patients coughing, I classified this patient’s cough alongside the abdominal pain for which he had been fully investigated (no organic cause found) and his recurring headaches accompanied by flashbacks (post-traumatic stress disorder). I removed my doctor-as-diagnostician hat and . . . bore witness to his suffering.
The medical student who was sitting in with me later called up a guideline . . . and challenged me. Why had I not listened to the patient’s chest or asked him to blow into a meter? Why had I not completed the decision support algorithm? Why, he implicitly asked, had I not followed the rules?” (p.94)
Why, indeed? I cannot speak for Dr Greenhalgh but my answer would be: because to slavishly follow the guidelines would be to miss the opportunity to add true value for that patient, at that time.
Take the case of patients who have the audacity to suffer more than one condition at a time. There is a useful evidence-base for treating depression, diabetes or asthma, but no evidence-base for treating the depressed patient with diabetes and asthma. (Most treatment studies specifically exclude patients with multiple diagnoses.) A recommended treatment for one condition (corticosteroids for asthma) may be contraindicated in the treatment of the other (diabetes). The patient may currently be motivated to deal with one problem but not another.
When faced with a real-world patient with all their real-world complexity, the healthcare provider still has to ‘make it up as they go along’ – the scientific evidence-base is of some help but by no means provides definitive answers.
Improvising in this way is still EBP, just not data-driven practice. David Sackett, one of the main modern progenitors of Evidence-Based Medicine, noted the value of experiential learning and clinical improvisation in a 1997 paper:
“The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. By individual clinical expertise we mean the proficiency and judgment that we individual clinicians acquire through clinical experience and clinical practice.”
A nuance here is the level of ambiguity inherent in the clinical presentation. To meaningfully apply the ‘best evidence’, the practitioner first needs to correctly identify which set of evidence is best for each particular presentation. In certain contexts there might be an evidence-base for how best to make those decisions (evidence-based assessment protocols for trauma presentations to the emergency department, for example), but even then, the decisions leading to the best outcomes will rest on some clinical wisdom about which assessment protocol to apply. Barking up the wrong tree does no one any good, no matter how well ‘by the book’ it is done.
Data is most helpful when the presentation closely matches the scenarios in the studies from which the data is derived. The more dissimilar the presenting patient and condition are from the study conditions, the less relevant the data will be. Further, the more ambiguous the presentation, the more the final outcome will be determined by clinical wisdom rather than data.
In primary care, presentations are often very ambiguous indeed, the scope for assessment is very limited in terms of both time and resources, and the individual patients are quite dissimilar from the study samples in ways that matter greatly to long-term clinical outcomes (medical complexity, language, culture, health literacy, access to resources, etc.). Thus, in primary care, data is more often than not barely relevant and of little practical help in the management of individual patients.
The future of more effective and efficient healthcare lies in primary care rather than in the specialty departments of hospitals, so how can we use data in healthcare (particularly in primary healthcare) in ways that improve quality of care without doing harm to patients in the process?
I suggest we begin with remembering that facts change all the time – half of what we think we know will turn out to be wrong (but we won’t know which half for another 15 years). Sometimes what we thought was wrong will actually turn out to have been right all along. It follows from this that some humility is always in order and we should avoid dictatorial approaches to healthcare administration and provision.
Healthcare is the patient’s journey. Without the patient, there would be no role for the provider or the healthcare administrator. Providers and patients are in it together, stumbling through the dark with the dim torch of science to help guide the way. Even when the provider has been this way many times before, it may never have been with this patient, and never on this night.
Both are taking risks, both will be affected by their own and one another’s anxieties (and other feelings) and the quality of their working relationship may have as much to do with the eventual outcome as anything else, particularly in primary care. In the end, the patient’s needs, objectives and beliefs are paramount, not the provider’s. Healthcare administration should support and facilitate this relationship and avoid creating structures and processes that impede it.
Data should not dominate the processes of providing healthcare. Data’s proper role is to inform healthcare practice and to lead its development but not to drive it.
Healthcare providers should be thoroughly familiar with the evidence-base for the most common things they encounter and know how to access it for the less common things. However, the map is not the territory and the guidebook won’t have a standard operating procedure for every obstacle encountered along the way. For either providers or administrators to pretend otherwise is to risk ignoring important realities or imposing irrelevant constraints.
Healthcare providers should understand what is regarded as ‘quality care’ in the community within which they practice. This will assist them in their use (and disregard) of data. I suggest a working definition of ‘quality care’: what one’s colleagues would regard as mainstream best practice; if you are going to be wrong half the time, you want to do it in good company. In the end, consensual validation is the only real defence one can have if asked to explain one’s actions at some point in the future.
Healthcare providers should strive to understand and reflect on their patients as people. Data requires a context to have a meaning and the correct context is the patient whose healthcare is being attended. If the healthcare provider does not understand the patient, the chances increase that the wrong meaning will be ascribed to the data and the patient will be mistreated.
What should patients do? That is a topic for another article.
Greenhalgh, T. (2012). Why do we always end up here? Evidence-based medicine’s conceptual cul-de-sacs and some off-road alternative routes. Journal of Primary Health Care, 4(2), 92–97.
Sackett, D. L. (1997). Evidence-based medicine. Seminars in Perinatology, 21(1), 3–5. doi:10.1016/S0146-0005(97)80013-4
E.g. Hojat, M., Louis, D. Z., Markham, F. W., Wender, R., Rabinowitz, C., & Gonnella, J. S. (2011). Physicians’ empathy and clinical outcomes for diabetic patients. Academic Medicine: Journal of the Association of American Medical Colleges, 86(3), 359–364. doi:10.1097/ACM.0b013e3182086fe1