The MET 5-min mile: Measuring performance of medical emergency teams

 


Bangor University, School of Medical Sciences, Bangor, UK
Betsi Cadwaladr University Health Board, Wrexham, UK
Received: May 3, 2014; Published Online: May 16, 2014
DOI: http://dx.doi.org/10.1016/j.resuscitation.2014.05.010

On 13 April 2014 Mo Farah ran his first full marathon, in London. He took 2:08:21 to complete the run, at an average pace of around 5 min per mile. These are probably the slowest miles he has run for some time. In a 10 km run he would sustain a significantly higher pace, constantly pushing to get closer to 4 min per mile. So was his speed per mile in the marathon good or bad? It probably depends on whether one measures it against the pace of short-, middle- or long-distance runners, or whether one makes winning the race the prime outcome measure. Even with these caveats, quantifying performance is comparatively easy in the world of athletics and significantly more complex in the world of Medical Emergency Teams (MET) or Rapid Response Teams (RRT).
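As a back-of-the-envelope check of that figure, a marathon is 26.22 miles, so 2:08:21 works out at a little under 5 min per mile; the minimal Python sketch below is purely illustrative of the arithmetic.

```python
# Rough check of the pace quoted above (illustrative only).
marathon_miles = 26.22                   # standard marathon distance in miles
finish_seconds = 2 * 3600 + 8 * 60 + 21  # 2:08:21 expressed in seconds

pace = finish_seconds / marathon_miles   # seconds per mile
minutes, seconds = divmod(pace, 60)
print(f"Average pace: {int(minutes)} min {seconds:.0f} s per mile")
# Prints roughly "Average pace: 4 min 54 s per mile", i.e. around 5 min per mile.
```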

The current edition of Resuscitation contains a review of the performance of what can only be described as ‘the original MET’: Ken Hillman’s unit at Liverpool Hospital in Sydney, Australia, has influenced the way we see patient safety in hospitals more than almost any other team in the world. The Australian MET reviewed more than 19,000 patients between 2000 and 2012, enough to populate a small town. Given that the team had already been at work for nearly 10 years at the start of the study period, the already good rates of cardiopulmonary arrest and unplanned intensive care admission showed no major further reduction. Hospital mortality decreased by 20% between 2005 and 2012: overall, impressive results and a hospital where most readers would feel safe.

Let’s assume for a moment that Australia hits a deep recession and administrators at Liverpool Hospital want to cut costs. Do the data show that the MET kept cardiac arrests at a low rate and pushed down mortality? Is the investment in the service worth keeping? The authors of the paper imply that the MET’s performance is linked to the good results of the hospital, but in a fierce discussion this might be a more complicated argument: did mortality in other hospitals in Australia also improve? Probably, but most of them would have implemented METs at around 2005, so comparison is tricky. Were there other initiatives that might have impacted on mortality? Surgical checklists? The Surviving Sepsis Campaign? Better training of doctors and nurses?

The Institute for Healthcare Improvement’s (IHI) 100,000 Lives Campaign concluded in 2006 that RRTs were a critical ingredient in the package of measures that helped participating units to reduce standardised mortality rates. In 2009 an evaluation examined the Safer Patients Initiative in the UK, which had led to the implementation of Rapid Response Teams and other patient safety interventions in a number of UK hospitals. Despite the improvements in mortality in the trial units, a wider analysis showed similar results in non-participating units. Did the METs not make a difference? Or was the profile of the interventions so high that everybody implemented them at around the same time? Or did they ‘just’ influence organisational culture?

How can we use data to work out which of these answers is correct and to show that METs are the reason for improvements in the outcomes of hospitalised patients? The data collection recommended for METs in the literature is almost certainly unworkable without automated data capture. In the world of the IHI, three types of data would be required to show the effectiveness of an intervention. Outcome measures represent the voice of the patient and are usually the ones that reflect whether the intervention is an improvement: what are we trying to achieve? Most RRTs would aim for reductions in hospital mortality, avoidable death and preventable cardiac arrests. But how about measuring length of hospital stay for patients with physiological abnormalities? There is a reasonable body of evidence that delayed treatment drives up days in hospital. Process measures evaluate whether systems are working as planned and whether there is uptake of the intervention: how many patients meet potential activation criteria for a MET, and how many receive the intervention? Do staff record vital signs, recognise abnormality and report to the MET, and does the team respond appropriately? Last up are balancing measures: does our intervention have unintended consequences in other parts of the system? Is the ICU swamped by patients identified by the MET who would not have needed ICU before? Is hospital mortality actually improved? Are patients with other needs showing signs of neglect? Is overzealous use of fluids and antibiotics linked to iatrogenic pulmonary oedema or to the development of multi-resistant bacterial strains?
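As a minimal sketch of how these three measure types might be reduced to routinely computable numbers, the Python functions below use invented counts and definitions (they are assumptions for illustration, not the IHI or Utstein specifications) to show one outcome, one process and one balancing measure.

```python
def outcome_measure(hospital_deaths: int, admissions: int) -> float:
    """Outcome: crude hospital mortality per 100 admissions."""
    return 100 * hospital_deaths / admissions

def process_measure(patients_meeting_criteria: int, patients_reviewed: int) -> float:
    """Process: proportion of patients fulfilling activation criteria who were reviewed by the MET."""
    return patients_reviewed / patients_meeting_criteria

def balancing_measure(icu_admissions_via_met: int, icu_admissions_total: int) -> float:
    """Balancing: share of all ICU admissions generated by MET referrals (a crude check on unintended workload)."""
    return icu_admissions_via_met / icu_admissions_total

# Hypothetical monthly figures, purely for illustration:
print(f"Mortality:   {outcome_measure(hospital_deaths=120, admissions=4500):.1f} per 100 admissions")
print(f"Reviewed:    {process_measure(patients_meeting_criteria=300, patients_reviewed=240):.0%}")
print(f"ICU via MET: {balancing_measure(icu_admissions_via_met=35, icu_admissions_total=110):.0%}")
```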

Given the limited and non-standardised data sets that most METs collect, it is often difficult to quantify the part that METs and Rapid Response Systems play in reducing mortality. What could be a sensible minimum data set? As a working hypothesis we would have to assume that the effect of a MET lies in faster treatment of deteriorating patients with reversible pathology, or that, by raising awareness of acute physiological deterioration, fewer patients get anywhere near a state of physiological instability. Additionally, we would need to show that the majority of patients with abnormal vital signs or other features of physiological deterioration receive appropriate and timely care by the MET, with subsequent clinical improvement.
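One way to make such a minimum data set concrete is a single structured record per episode of deterioration. The sketch below is a hypothetical illustration only; the field names and choices are assumptions, not a validated or recommended instrument.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class MetReview:
    """Hypothetical minimum data set for one episode of deterioration (illustrative only)."""
    patient_id: str
    first_abnormal_obs: datetime        # first documented trigger-level abnormality
    met_activation: Optional[datetime]  # None if the team was never called
    reversible_pathology: bool          # was a treatable cause identified?
    improved_within_24h: bool           # clinical improvement after review?
    disposition: str                    # e.g. "ward", "ICU", "palliative care", "died"

    @property
    def minutes_to_activation(self) -> Optional[float]:
        """Delay from first documented abnormality to MET activation, if the team was called."""
        if self.met_activation is None:
            return None
        return (self.met_activation - self.first_abnormal_obs).total_seconds() / 60
```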

How about failure to rescue? The assumed cause of many adverse events is failure to activate the MET. The number of missed opportunities is even more difficult to quantify, but the progress of electronic patient record implementation in the US, following Obama’s Affordable Care Act, represents a major opportunity to quantify the number of potential events and the proportion in which team activation occurred.
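Under that assumption the calculation itself is simple: scan documented observations for trigger-level abnormality and check whether a MET activation followed within a defined window. The sketch below is hypothetical; the 30-minute window and the event representation are arbitrary illustrative choices, not published standards.

```python
from datetime import datetime, timedelta
from typing import List, Optional, Tuple

def timely_activation_rate(events: List[Tuple[datetime, Optional[datetime]]],
                           window: timedelta = timedelta(minutes=30)) -> float:
    """Proportion of deterioration events with a MET activation inside the window.

    Each event is (time of first trigger-level abnormality, time of MET activation or None).
    """
    if not events:
        return float("nan")
    rescued = sum(1 for onset, call in events if call is not None and call - onset <= window)
    return rescued / len(events)

# Hypothetical example: three deteriorating patients, one late call, one missed call.
t0 = datetime(2014, 2, 3, 8, 0)
events = [(t0, t0 + timedelta(minutes=12)),   # timely activation
          (t0, t0 + timedelta(minutes=45)),   # late activation
          (t0, None)]                         # never activated
print(f"Timely activation in {timely_activation_rate(events):.0%} of events")
```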

Very few teams will currently hold enough data to document both reliable activation and improvement of deteriorating patients. For METHOD (Medical Emergency Teams Hospital Outcomes in a Day) we have recently challenged teams to collect this type of pragmatic data for a week. Fifty-one teams from three continents took up the challenge in February 2014. First results will be shown at the international Rapid Response meeting in Miami in May. But it is already clear that there is a need for a pragmatic data collection format that allows the outcomes of MET calls to be captured. Only then will we be able to say whether the system performance was good, adequate or just too slow, and which team is getting close to the MET equivalent of the 5-min mile.

Conflict of interest statement

Chris Subbe is a Principal Investigator of a study sponsored by Philips Healthcare.

References

  1. http://www.bbc.com/sport/0/athletics/27009201 [accessed May 2].
  2. Herod, R., Frost, S.A., Parr, M., Hillman, K., and Aneman, A. Long term trends in medical emergency team activations and outcomes. Resuscitation 2014.
  3. Hillman, K., Chen, J., Cretikos, M. et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365:2091–2097.
  4. Semmens, J.B., Aitken, R.J., Sanfilippo, F.M., Mukhtar, S.A., Haynes, N.S., and Mountain, J.A. The Western Australian Audit of Surgical Mortality: advancing surgical accountability. Med J Aust 2005;183:504–508.
  5. Wright, J., Dugdale, B., Hammond, I. et al. Learning from death: a hospital mortality reduction programme. J R Soc Med 2006;99:303–308.
  6. http://www.surgeons.org/media/12661/LST_2009_Surgical_Safety_Check_List_%28Australia_and_New_Zealand%29.pdf [accessed May 2].
  7. Dellinger, R.P., Carlet, J.M., Masur, H. et al. Surviving Sepsis Campaign guidelines for management of severe sepsis and septic shock. Crit Care Med 2004;32:858–873.
  8. Wachter, R.M. and Pronovost, P.J. The 100,000 lives campaign: a scientific and policy review. Jt Comm J Qual Patient Saf 2006;32:621–627.
  9. Benning, A., Dixon-Woods, M., Nwulu, U. et al. Multiple component patient safety intervention in English hospitals: controlled evaluation of second phase. BMJ 2011;342:d199.
  10. Peberdy, M.A., Cretikos, M., Abella, B.S. et al. Recommended guidelines for monitoring, reporting, and conducting research on medical emergency team, outreach, and rapid response systems: an Utstein-style scientific statement. A Scientific Statement from the International Liaison Committee on Resuscitation; the American Heart Association Emergency Cardiovascular Care Committee; the Council on Cardiopulmonary, Perioperative, and Critical Care; and the Interdisciplinary Working Group on Quality of Care and Outcomes Research. Resuscitation 2007;75:412–433.
  11. http://www.ihi.org/resources/Pages/ImprovementStories/SuccessfulMeasurementForImprovement.aspx [accessed May 3].
  12. Rivers, E., Nguyen, B., Havstad, S. et al. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med 2001;345:1368–1377.
  13. Subbe, C.P. and Welch, J.R. Failure to rescue: using rapid response systems to improve care of the deteriorating patient in hospital. Clin Risk 2013;19:6–11.
  14. Morris, A., Owen, H.M., Jones, K., Hartin, J., Welch, J., and Subbe, C.P. Objective patient-related outcomes of rapid-response systems – a pilot study to demonstrate feasibility in two hospitals. Crit Care Resusc 2013;15:33–39.

