2: The emergence of patient safety

Improving the quality of healthcare


Had substantial progress not been made in the understanding and practice of quality improvement, it is highly unlikely that the tougher issues underlying patient safety would have emerged. Although Ernest Codman was one of the few clinicians to explicitly address error (in the context of surgery), there are many other examples of pioneering quality initiatives early in the 20th century. For instance, in 1928, the British Ministry of Health set up a committee to examine maternal morbidity and mortality, instigating confidential enquiries on 5800 cases (Kerr, 1932). This spurred Andrew Topping, a remarkable Medical Officer of Health, to set up his own programme, which became known as the Rochdale experiment. At that time, Rochdale, an industrial town in the north of England, had a maternal mortality of 9 deaths per 1000 deliveries. Topping instituted ante-natal clinics and meetings between midwives and family doctors, established a puerperal fever ward and a specialist consultant post, and backed it all with education and public meetings. Within five years mortality had fallen to 1.7 per 1000 (Oxley, Phillips and Young, 1935).


National reports on maternal mortality were produced intermittently in subsequent years, but progress was rather haphazard. Finally, the Confidential Enquiry into Maternal Deaths was established which, since 1952, has produced triennial reports on all maternal deaths and endeavoured to establish why they had occurred and how they might be prevented (Sharpe and Faden, 1998). Similar enquiries are now conducted into deaths after surgery, stillbirth, and homicide and suicide (Vincent, 1993).


By the early 1970s, it was clear that there were widespread variations in quality of care across different geographical areas; for instance, at that time in the United States, surgery might be routinely offered for a particular medical condition in one state but hardly ever in a neighbouring state with a similar population (Wennberg and Gittelsohn, 1973). Quality problems were inferred from these variations, but much of the imperative for examining variation, particularly in the United States, stemmed from economic considerations rather than from the harm caused by unnecessary surgery.


Attempts were also being made to improve the processes and organization of healthcare, drawing on the practice and methodology of quality assurance approaches in manufacturing industry, such as continuous quality improvement, total quality management, business process re-engineering and quality circles. Such methods had been particularly influential in Japan, sometimes being credited with the emergence of high quality and reliability in the Japanese motor industry. These approaches combine a respect for and reliance on data as a basis for quality improvement with an attempt to harness the ideas and creativity of the workforce to create change, test the effects and sustain them (Langley et al., 1996).


Regulatory agencies and professional societies investigated and acted on complaints made about healthcare professionals although, in Britain at least, this seldom extended to assessment of clinical competence. Amazingly, it was only in 1995 that the General Medical Council was finally empowered, by Act of Parliament, to investigate the clinical abilities of doctors as well as their general conduct (HMG, 1995). Prior to that, sexual misdemeanours might bring down the wrath of the Council, but competence did not fall within its remit.


Doctors and other clinical staff were, as always, committed to providing high quality care to individual patients. However, the quality of the system overall was really someone else’s business; they wanted to be left to get on with treating their patients. The basic assumption for many was that quality was a natural outcome of conscientious work by highly motivated clinicians, with quality problems being due to the occasional ‘bad apple’. In 1984, Robert Maxwell (Maxwell, 1984) still had to argue that an honest concern about quality, however genuine, is not the same as methodical assessment based on reliable evidence. There was also little understanding that poor quality might not be due to bad apples, but inherent in the very structures and processes of the healthcare system itself.


Progress over the next decade in the United Kingdom has been well summarized by Kieran Walshe and Nigel Offen (2001), in their account of the background to the events at the Bristol Royal Infirmary:


Between 1984 and 1995 the place of quality improvement in the British NHS was transformed. At the start of that period… clinicians took part in a range of informal and quasi-educational activities aimed at improving the quality of practice, but there were few, if any, health care organizations who could claim to have a systematic approach to measuring or improving quality. Moreover many clinicians and professional organizations had a record of being disinterested, sceptical, or even actively hostile towards the idea that systematic or formal quality improvement activities had much to offer in healthcare.


10 years later much had changed. A raft of national and local quality initiatives… had generated a great deal of activity, virtually all healthcare organizations had established clinical audit or quality improvement systems and structures, and the culture had been changed substantially. It had become common to question clinical practices and to seek to improve them, activities which might have been difficult or even impossible a decade earlier.
(WALSHE AND OFFEN, 2001: P. 251)


The developments described by Walshe and Offen in Britain were paralleled in other healthcare systems, although with different emphases and different timescales. This section can obviously only sketch a very rough outline of the evolution of quality assurance in healthcare. The main thrust should, however, be clear. In the 1980s and early 1990s, prior to the full emergence of patient safety, there was a massive growth in awareness of the importance of systematic quality improvement. Clinicians, managers and policy makers began to understand that quality was not just another headline-capturing government initiative to be endured while it was flavour of the month, but was here to stay. This was an essential support and background to the hard look at the damage done by healthcare that was to follow.


Learning from error


In 1983, Neil McIntyre, Professor of Medicine, and the philosopher Sir Karl Popper published a paper, ‘The critical attitude in medicine: the need for a new ethics’, which called for clinicians to actively seek out errors and use them to advance both their personal knowledge and medical knowledge generally. This paper is densely, almost unbelievably, rich in ideas and embraces ethics, the philosophy of science, the doctor-patient relationship, attitudes to fallibility and uncertainty, professional regulation and methods for enhancing the quality of care. Summarizing all the arguments is not feasible, but two extracts illustrate some of the main themes:


To learn only from one’s own mistakes would be a slow and painful process and unnecessarily costly to one’s patients. Experiences need to be pooled so that doctors may also learn from the errors of others. This requires a willingness to admit one has erred and to discuss the factors that may have been responsible. It calls for a critical attitude to one’s own work and that of others.


No species of fallibility is more important or less understood than fallibility in medical practice. The physician’s propensity for damaging error is widely denied, perhaps because it is so intensely feared… Physicians and surgeons often flinch from even identifying error in clinical practice, let alone recording it, presumably because they themselves hold… that error arises either from their or their colleagues’ ignorance or ineptitude. But errors need to be recorded and analysed if we are to discover why they occurred and how they could have been prevented.
(MCINTYRE AND POPPER, 1983: P. 1919)


The call to learn from mistakes has close links with Popper’s philosophy of science, in which he argued that scientific knowledge is inherently provisional and that progress in science depends, at least in part, on the recognition of flaws in accepted theories. Popper argued that, while there is some truth in the traditional view that knowledge grows through the accumulation of facts, advances often come about through the recognition of error, by the overthrow of old knowledge and mistaken theories. In this view, error becomes something of value, a resource and a clue to progress, both scientifically and clinically. Many famous scientists, such as Sir Peter Medawar, have been profoundly influenced by Popper in their approach to fundamental scientific problems, finding the emphasis on hypothesis and conjecture both creative and liberating (Medawar, 1969).


McIntyre and Popper (1983) argue that being an authority, in the sense of a wise and reliable fount of knowledge, is often seen as a professional ideal in both science and medicine. However, this idealized view of authority is both mistaken and dangerous. Authority tends to become important in its own right; an authority is not expected to err and, if he does, his errors tend to be covered up to uphold the idea of authority. So mistakes are hidden, and the consequences of this tendency may be worse than those of the mistakes themselves.


It is not only scientific authority that is questioned here, but professional authority of all kinds. In medicine this means that, while one should respect the knowledge and experience of senior clinicians, one should not regard them as ‘authorities’ in the sense of inevitably being correct. An environment in which junior staff feel unable to question senior staff about their decisions and actions is profoundly dangerous to patients. There are, of course, many obstacles to more open communication, and the spirit of Karl Popper may be no help to the hapless junior doctor when an authoritarian consultant turns a baleful eye on them. Popper’s view of error is, however, a constant reminder that error and uncertainty are no respecters of status.


Reminding ourselves that we may be wrong, and that an absolute sense of certainty can be highly misleading, is not something that comes easily to us. Gerd Gigerenzer has advised us always to remember what he terms ‘Franklin’s law’, so called because of Benjamin Franklin’s statement that nothing in life is certain except death and taxes (Gigerenzer, 2002). Franklin’s law makes us mindful of fallibility and uncertainty, enabling us to constantly reappraise apparent certainties in the certainty that some of them will turn out to be wrong!


Tragedy and opportunities for change


The knowledgeable health reporter for the Boston Globe, Betsy Lehman, died from a drug overdose during chemotherapy. Willie King had the wrong leg amputated. Ben Kolb was eight years old when he died during ‘minor’ surgery due to a drug mix-up. These horrific cases that made the headlines are just the tip of the iceberg. (Opening paragraph of the Institute of Medicine report, To err is human, Kohn, Corrigan and Donaldson, 1999.)


Certain ‘celebrated’ cases attain particular prominence and evoke complicated reactions. Cook, Woods and Miller (1998) describe some particularly sad cases in their introduction to the report ‘A tale of two stories: contrasting views of patient safety’ and make some important comments about the public perception of these cases:


The case of Willie King in Florida, in becoming the ‘wrong leg case’, captures our collective dread of wrong site surgery. The death of Libby Zion has come to represent not just the danger of drug-drug interaction but also the issues of work hours and supervision of residents – capturing symbolically our fear of medical care at the hands of overworked, tired or novice practitioners without adequate supervision. Celebrated cases such as these serve as markers in the discussion of the healthcare system and patient safety. As such, the reactions to these tragic losses become the obstacles and opportunities to enhance safety.
(COOK, WOODS AND MILLER 1998: P. 7)


Cook, Woods and Miller go on to argue that the public account of these stories is usually a gross over-simplification of what actually occurred, and that it is equally important to investigate run-of-the-mill cases and success stories in order to understand the complex, dynamic process of healthcare. Such disastrous cases, however, came to symbolize fear of a more widespread failure of the healthcare system, provoking more general concerns about medical error. Perhaps it isn’t just a question of finding a good, reliable doctor. Perhaps the system is unsafe? Such concerns are magnified a hundredfold when there is hard evidence of longstanding problems in a service and a series of tragic losses. This is well illustrated by the events that led to the UK Inquiry into infant cardiac surgery at the Bristol Royal Infirmary (Box 2.1).



BOX 2.1 Events leading up to the Bristol inquiry


In the late 1980s, some clinical staff at the Bristol Royal Infirmary began to raise concerns about the quality of paediatric cardiac surgery by two surgeons. In essence it was suggested that the results of paediatric cardiac surgery were less good than at other specialist units and that mortality was substantially higher than in comparable units. Between 1989 and 1994, there was a continuing conflict at the hospital about the issue between surgeons, anaesthetists, cardiologists and managers. Agreement was eventually reached that a specialist paediatric cardiac surgeon should be appointed and in the meantime that a moratorium on certain procedures should be observed. In January 1995, before the surgeon was appointed, a child called Joshua Loveday was scheduled for surgery against the advice of anaesthetists, some surgeons and the Department of Health. He died and this led to further surgery being halted, an external enquiry being commissioned and to extensive local and national media attention.


Parents of some of the children complained to the General Medical Council which, in 1997, examined the cases of 53 children, 29 of whom had died and 4 of whom suffered severe brain damage. Three doctors were found guilty of serious professional misconduct and two were struck off the medical register.


The Secretary of State for Health immediately established an Inquiry, costing £14 million, chaired by Professor Ian Kennedy. The Inquiry began in October 1998 and the report, published in July 2001, made almost 200 recommendations.


(REPRODUCED FROM QUALITY & SAFETY IN HEALTH CARE, K. WALSHE, N. OFFEN, ‘A VERY PUBLIC FAILURE: LESSONS FOR QUALITY IMPROVEMENT IN HEALTHCARE ORGANIZATIONS FROM THE BRISTOL ROYAL INFIRMARY’, 10, NO. 4, 250–256, 2001, WITH PERMISSION FROM BMJ PUBLISHING GROUP LTD.)


The impact in the United Kingdom of the events at Bristol on healthcare professionals and the general public is hard to overstate. The editor of the British Medical Journal wrote an editorial entitled ‘All changed, changed utterly. British medicine will be transformed by the Bristol case’ (Smith, 1998), in which he highlighted a number of important issues, particularly its impact on the faith and trust which people have in their doctors. The subsequent Inquiry, led by Professor Sir Ian Kennedy, could have been recriminatory and divisive but, in fact, achieved the remarkable feat of bringing positive, forward-looking change from disaster and tragedy.


The Inquiry report is massive and we can only make a few general points here about the relevance of the Bristol affair to patient safety. The tragedy for all concerned was undeniable, the media attention relentless and sustained. The fact that routine, albeit highly skilled and complex, healthcare could be substandard to the point of being dangerous was abundantly clear. The impetus for open scrutiny of surgical performance, and indeed the outcomes of healthcare generally, was huge, and the subject of error and human fallibility in healthcare was out in the open (Treasure, 1998).


The Inquiry was noteworthy, from the outset, in adopting a systems approach to analysing what had happened; poor performance and errors were seen as the product of systems that were not working well, as much as the result of any particular individual’s conduct (Learning from Bristol, 1999). In practice, this meant that, whereas most Inquiries would have started by grilling the surgeons involved, Professor Kennedy’s team began by examining the wider context and only gradually moved towards specific events and individuals. This approach revealed the role of contextual and system factors much more powerfully and demonstrated that the actions of individuals were influenced and constrained by the wider organization and environment. Bristol therefore came to exemplify wider problems within the NHS, and its conclusions were widely applicable. Recommendations were made on open and honest risk communication to patients, the manner of communication and support, the process of informed consent, the need for a proper response to tragic events, the vital role of teamwork, the monitoring of the quality of care, the role of regulation and a whole host of other factors.


Many other countries have had their Bristols. Canada, for instance, experienced a similar high profile tragedy in the paediatric cardiac service at Winnipeg. Jan Davies, the leading clinical advisor to that Inquiry, drew explicit parallels between Winnipeg and a major aviation disaster at Dryden (Davies, 2000), holding out the hope that both events would provoke enduring system wide changes.


Studying the safety of anaesthesia: engineering a solution


Whereas practitioners of quality improvement in healthcare tended to look to industrial process improvement as their model, patient safety researchers and practitioners have looked to high-risk industries, such as the aviation, chemical and nuclear industries, which have an explicit focus on safety, usually reinforced by a powerful external regulator. These industries have invested heavily in human factors, a hybrid discipline drawing on ergonomics, psychology and practical experience in safety critical industries. Many of the important developments in the psychology of error have their origins in studies of major accidents in these complex industries. Healthcare has drawn some important lessons from them, gaining a much more sophisticated understanding of the nature of error and accidents and a more thoughtful and constructive approach to error prevention and the management of error. These issues will be addressed in more detail in later chapters. For the moment we will simply set the scene and demonstrate the importance of this line of work to patient safety.


One of the true pioneers in this area is Jeffrey Cooper, who trained originally as a bioengineer. In 1972 he was employed by the Massachusetts General Hospital to develop machines for anaesthesiology researchers (Cooper et al., 1978; Cooper, Newbower and Kitz, 1984; Gawande, 2002). Observing anaesthetists in the operating room, he noticed how poorly anaesthetic machines were designed and how conducive they were to error. For example, a clockwise turn of a dial decreased the concentration of a powerful anaesthetic in some machines but increased the concentration in others – a real recipe for disaster. Cooper’s work extended well beyond the more traditional approach to anaesthetic misadventure, in that he examined anaesthetic errors and incidents from an explicitly psychological perspective, exploring both the clinical aspects and the psychological and environmental sources of error, such as inexperience, fatigue and stress.


Cooper’s 1984 paper provides a remarkably sophisticated analysis of the many factors that contribute to errors and adverse outcomes and is the foundation of much later work on safety in anaesthesia. Contrary to the prevailing assumption that the initial stages of anaesthesia were the most dangerous, Cooper discovered that most incidents occurred during the operation, when the anaesthetist’s vigilance was most likely to ebb. The most important problems involved errors in managing the patient’s breathing, such as undetected disconnections and mistakes in managing the airway or the anaesthetic machine. Cooper also discussed factors that might have contributed to an error, such as fatigue and inadequate experience.


Cooper (1994), reflecting on the impact of the studies, noted that they seemed to have stirred the anaesthesia community into recognizing the frequency of human error. Cooper’s work provoked much debate but little action, until Ellison Pierce was elected President of the American Society of Anesthesiologists in 1982. The daughter of a friend had died under anaesthetic while having wisdom teeth extracted, and this case in particular galvanized Pierce to persuade the profession that it was possible to reduce the then 1 in 10 000 death rate from anaesthesia (Gawande, 2002) to the extremely low rate seen today. Anaesthesia, together with obstetrics, led the way in a systematic approach to the reduction of harm, foreshadowing the wider patient safety movement a decade later (Gaba, 2000).


Error in medicine


In 1994, the subject of error in medicine was, with some notable exceptions, largely confined to anaesthesia. A prescient and seminal paper (Leape, 1994), still widely cited, addressed the question of error in medicine head on and brought some entirely new perspectives to bear. Lucian Leape began by noting that a number of studies suggested that error rates in medicine were particularly high, that error was an emotionally fraught subject and that medicine had yet to seriously address error in the way that other safety critical industries had. He went on to argue that error prevention in medicine had characteristically followed what he called the ‘perfectibility model’. If physicians and nurses were motivated and well trained, then they should not make mistakes; if they did, then punishment in the form of disapproval or discipline was the most effective remedy and counter to future mistakes. Leape summarized his argument by saying:


The professional cultures of medicine and nursing typically use blame to encourage proper performance. Errors are caused by a lack of sufficient attention or, worse, lack of caring enough to make sure you are correct.
(LEAPE, 1994: P. 1852)


Leape, drawing on the psychology of error and human performance, rejected this formulation on several counts. Many errors are beyond the individual’s conscious control; they are precipitated by a wide range of factors, which are often also beyond the individual’s control; and systems that rely on error-free performance are doomed to failure, as are reactive approaches to error prevention that rely on discipline and training. He went on to argue that if physicians, nurses, pharmacists and administrators were to succeed in reducing errors in hospital care, they would need to fundamentally change the way they think about errors (Leape, 1994).


Leape went on to outline some central tenets of cognitive psychology, in particular the work of Jens Rasmussen and James Reason (discussed in detail in Chapter 4). While Reason had made some forays into the question of error in medicine (Eagle, Davies and Reason, 1992; Reason, 1993), Lucian Leape’s paper brought his work to the attention of healthcare professionals in a leading medical journal. Leape explicitly stated that the solutions to the problem of medical error did not primarily lie within medicine, but in the disciplines of psychology and human factors, and set out proposals for error reduction that acknowledged human limitations and fallibility and relied more on changing the conditions of work than on training.


Cooper and Leape were not the only authors to understand, at an early stage, the importance of human factors and psychology to medical harm and medical error. For instance, Marilyn Bogner’s 1994 book ‘Human error in medicine’ contained many insightful and important chapters by David Woods, Richard Cook, Neville Moray and others, and James Reason articulated his theory of accidents and discussed its application in medicine in Medical Accidents (Vincent, Ennis and Audley, 1993). Cooper and Leape were, however, particularly important influences, and they illustrate the more general point that among the defining characteristics of patient safety are its acceptance of the importance of psychology and of the lessons to be learnt from other safety critical industries.


Litigation and risk management


Until relatively recently, litigation was seen as a financial and legal problem; patients who sued were often seen as difficult or embittered, and doctors who helped them as professionally, and often personally, suspect. Only gradually did those addressing the problem come to understand that litigation was a reflection of the much more serious underlying problem of harm to patients; for this reason, litigation is part of the story of patient safety.


Litigation and medical malpractice crises have occurred on a regular basis for over 150 years, each time accompanied by worries about public trust in doctors and much associated commentary and soul searching, some of it rather hysterical in nature. Litigation in medicine dates back to the middle of the 19th century, when the relaxation of professional regulation and the introduction of a free market in both medical and legal services simultaneously fuelled a decline in medical standards, dissatisfaction among patients and the availability of lawyers to initiate proceedings. Between 1840 and 1860, the rate of malpractice cases increased 10-fold and medical journals, after more than 50 years of barely noticing malpractice, suddenly became all but obsessed with the problem (Mohr, 2000).


Since then there have been recurrent crises, usually coinciding with rising malpractice premiums paid by doctors. By 1989, US malpractice premiums appeared to have reached a plateau, though that plateau was very high for some specialties (Hiatt et al., 1989). Insurance premiums for Long Island neurosurgeons and obstetricians ranged from $160 000 to $200 000 per annum, although admittedly New York State premiums were amongst the highest in the United States and probably in the world. Since then, however, premiums in many countries appear to have stabilized and even declined (Hiatt et al., 1989; Mohr, 2000).


An historical perspective on litigation tempers reaction to the latest media-driven litigation crisis, but there is no doubt that litigation is a longstanding problem for healthcare. Some believe that doctors are under attack (occasionally true) and that healthcare is burdened by numerous frivolous lawsuits brought by greedy patients. In passing, we might usefully dispose of a few myths. First, patients, as we shall see, very seldom sue after adverse events. Second, the huge awards that hit the headlines for severely damaged babies are very rare. Compensation for being condemned to a life of pain and suffering after hospital injury is meagre or non-existent in most countries, and much of the money expended is swallowed up in fees and administration. Third, where there is no actual negligence, patients hardly ever receive compensation; it is more common that patients who claim and should receive compensation are denied it (Studdert et al., 2006). Fourth, while compensation is important in some cases, patients often turn to litigation for quite different reasons, being driven in despair to litigation through a failure to receive the apologies, explanations and support that they both deserve and need (Vincent, 2001a). Finally, consider the simple fact that patients or families who need money because they cannot work or have to look after a relative generally have no option but to sue. Shamefully, few hospitals have a proactive policy of helping the patients they injure, although as we will see this is beginning to change. We, as payers of tax, fees or insurance, have in fact been remarkably tolerant of the failings of the healthcare system, and litigation has by any standard been used very sparingly. We must remember, however, that the process of litigation in serious cases can be traumatic for both patients and doctors, but this is a subject for later chapters.


Litigation, as a means of reparation, is expensive and in many cases a rather inefficient way of compensating injured patients. The threat of litigation is also often cited as a deterrent to the open reporting and investigation of adverse events and as a major barrier to patient safety. For all this, however, litigation has undoubtedly been a powerful driver of patient safety. Litigation raised public and professional awareness of adverse outcomes, and ultimately led to the development of clinical risk management. In the United States, risk management has had a primarily legal and financial orientation, and risk managers are only now becoming involved in safety issues. In the United Kingdom and other countries, however, risk management had a clinical orientation from its inception, as well as a concern with legal and financial issues. The terminology varies from country to country, but the aims of clinical risk management and patient safety are the same – to reduce or eliminate harm to patients (Vincent, 1995; Vincent, 2001b).


Litigation has had one other unexpected benefit. The rising rate of litigation in the 1980s led some to consider whether compensation might be offered on a no-fault basis, bypassing the expense and unpleasantness of the adversarial legal process. The Harvard Medical Practice Study (HMPS), still the most famous study in the field of patient safety, was originally established to assess the number of potentially compensable cases in New York State, not primarily as a study of the quality and safety of care (Hiatt et al., 1989). However, its major legacy has been to reveal the scale of harm to patients. The study found that patients were unintentionally harmed by treatment in almost 4% of admissions in New York, and about 1% of patients were seriously harmed (e.g. resulting in death or permanent disability) (Brennan et al., 1991; Leape et al., 1991). These findings were later to receive massive publicity with the release of the Institute of Medicine report ‘To err is human’ in 1999.


Professional and government reports: patient safety hits the headlines


The US Institute of Medicine’s 1999 report, ‘To err is human’, is a stark, lucid and unarguable plea for action on patient safety at all levels of the healthcare system. Without doubt the publication of this report was the single most important spur to the development of patient safety, catapulting it into public and political awareness and galvanizing political and professional will at the highest levels in the United States.


President Clinton ordered a government wide study of the feasibility of implementing the report’s recommendations. The Institute of Medicine called for a national effort to include establishment of a Centre for Patient Safety within the Agency for Healthcare Research and Quality, expanded reporting of adverse events and errors, and development of safety programmes by healthcare organizations, regulators and professional societies. However, as Lucian Leape recalls, one particular statistic provided a focus and impetus for change:


However, while the objective of the report, and the thrust of its recommendations, was to stimulate a national effort to improve patient safety, what initially grabbed public attention was the declaration that between 44 000 and 98 000 people die in US hospitals annually as a result of medical errors.
(LEAPE, 2000: P. 95)


‘To err is human’, the first of a series of reports on safety and quality from the Institute, was far more wide ranging than the headline figures suggest. A large number of studies of error and harm were reviewed; the causes of harm, the nature of safe and unsafe systems and the role of leadership and regulation were all examined, themes we will return to in later chapters. The principal aim of the report was to establish patient safety as a major requirement and activity of modern healthcare, by establishing national centres and programmes, expanding and improving reporting systems and driving safety in clinical practice through the involvement of clinicians, purchasers of healthcare, regulatory agencies and the public (Box 2.2).



BOX 2.2 To err is human: principal recommendations of the IOM report



  • Congress should create a Centre for Patient Safety.
  • A nationwide mandatory reporting system should be established.
  • The development of voluntary reporting should be encouraged.
  • Congress should pass legislation to extend peer review protection to patient safety data.
  • Performance standards and expectations for healthcare organizations and healthcare professionals should focus greater attention on patient safety.
  • The Food and Drug Administration should increase attention to the safe use of drugs in both the pre- and post-marketing processes.
  • Healthcare organizations and the professionals affiliated with them should make continually improved patient safety a declared and serious aim, by establishing patient safety programmes with defined executive responsibility.
  • Healthcare organizations should implement proven medication safety practices.

(FROM KOHN, CORRIGAN AND DONALDSON, 1999)
