Chapter 41
Evaluating professionalism

Evaluating professionalism in medical education has become much more explicit over the past decade. All major regulatory and accreditation bodies, such as the Royal College of Physicians and Surgeons of Canada (Frank & Danoff 2007) and the Accreditation Council for Graduate Medical Education in the United States (ACGME 2010), mandate that professionalism be taught and evaluated during training. However, because professionalism is a difficult construct to define – and therefore to evaluate – no clear consensus exists as to the best methods to use. Further, issues of reliability and validity are difficult to resolve, as there is no gold standard. This chapter describes the difficulties in defining professionalism, outlines some common pitfalls in evaluating it, and highlights some useful approaches from the research literature.

Defining professionalism

One important recent resource is the report from the International Ottawa Conference working group on the assessment of professionalism (Hodges et al 2011). This international working group was tasked with developing a consensus statement on the evaluation of professionalism and elected to conduct a ‘discourse analysis’ to help categorize and understand the dominant perspectives on assessment in the published literature. The group suggested that the evaluation of professionalism can be viewed from three different yet complementary perspectives: the individual, the relational and the organizational.

• The individual perspective focuses on an individual’s characteristics, attributes or behaviours. This is useful for evaluation purposes (most of which focus on single individuals) and for personal accountability. Most of the focus has shifted towards behaviours, as they appear to be more objective than attitudes or traits; however, behaviours are not always good indicators of motivations or intent and should be interpreted carefully. This perspective also does not adequately account for the context that is so powerful in shaping behaviours.
• The relational perspective considers professionalism as arising from (and affecting) relationships between individuals. This is helpful in broadening our understanding of context and in understanding how unprofessional behaviour may arise. However, it may not adequately address individuals’ problematic behaviours, or the macro-social forces that act upon relationships.
• The institutional or organizational perspective allows one to consider how policies and power hierarchies can influence or even constrain individuals’ actions. This is helpful in understanding how and why people do not always act according to their stated beliefs. However, it can obscure individual responsibility and accountability.

In practical terms, this means that before a programme sets out to evaluate professionalism, it should choose a definition that resonates with its members and make this definition explicit so that teaching and evaluation can follow. See, for example, Wilkinson and colleagues’ efforts (2009) to blueprint professionalism assessment by considering aspects of its definition.

Why evaluate professionalism?

Interestingly, it is only recently that evidence has been published to support the evaluation of professionalism during training: it is now clear that some professionalism issues identified early in training may predict disciplinary actions in practice (Papadakis et al 2005, Papadakis et al 2008). That said, these authors and others have pointed out that these red flags account for only a small proportion of attributable risk (i.e.
most students do not go on to have documented difficulties in practice), so these findings should be interpreted with caution. Other researchers have found that certain behaviours in the first 2 years of medical school can predict professionalism issues in clerkship: behaviours such as turning in course evaluations or being up to date on immunizations, which some have referred to collectively as conscientiousness (McLachlan et al 2009, Stern et al 2005).

Pitfalls to consider in evaluation

When evaluating professionalism one must consider a few important caveats, some of which apply to many areas of evaluation and some of which are particular to professionalism. In an article written over 10 years ago, for example, we argued that the assessment of professionalism should focus on behaviours rather than attitudes and should incorporate and acknowledge the importance of context, value conflicts and their resolution (Ginsburg et al 2000). These issues are still important, and although some of these positions have since been modified on the basis of new research, the basic principles are worth briefly revisiting. Although we began with a call to focus on behaviours, as they seem more objective, it is important to note that research in other domains has provided abundant evidence that an individual’s behaviour, and how we interpret it, is largely shaped by environmental factors (Regehr 2006, Rees & Knight 2007). With this understanding, it has been argued that we should modify our approach to evaluating professionalism and, in particular, to remediating it. It no longer makes sense to evaluate an individual as if he or she acted in a vacuum; rather, we should acknowledge the factors that shape these behaviours and work to create environments that foster, rather than hinder, professional behaviour (Hodges et al 2011, Lesser et al 2010, Lucey & Souba 2010).

Common methods of evaluation

That said, programmes still have to evaluate individuals. What follows is a brief overview of commonly used evaluation methods that are well described in the literature. A full explication of each would be beyond the scope of this practical chapter; however, for the interested reader, several review articles, chapters and books have been published that cover these methods in much greater detail (Lynch et al 2004, Stern 2006, van Mook et al 2009). There is also an excellent review by Clauser and colleagues (2010) of how to consider – and build – evidence for validity when it comes to evaluating professionalism. It is well worth considering a final word of advice from the Ottawa consensus group: that it may be ‘more important to increase the depth and quality of the reliability and validity of a programme’ of existing measures in various contexts ‘than to continue developing new measures for single contexts’ (Hodges et al 2011). In other words, use what is already ‘out there’ and modify it if needed, rather than building a new assessment from scratch.

In-training evaluation reports (ITERs)

ITERs and their variants (ward evaluations, faculty evaluations, etc.) are the mainstay of evaluation in most educational settings. Their advantages include familiarity, feasibility and comprehensiveness. However, they are also plagued by issues of unreliability, and their validity has rarely been demonstrated. Some of these issues relate to the fact that these evaluations are meant to be based on observation, yet it is widely recognized that evaluators often do not directly observe performance; rather, they observe snippets of performance and then extrapolate, or use information from other sources (such as residents, students, nurses, etc.)
(Clauser et al 2010, Mazor et al 2008). Another significant issue is that attending clinicians and supervisors have repeatedly been shown to be reluctant to provide constructive or negative feedback – on any issue, in fact – but particularly on professionalism. This is despite every good intention to do so. Faculty report feeling unprepared, uncomfortable, unsure (of what they are witnessing) and, in many cases, unwilling (due to the difficulty involved, lack of support, fear of appeals or reprisals, etc.) (Burack et al 1999, Cleland et al 2008, Dudek et al 2005). These issues should not be ignored but should be addressed with proper education, faculty development and training. Despite these issues, there is some evidence in the literature of the utility of ITER-type evaluations in picking up professionalism issues (Colliver et al 2007, Frohna & Stern 2005, Stern et al 2005). It is also worth noting that the written comments on these forms can be an even more fruitful source of information, although they are not often analysed or reported systematically (Frohna & Stern 2005, Ginsburg et al 2011).
