Fig. 5.1
Classification of evaluation methods
5.2.1 Analytical Approaches
Analytical approaches rely on analysts’ judgments and analytic techniques to evaluate user interfaces, and often do not directly involve end users. These approaches utilize experts – in usability, human factors, or software – to conduct the evaluation studies. In general, analytical evaluation techniques comprise task-analytic approaches (e.g., hierarchical and cognitive task analysis), inspection-based methods (e.g., heuristic evaluations and walkthroughs), and predictive model-based methods (e.g., keystroke-level models, Fitts’ law). As described in the respective sections, the model-based techniques do not use any participants and rely on parameterized approaches for describing expert behavior. We describe each of these techniques, their applications, the appropriate contexts for their use, and examples from the recent research literature.
5.2.1.1 Task Analysis
Task analysis is one of the most commonly used techniques to evaluate “existing practices” in order to understand the rationale behind people’s goals in performing a task, the motivations behind those goals, and how they perform the tasks (Preece et al. 1994). As described by Vicente (1999), task analysis is an evaluation of the “trajectories of behavior.” Hierarchical task analysis (HTA) and cognitive task analysis (CTA) are the most commonly used task-analytic methods in biomedical informatics research.
Hierarchical Task Analysis
HTA is the simplest task-analytic approach and involves breaking down a task into sub-tasks and smaller constituent parts (e.g., sub-sub-tasks). The tasks are organized according to specific goals. This method, originally designed to identify specific training needs, has been used extensively in the design and evaluation of interactive interfaces (Annett and Duncan 1967). The application of HTA can be explained with an example: consider the goal of printing a Microsoft Word document that is on your desktop. The sub-tasks for this goal would involve finding (or identifying) the document on your desktop, and then printing it by selecting the appropriate printer. The HTA for this task can be organized as follows:
0. Print document on the desktop
1. Go to the desktop
2. Find the document
   2.1. Use “Search” function
   2.2. Enter the name of the document
   2.3. Identify the document
3. Open the document
4. Select the “File” menu and then “Print”
   4.1. Select relevant printer
   4.2. Click “Print” button
Plan 0: do 1–3–4; if file cannot be located by a visual search, do 2–3–4
Plan 2: do 2.1–2.2–2.3
In the above task analysis, the overall task decomposes into the following: moving to the desktop, finding the document (either visually or by using the search function and typing in the search criteria), selecting the document, opening it, and printing it on the appropriate printer. The order in which these tasks are performed may change depending on the situation. For example, if the document is not immediately visible on the desktop (or if the desktop has so many documents that it is impossible to identify the document visually), then the search function is necessary. Similarly, if there are multiple printer choices, then a relevant printer must be selected. The plans specify the set of tasks that a user must undertake to achieve the goal (i.e., print the document). In this case, there are two plans: plan 0 and plan 2 (plans exist only for tasks that have pertinent sub-tasks associated with them). For example, if the user cannot find a document on the desktop, plan 2 is instantiated, where the search function is used to identify the document (steps 2.1, 2.2 and 2.3). Figure 5.2 depicts the visual form of the HTA for this particular example.
Fig. 5.2
Graphical representation of the task analysis of printing a document: tasks are represented in the boxes; a line underneath a box indicates that the task has no sub-tasks
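For readers who want to work with task hierarchies programmatically, the following is a minimal sketch (not from the chapter) of the printing example encoded as a nested Python structure; the field names (id, label, plan, subtasks) are an illustrative choice for this sketch rather than a standard HTA notation.

```python
# Illustrative encoding of the "print document" HTA; field names are
# conventions chosen for this sketch, not part of the HTA method.
hta = {
    "id": "0", "label": "Print document on the desktop",
    "plan": "do 1-3-4; if file cannot be located visually, do 2-3-4",
    "subtasks": [
        {"id": "1", "label": "Go to the desktop", "subtasks": []},
        {"id": "2", "label": "Find the document",
         "plan": "do 2.1-2.2-2.3",
         "subtasks": [
             {"id": "2.1", "label": "Use 'Search' function", "subtasks": []},
             {"id": "2.2", "label": "Enter the name of the document", "subtasks": []},
             {"id": "2.3", "label": "Identify the document", "subtasks": []},
         ]},
        {"id": "3", "label": "Open the document", "subtasks": []},
        {"id": "4", "label": "Select the 'File' menu and then 'Print'",
         "subtasks": [
             {"id": "4.1", "label": "Select relevant printer", "subtasks": []},
             {"id": "4.2", "label": "Click 'Print' button", "subtasks": []},
         ]},
    ],
}

def print_hta(task, depth=0):
    """Recursively print the hierarchy; a leaf task has no sub-tasks."""
    print("  " * depth + f"{task['id']}. {task['label']}")
    for sub in task["subtasks"]:
        print_hta(sub, depth + 1)

print_hta(hta)
```

A representation like this makes it straightforward to validate that every plan refers only to existing sub-tasks, or to count the depth of decomposition across tasks.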
HTA has been used extensively in evaluating interfaces and medical devices. For example, Chung et al. (2003) used HTA to compare six infusion pumps, identifying potential sources of human error during various tasks. While exploratory, their use of HTA provided insights into how HTA can be used to evaluate human performance and to predict potential sources of error. HTA has also been used to model information and clinical workflow in ambulatory clinics: Unertl et al. (2009) used direct observations and semi-structured interviews to create an HTA of the workflows, which was then used to identify gaps in existing HIT functionality for supporting clinical workflows, as well as the needs of chronic disease care providers.
Cognitive Task Analysis
CTA extends the general task analysis technique to develop a more comprehensive understanding of the knowledge, cognitive/thought processes and goals that underlie observable task activities (Chipman et al. 2000). While the focus is on the knowledge and cognitive components of task activities and performance, CTA relies on observable human activities to draw insights about the knowledge-based constraints and challenges that impair effective task performance.
CTA techniques are broadly classified into three groups based on how data are captured: (a) interviews and observations, (b) process tracing and (c) conceptual techniques (Cooke 1994). CTA supported by interviews and observations involves developing a comprehensive understanding of the tasks through discussions with, and task observations of, experts. For example, a researcher observes an expert physician entering medication orders into a CPOE (Computerized Physician Order Entry) system and asks follow-up questions regarding specific aspects of the task. In a study of providers’ management of abnormal test results, Hysong et al. (2010) conducted interviews with 28 primary care physicians on how and when they manage alerts, and how they use the various features of the EHR system to filter and sort their alerts. The authors used a CTA approach supported by a combination of interviews and demonstrations: participants were asked how they performed their alert management tasks and were asked to demonstrate these to the researcher. Based on the evaluation, they found that understanding of alert management features differed considerably across providers (between 4 and 75 %) and that most did not use these features.
CTA supported by process-tracing approaches relies on capturing task activities through direct (e.g., verbal think-aloud) or indirect (e.g., unobtrusive screen recording) data capture methods. Whereas the process-tracing approach is generally used to capture expert behaviors, it has also been used to evaluate general users. In a study of experts’ information-seeking behavior in critical care, Kannampallil et al. (2013) used process tracing to identify the nature of information-seeking activities, including the information sources, cognitive strategies and shortcuts used by critical care physicians in decision-making tasks. The CTA relied on the physicians’ verbalizations, their access of various sources, and the time spent on these sources to identify information-seeking strategies. In a related study, the process-tracing approach was used to characterize differences in the information-seeking practices of two groups of clinicians (Kannampallil et al. 2014).
Finally, CTA supported by conceptual techniques relies on developing representations of a domain (and its related concepts) and the potential relationships among them. This approach is often used with experts, and different methods are used for knowledge elicitation, including concept elicitation, structured interviews, ranking approaches, card sorting, structural approaches such as multi-dimensional scaling, and graphical associations (Cooke 1994). While extensively used in general HCI studies, CTA based on conceptual techniques is much less prominent in the biomedical informatics research literature. A detailed review of these approaches and their use can be found in Cooke (1994).
5.2.1.2 Inspection-Based Evaluation
Inspection methods involve one or more experts appraising a system, playing the role of a user in order to identify potential usability and interaction problems with a system (Nielsen 1994). Inspection methods are most often conducted on fully developed systems or interfaces, but may also be used on prototypes or beta versions. These techniques provide a cost-effective mechanism to identify the shortcomings of a system. Inspection methods rely on a usability expert, i.e., a person with significant training and experience in evaluating interfaces, to go through a system and identify whether the user interface elements conform to a pre-determined set of usability guidelines and design requirements (or principles). This method has been used as an alternative to recruiting potential users to test the usability of a system. The most commonly used inspection methods are heuristic evaluations (HE) and walkthroughs.
Heuristic Evaluation
HE techniques utilize a small set of experts to evaluate a user interface (or a set of interfaces in a system) based on their understanding of a set of heuristic principles of interface design (Johnson et al. 2005). The technique was developed by Jakob Nielsen and colleagues (Nielsen 1994; Nielsen and Molich 1990), and has been used extensively in the evaluation of user interfaces. The original set of heuristics was developed by Nielsen (1994) based on an abstraction of 249 usability problems. In general, the following ten heuristic principles (or a subset of these) are most often considered for HE studies: system status visibility; match between system and real world; user control and freedom; consistency and standards; error prevention; recognition rather than recall; flexibility and efficiency of use; aesthetic and minimalist design; help users recognize, diagnose and recover from errors; and help and documentation (retrieved from: http://www.nngroup.com/articles/ten-usability-heuristics/, on September 24, 2014; additional details can be found at this link). Conducting an HE involves a usability expert going through an interface to identify potential violations of a set of usability principles (referred to as the “heuristics”). These perceived violations can involve interface elements such as windows, menu items, links, navigation, and interaction.
Evaluators typically select a relevant subset of heuristics for evaluation (or add more based on specific needs and context). The selection of heuristics depends on the type of system and interface being evaluated; for example, the relevant heuristics for evaluating an EHR interface would differ from those for a medical device. After selecting a set of applicable heuristics, one or more usability experts evaluate the user interface against them. Potential violations are then rated on a severity score (1–5, where 1 indicates a cosmetic problem and 5 indicates a catastrophic problem). The process is iterative and continues until the expert feels that a majority (if not all) of the violations have been identified. It is also generally recommended that 4–5 usability experts be used, as this number is estimated to identify 95 % of the perceived violations or problems with a user interface. It should be acknowledged that the HE approach may not lead to the identification of all problems, and that the identified problems may be localized (i.e., specific to a particular interface in a system). An example of an HE evaluation form is shown in Fig. 5.3.
Fig. 5.3
Example of a HE form (for visibility) (Figure courtesy, David Kaufman, Personal communication)
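To illustrate how HE findings might be recorded for analysis, the following hypothetical sketch logs each perceived violation with the heuristic involved and a severity score on the 1–5 scale described above; the interface elements and findings shown are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Violation:
    element: str    # interface element where the problem was observed
    heuristic: str  # heuristic perceived to be violated
    severity: int   # 1 (cosmetic) to 5 (catastrophic)
    note: str       # evaluator's description of the problem

# Hypothetical findings from one evaluator's pass through an interface.
findings = [
    Violation("order entry screen", "Visibility of system status", 4,
              "No feedback shown after a medication order is submitted"),
    Violation("navigation menu", "Consistency and standards", 2,
              "Menu labels differ across modules"),
]

# Rank findings so the most severe violations are addressed first.
for v in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[severity {v.severity}] {v.element}: {v.note}")
```

Aggregating such records across the recommended 4–5 evaluators makes it possible to see which violations were independently identified by multiple experts.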
In the healthcare domain, HE has been used in the evaluation of medical devices and HIT interfaces. For example, Zhang et al. (2003) used a modified set of 14 heuristics to compare the patient safety characteristics of two 1-channel volumetric infusion pumps. Four independent usability experts evaluated both infusion pumps using the list of heuristics and identified 89 usability problems, categorized as 192 heuristic violations, for pump 1, and 52 usability problems, categorized as 121 heuristic violations, for pump 2. The heuristic violations were also classified by severity. In another study, Allen et al. (2006) developed a simplified list of heuristics to evaluate web-based healthcare interfaces (printouts of each interface); multiple usability experts assigned severity ratings to each identified violation, and the ratings were used to re-design the interface. HE has also been used to evaluate consumer-oriented pages (e.g., see the use of HE by Choi and Bakken (2010) in the evaluation of a web-based education portal for low-literate parents of infants). Variants of the HE approach have been widely used in the evaluation of HIT interfaces, primarily because of their easy applicability. However, this ease of application across a variety of usability evaluation scenarios often gives rise to inappropriate use: for example, there are several instances where only one or two usability experts (instead of the suggested 4–5) were used for the HE, and others where subject matter experts rather than usability experts conducted the evaluation.
Walkthroughs
Walkthroughs are another inspection-based approach that relies on experts to evaluate the cognitive processes of users performing a task. They involve employing a set of potential stakeholders (e.g., designers, usability experts) to characterize a sequence of actions and goals for completing a task. The most commonly used walkthrough, the cognitive walkthrough (CW), involves observing, recording and analyzing the actions and behaviors of users as they complete a scenario of use. CW focuses on identifying the usability and comprehensibility of a system (Polson et al. 1992). The aim of CW is to determine whether the user’s knowledge and skills, together with the interface cues, are sufficient to produce the goal-action sequence required to perform a given task (Kaufman et al. 2003). CW is derived from a cognitive theory of how users work on computer-based tasks through exploratory learning, whereby system users continually appraise their goals and evaluate their progress against them (Kahn and Prail 1994).
While performing a CW, the focus is on simulating the human-system interaction and evaluating the fit between system features and the user’s goals. Conducting CW studies involves multiple steps. Potential participants (e.g., users, designers, usability experts) are provided a set of task sequences or scenarios for working with an interface or system. For example, for an interface for entering demographic and patient history details, participants (e.g., physicians) are asked to enter age, gender, race and clinical history information. As the participants perform their assigned tasks, their task sequences, errors and other behavioral aspects are recorded. Often, follow-up interviews or think-aloud protocols (described in a later section) are used to identify participants’ interpretations of the tasks, how they make progress, and potential points of mismatch with the system. Detailed observations and recordings of these mismatches are documented for further analysis. While in most situations CWs are performed by individuals, groups of stakeholders sometimes perform the walkthrough together; for example, usability experts, designers and potential users may go through a system together to identify potential issues and drawbacks. Such group walkthroughs are often referred to as pluralistic walkthroughs.
In biomedical informatics, it should be noted, CW has been used extensively to evaluate situations beyond human-computer interaction. For example, the CW method (and its variants) has been used to evaluate diagnostic reasoning, decision-making processes and clinical activities. Kushniruk et al. (1996) used the CW method in an early evaluation of the mediating role of HIT in clinical practice; the CW was used not only to identify usability problems but also to develop a coding scheme for subsequent usability testing. Hewing et al. (2013) used CW to evaluate expert ophthalmologists’ reasoning regarding plus disease (a condition of the eye) among infants: using images, clinical experts were independently asked to rate the presence and severity of the condition and to explain how they arrived at their diagnostic decisions. Similar approaches were used by Kaufman et al. (2003) to evaluate the usability of a home-based telehealth system.
While extremely useful for identifying key usability issues, CW methods involve significant investments of cost and time for data capture and analysis.
5.2.1.3 Model-Based Evaluation
Model-based evaluation approaches use predictive models to characterize the efficiency of user interfaces, and are often used for evaluating routine, expert task performance. For example, how can the keys of a medical device interface be organized so that users can complete their tasks efficiently (and accurately)? Similarly, predictive modeling can be used to compare data entry efficiency between interfaces with different layouts and organization. We describe two predictive modeling techniques commonly used in the evaluation of interfaces.
GOMS
Card et al. (1980, 1983) proposed the GOMS (Goals, Operators, Methods and Selection Rules) analytical framework for predicting human performance with interactive systems. Specifically, GOMS models predict the time taken to complete a task by a skilled/expert user based on “the composite of actions of retrieving plans from long-term memory, choosing among alternative available methods depending on features of the task at hand, keeping track of what has been done and what needs to be done, and executing the motor movements necessary for the keyboard and mouse” (Olson and Olson 2003). In other words, GOMS assumes that the execution of tasks can be represented as a serial sequence of cognitive operations and motor actions.
GOMS is used to describe an aggregate of the task and the user’s knowledge of how to perform it, expressed in terms of Goals, Operators, Methods and Selection rules. Goals are the expected outcomes that a user wants to achieve; for example, a goal for a physician could be documenting the details of a patient interaction on an EHR interface. Operators are the specific actions that can be performed on the user interface, such as clicking on a text box or selecting a patient from a list in a dropdown menu. Methods are sequential combinations of operators and sub-goals that need to be achieved; for example, to select a patient from a dropdown list, the user has to move the mouse to the dropdown menu and click on the arrow to retrieve the list of patients. Finally, selection rules are used to ascertain which method to choose when several are available: for example, whether to use the arrow keys on the keyboard to scroll down a list or to use the mouse to select.
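To make these components concrete, here is a hypothetical sketch of how the dropdown-selection example might be encoded; the operator vocabulary, method names, and the five-item threshold in the selection rule are illustrative assumptions, not part of GOMS itself.

```python
# Goal: select a patient from a dropdown list.
# Methods: alternative operator sequences that accomplish the goal.
methods = {
    "use-mouse": ["move-mouse to dropdown", "click arrow",
                  "move-mouse to patient name", "click patient name"],
    "use-keyboard": ["press Tab to focus dropdown",
                     "press Down until patient name", "press Enter"],
}

def select_method(list_length: int) -> str:
    """Selection rule (illustrative): keyboard scrolling pays off only
    for short lists; otherwise point with the mouse."""
    return "use-keyboard" if list_length <= 5 else "use-mouse"

chosen = select_method(40)
print(chosen, "->", methods[chosen])  # use-mouse -> [its operator sequence]
```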
One of the simplest and most commonly used GOMS approaches is the Keystroke-Level Model (KLM), first described by Card et al. (1980). As opposed to the general GOMS model, the KLM makes several simplifying assumptions about the task: methods are limited to keystroke-level operations, and task duration is predicted from these estimates. The KLM has six types of operators: K for pressing a key; P for pointing the mouse at a target; H for moving hands to the keyboard or pointing device; D for drawing a line segment; M for mental preparation for an action; and R for system response. Based on experimental data or other predictive models (e.g., Fitts’ law), each of these operators is assigned a parameterized estimate of execution time. We describe an example from Saitwal et al. (2010) on the use of the KLM approach.
In a study investigating the usability of EHR interfaces, Saitwal et al. (2010) used the KLM approach to evaluate the time taken, and the number of steps required, to complete a set of 14 EHR-based tasks. The purpose of the study was to characterize issues with the user interface and to identify potential areas for improvement. The evaluation was performed on the AHLTA (Armed Forces Health Longitudinal Technology Application) user interface. A set of 14 prototypical tasks was first identified; sample tasks included entering the patient’s current illness, history of present illness, social history and family history. KLM analysis was performed on each task: this involved breaking each task into its component goals, operators, methods and selection rules. The operators were also categorized as physical (e.g., move mouse to a button) or mental (e.g., locate an item in a dropdown menu). For example, the selection of a patient name involved 8 steps (M – mental operation; P – physical operation): (1) think of location on the menu [M, 1.2s], (2) move hand to the mouse [P, 0.4s], (3) move the mouse to “Go” in the menu [P, 0.4s], (4) extend the mouse to “Patient” [P, 0.4s], (5) retrieve the name of the patient [M, 1.2s], (6) locate patient name on the list [M, 1.2s], (7) move mouse to the identified patient [P, 0.4s] and (8) click on the identified patient [P, 0.4s]. In this case, the 8 steps would take a total of 5.6s (three mental operations at 1.2s each plus five physical operations at 0.4s each). In a similar manner, the number of steps and the time taken were computed for each of the 14 AHLTA tasks.
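The arithmetic can be checked with a minimal sketch that sums the per-operator times given above (M = 1.2 s, P = 0.4 s):

```python
# Operator times (seconds) as given in the example above.
OPERATOR_TIMES = {"M": 1.2, "P": 0.4}

# The eight steps of the "select patient name" task, in order.
select_patient = ["M", "P", "P", "P", "M", "M", "P", "P"]

total = sum(OPERATOR_TIMES[op] for op in select_patient)
print(f"{len(select_patient)} steps, {total:.1f} s")  # 8 steps, 5.6 s
```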
GOMS and its family of methods can also be used to compare the efficiency of performing tasks across interfaces. However, such approaches are approximations and have several disadvantages. While GOMS provides a flexible and often reliable mechanism for predicting human performance in a variety of computer-based tasks, there are several potential limitations; a brief summary is provided here, and interested readers can find further details in Card et al. (1980). GOMS models can be applied only to the error-free, routine tasks of skilled users. Thus, it is not possible to make time predictions for non-skilled users, who are likely to take considerable time to learn to use a new system. For example, using the GOMS approach to predict the time physicians would spend using a new EHR would be inaccurate, owing to the physicians’ relative lack of knowledge of the various interfaces and the learning curve required to come up to speed with the new system. The complexity of clinical work processes and tasks, and the variability of the user population, create significant challenges for the effective use of GOMS in measuring the effectiveness of clinical tasks.
Fitts’ Law
Fitts’ law is used to predict human motor behavior; specifically, the time taken to acquire a target (Fitts 1954). On computer-based interfaces, it has been used to develop a predictive model of the time it takes to acquire a target using a mouse (or another pointing device). The time taken to acquire a target depends on the distance between the pointer and the target (referred to as the amplitude, A) and the width of the target (W). The movement time (MT) is represented as follows:

MT = a + b log2(2A/W)

where a and b are empirically derived constants.
In summary, based on Fitts’ law, larger objects are easier to acquire, while smaller, closely spaced objects are much more difficult to acquire with a pointing device. While direct applications of Fitts’ law are not often found in evaluation studies of HIT or health interfaces in general, it has a profound influence on the design of interfaces. For example, the placement of menu items and buttons, such that a user can easily click on them, is guided by Fitts’ law parameters. Similarly, in the design of number keypads for medical devices, the effect of the size and location of the buttons can be predicted from Fitts’ law parameters.
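As a sketch, the prediction can be computed directly from the formula; the constants a and b below are illustrative placeholders rather than empirically measured values.

```python
import math

def movement_time(a: float, b: float, amplitude: float, width: float) -> float:
    """Predicted time (s) to acquire a target: MT = a + b * log2(2A/W)."""
    return a + b * math.log2(2 * amplitude / width)

a, b = 0.1, 0.15  # illustrative constants (s, s/bit)

# A large, nearby button vs. a small, distant one (amplitude and width
# must simply share the same units, e.g., pixels).
print(movement_time(a, b, amplitude=100, width=50))  # 0.40 s
print(movement_time(a, b, amplitude=400, width=10))  # ~1.05 s
```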
In addition to the above-mentioned predictive models, there are several other, less common models. While a detailed description of each of them is beyond the scope of this chapter, we provide a brief introduction to another predictive approach: the Hick-Hyman law of choice reaction time (Hick 1951; Hyman 1953). Choice reaction time, RT, can be predicted from the number of available stimuli (or choices), n:

RT = a + b log2(n + 1)

where a and b are empirically derived constants.
The Hick-Hyman law is particularly useful for predicting text entry rates for different keyboards (MacKenzie et al. 1999) and the time required to select from different menus (e.g., a linear vs. a hierarchical menu). In particular, the method is useful for making decisions regarding the design and evaluation of menus. For example, consider two menu design choices: 9 levels deep with 3 items per level, and 3 levels deep with 9 items per level. Considering only the choice term at each level, the first design requires 9 × b log2(3 + 1) = 18b time units, whereas the second requires 3 × b log2(9 + 1) ≈ 9.97b time units. This shows that menu access is more efficient when menus are designed breadth-wise rather than depth-wise.
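A minimal sketch of this menu comparison, applying the choice term at each level (the constant a is omitted for the comparison, and the value of b is an illustrative placeholder):

```python
import math

def menu_selection_time(depth: int, breadth: int, b: float = 0.2) -> float:
    """Sum of the Hick-Hyman choice term, b * log2(n + 1), over the
    successive levels of a menu hierarchy."""
    return depth * b * math.log2(breadth + 1)

print(menu_selection_time(depth=9, breadth=3))  # 3.6   (= 18b with b = 0.2)
print(menu_selection_time(depth=3, breadth=9))  # ~1.99 (= 9.97b with b = 0.2)
```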
5.2.2 Usability Testing/User-Based Evaluation
In this section, we group a range of approaches that are generally used for evaluating the usability of HIT systems with users. While formal usability testing is often conducted in laboratory settings, where user performance (and other selected variables) is evaluated on pre-selected tasks, we loosely classify user-based evaluation techniques into general approaches (those that can be used in both field and laboratory studies) and field/observational studies.
5.2.2.1 General Usability Testing Approaches
Interviews
Interviews are commonly used to elicit information about participants’ opinions, perspectives and work practices (Mason 2002). Within the context of HIT design and evaluation, interviews have been used to obtain clinicians’ perspectives and experiences of the clinical workflow, its challenges, and opportunities for design improvement. A classic example of an interview study investigating the impact of HIT on physician workflow examined physicians’ use of EHRs, with particular emphasis on barriers and solutions: Miller and Sim (2004) conducted over 90 interviews with physician champions and EHR managers. Through these interview sessions, they identified perceived barriers to EHR use, including high initial set-up costs, slow and uncertain financial payoffs, and high initial physician time costs related to challenges with the technology, attitudes, and incentives to use the new system. When asked, interview participants suggested potential solutions such as performance incentives for achieving quality improvement, technical support for the system, and incorporation of a community-wide data exchange.
Interviews are also viewed as an approach to elicit additional information and are often used in concert with other field study methods (e.g., observation or shadowing). For example, Unertl et al. (2013) investigated the use of health information exchange (HIE) technology and its impact on care delivery at an e-health organization. Multi-faceted data collection methods, including observations and informal and formal interviews, were used to examine workflow and information flow among team members and patients. While the interview findings illustrated the benefits of HIE technology for communication and care continuity, its adoption in practice was limited. The integrated analysis highlighted the importance of moving away from a data and information “ownership” model to a “continuity and context-aware” model for the design and implementation of HIE technology.
Often, the data obtained from interviews are used to analyze the contextual language and meaning as articulated by participants. For example, in a qualitative study on patient transfers, Abraham et al. (Abraham 2010; Abraham and Reddy 2008) observed breakdowns in information flow between clinical units, despite the effective use of a care coordination system. Using follow-up interviews, the authors captured participants’ perspectives on the underlying causes of the information breakdowns. For example, in one of the interviews, an emergency department charge nurse was asked to describe the information sharing issues that affected the coordination of patient transfers from her unit. Her response was: “A lot of times the attending residents don’t know to put in medication or change orders, additional labs and if we are busy with other patients, we don’t have time to go to the computer and even though these screens help, they still don’t alleviate the problem.” She further added: “I think basically they don’t understand how the emergency department works, how difficult it is to hold patients, I don’t think they understand the concept like I said we don’t have the ancillary staff and so they have this expectation of what the patient is going to be like when they come up, you know they are disheveled or haven’t had a bath or like you know they think that’s horrible” (Abraham and Reddy 2008).
Individual interviews can be classified into three major categories based on the format and level of standardization of the interview questions – structured, semi-structured and narrative (or unstructured). In structured interviews, all interviewees are asked the same questions in the same order; this allows comparisons between responses across interviewees, which can be analyzed using qualitative and quantitative methods. Semi-structured interviews, unlike structured interviews, are flexible and allow probing of participants (i.e., with follow-up questions) to discuss relevant issues.
In contrast to the structured methods of interviewing, narrative (open-ended, unstructured) interviewing does not use a question-response structure. Instead, it adopts a storytelling and listening framework for obtaining participant perspectives. Narrative interviewing typically comprises four steps: (a) initiation (introduction of the topic for narration), (b) the main storytelling or narration, (c) questioning and clarification, and (d) concluding remarks (Farr 1982; Hermanns 1991). This type of interviewing allows participants to tell their story in their own spontaneous language. For instance, short HCI scenarios can be used to elicit participants’ responses on how they would react to a real-world situation. An example scenario could focus on emergency medical service (EMS) personnel using a patient’s EHR to support handoff communication to an ER physician during a trauma patient drop-off. Questions following the scenario could uncover the details of how the EMS and ED teams respond to the trauma situation, and the EHR functions and features that could support such emergent communication during trauma resuscitation.
Most interviews are audio-recorded, for several reasons: (a) the data can be transcribed verbatim, limiting the chance of missing key points made by participants; (b) the researcher can re-listen to the recordings; and (c) features such as voice tone and frequency may be of interest to researchers. It is recommended that interviews be conducted at locations selected by the participants, to ensure that they feel comfortable talking freely without being concerned about colleagues overhearing their conversations.
Focus Groups
A focus group is an interactive interviewing method that involves an in-depth discussion of a particular topic of interest with a small group of participants. The focus group method has been described as “a carefully planned discussion designed to obtain perceptions on a defined area of interest in a permissive, non-threatening environment” (Krueger 2009). The central elements of focus groups, as highlighted by Vaughn et al. (1996), include: (a) the group is an informal assembly of target participants to discuss a topic; (b) the group is small, between 6 and 12 members, and relatively homogeneous; (c) the conversation is facilitated by a trained moderator with prepared questions and probes; and (d) given that the primary goal of a focus group is to elicit participants’ perceptions, feelings, attitudes and ideas about a selected topic, it can be used to generate hypotheses for further research (Krueger 2009).
Unlike individual interviews, focus group discussions allow the researcher to probe responses to a particular research topic while capturing the underlying group dynamics of the participants. According to Kitzinger (1995), interaction is a crucial feature of focus groups because it captures participants’ views of the world, the language they use about an issue, and their values and beliefs about a situation (Gibbs 1997). For instance, a focus group involving usability experts, system designers and care providers can allow participants to share their varying perspectives on HIT system design based on their work roles, enabling them to voice key issues about the fit (or lack thereof) between the functionalities of the system and the clinical workflow.