The model in Fig. 7.1 can be used both to drive the development of use case scenarios for scenario-based design of healthcare information systems and to support summative testing of systems once they are implemented (Carroll 1995; Kushniruk and Turner 2012). For example, along the User Dimension, the different classes of users of a system being developed (e.g., physicians, pharmacists, nurses) are delineated along with their attributes (e.g., level of experience/expertise, age range). Along the Task Dimension, the various tasks and their attributes are defined for each class of user (e.g., tasks such as entry of patient data, decision support, etc.). In the past, the combination of the User and Task dimensions made up a model known in the software industry as the User-Task Matrix (Hackos and Redish 1998). In our work in clinical contexts, it became apparent that a third dimension, “Context,” also needs to be considered when designing healthcare information systems. This perspective is consistent with work from the socio-technical design literature for healthcare IT development (where the role of social context is emphasized), but differs in that it provides an explicit model of context in relation to user types and users’ tasks. Context refers to the healthcare setting or environment into which healthcare IT will be deployed. As an illustration, the User-Task-Context model can be used to consider under what conditions a new speech recognition component would likely be effective for physicians dictating reports while using an electronic health record system (i.e., the Task dimension). The effectiveness of the component can be shown to vary considerably even for the same class of users (i.e., the User dimension), depending on whether the speech recognition component is deployed in a quiet office setting or in a noisy clinic (i.e., the Context dimension). Thus, the success or failure of health information systems and technologies depends on consideration of all three dimensions.
In our work developing requirements, application of this model has proven useful for activities ranging from creation of system requirements during early requirements analysis, to generation of use cases (which describe in detail the scenarios involved in specific uses of the system), to generation of scenarios that can be used to test a prototype or a system once it has been deployed. The system development life cycle (SDLC) provides a useful formal framework for considering where this type of user modeling can be applied and consists of the following phases: (1) the Planning Phase, where development of the system is initially planned; (2) the Analysis Phase, where there is a focus on requirements gathering; (3) the Design Phase, where detailed architectural blueprints for the system are developed; (4) the Implementation Phase, where the system is programmed; and (5) the Support Phase, where the system is in use (Kushniruk 2002). In the context of the SDLC, the User-Task-Context matrix is useful at a number of stages: early in the Planning and Analysis phases to specify user requirements, during the Design Phase to drive refinement of use cases, and during the Implementation Phase to provide those testing a system with a list of users and tasks for targeted testing (in order to ensure the system does what was intended for each class of user it was designed to serve).
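To make the model concrete, the following minimal sketch (in Python) shows one way a User-Task-Context matrix might be represented and used to enumerate candidate test scenarios. The specific user classes, attributes, tasks, and contexts are illustrative assumptions, not drawn from any actual project.

```python
from itertools import product

# A minimal sketch of a User-Task-Context matrix.
# All user classes, attributes, tasks, and contexts below are
# illustrative assumptions only.
users = {
    "physician": {"experience": "attending", "age_range": "35-60"},
    "nurse": {"experience": "5+ years", "age_range": "25-55"},
}

# Tasks each user class is expected to perform with the system.
tasks = {
    "physician": ["enter patient data", "dictate report", "review decision support"],
    "nurse": ["enter patient data", "administer medication"],
}

# Contexts (settings) into which the system may be deployed.
contexts = ["quiet office", "noisy clinic", "inpatient ward"]

# Enumerate (user, task, context) combinations; each is a candidate
# use case scenario for design or for summative usability testing.
for user, user_tasks in tasks.items():
    attrs = users[user]
    for task, context in product(user_tasks, contexts):
        print(f"Scenario: {user} ({attrs['experience']}) performing '{task}' in a {context}")
```

Enumerating the full cross product in this way makes it easy to see which cells of the matrix a test plan covers and which (for example, the noisy clinic context) it does not.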
7.3 Low-Cost Rapid Usability Engineering in Clinical Informatics
In this section, we describe a set of methods, collectively known as low-cost rapid usability engineering methods, designed for analyzing user interactions with a range of healthcare information systems; these methods can be an integral part of UCD in healthcare (Kushniruk et al. 2006; Borycki et al. 2013). As will be shown, the methods can be employed during UCD, and also upon completion of a clinical information system during its deployment (i.e., during the support phase of the traditional SDLC).
Usability testing involves observing representative users of a system (e.g., doctors or nurses) while they use a system or user interface to carry out representative tasks (e.g., entering medications into a clinical information system) (Nielsen 1993; Kushniruk and Patel 2004). Observing users typically involves video recording their interactions, on-screen actions, and verbalizations. Such data can be transcribed and coded to identify usability problems and issues (Kushniruk and Patel 2004). Usability testing methods have been employed widely in the design and evaluation of a range of health information systems over the past several decades. They can be applied throughout the entire SDLC, and the focus of the testing will depend on the stage of development of the system (Borycki et al. 2011; Kushniruk 2002).
Usability engineering methods have evolved in response to advances in technology. For example, free or low-cost screen recording software and built-in microphones on laptops have enabled “low-cost rapid usability testing” to become more widely applied (Kushniruk and Borycki 2006). The goal of this method is to provide an informative usability test that is efficient and cost-effective. Moreover, low-cost rapid usability testing is not limited to the confines of a laboratory setting. Rather, by employing low-cost portable methods that can be taken directly into settings like operating rooms or clinics, the approach allows for what we have referred to as “in-situ” usability testing. Such testing has the advantage of greater fidelity than laboratory-based usability testing. It can also vary in terms of whether the experimenter exerts control over the study or allows the users’ interactions to be more naturalistic, which accommodates a range of study types. In-situ testing is also arguably far less expensive, since it can be conducted in real settings after hours or whenever they are available, reducing the cost of the testing (Kushniruk and Borycki 2006). Regardless of whether usability testing is conducted in a laboratory setting or in a real clinical environment, there are a number of steps that need to be considered in setting up such testing. Kushniruk and colleagues (Kushniruk and Patel 2004; Borycki and Kushniruk 2005) have previously outlined the stages of this approach (a recording sketch illustrating step 5 follows the list):
1. Identification of testing objectives
2. Selection of participants (e.g., n = 10 to 20 representative users)
3. Selection of representative experimental tasks
4. Selection of an evaluation environment
5. Observation and recording of users’ interactions with the health information system
6. Analysis of usability data (i.e., coding screen and/or video recordings and audio transcripts)
7. Translation of findings and feedback into suggestions for system improvement
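As an illustration of step 5, screen and audio capture can be driven with free tools. The sketch below (Python, assuming the open-source ffmpeg tool is installed and on the PATH) shows one minimal way to record a session; the capture device, display identifier, and screen size are assumptions that must be adapted to the test machine and platform.

```python
import subprocess

# A minimal sketch of low-cost session recording by invoking the free
# ffmpeg tool. The flags below assume a Linux/X11 machine (use gdigrab
# on Windows or avfoundation on macOS) and must be adapted locally.
def record_session(outfile: str, duration_s: int) -> None:
    cmd = [
        "ffmpeg",
        "-f", "x11grab",              # screen capture input (Linux/X11)
        "-video_size", "1920x1080",   # assumed screen resolution
        "-i", ":0.0",                 # assumed display identifier
        "-t", str(duration_s),        # stop after the session length
        outfile,
    ]
    subprocess.run(cmd, check=True)   # raises if ffmpeg exits with an error

# e.g., record a 1-hour usability session for participant 01
record_session("participant01_screen.mp4", duration_s=3600)
```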
This method has been shown to drastically reduce development costs. In one study, adopting this method resulted in estimated cost savings of between 36.5 % and 78.5 % (Baylis et al. 2012). Costs associated with design changes are much lower early in the SDLC. For example, identifying a usability problem or error early in the design phase may require minimal effort to fix. However, once a system is deployed, making even minimal changes may be impossible or prohibitively expensive. Further, mitigating errors before deploying the system reduces the potential for technology-induced errors (i.e., errors resulting from the use of an information system that may be caused by poor usability or by interactions with a system in a real setting), which in some cases are costly to address from both a systems perspective and a human perspective in terms of patient safety (Baylis et al. 2012; Borycki and Keay 2010). As a result, this method reduces the probability that a system redesign will be required. In addition, the method is appealing as it is both efficient and inexpensive to employ. The necessary experimental apparatus (i.e., the computer screen, screen recording software, and microphone) is built into most laptops, and therefore a usability test can be conducted anywhere (Kushniruk and Borycki 2006). The low-cost rapid usability engineering approach can also be applied remotely by employing commonly available web-conferencing and screen-sharing software to view and record the screens and audio of participants as they perform tasks during usability testing (Kushniruk et al. 2007, 2008; Kushniruk and Borycki 2006).
In considering at what points in the SDLC low-cost rapid usability engineering can be applied, the literature indicates that such testing can be carried out at various stages (Kushniruk and Patel 2004). For example, in the early development of the user interface for an EHR (e.g., during the Analysis and Design phases), early prototypical designs can be analyzed by having representative users (e.g., physicians and nurses) comment on, and interact with, partially functioning mock-ups and prototypes in order to determine the most effective interface. In addition, continual testing throughout the Implementation Phase is recommended, as the feedback gained from end users can be used to refine the system and user interface. Finally, upon delivery of a clinical information system within an institution, the application of low-cost rapid usability engineering is highly recommended in order to ensure that newly deployed systems are both safe and effective for end users.
An important aspect of conducting effective usability tests is the delineation of the following: (1) the user classes (i.e., who are the different users or potential users of the system being designed, and have they all been defined and characterized?), (2) the tasks the system will be designed to support (i.e., what tasks will the system be used for?), and (3) the context in which the system will be deployed (i.e., where will the system be implemented?). The User-Task-Context model described in the previous section can be used to decide which scenarios and use cases should be used in setting up usability testing.
7.4 Using Low-Cost Rapid Usability Engineering in Conjunction with Rapid Prototyping in UCD
Rapid prototyping uses models (ranging from paper mock-ups to wireframe models to partially functioning systems) to illustrate or simulate system functionality (Kushniruk 2002). These models depict different options for how the system could operate, in order to gain insight and feedback from users without investing substantial time and resources in system design. For example, Axure (www.axure.com) software can be used to develop interactive wireframe mock-ups without writing any code. Similarly, Usaura (www.usaura.com) allows designers to upload screenshots or sketches and then asks users to do a task, pick from a selection, or give feedback. Usaura collects a variety of data on how users interacted with the screen (e.g., “heat maps” of user clicks, accuracy of user clicks, how long users took to click). Usaura also allows users to select their preference between display options and respond to multiple-choice questions. Such software can be used to address a variety of design questions (e.g., Where should a design element be placed? Which design iteration is better?). Thus, software can facilitate the rapid development of prototypes, and these potential design solutions can then be compared and evaluated by users.
Rapid prototyping focuses on key system functionality, and thus minimizes the time invested in system design prior to gaining user feedback about critical aspects of the system or user interface design. Moreover, rapid prototyping can be used to investigate specific system components independently. Developing components in parallel allows work on other components to continue even when barriers impede the development of a specific component. Furthermore, several different solutions can be evaluated to discern the best solution before considerable investment is made in designing the actual system. Additionally, different ways of integrating components can be explored to determine which combinations are most successful. Thus, rapid prototyping integrates users’ choices and feedback about how a system will operate at minimal expense. Kushniruk (2002) outlines the process of incorporating rapid prototyping into system design (see Fig. 7.2). This flowchart depicts the iterative nature of rapid prototyping, in which prototypes are subjected to usability evaluations to refine the solution and ensure that user requirements are met before a final solution is implemented. In Fig. 7.2, the box in the flowchart corresponding to “Prototype Testing (Usability Testing)” is the point where low-cost rapid usability testing methods can be applied.
Fig. 7.2
Systems development based on prototyping and iterative usability testing
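The iterative cycle in Fig. 7.2 can be summarized schematically. The sketch below is a minimal Python rendering of that loop; the stub functions, severity codes, and stopping rule are illustrative assumptions, not part of the published process.

```python
from dataclasses import dataclass

@dataclass
class Problem:
    description: str
    severity: str   # e.g., "critical", "moderate", "minor" (assumed scale)

def usability_test(prototype: str, cycle: int) -> list[Problem]:
    """Stub standing in for a round of prototype (usability) testing."""
    # Pretend the first two cycles surface a critical problem; later ones do not.
    return [Problem("workflow mismatch", "critical")] if cycle < 2 else []

def refine(prototype: str, problems: list[Problem]) -> str:
    """Stub standing in for refining the prototype to address problems."""
    return prototype + "+fix"

prototype, cycle = "wireframe-v1", 0
while True:
    problems = usability_test(prototype, cycle)            # "Prototype Testing" box
    if not any(p.severity == "critical" for p in problems):
        break                                              # requirements met; exit loop
    prototype = refine(prototype, problems)                # iterate on the design
    cycle += 1
print(f"Final design after {cycle} refinement cycle(s): {prototype}")
```

The key design point the loop captures is that testing, not the calendar, decides when prototyping stops: the design is only handed on to implementation once evaluation stops surfacing critical problems.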
In addition, usability inspection methods, such as heuristic evaluation (Nielsen 1993; Nielsen and Mack 1994; Zhang et al. 2003; Carvalho et al. 2009) and cognitive walkthroughs (Kushniruk et al. 1996), are potential techniques for evaluating the usability of prototypes (a detailed description of these evaluation methods can be found in Chap. 5 in this volume). These methods do not involve observing users; rather, one or more expert analysts “step through” the interface, methodically comparing the design against design guidelines in the case of heuristic evaluation (Nielsen 1993), or, in the case of cognitive walkthrough, “inspecting” areas where users might be expected to have problems by identifying user goals, actions, and system responses (Wharton et al. 1994). However, it should be noted that during rapid prototyping there is no replacement for actually observing users’ interactions in order to gain an in-depth understanding (rather than a prediction alone) of both usability and workflow problems and issues that need to be corrected in subsequent iterative cycles.
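To make the record of a heuristic evaluation concrete, the sketch below tallies hypothetical analyst findings against Nielsen’s ten heuristics using Nielsen’s 0–4 severity scale; the data structure and the example findings are assumptions for illustration, not results from an actual evaluation.

```python
from collections import Counter

# Nielsen's ten usability heuristics (Nielsen 1993).
HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

# Hypothetical analyst findings: (heuristic index, severity 0-4, note).
# Severity follows Nielsen's scale, where 4 = usability catastrophe.
findings = [
    (4, 3, "No warning before overwriting a medication order"),
    (0, 2, "No indication that a lab result is still loading"),
    (4, 4, "Allergy alert can be dismissed without acknowledgement"),
]

# Tally violations per heuristic, then flag high-severity problems
# for priority fixing in the next prototyping cycle.
counts = Counter(HEURISTICS[i] for i, _, _ in findings)
for heuristic, n in counts.most_common():
    print(f"{n} violation(s): {heuristic}")
for i, sev, note in findings:
    if sev >= 3:
        print(f"HIGH severity ({sev}): {note} [{HEURISTICS[i]}]")
```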
The adoption of rapid prototyping techniques in health information system design (in conjunction with usability testing) has been shown to improve the usability and usefulness of these systems while simultaneously minimizing development costs. Rapid prototyping fosters inexpensive exploration and refinement of models before a system is developed; thus, more options are available for users to assess. In addition to these advances in what users test during system development, there have been advances in how usability tests are conducted. For example, rapid approaches are now being used that can practically be incorporated within iterative prototyping cycles, feeding information back into design based on analysis of user interactions with systems.
7.5 Use of Clinical Simulations in System Design and Evaluation
Clinical simulations represent a development that follows logically from usability testing methods and can be practically employed during UCD. As described above, usability testing can be characterized as observing and recording representative users of a system while they carry out representative tasks using the system being evaluated. Clinical simulations extend the realism of testing by also carrying out the evaluation in representative environments (i.e., settings or contexts that are representative of where the system being designed or developed will ultimately be deployed). Examples of clinical simulations include work conducted in the evaluation of medication administration systems in order to assess the impact of different system designs on usability and patient safety (Borycki et al. 2013). In a series of studies conducted in-situ in a hospital in Japan, realistic clinical situations were set up by using, after hours, the hospital rooms where the system was to be deployed (Kushniruk et al. 2006, 2008). This approach included using mannequins (i.e., life-size physical representations of the human body used in health professional education) in place of patients, as the simulations were to include not only use of computer systems in the room but also physical interactions, such as hanging intravenous bags, and ergonomic aspects of the room layout (i.e., where the computer is located). This reduced the cost of setting up the in-situ testing, as the hospital room was already in place, along with its integration with other hospital systems and technologies. The advantages included not only reduced cost but also increased fidelity, or realism, as the setting mirrored the actual location where the medication administration system would be implemented. The testing also covered human-computer interaction involving integration with other technologies already in the hospital, such as bar code scanning. For this study, a User-Task-Context model was used to brainstorm a set of representative tasks, ranging from using the system to administer routine medications to administering medications of varying complexity. In addition, scenarios were created that included physical interruptions and unexpected emergency conditions. Representative users included sixteen health professionals (physicians and nurses) who were recruited to participate in 1-h sessions in which they interacted with the new system to carry out the set of representative medication administration tasks. Recording the tasks involved installing screen recording software (e.g., HyperCam®) on the computer from which participants accessed the medication administration system. This allowed for recording of all user interactions with the medication administration system. In addition, a camcorder was used to obtain a wide-angle view of the participants’ physical interactions with the system and other technologies in the room (see Fig. 7.3).
Fig. 7.3
External video view and screen view of user interactions with a medication administration system
The screen recordings, the audio recordings of users carrying out the tasks, and the external video views were integrated for analysis using Adobe® Premiere video editing software. During the analysis of the data, users’ interactions were coded for: (a) usability problems in using the medication administration system, (b) ergonomic issues, and (c) issues in the integration of differing technologies (e.g., the medication administration system with bar coding). The coding methodology was adapted from that described by Kushniruk and Patel (2004) and involved first transcribing all audio recordings and then observing the video and screen recordings in order to create an annotated log file of verbalizations and actions for each participant. The interactions were coded for time taken to complete tasks and subtasks (e.g., verifying patients, reviewing medication orders, entering administration information) as well as for problems and issues encountered in using the system. In addition, a post-task audio-recorded semi-structured interview was conducted to ask each participant about his or her experience in using the system. The results indicated that for routine medication administration the system operated safely and was deemed usable when simple and short lists of medications were to be administered. However, as the complexity of the tasks increased, it was found that the rigidity of the system locked the user into a workflow sequence which, although it supported safety (by not allowing any deviation from a specified workflow), posed potential safety risks when emergencies were simulated. Specifically, when a simulated emergency occurred there was not enough time for all steps to be completed in sequence (as guided by the system), and there was a need for an emergency override capability. As a result of this study, such an override was included in the system design prior to widespread release (Kushniruk et al. 2006).
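To illustrate the kind of analysis described above, the sketch below computes task and subtask durations and tallies coded usability problems from a hypothetical annotated log; the event format and problem codes are assumptions for illustration, not the actual coding scheme used in the study.

```python
from collections import Counter

# Hypothetical annotated log for one participant: (seconds from session
# start, event type, detail). The format and codes are assumptions.
log = [
    (0,   "task_start", "verify patient"),
    (42,  "problem",    "navigation"),          # coded usability problem
    (65,  "task_end",   "verify patient"),
    (65,  "task_start", "review medication orders"),
    (118, "problem",    "display visibility"),
    (190, "task_end",   "review medication orders"),
]

# Compute time taken per task/subtask from paired start/end events.
starts: dict[str, int] = {}
for t, event, detail in log:
    if event == "task_start":
        starts[detail] = t
    elif event == "task_end":
        print(f"{detail}: {t - starts.pop(detail)} s")

# Tally coded usability problems across the session.
problems = Counter(d for _, e, d in log if e == "problem")
for code, n in problems.most_common():
    print(f"problem '{code}': {n} occurrence(s)")
```

Aggregating such per-participant logs across all sixteen participants is what allows task times and problem frequencies to be compared across scenarios of differing complexity.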
Carrying out system evaluation in-situ can increase the fidelity of testing while at the same time reducing costs. Another cost-effective approach involves integrating clinical simulations into the operations of the simulation laboratories that are increasingly used for medical and nursing education. An example of this is the IDX laboratory (Kushniruk et al. 2013a) established in Copenhagen. The laboratory was initially used for medical and nursing education (i.e., computer-controlled mannequins are used for training students), but has since been expanded for use in testing the usability and safety of clinical information systems. Recently, candidate clinical information systems have been installed in the laboratory and tested during a regional procurement process whose objective was to select a system that matched the needs of users in the Copenhagen region.