, Sam Salek2 and Stuart Walker3
(1) Centre of Regulatory Excellence, Duke-NUS Graduate Medical School, Singapore, Singapore
(2) Department of Pharmacy, University of Hertfordshire, Hatfield, UK
(3) Centre for Innovation in Regulatory Sciences, London, UK
Rationale
With the evolution of the assessment of efficacy and safety towards a systematic, explicit benefit–risk balance, both regulatory agencies and pharmaceutical companies have developed frameworks, albeit each for their own jurisdiction and purpose. These individual efforts perpetuate the problem of inconsistency in regulatory decision-making and the perceived lack of transparency in the processes. Hence, there is now a need for a universal framework that can meet the needs of the various stakeholders. Based on the background information reviewed thus far, it appears that a universal benefit–risk assessment framework should:
Encompass the existing frameworks used by the regulatory agencies and pharmaceutical companies
Align and support the current principles of the assessment of benefits and risks
Be flexible and accommodate the various scientific tools to assess different benefits and risks
Reflect the contribution of other stakeholders, e.g., that of patients to the overall decision
Enhance transparency of the decision-making process
Aid communication of the benefit–risk balance and the basis of regulatory decisions to stakeholders
Include visualization or other graphic representation of the assessment outcomes
Methodological Considerations
Design
Methodologies can be broadly classified into qualitative and quantitative designs. The latter are commonly employed in clinical studies, where the goal is usually singular and the data are analyzed with predefined statistical methods to minimize bias in the interpretation of the outcomes. This is possible because the measures are objective and quantifiable, allowing statistical testing of the numerical outcomes. The purpose of a quantitative design is usually to test a hypothesis, generating statistical evidence to support the conclusion. For qualitative studies, the scope is wider; they are typically used to generate collective opinions and directions for future quantitative studies. While basic descriptive statistics may be produced, the overall conclusion is reached through expert interpretation rather than statistical outcomes. The absence of statistical outcomes should not, however, be seen as a limitation of qualitative designs: both quantitative and qualitative studies are conducted systematically to collect predefined data relevant to the study goals. In settings where opinions, comments, and experience are explored to generate concepts that would guide future developments (Pope and Mays 1995), qualitative designs should be considered. Pope and Mays illustrated the differences between quantitative and qualitative research (Fig. 2.1).
Given the objectives of this research, qualitative designs appeared more appropriate.
Data Source
Literature Search Strategy
To provide a good overview of the current environment in the regulatory assessment of benefits and risks, the published literature was systematically searched. Two established repositories of reputable publications were used, namely, PubMed and ScienceDirect. The following keywords and terms were considered relevant in searching the literature:
Benefit
Risk
Benefit assessment
Risk assessment
Benefit–risk assessment
Benefit–risk balance
Assessment framework
To optimize the validity of the opinions drawn from the publications, the search was confined to the last 5 years. However, it was expected that some older literature would provide vital fundamentals for the history relevant to this research, and such literature was included for reference.
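The keyword list and date restriction above can be combined into boolean search strings. A minimal sketch follows; the quoting, `OR` logic, and PubMed-style `[dp]` date filter are illustrative assumptions, not the queries actually submitted:

```python
# Sketch: combining the keyword list into a boolean search string
# with a 5-year publication-date window (illustrative PubMed-style syntax).
from datetime import date

keywords = [
    "benefit", "risk", "benefit assessment", "risk assessment",
    "benefit-risk assessment", "benefit-risk balance", "assessment framework",
]

def build_query(terms, years_back=5, today=None):
    """Join terms with OR (quoting multi-word phrases) and append a date window."""
    today = today or date.today()
    start_year = today.year - years_back
    joined = " OR ".join(f'"{t}"' if " " in t else t for t in terms)
    return f"({joined}) AND {start_year}:{today.year}[dp]"

query = build_query(keywords, today=date(2015, 1, 1))
print(query)
```

In practice each repository has its own query syntax, so the string would be adapted per database; the sketch only shows the systematic construction of the search from the predefined terms.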
Main Regulatory Authorities’ Websites
Guidance documents for benefit–risk assessment from major regulatory agencies and international bodies were reviewed to understand the underlying principles in the evaluation of medicines. This is important because any proposed framework should not deviate from or challenge these fundamentals, but rather support the execution of the processes. The major reference regulatory agencies included the EMA, US FDA, and TGA, while the relevant international bodies included the ICH and WHO. Likewise, the search for existing frameworks and publicly available assessment reports by these recognized bodies was conducted through publications or their respective websites.
Data Collection Techniques and Analysis
Comparing Existing Frameworks
The key goals of comparing the frameworks were to identify their similarities and differences. Similarities were carried over to the universal framework, as these would facilitate its adoption by the owners of the reference frameworks. The similarities were also reviewed for their functionality and for how they could be harmonized across the frameworks. The differences could potentially challenge the use of a universal framework, so these were assessed for their contribution to the overall decision-making process. Differences deemed relevant to benefit–risk assessment were considered for the universal framework, while those that existed only to fulfill jurisdiction-specific requirements could be omitted. Beyond the content of the frameworks, the flow of processes was also compared; the ideal flow should correspond closely to the processes undertaken by a reviewer.
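The triage described above can be pictured as set operations over framework elements: the intersection seeds the universal framework, and the remaining differences are screened for jurisdiction-specific items. The sketch below uses hypothetical placeholder element names, not the actual contents of any reference framework:

```python
# Sketch: triaging framework elements by similarity and difference.
# Element names are hypothetical placeholders for illustration only.
frameworks = {
    "Agency A": {"unmet need", "benefit summary", "risk summary",
                 "local labelling rules"},
    "Agency B": {"unmet need", "benefit summary", "risk summary",
                 "regional pricing annex"},
}

# Similarities: present in every framework -> carried into the universal framework.
shared = set.intersection(*frameworks.values())

# Differences: assessed individually; jurisdiction-specific items may be omitted.
jurisdiction_specific = {"local labelling rules", "regional pricing annex"}
differences = set.union(*frameworks.values()) - shared
retained = differences - jurisdiction_specific

universal = shared | retained
print(sorted(universal))
```

The real comparison was, of course, a qualitative expert review rather than a mechanical set operation; the sketch only makes the carry-over/omit logic explicit.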
Validating the Proposed Universal Framework and Templates
To carry out the systematic collection of opinions and comments, study tools were developed. Questionnaires, surveys, and decision conferencing are common tools employed for this purpose. One established approach to developing a survey is the Delphi method, which structures a group communication process so that a group of individuals can deal effectively with a complex problem (Linstone and Turoff 2002). This is further explored here.
Delphi Technique
Linstone and Turoff expounded on the application of the Delphi process, which can be carried out either in the traditional “Delphi Exercise” manner or as the newer “Delphi Conference.” The traditional approach requires the draft questionnaire to be sent as hard-copy documents to the respondent group for feedback on the proposed contents. With the inputs returned from the respondents, the questionnaire is revised, and the group is again asked to review their original answers against the new version. The approach thus combines elements of a poll with a structured process that shifts the communication burden from the large respondent group to the smaller team developing the questionnaire. The newer “Delphi Conference” replaces the hard-copy exchanges with real-time communication afforded by current technology and thus reduces the time needed to obtain responses. Regardless of the approach, there are four distinct phases. The first phase determines the subject for discussion and provides the initial content deemed relevant for the questionnaire. The second phase aims to understand how and where the group agrees or disagrees on the contents. Disagreements are then explored in the third phase to uncover the underlying reasons for the differences and review them. The final phase comprises the group’s final review, when all previous responses and the fed-back outcomes are considered. Okoli et al. (2004) showed an alternative but similar way of executing the Delphi method (Fig. 2.2) and further explained the process of selecting the panel of experts forming the respondent group. Simple statistical analysis of the responses can be carried out to assist in the analysis of the outcomes.
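The “simple statistical analysis” of Delphi responses often amounts to a per-item median with the interquartile range (IQR) used as a convergence signal between rounds. A minimal sketch under that assumption follows; the rating data, item wording, and IQR threshold are illustrative, not taken from this study:

```python
# Sketch: summarising one Delphi round of Likert-scale ratings per item.
# A narrow interquartile range (IQR) is read as a sign of consensus;
# the threshold and the data below are illustrative assumptions.
from statistics import median, quantiles

responses = {
    "Include patient input in the framework": [5, 4, 5, 5, 4, 5, 3],
    "Mandate quantitative weighting of risks": [2, 5, 1, 4, 3, 5, 2],
}

def summarise(ratings, iqr_threshold=1.0):
    """Return median, IQR, and a consensus flag for one item's ratings."""
    q1, q2, q3 = quantiles(ratings, n=4)  # quartile cut points
    iqr = q3 - q1
    return {"median": median(ratings), "iqr": iqr,
            "consensus": iqr <= iqr_threshold}

summary = {item: summarise(r) for item, r in responses.items()}
for item, s in summary.items():
    print(item, s)
```

Items flagged as lacking consensus would be fed back to the respondent group in the next round, mirroring the third phase described above.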