Chapter 20 Regulatory affairs





Brief history of pharmaceutical regulation


Control of pharmaceutical products has been the task of authorized institutions for thousands of years, and this was the case even in ancient Greece and Egypt.


From the Middle Ages, control of drug quality, composition purity and quantification was achieved by reference to authoritative lists of drugs, their preparation and their uses. These developed into official pharmacopoeias, of which the earliest was probably the New Compound Dispensatory of 1498 issued by the Florentine guild of physicians and pharmacists.


The pharmacopoeias were local rules, applicable in a particular city or district. During the 19th century national pharmacopoeias replaced local ones, and since the early 1960s regional pharmacopoeias have successively replaced national ones. Now work is ongoing to harmonize – or at least mutually recognize – interchangeable use of the US Pharmacopeia, the European Pharmacopoeia and the Japanese Pharmacopoeia.


As described in Chapter 1, the development of experimental pharmacology and chemistry began during the second half of the 19th century, revealing that the effect of the main botanical drugs was due to chemical substances in the plants used. The next step, synthetic chemistry, made it possible to produce active chemical compounds. Other important scientific developments, e.g. biochemistry, bacteriology and serology during the early 20th century, accelerated the development of the pharmaceutical industry into what it is today (see Drews, 1999).


Lack of adequate drug control systems or methods to investigate the safety of new chemical compounds became a great risk as prefabricated drug products were broadly and freely distributed. In the USA the fight against patent medicines led to the passing of the US Pure Food and Drugs Act against misbranding as long ago as 1906. The Act required improved declaration of contents, prohibited false or misleading statements, and required content and purity to comply with labelled information. A couple of decades later, the US Food and Drug Administration (FDA) was established to control US pharmaceutical products.


Safety regulations in the USA were, however, not enough to prevent the sale of a paediatric sulfanilamide elixir containing the toxic solvent diethylene glycol. In 1937, 107 people, both adults and children, died as a result of ingesting the elixir, and in 1938 the Food, Drug, and Cosmetic Act was passed, requiring for the first time approval by the FDA before marketing of a new drug product.


The thalidomide disaster further demonstrated the lack of adequate drug control. Thalidomide (Neurosedyn®, Contergan®) was launched during the last years of the 1950s as a non-toxic treatment for a variety of conditions, such as colds, anxiety, depression and infections, both alone and in combination with a number of other compounds, such as analgesics and sedatives.


The reason why the compound was regarded as harmless was the lack of acute toxicity after high single doses. After repeated long-term administration, however, signs of neuropathy developed, with symptoms of numbness, paraesthesia and ataxia. But the overwhelming effects were the gross malformations in infants born to mothers who had taken thalidomide in pregnancy: their limbs were partially or totally missing, a previously extremely rare malformation called phocomelia (seal limb). Altogether around 12 000 infants were born with the defect in those few years. Thalidomide was withdrawn from the market in 1961/62.


This catastrophe became a strong driver to develop animal test methods to assess drug safety before testing compounds in humans. Also it forced national authorities to strengthen the requirements for control procedures before marketing of pharmaceutical products (Cartwright and Matthews, 1991).


Another blow hit Japan between 1959 and 1971. The SMON (subacute myelo-optical neuropathy) disaster was blamed on the frequent Japanese use of the intestinal antiseptic clioquinol (Entero-Vioform®, Enteroform® or Vioform®). The product had been sold without restrictions since the early 1900s, and it was assumed that it would not be absorbed, but after repeated use neurological symptoms appeared, characterized by paraesthesia, numbness and weakness of the extremities, and even blindness. SMON affected about 10 000 Japanese, compared to some 100 cases in the rest of the world (Meade, 1975).


These tragedies had a strong impact on governmental regulatory control of pharmaceutical products. In 1962 the FDA required evidence of efficacy as well as safety as a condition for registration, and formal approval was required for patients to be included in clinical trials of new drugs.


In Europe, the UK Medicines Act 1968 made safety assessment of new drug products compulsory. The Swedish Drug Ordinance of 1962 defined the medicinal product and required a clear benefit–risk ratio to be documented before approval for marketing. All European countries established similar controls during the 1960s.


In Japan, the Pharmaceutical Affairs Law enacted in 1943 was revised in 1961, 1979 and 2005 to establish the current drug regulatory system, with the Ministry of Health and Welfare assessing drugs for quality, safety and efficacy.


The 1960s and 1970s saw a rapid increase in laws, regulations and guidelines for reporting and evaluating the risks versus the benefits of new medicinal products. At the time the industry was becoming more international and seeking new global markets, but the registration of medicines remained a national responsibility.


Although different regulatory systems were based on the same key principles, the detailed technical requirements diverged over time, often for traditional rather than scientific reasons, to such an extent that industry found it necessary to duplicate tests in different countries to obtain global regulatory approval for new products. This was a waste of time, money and animals’ lives, and it became clear that harmonization of regulatory requirements was needed.


European (EEC) efforts to harmonize requirements for drug approval began in 1965, and a common European approach grew with the expansion of the European Union (EU) to 15 countries, and then 27. The EU harmonization principles have also been adopted by Norway and Iceland. This successful European harmonization process gave impetus to discussions about harmonization on a broader international scale (Cartwright and Matthews, 1994).



International harmonization


The harmonization process started in 1990, when representatives of the regulatory authorities and industry associations of Europe, Japan and the USA (representing the majority of the global pharmaceutical industry) met, ostensibly to plan an International Conference on Harmonization (ICH). The meeting actually went much further, suggesting terms of reference for ICH, and setting up an ICH Steering Committee representing the three regions.


The task of ICH was ‘… increased international harmonization, aimed at ensuring that good quality, safe and effective medicines are developed and registered in the most efficient and cost-effective manner. These activities are pursued in the interest of the consumer and public health, to prevent unnecessary duplication of clinical trials in humans and to minimize the use of animal testing without compromising the regulatory obligations of safety and effectiveness’ (Tokyo, October 1990).


ICH has remained a very active organization, with substantial representation at both authority and industry level from the EU, the USA and Japan. The input of other nations is provided through World Health Organization representatives, as well as representatives from Switzerland and Canada.


ICH conferences, held every 2 years, have become a forum for open discussion and follow-up of the topics decided. The important achievements so far are the scientific guidelines agreed and implemented in the national/regional drug legislation, not only in the ICH territories but also in other countries around the world. So far some 50 guidelines have reached ICH approval and regional implementation, i.e. steps 4 and 5 (Figure 20.1). For a complete list of ICH guidelines and their status, see the ICH website (website reference 1).



The process described in Figure 20.1 is very open, and the fact that health authorities and the pharmaceutical industry collaborate from the start increases the efficiency of work and ensures mutual understanding across regions and functions; this is a major factor in the success of ICH.



Roles and responsibilities of regulatory authority and company


The basic division of responsibilities for drug products is that the health authority is protecting public health and safety, and the pharmaceutical company is responsible for all aspects of the drug product. The regulatory approval of a pharmaceutical product permits marketing and is a contract between the regulatory authority and the pharmaceutical company. The conditions of the approval are set out in the dossier and condensed in the prescribing information. Any change that is planned must be forwarded to the regulatory authority for information and, in most cases, new approval before being implemented.


To protect public health, regulatory authorities also develop regulations and guidelines for companies to follow in order to achieve a balance between the possible risks and the therapeutic advantages to patients. The authorities’ work is partly financed by fees paid by pharmaceutical companies. Fees may be reduced, under certain conditions, to stimulate research; reductions may depend, for example, on company size or the size of the target patient group.


The regulatory authority:



The company:




The role of the regulatory affairs department


The regulatory affairs (RA) department of a pharmaceutical company is responsible for obtaining approval for new pharmaceutical products and ensuring that approval is maintained for as long as the company wants to keep the product on the market. It serves as the interface between the regulatory authority and the project team, and is the channel of communication with the regulatory authority as the project proceeds, aiming to ensure that the project plan correctly anticipates what the regulatory authority will require before approving the product. It is the responsibility of RA to keep abreast of current legislation, guidelines and other regulatory intelligence. Such rules and guidelines often allow some flexibility, and the regulatory authorities expect companies to take responsibility for deciding how they should be interpreted. The RA department plays an important role in giving advice to the project team on how best to interpret the rules. During the development process sound working relations with authorities are essential, e.g. to discuss such issues as divergence from guidelines, the clinical study programme, and formulation development.


Most companies assess and prioritize new projects based on an intended Target Product Profile (TPP). The RA professional plays a key role in advising on what will be realistic prescribing information (‘label’) for the intended product. As a member of the project team, RA also contributes to the design of the development programme. The RA department reviews all documentation from a regulatory perspective, ensuring that it is clear, consistent and complete, and that its conclusions are explicit. The department also drafts the core prescribing information that is the basis for global approval, and will later provide the platform for marketing. The documentation includes clinical trials applications, as well as regulatory submissions for new products and for changes to approved products. The latter is a major task and accounts for about half of the work of the RA department.


An important proactive task of the RA department is to provide input when legislative changes are being discussed and proposed. In the ICH environment there is a greater opportunity to exert influence at an early stage.



The drug development process


An overview of the process of drug development is given in Chapters 14–18 and summarized in Figure 20.2. As already emphasized, this sequential approach, designed to minimize risk by allowing each study to start only when earlier studies have been successfully completed, is giving way to a partly parallel approach in order to save development time.



All studies in the non-clinical area – chemistry, pharmacology, pharmacokinetics, pharmaceutical development and toxicology – aim to establish indicators of safety and efficacy sufficient to allow studies and use in man. According to ICH nomenclature, documentation of chemical and pharmaceutical development relates to quality assessment, animal studies relate to safety assessment, and studies in humans relate to efficacy.



Quality assessment (chemistry and pharmaceutical development)


The quality module of a submission documents purity and assay for the drug substance, and purity data for all the inactive ingredients. The formulation must fulfil requirements for consistent quality and allow storage, and the container must be shown to be fit for its purpose. These aspects of a pharmaceutical product have to be kept under control throughout the development process, as toxicology and pharmacology results are reliable only for substances of comparable purity. Large-scale production, improved synthetic route, different raw material supply, etc., may produce a substance somewhat different from the first laboratory-scale batches. Any substantial change must be known and documented.


The formulation of a product is a challenge. For initial human studies, simple i.v. and oral solutions are needed for straightforward results, whereas for the clinical programme in patients, bioequivalent formulations are essential for comparison of results across studies, and so it is preferable to have access to the final formulation as early as Phase II.


If the formulation intended for marketing cannot be completed until late in the clinical phase, bioequivalence studies showing comparable results with the preliminary and final market formulations will be necessary to support the use of results with the preliminary formulation. There may even be situations when clinical studies must be repeated.


The analytical methods used and their validation must be described. Manufacturing processes and their validation are also required to demonstrate interbatch uniformity. However, full-scale validation data may be submitted once commercial production has started.


Studies on the stability of both substance and products under real-life conditions are required, covering the full time of intended storage. Preliminary stability data are sufficient for the start of clinical studies. The allowable storage time can be increased as data are gathered and submitted. Even marketing authorizations can be approved on less than real-time storage information, but there is a requirement to submit final data when available.


Inactive ingredients as well as active substances need to be documented, unless they are well known and already documented. Even then it may become necessary to perform new animal studies to support novel uses of commonly used additives.


Although the quality module of the documentation is the smallest, the details of requirements and the many changes needed during development and maintenance of a product make it the most resource-intensive module from a regulatory perspective. Also, legislation differs most in this area, so it will often be necessary to adapt the documentation for the intended regional submission. RA professionals, however, try to persuade regulatory authorities not to create local rules, so as to avoid, as far as possible, differing interpretations and duplicated work.


As previously noted, all changes to the originally submitted dossier must be made known to the approving regulatory authority. Since the majority of changes are made in the quality section, even very small changes consume considerable resources for both the company and the authority. US legislation has allowed the submission of annual reports collecting those changes that have no impact on quality. This possibility was also introduced in the EU in 2010 with the purpose of saving time and resources.



Safety assessment (pharmacology and toxicology)


Next we consider how to design and integrate pharmacological and toxicological studies in order to produce adequate documentation for the first tests in humans. ICH guidelines define the information needed from animal studies in terms of doses and time of exposure, to allow clinical studies, first in healthy subjects and later in patients. The principles and methodology of animal studies are described in Chapters 11 and 15. The questions discussed here are when and why these animal studies are required for regulatory purposes.




General pharmacology


General pharmacology studies investigate effects other than the primary therapeutic effects. Safety pharmacology studies (see Chapter 15), which must conform to good laboratory practice (GLP) standards, are focused on identifying the effects on physiological functions that in a clinical setting are unwanted or harmful.


Although the study design will depend on the properties and intended use of the compound, general pharmacology studies are normally of short duration (i.e. acute, rather than chronic, effects are investigated), and the dosage is increased until clear adverse effects occur. The studies also include comparisons with known compounds whose pharmacological properties or clinical uses are similar.


When required, e.g. when pharmacodynamic effects occur only after prolonged treatment, or when effects seen with repeated administration give rise to safety concerns, the duration of a safety pharmacology study needs to be prolonged. The route of administration should, whenever possible, be the route intended for clinical use.


There are cases where a secondary pharmacological effect has eventually been developed into a new indication. Lidocaine, for example, was developed as a local anaesthetic agent and its cardiac effects after overdose were considered a hazard. Later that cardiac effect was exploited as a treatment for ventricular arrhythmia.


All relevant safety pharmacology studies must be completed before studies can be undertaken in patients. Complementary studies may still be needed to clarify unexpected findings in later development stages.



Pharmacokinetics: absorption, distribution, metabolism and excretion (ADME)


Preliminary pharmacokinetic tests to assess the absorption, plasma levels and half-life (i.e. exposure information) are performed in rodents in parallel with the preliminary pharmacology and toxicology studies (see Chapter 10).


Studies in humans normally start with limited short-term data, and only if the results are acceptable are detailed animal and human ADME studies performed.


Plasma concentrations observed in animals are used to predict the concentrations that may be efficacious/tolerated in humans, under the assumption that similar biological effects should be produced at similar plasma levels across species. This is a reasonable assumption provided the in vitro target affinity is similar.
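One widely used complement to plasma-level comparison is body-surface-area dose scaling. As a hedged illustration, not taken from this chapter, the sketch below applies the standard Km conversion factors from the FDA's 2005 guidance on estimating the maximum recommended starting dose; the rat NOAEL figure is hypothetical.

```python
# Sketch of body-surface-area dose scaling (FDA 2005 starting-dose guidance).
# Km factors convert mg/kg doses between species; values are the standard
# published constants. The 50 mg/kg rat NOAEL below is a hypothetical example.

KM = {"mouse": 3, "rat": 6, "rabbit": 12, "dog": 20, "monkey": 12, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg: float, species: str) -> float:
    """Scale an animal dose (mg/kg) to a human equivalent dose (mg/kg)."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

def max_recommended_starting_dose(hed: float, safety_factor: float = 10.0) -> float:
    """Apply the default 10-fold safety factor to the HED."""
    return hed / safety_factor

hed = human_equivalent_dose(50.0, "rat")    # 50 * 6/37 ≈ 8.1 mg/kg
mrsd = max_recommended_starting_dose(hed)   # ≈ 0.81 mg/kg
```

A larger safety factor than 10 may be chosen when, for example, the dose–response curve is steep or the toxicity observed in animals is irreversible.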


Investigations during the toxicology programme give the bulk of the pharmacokinetic information due to the long duration of drug exposure and the wide range of doses tested in several relevant species. They also give data about tissue distribution and possible accumulation in the body, including placental transfer and exposure of the fetus, as well as excretion in milk.


Metabolic pathways differ considerably between species, often quantitatively but sometimes also qualitatively. Active metabolites can influence study results, in particular after repeated use. A toxic metabolite with a long half-life may accumulate in the body and disturb results. The characterization and evaluation of metabolites are long processes, and are generally the last studies to be completed in a development programme.



Toxicology


The principles and methodology of toxicological assessment of new compounds are described in Chapter 15. Here we consider the regulatory aspects.


In contrast to the pharmacological studies, toxicological studies generally follow standard protocols that do not depend on the compound characteristics. Active comparators are not used, but the drug substance is compared at various dose levels to a vehicle control, given, if possible, via the intended route of administration.




Genotoxicity


Preliminary genotoxicity evaluation of mutations and chromosomal damage (see Chapter 15) is needed before the drug is given to humans. If results from those studies are ambiguous or positive, further testing is required. The entire standard battery of tests needs to be completed before Phase II (see Chapter 17).

