Unlocking the Market Potential of Academic Research
AlphaBeta Pharma Group, Leatherhead, Surrey, UK
Our world is facing many pressing health-care issues. In developing countries, the basic health components of clean air and water present great technical challenges. Growing populations demand proper nutrition, public health programs, immunization, energy, reproductive health care, and more. Developed countries are seeing more of their health-care resources spent on illnesses caused by an affluent western lifestyle: metabolic syndrome, mental health issues, and the complications of aging. The solutions to these complex issues cut across all borders, and involve many industries and scientific disciplines. Governments need both industry and knowledge institutions to work together to reach tomorrow’s technical health-care solutions.
Moreover, in a global economy where conventional manufacturing is dominated by developing economies, the future of industry in the most advanced economies, including the health industry, must depend on the ability to innovate in high-tech activities that offer differential added value, rather than merely improving existing technologies and products.
Universities are at the forefront of knowledge. Most of the research conducted in academic institutions represents small, incremental steps in the advancement of our overall understanding. Academic investigators discover pieces of a vast and complex puzzle, which are eventually made public through journals, conference presentations, and working groups. All of these contribute significantly to the vast body of knowledge in modern society. However, too few of these research programs yield commercially viable ideas that entrepreneurs can exploit and eventually develop into a marketable product.
Most basic research is undertaken in the complete absence of any market insight or commercial valuation: it is “research for research’s sake.” Research for research’s sake is no longer considered acceptable or justifiable or, some would argue, even ethical. In this era, if research does not benefit humanity in some way, then it is a waste: a waste of money, a waste of time, a waste of resources, and a waste of intellectual capital. With the market’s fast-paced growth and the demands of unmet medical needs, we see a wide gap between the research produced in academia and the demand in the health-care marketplace. Research should lead to innovation, which in turn should develop into successful commercial exploitation of those ideas. Strategies for adding value and marketability should be planned into the research and development processes so as to bridge the gap between the laboratory and the market and help ensure the successful commercialization of new technology-based products [1].
We believe that academic research carries both ethical and financial obligations. There is an ethical obligation toward patients with unmet medical conditions whose lives, against a ticking clock, depend on the development of efficacious treatments. There is also a financial obligation to recoup at least part of the public’s research investment in academia and to recycle it into future research.
With public funding diminishing, academia must also attract corporate funding, and therefore must align its research with market needs. Market needs, however, are not always aligned with unmet medical needs or patients’ needs. Corporate research generally starts from a survey of current market need (market pull) and takes into account future market, business, and financial projections before embarking on or funding new research. The return on investment for shareholders is the main driver in decision making: companies conduct research for investors’ sake.
Academic institutions no longer have deep pockets, and alternative models are beginning to emerge. In response to these trends, and aware that much university research effort currently goes unused, many universities have created Technology Transfer Offices (TTOs). Their purpose is to create a thriving research strategy and to guide academic researchers in aligning their research programs with the requirements of the pharmaceutical industry, including its accepted research models, regulatory requirements, and a myriad of other aspects that lie outside the usual sphere of academic researchers’ activity. Many institutions copy each other’s best practices, but the evidence of success is more anecdotal than factual. There are many good reasons for this, an important one being simply the uncertainty and unpredictability of early-stage drug development research.
Many different specialist skills are needed to achieve successful exploitation of research and innovations. Corporate research and development managers, academic researchers, technology transfer officers, intellectual property (IP) specialists, venture capitalists, professional marketers, and policymakers alike have unique and vital contributions to the drug development process. In this chapter, we make an attempt
- To provide an overview of the drug development process to better understand its complexity and the substantial financial and time investment that it requires
- To look closely at the gap between “science for science’s sake” and “science for society’s sake,” and at its causes
- To highlight some of the emerging models that are in place to bridge the gap between the research environment of academia and the commercial world of corporate health care
- To address the emerging role of the academic technology transfer office, and to discuss strategies and suggestions for improving the effectiveness of these models in the future
- To illustrate some successful initiatives.
Where Does Drug Discovery Fit into the Drug Development Process?
The development of a new drug involves a long, costly journey, hampered by uncertainty. Cost estimation studies, conducted mostly by U.S. entities, provide figures ranging between $500 million and $800 million, over an average of 11–15 years, from inception to market for a new drug [2–4]. According to the Pharmaceutical Research and Manufacturers of America (PhRMA), the cost of developing a drug rose from $138 million in 1975 to $1.3 billion in 2005. The cost to develop a single biologic was estimated at $1.2 billion [5]. Some variation may be attributed to the type and nature of the disease being targeted, the type of drug being developed, and the design of the clinical trials that are necessary for the drug ultimately to achieve regulatory approval at local and international levels. Gross as these time and cost estimates are, however, the studies reach the concordant conclusion that the process is a lengthy and expensive one (Figure 15.1 and Table 15.1).
In comparison to other industries, the pharmaceutical industry has one of the highest ratios of research and development (R&D) to sales. The PhRMA reports that domestic R&D as a percentage of domestic (U.S.) sales was 20.5% (17.0% for total R&D) in 2010 [5].
In the European Union (EU), the innovative pharmaceutical sector devotes 16% of business expenditure to R&D, the highest of any industrial sector. Pharmaceuticals also contribute strongly to Europe’s trade surplus: some €47.8 billion ($50 billion) in 2010 (Figure 15.2). The strong presence of the pharmaceutical industry in the economy makes it a major employer of skilled labor. In the EU alone, the pharmaceutical industry provides over 640,000 jobs directly, of which 113,000 are highly skilled [6–8]. In the United States, PhRMA reported in 2012 that the biopharmaceutical industry directly creates approximately 650,000 jobs nationwide, and that each biopharmaceutical job supports nearly five additional jobs. This translates to approximately 4 million jobs across the economy, spanning manufacturing, construction, and services.
Productivity and the rate of increase in investment in drug discovery have slowed. This slowdown is not only attributable to the daunting investment required in time and money, but also to the fact that many of the easier disease targets have been addressed. The many remaining targets are proving extremely difficult to address from several standpoints, including a lack of understanding of disease pathology, limited technological and computational capabilities, and the complex manufacturing processes required to produce a new entity (Figure 15.3).
Another, more modern impediment, albeit a necessary one, is that regulatory requirements from safety and tolerability perspectives have become far more stringent since the mid-1970s, and thus require further investments of time and money to gather long-term safety data. By way of illustration, in 2012 the FDA approved 35 novel drugs.
How Does the Drug Discovery Process Work?
On this long, costly journey from laboratory to marketed drug, the phase of drug discovery is only the beginning. It is the first rung on a very tall, slippery ladder.
While it is well known that the drug discovery process requires full commitment from researchers and investors, we discuss here the complexity of the pathway from the laboratory to the health-care marketplace in order to better understand its implications for commercialization. Commonly called the development chain or “pipeline,” the route to market comprises a number of distinct stages, each of which may be viewed as a separate cost center.
Step One: Discovery/Basic Research
This stage usually resides within academia and includes the process of synthesizing, extracting, chemically engineering, or otherwise identifying a molecule, enzyme, hormone, or receptor that has the potential to effect a change in a biological environment, that is, to act on a drug target. It is the drug target that must be demonstrated to be the modifier of the disease. This is where academic researchers really excel. Validation of the candidate molecule or the intended target is the next step in the discovery stage. It is difficult to say whether the validation step is as important as the identification step, or even more important! Yet this is where academia is not particularly interested: the primary research has already been published, and no one finds merit in repeating someone else’s original work. To be fair, editors of first-tier international journals are also not excited by such work, so it rarely gets the respected reception it deserves. Beyond the extreme difficulty of obtaining funds for a validation project, it can be rather awkward to justify the need for such research. Validation work is not usually published by the runner-up team that lost the race to publish first!
Once the identification and validation steps are out of the way, another step, no less important than the former, called optimization can start. This may be optimization of the compound, that is, extraction of an isomer or a metabolite that is much more potent and less toxic, or optimization of the target, that is, identification of a more specific receptor subtype. Optimization can of course occur in parallel without critically impacting the development process, and should also involve a validation step. Academic researchers are somewhat more drawn to such work, as it is considered “nearly” original; however, the pharmaceutical industry outperforms academia at this step.
Once identified, the molecule is then subjected to a battery of in vitro biological research and screening investigations to validate the model, each investigation designed to explore and clarify the pharmacological activity, and therefore the therapeutic potential, of the agent.
Animal modeling is an essential part of the development process. After the in vitro testing stage, an animal model of the disease indication is either identified, if one is available, or developed. The selection of the animal model is very important and should not be taken lightly. Some animal models of human disease are well recognized within the academic community, but others do not yet exist and therefore require original research and de novo development. It is crucial that an appropriate animal model be selected for a study to be widely accepted by the regulatory authorities, and this is where academic researchers frequently fall short: they often overlook this vital detail, because their focus has traditionally been on the beginning of the development cycle rather than its end! If academia’s objective is to participate in the drug development process, it is imperative that researchers align their research program design with the regulatory requirements further down the line. In the pharmaceutical industry, this falls to the regulatory strategy team, whose responsibility is to ensure that no problematic issues arise during discussion or consultation with the regulatory authorities, at the time of submission of a New Drug Application or, even worse, during submission of the Marketing Authorization Application (MAA).
This is an important, fundamental role for TTOs to perform; however, at this early stage of discovery, communication on this point between the academic researchers and the TTOs frequently does not take place. This is perhaps due to a lack of the specialized expertise needed or, may we dare say, a lack of understanding of the entire drug development process (Figure 15.4).
Step Two: Preclinical Testing
Once a molecule has been shown to have biological potential at the intended target in the animal disease model, preclinical testing of its pharmacodynamic and pharmacokinetic properties takes place to establish, by allometric scaling, a safe dose at which to initiate clinical trials in humans. Toxicology tests are done to determine the compound’s potential risk to humans, utilizing whatever formulation is available and any potential route of administration to humans.
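As a concrete illustration of allometric scaling, the following is a minimal sketch of the widely used body-surface-area conversion described in FDA guidance on estimating a maximum recommended starting dose. The Km factors, the example NOAEL, and the default tenfold safety factor are standard published values assumed here for illustration; they are not figures from this chapter.

```python
# A minimal sketch of body-surface-area (BSA) allometric scaling: convert an
# animal NOAEL (no-observed-adverse-effect level) into a human equivalent
# dose (HED), then apply a default tenfold safety factor to obtain a maximum
# recommended starting dose (MRSD). Km factors are standard published values.
KM = {"mouse": 3, "rat": 6, "rabbit": 12, "dog": 20, "human": 37}

def human_equivalent_dose(animal_noael_mg_per_kg: float, species: str) -> float:
    """HED (mg/kg) = animal dose (mg/kg) x (animal Km / human Km)."""
    return animal_noael_mg_per_kg * KM[species] / KM["human"]

def max_recommended_starting_dose(animal_noael_mg_per_kg: float, species: str,
                                  safety_factor: float = 10.0) -> float:
    """Divide the HED by a safety factor (10 is the conventional default)."""
    return human_equivalent_dose(animal_noael_mg_per_kg, species) / safety_factor

# Example: a hypothetical rat NOAEL of 50 mg/kg
print(f"HED  = {human_equivalent_dose(50, 'rat'):.2f} mg/kg")         # ~8.11
print(f"MRSD = {max_recommended_starting_dose(50, 'rat'):.2f} mg/kg")  # ~0.81
```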
Typically, preclinical toxicology testing is conducted in two animal species, most commonly murine and canine, although primate and porcine species may also be used. During this stage, work continues in parallel to turn the active compound into a suitable form and strength for human use in the clinical studies yet to follow. This is called “clinical supply,” and it does not have to be the finished item, that is, the end marketed product, in either strength or formulation, as this will come later on.
An important feature of preclinical testing, and a regulatory requirement, is that each investigation must comply with Good Laboratory Practice (GLP) and the associated International Conference on Harmonisation (ICH) guidelines. Needless to say, the majority of academic laboratories do not hold a GLP certificate, and in the author’s experience some even argue that they do not need one! At this phase, the compound is referred to as an Investigational New Drug (IND), and FDA approval of IND status is based solely on safety data, not efficacy.
Step Three: Phase 1 Clinical Trials (“First in Man”)
The IND is now approved for use in humans, and small studies involving 6–20 (though at times reaching 80) healthy volunteers (usually male) will be conducted to determine, first and foremost, safety and tolerability, but also the basic pharmacological properties in a human subject: absorption, distribution, metabolism, and excretion (ADME). Bioavailability of the various formulations can also be studied. In addition, patients with a disease for which the drug may be useful can serve as test subjects, as in oncology, and this can yield useful information regarding the efficacy of the drug.
The start-up dose has always been, and will remain, the most crucial and most dangerous step in the whole process. The regulatory authorities have issued a number of guidelines that must be followed when calculating (and projecting from animal data) the equivalent start-up dose in humans. Typically, single-dose studies are carried out, followed by single ascending dose studies in which the dose of the new medicine is gradually increased. This allows the investigator to determine safety and tolerability and to measure the participants’ clinical response to the medicine. An intravenous (IV) formulation and route is the most common in this phase, even if the final formulation will not be an IV one, and drug metabolism and pharmacokinetics data are collected at all times. These data determine whether the medicine is sufficiently absorbed and distributed at the intended site of action, how long the medicine remains active in the system after dosing, and which dosage levels are safe and well tolerated. Phase 1 is sometimes divided into several clinical trials, for instance, into Phases 1a and 1b, with Phase 1a studying a group of healthy subjects, and Phase 1b studying a group of patients with the disease, or specific safety aspects or interactions with other agents.
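For illustration, one classic ascending-dose scheme is the “modified Fibonacci” escalation, in which each dose increment is proportionally smaller than the last. The sketch below is a minimal example of that scheme; the starting dose, increment sequence, and step count are hypothetical assumptions, not a protocol from this chapter.

```python
# A sketch of the classic "modified Fibonacci" dose-escalation scheme:
# increments of +100%, +67%, +50%, +40%, then +33% thereafter, so that
# dose steps shrink as the dose rises. All numbers are illustrative.
def modified_fibonacci_doses(start_dose: float, steps: int) -> list[float]:
    """Generate an ascending dose schedule with shrinking increments."""
    increments = [1.00, 0.67, 0.50, 0.40] + [0.33] * max(0, steps - 5)
    doses = [start_dose]
    for inc in increments[: steps - 1]:
        doses.append(doses[-1] * (1 + inc))
    return [round(d, 2) for d in doses]

print(modified_fibonacci_doses(10.0, 6))
# [10.0, 20.0, 33.4, 50.1, 70.14, 93.29]
```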
In recent years, regulators have encouraged a new pre-Phase 1 step called “Phase 0,” using a small fraction (1/100th) of the proposed Phase 1 starting dose in fewer than a handful of subjects. This process depends heavily on highly sensitive analytical technology to detect and trace the minute dose and to project the response to the full dose.
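To make the Phase 0 arithmetic concrete, a minimal sketch follows. The 1/100th fraction comes from the text above; the absolute cap of 100 µg for small molecules is a commonly cited regulatory ceiling, assumed here for illustration.

```python
# Illustrative Phase 0 arithmetic: a microdose is 1/100th of the dose
# predicted to be pharmacologically active, commonly subject to an absolute
# ceiling (often cited as 100 micrograms for small molecules). The cap is an
# assumption for illustration, not a figure from this chapter.
MICRODOSE_FRACTION = 1 / 100
MICRODOSE_CAP_UG = 100.0  # commonly cited ceiling for small molecules

def phase0_microdose_ug(predicted_active_dose_ug: float) -> float:
    """Return 1/100th of the predicted active dose, capped at the ceiling."""
    return min(predicted_active_dose_ug * MICRODOSE_FRACTION, MICRODOSE_CAP_UG)

# Example: a predicted active dose of 50 mg (50,000 micrograms)
print(phase0_microdose_ug(50_000))  # 100.0 -> the absolute cap binds here
```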
Step Four: Phase 2 Clinical Trials
This intermediate phase of testing the investigational drug involves larger groups of human subjects (n = 100–300), usually now including a relevant patient population. While dosing strategies may be investigated, the main aim of Phase 2 clinical trials is to establish effectiveness against the disease or medical condition and to understand any short-term safety risks.
Phase 2 studies are divided into Phases 2a and 2b. Phase 2a studies are generally short, small studies that compare and evaluate different doses of the drug and indicate whether the drug is working in the way expected. Phase 2b studies are usually larger (lasting longer and enrolling more patients), with narrow inclusion/exclusion criteria, and are randomized, often blinded, placebo-controlled studies that investigate the clinical benefit to the patient. These are usually considered the main go/no-go decision gate: their outcomes determine whether to proceed to the largest and most expensive part of the process, Phase 3.
Step Five: Phase 3 Clinical Trials
Involving large patient numbers (n > 1000), Phase 3 trials are designed to mimic real clinical life by adopting a study design that simulates actual clinical practice in terms of the dose, frequency, and duration of treatment in the intended patient population. More relaxed inclusion/exclusion criteria admit the targeted patients with the specific disease indication in terms of gender, age, past history, and the most common concomitant medications. The primary objective of Phase 3 is to establish efficacy through the most relevant primary efficacy outcome measures; its secondary objective is to establish the safety profile by further evaluating the safety of the IND through the collection of data on safety, tolerability, and any adverse events. These studies often have a comparative design, whether against placebo, the current clinical gold standard of care, or a competitor product, or indeed all of these! In addition, a design that includes more than one formulation, dose regimen, or comparator is not unusual. Upon completing Phase 3, a favorable benefit–risk assessment of the drug should be demonstrable to the regulator as well as to health-care professionals. Needless to say, the quality of the methodology employed in Phase 3 trials has a great bearing on later approval by the regulatory authorities, as well as on the credence and trust that medical specialists will place in the new drug.
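To see why Phase 3 enrollment climbs past a thousand patients, consider a standard two-arm sample-size estimate. The sketch below uses textbook defaults (two-sided α = 0.05, 80% power) and an assumed effect size; the formula is a generic planning tool, not a calculation from this chapter.

```python
# A minimal sketch of a standard two-arm sample-size estimate, of the kind
# used when planning Phase 3 trials: n per arm ≈ 2(z_{1-a/2} + z_{1-b})^2
# * (sigma/delta)^2 for a comparison of means. All inputs are illustrative.
import math
from statistics import NormalDist

def n_per_arm(delta: float, sigma: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Patients per arm to detect a mean difference `delta` (SD `sigma`)
    at two-sided significance `alpha` with the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# A modest effect (a quarter of a standard deviation) already demands
# hundreds of patients per arm, before allowing for dropouts:
print(n_per_arm(delta=0.25, sigma=1.0))  # 252 per arm
```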
Regulatory Review of Investigational New Drug
Submission of the “dossier” is the biggest event in any pharmaceutical company’s diary, perhaps second only to first-in-man dosing in Phase 1. The dialogue between the regulatory authorities and the pharmaceutical company (the “sponsor”) does not just start after completing the clinical development or indeed stop at the approval stage! Such discussion ideally starts at the discovery stage and should be ongoing throughout the entire development process; that is, before embarking on the next stage, discussions regarding design, duration, objectives, and so on, should take place. There should be regular updates throughout that stage to discuss progress, difficulties, or any unexpected or unexplained results. Even after obtaining marketing approval, the regulatory dialogue continues on a much more formal basis in the form of periodic safety update reports and risk management plans.
Evaluation of an IND is a lengthy process that could take years. It includes all the data generated by the sponsor (or any other parties) for the intended disease indication (or any other indications) in such patient populations (or any other tested populations). If the regulatory authority does not have the required expertise for this evaluation, a more appropriate external specialist in the field will be engaged for the task. Approval of an IND is dependent on multiple factors besides efficacy and safety, such as the nature of the indication (life-threatening), the target population (vulnerable), the nature of the drug (first in class), the economic environment (budgetary illness prioritization), the political situation (high threat of biological warfare), societal (traditional alternatives), and finally, cultural constraints (stigma).
Postmarketing trials, which are also known as Phase 4 clinical trials, are then conducted to gain understanding and confidence in the drug from safety and efficacy perspectives. In particular, these trials are designed to detect hitherto unseen adverse events, and to further understand the long-term morbidity and mortality profile of the drug in a much larger population. These clinical trials are usually a voluntary exercise sponsored by the MAA holder; however, in some instances (usually due to safety concerns), it is required by the regulatory authority as a condition for the marketing approval and hence becomes a postmarketing commitment by the MAA holder. The regulatory authorities reserve the right to request (i.e., demand) such data as they deem appropriate at any time during the life cycle of the marketed drug (Table 15.2).
TABLE 15.2. Limitations of Clinical Trials in Establishing the Drug Safety of an IND^a
Incidence | Sample Size Needed to Detect the Adverse Event Once with 95% Certainty
1:500 | 1,497
1:2,500 | 7,488
1:25,000 | 74,892
1:50,000 | 149,785
1:100,000 | 299,572
^a This is primarily due to the limited number of patients exposed to the new drug throughout the clinical phases. Typically: 50 (patients in Phase 1) + 500 (patients in Phase 2) + 1,000 (patients in Phase 3) = 1,550 patients.
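The figures in this table follow from elementary probability: to observe an adverse event of incidence p at least once with 95% certainty, the probability of never observing it in n patients, (1 − p)^n, must fall below 5%, giving n ≥ ln(0.05)/ln(1 − p), roughly 3/p (the “rule of three”). A minimal sketch reproducing the table (to within a unit of rounding):

```python
# Smallest n such that the probability of seeing at least one adverse event
# of incidence p among n patients reaches 95%: n >= ln(0.05) / ln(1 - p).
import math

def sample_size_for_one_event(incidence: float, certainty: float = 0.95) -> int:
    """Smallest n with P(at least one event among n patients) >= certainty."""
    return math.ceil(math.log(1 - certainty) / math.log(1 - incidence))

for denom in (500, 2_500, 25_000, 50_000, 100_000):
    print(f"1:{denom:,} -> n = {sample_size_for_one_event(1 / denom):,}")
# 1:500 -> 1,497   1:2,500 -> 7,488   1:25,000 -> 74,892
# 1:50,000 -> 149,786   1:100,000 -> 299,572
```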
Step Six: Manufacturing Process Optimization (Chemistry, Manufacturing, and Control)
Often conducted concurrently throughout the entire drug development process, engineering and manufacturing studies are performed to refine production for efficiency, scale, stability, uniformity, and quality. The results of these tests will often determine the large-scale commercial viability of a compound (Figure 15.5).
The race with the generic drug companies starts as soon as the drug is on the market, and generics usually overrun the original drug’s territories within about 10 years, once IP rights expire. During that period, the MAA holder races to recoup the cost of research and development and to achieve as much profit as possible. Generic drug companies need only prove that their drug is equivalent to the original, a far less costly development process; hence, generic drugs are far cheaper than the originals. That, however, is a different story, one that could easily fill another book, and it will not be discussed in detail here.
Science for Science’s Sake and Science for Society’s Sake
A huge gap results from the mismatch between academia’s discoveries (market/technology push) and industry needs (market pull). Academic research addresses basic, fundamental questions. Universities largely conduct research in the complete absence of “market pull”: research is undertaken in search of new knowledge and understanding, not necessarily in response to a direct market need. This “science for science’s sake” is absolutely appropriate.
Scientists often do not know what the applications of their discoveries could be, or which parties could potentially be interested in them, and are unfamiliar with the requirements for research to be accepted within a particular industry.
At a later time, they may try to “push” this new technology or knowledge to a market with which they are unfamiliar. Pushing an unknown technology to an unknown market seldom leads to success: one unknown in any equation is manageable, but two unknowns are not. Technology push (or market push) is very difficult and will often fail. Failure occurs not only because of limitations or lack of applicability of the technology; it can also be due to unsuitability, saturation, or maturation of the market. Of course, what is deemed “failure” from a commercial standpoint is a natural and normal part of the research process, yielding information of value to the researcher, but not of short-term financial value to any company, especially start-ups (Table 15.3).
TABLE 15.3. Commercial Contrast between Technology Push and Market Pull
 | Technology Push | Market Driven
Business Focus | Funding IP | Customer Requirements |
Development Risk | High | Quantified |
Market Demand | Unknown | Quantified |
Certainty of Returns | Low | High |
Business Growth | Unlikely | Projected and Planned |
Academia and the pharmaceutical industry have an old relationship, but it has not always been particularly productive or fruitful. As Jose Carlos Gutierrez Ramos of Pfizer said in a recent interview, “In the status quo, money was going from pharma to academia and it was not really being used. You get money from pharma and it’s like another R01 (a grant from the National Institutes of Health). You keep doing the same research as before. And therefore pharma was withdrawing. They’d say, ‘this is nice, but we aren’t getting anything out of it’ ” [9].
One may often wonder where, among the thousands of medical papers published each year, the clinical question lies. Some studies fail to address the question of genuine interest because of the numbers or ethics involved: “In common with many published studies, the authors allude to just such a question in the final paragraph, clearly stating that their study was unable, and was never designed, to answer such a question, and that other, larger studies might shed more light” [10].
But while scientists may be unaware of what applications their research could address, or which pharmaceutical companies to turn to, companies are equally unaware of what scientific discoveries exist that may be of value to them. “From the time a research group has a cluster of publications to the moment that a pharmaceutical company becomes interested, you could easily expect 5–10 years to elapse. Meanwhile, great research is being done around that pathway of phenomenon, but it’s not directional, because that’s not the nature of academic research,” says Gutierrez Ramos of Pfizer. Add to this the fact that the IP clock keeps ticking all the while: a lag of 5–10 years means a huge amount of lost profit, which could render the whole project commercially unattractive.
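A back-of-envelope calculation shows why. A standard patent runs 20 years from filing, and universities typically file early; the sketch below uses that term (ignoring patent term extensions) together with the 5–10 year pickup lag cited above and the 11–15 year development cycle cited earlier. The figures are illustrative assumptions, not data from this chapter.

```python
# Hypothetical back-of-envelope sketch: how pickup lag and development time
# erode the post-approval exclusivity window of a 20-year patent.
# All figures are illustrative; patent term extensions are ignored.
PATENT_TERM_YEARS = 20

def remaining_exclusivity(pickup_lag_years: float,
                          development_years: float) -> float:
    """Years of patent life left once the drug finally reaches the market."""
    return max(0.0, PATENT_TERM_YEARS - pickup_lag_years - development_years)

for lag in (5, 10):        # years before industry picks the work up
    for dev in (11, 15):   # development timeline cited earlier (11-15 years)
        years = remaining_exclusivity(lag, dev)
        print(f"lag {lag}y + development {dev}y -> {years:.0f}y of exclusivity")
# Output: 4, 0, 0, 0 years -- the window all but vanishes.
```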
What Are the Causes of the Gap between Science for Science’s Sake and Science for Investment’s Sake?
Cultural barriers often widen the gap between academic research and market adoption. Many experts with years of experience in both academia and the pharmaceutical industry attest that the cultural barrier is probably the main challenge to bridging this gap. Ambivalent attitudes still hamper the monetization of publicly funded research. Particularly in Europe, members of the establishment within many universities and national health services see themselves as public service providers, not profit-making organizations. There is resentment toward the new economic reality, rather than the optimism and energy needed to embrace a dynamic new model.
Industry, for its part, does not always speak the language of the researchers. Although big pharmaceutical companies have a long history of scouting out and developing products in many therapeutic areas that meet a market need while providing a handsome return on investment for shareholders, it can be difficult to carry a transfer discussion to fruition. Anxiety about the company’s relatively short-term financial goals can lead to impatience, summary rejection of more complex research IP, and a tendency to cut corners. Enormous overheads and colossal expenses put pressure on profit margins, limiting companies’ interest to proven technology rather than the riskier investment necessary to develop early-stage research.
An atmosphere of distrust and lack of understanding often inhibits collaboration. Universities are large bodies whose primary roles are teaching and research, and they do not lend themselves to risk, while entrepreneurs are fast-moving, high-risk takers who tend not to comply with rules. This makes it difficult for the two to create a company in which they share ownership. In addition, a researcher leaving the university has sometimes been frustrated by his or her experience there, and will do everything possible to prevent the university from having any control or influence in the new company!
Ironically, a misdirected response to public pressure widens the gap. In the past decade, there has been increasing public pressure to “improve the results” of many publicly funded institutions, including universities, by insisting on measurable units and numeric ratings. The previous UK government, for example, spent over 10 years laying down such targets, and even more time on measures to audit and report against these meaningless targets. The brain drain of scientists across Europe in general, and the UK in particular, is a direct result of such political policies (coupled, admittedly, with short-sighted taxation laws). Popular articles ranking “The 10 Best Universities” or “The 10 Best Hospitals” are common and, despite their lack of methodology, are accepted at face value; university personnel are rewarded in part along these lines. For a university, measurable results primarily include the number of scientific publications, the number of patent applications and registrations, and student-generated ratings for teaching excellence. Marketing and selling research findings are far less measurable for universities, and less attributable to individuals. Because this work is hard to quantify and measure, it receives less structured support, and less reward incentive, within European universities.
Another reason for the gap is that selling research results is simply not part of the academic staff’s job. In response to public pressure and slashed budgets, universities have increased internal workloads significantly. Professors must meet tougher requirements for more and better journal publications, generate their own research funds, and deal with increased student numbers with fewer support staff; as a result, ever more time is spent on detailed reporting of their activities and expenditures. While many academic researchers are deeply interested in the market potential and societal benefit of their research, most have a hard time meeting their “measurable” objectives within a 24-hour day, let alone guiding research findings toward marketable entities.
With many noted exceptions, most academics lack the entrepreneurial skills that are so important in convincing the pharmaceutical industry to invest in their research results. As mentioned previously, guiding IP along to a marketable product for industry is not included in their academic functions; insufficient (or no!) time and resources are allotted to it, and academics are not rewarded for it.
What does the process of commercialization entail? What makes it so complicated? The next section will discuss this aspect of unlocking the market potential of academic research.
Why Commercialization Is Important
Attitudes and convictions persist in academia that its proper role is that of a public servant, and that a focus on the future commercial uses of research could compromise its impartiality, thoroughness, integrity, and independence. Universities are concerned that the increasing pressure to obtain funding from private industry will reduce their noble calling to “mere contract research.” They may be hostile to the notion of adjusting their research strategy and procedures to align with the needs of potential commercial applications, sometimes even when the potential client is in the science park affiliated with the university.
Commercialization drives research; indeed, commercial interests have historically been its drivers. Leonardo da Vinci knew that Duke Ludovico Sforza of Milan would be interested in developing many of his discoveries for military use, and he actively solicited the Duke’s financial support.
Another familiar example of commercial interests driving research is Edison General Electric, founded by Thomas Alva Edison; the company later became GE. At age 30, Edison built a research laboratory in Menlo Park, New Jersey, to bring together the scientists and materials required to address complex problems (Figure 15.6). His teams worked on several projects simultaneously, thinking through subsequent commercial needs and developing solutions. For instance, his power generation and delivery systems made sufficient power available to homes and businesses to ensure the commercial success of his earlier invention, the electric light. By creating a critical mass of resources, Edison developed R&D labs that revolutionized the technical research process. He was awarded a record 1,093 U.S. patents [11] (Figure 15.7).
The revenue generated from successful inventions finances future research. In drug development, PhRMA reports that only 2 out of 10 drugs that reach the market return revenues that match or exceed R&D costs [5]. The revenue from the marketed drugs must cover all of the previously conducted R&D for those products, as well as all of the R&D for the “failed” drugs, those that did not reach the market for whatever reason. Even the “failed” drugs, however, generate much useful research and market knowledge, and they have provided employment and income for scientists; these, too, are worthy outcomes that benefit the public. Without commercialization, there are no breakthroughs in drug development. One would be hard-pressed to name an innovative pharmaceutical product from any government-owned research organization in any centrally run country.
As for serving the public, the pharmaceutical industry’s role cannot be overstated. Innovative medicines extend life expectancy, treat disease, and increase patients’ survival. Although there is criticism of “me-too” products, in many countries regulatory authorities require that new drugs demonstrate significant improvement over medicines already on the market.
Another important public benefit of discovery and innovation in the pharmaceutical industry is that new medicines can reduce the burden of care. Easier routes of administration, fewer administrations, lower dosages, and the possibility for patients to remain at home (or shorter hospitalization periods) translate into significant savings in health care. At present, amid all the arguments and debates over drug prices in both Europe and the United States, drugs account for only about 10% of all health-care expenditure in the United States, a country with typically high drug prices [12] (Figure 15.8).
Creating an environment that embraces commercialization will cultivate an attitude in academia that facilitates the development of research into marketable IP. Europe is lagging behind the United States in this area. Many business leaders are calling for policies to address the gap between industry and academia and its consequences: “… the patent that goes unexploited, the research report that gets ignored, or the researcher who leaves for richer labs in San Francisco or Singapore” [13].
Intellectual Property
Used almost interchangeably with “patents,” the term intellectual property (IP) describes a tool for the secure exchange of intellectual capital. The patent is the legal registration of the IP, protecting the ownership, and therefore the exploitation, of the IP. A condition for obtaining a patent is that the discovery is new, original, and innovative, and has not been made public. Patent protection should therefore be obtained before a discovery is published. Tension occurs at this stage because rapid publication is of vital importance to academic scientists. Publications should not, of course, be delayed by patent filing; in practice, this can be achieved by good timing, short decision lines, and the availability of sufficient funds to cover the costs of patent registration [14]. But more important than all of that is access to a competent patent attorney, whether employed by the research institute or contracted by it, and competent ones do not come cheap! (See Figure 15.9.)
In considering the commercialization of research output, it is important to consider the maturity of the IP. Typically, university IP is very early stage and gets protected via patent well before its equivalent in industry, due to publication pressures. As a result, it frequently becomes superseded during its development phase, when an application is being developed. The formal IP residing in the university is not necessarily required: the research progresses, making the original IP outmoded. If and when the researchers leave and start a company, why would they give a shareholding to the university when they do not require the IP it holds? This represents a missed opportunity for the university, both in terms of the lost value of exploitation of the IP, and also a loss in terms of vital contacts and collaboration with the academics who were involved in its discovery, and who can be the very individuals to help bridge this gap between universities and industry.
Possibly because the number of patents held by a university is considered important in its international rankings, the rush to patent precludes thorough “intellectual property due diligence.” As one colleague expressed it,
Unfortunately, we have very little money for external small and medium-sized enterprises (SMEs) to review the IP and tell us its commercial value; however, we have the budget to spend $10,000 and more to patent it in every country in the world. Also, let’s say a great researcher consistently brings in $1 million per year to the university. This professor is constantly creating IP. Let’s say a pre-patent commercial appraisal is done on a few pieces of his or her IP, with the little money they do have. Is it in the university’s best interest not to patent, even if the appraisal comes back negative? Probably not. It’s just how the system seems to work. Thus, if you follow my logic, why spend the commercial appraisal money in the first place? Now, if every university followed a rigorous process to spend patent money only on viable market opportunities, then they’d spend more up front on SME reviewers, but far fewer dollars on patent costs as a result; I am convinced of that based on my work with university IPs. Of course, this is theoretical. Some universities do a great job of vetting IP before they patent it.