Is There Hope for Improving Drug Development Productivity?
I was recently reading yet another paper bemoaning declining productivity in drug development. The authors discussed the most likely explanations for the decline:
- tighter regulation
- illusory decline (no actual decline)
- depletion of low-hanging fruit
- increasing risk aversion by the drug companies
- rising hurdles on safety
I mention this only because the paper was written in 1978 (Grabowski, Estimating the Effect of Regulation on Innovation). Yes, 1978. The tighter regulation they were talking about was the 1962 amendment which for the first time required that drugs demonstrate efficacy before being marketed.
Apart from all the other questions this paper begs, by virtue of being from 1978 yet asking the same questions we ask today, I think the most interesting one is: can drug development productivity improve? Is there hope?
Being an optimist, I would like to believe so. Frankly, even a realist would say that with less ability to price drugs high, and with declining R&D budgets, productivity must improve if the industry is going to survive.
So what can be done? Well, many people have different ideas, and unfortunately, there is no way to prove what will work, but let me toss out a few ideas.
1) Learn from Other Industries
We tend to be fairly insular in our approach to work. There is a lot to know in our own sub-specialization, and drug development itself is very complicated so it’s hard to even learn about other drug development disciplines. How many people fully understand medicinal chemistry, toxicology, PK, clinical operations, clinical science, regulatory science, pricing, marketing, sales, reimbursement, and pharmacovigilance? Exactly.
Now, many people believe that innovation industries are different from traditional industries. I believe that. I would go further and argue that in many ways, our industry is sui generis.
But not in all ways. In some areas, you could argue that there is a lot we can learn from other industries. Let me give you one example. Clinical operations, which is the execution of clinical trials, is essentially a supply chain problem. You have to bring drug, physician, patient, case report forms, and data together efficiently and at the right time while managing risk and minimizing redundancies and excess capacity. Many would argue that we are only now starting to use modern supply chain management tools when we do this. For example, Walmart vendors know in real time when an item is sold at a store and can adjust their production accordingly. This minimizes waste and capacity gaps. CROs, by contrast, often only learn there will be a clinical trial when they receive the RFP.
Risk-sharing and information-sharing to make sure the risk is borne by the party best equipped to bear it is honed to a fine art in other industries. We can probably learn a lot from them, but I would hazard a guess that the number of supply chain experts from Walmart who now work in clinical operations in the drug industry is not high.
Similarly, it probably wouldn’t be a bad idea to have some more industry-wide standards and coordination. Some people talk about precompetitive space, and that is fine, but I would like to see some coordination on the other end. For example, is it really necessary that each sponsor audit every CRO and CMO separately? Is it necessary that every sponsor negotiate a separate, customized contract with each and every clinical site?
2) Better Animal Models
Anything multiplied by zero is… zero. Anything multiplied by 0.05 is… small. When you have a 5% (or even 10%) success rate, then you’re swimming upstream. We need to raise our success rate. And what do we need to do that? Probably, better animal models.
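To put rough numbers on this point, here is a minimal sketch of how the per-program success rate drives the size of the portfolio you must fund. The 90% portfolio-confidence target and the success rates are assumed for illustration, and programs are treated as independent, which real portfolios are not:

```python
import math

def programs_needed(p_success: float, target: float = 0.9) -> int:
    """Smallest number of independent programs so that the chance of
    at least one success reaches `target`.
    Solves 1 - (1 - p)**n >= target for n."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_success))

for p in (0.05, 0.10):
    print(f"success rate {p:.0%}: {programs_needed(p)} programs")
```

Under these assumptions, a 5% success rate requires about 45 programs for 90% confidence of one success, while a 10% rate requires about 22. Doubling the success rate roughly halves the pipeline you have to pay for, which is why raising it matters more than shaving cost off any single program.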
We use “models” like tumor cells injected into mice or irritants injected into joints to mimic cancer and arthritis. We feel better when these models work, but the longer I’m in the industry, the less stock I find myself putting in those studies. Those models are just not very predictive. And unless we come up with better models, it will be a challenge to increase the success rate in drug development. Please see my post named “Castalia.”
There are several issues with the models, including the fact that they don’t actually replicate the biology, as well as the fact that we generally use genetically identical rodents in our studies. So when you do an experiment with 20 mice with tumor cells, you’re actually putting a cell line derived from a single cell into what is effectively a single mouse twenty times. You’re doing 1 mouse, not 20 mice. Now that’s what I call personalized medicine.
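The “1 mouse, not 20” point can be made with a toy simulation. All the numbers below are assumed for illustration: suppose biological variation between genotypes is much larger than measurement noise within a genotype. Then the mean of a 20-clonal-mouse experiment swings from run to run almost as much as a single animal would, while a genetically diverse cohort averages the biology out:

```python
import random
import statistics

random.seed(0)

GENOTYPE_SD = 2.0  # assumed between-genotype biological variation
NOISE_SD = 0.5     # assumed per-mouse measurement noise

def trial_mean(n_mice: int, clonal: bool) -> float:
    """Mean drug-response readout from one simulated experiment."""
    if clonal:
        genotype = random.gauss(0, GENOTYPE_SD)  # one genotype, reused
        mice = [genotype + random.gauss(0, NOISE_SD) for _ in range(n_mice)]
    else:
        mice = [random.gauss(0, GENOTYPE_SD) + random.gauss(0, NOISE_SD)
                for _ in range(n_mice)]
    return statistics.mean(mice)

# How much do whole-experiment results swing from run to run?
clonal_sd = statistics.stdev(trial_mean(20, True) for _ in range(2000))
diverse_sd = statistics.stdev(trial_mean(20, False) for _ in range(2000))
print(f"clonal trials sd: {clonal_sd:.2f}, diverse trials sd: {diverse_sd:.2f}")
```

With these assumed numbers, the clonal experiments vary several times more than the diverse ones: twenty clonal animals behave statistically like one, which is exactly the point.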
We need better animal models or some other way to better handicap success. Could it come from in silico modeling? Could it come from applying modern decision-making theories to portfolio decisions? I don’t know.
3) Better Feedback
Drug development has a long cycle time, and I would argue that it is what Kahneman would call a low-validity environment. It is difficult to determine whether experts can predict outcomes. Experts themselves may not know whether they can predict outcomes, because feedback is rare and diffuse.
Many people would argue that it is difficult to improve a process without feedback.
For example, let’s say a master medicinal chemist believes that changing the structure of a molecule will improve its performance. He will not know whether he is correct for 10 to 15 years, and he will not know whether it was that change or something someone else did that made the difference.
We try to set up interim metrics so that we can measure the chemist’s performance. Many would argue that the interim metrics are almost never validated. In other words, we may not know if the metric toward which the chemist is working will affect the performance of the product. Only other master medicinal chemists are in a position to judge, and they may not know if they are correct.
The best we can say may be that the decisions, under the best circumstances, are consistent. We may have good precision, yes. But can we say that they are accurate? Without validation, it is not easy to tell whether what you’re doing is helpful or detrimental. There are instances where the unintended consequences may be worse than the intended ones. Please see my “New Thinking” post for a discussion of P450 and Lipinski’s rules and how they may lead us astray.
But let’s not single out the chemists – these unvalidated metrics probably apply to just about every function, and more importantly, it’s the clinical developers who are failing to give feedback to the early stage people. So there’s lots of accountability to go around.
We need feedback, and we need feedback not on drugs that work but on drugs that fail. 90–95% of drugs fail. If you were to ask most people how much time they spend analyzing failed drugs, they would probably give a low number somewhere close to zero. Over 90% of the data necessary to improve the drug development process lies in failed drugs. The remaining 5–10% of data may be of limited use without a comparator group. You could argue that looking at what successful drugs look like is less important than looking at the differences between successful drugs and unsuccessful ones. Please see my “Whence the Low Productivity” post for more discussion on feedback.
The point is that one of the most important things we need to do may be to spend a lot more time giving and getting feedback, and validating the interim metrics we work towards.
4) Design Thinking (Empiricism)
As I discuss in another post, and as others have noted, we probably think we know a lot more about biology than we actually do. I’m all for rational drug design and in silico modeling, but I think we need a lot more data and knowledge before we can make those work.
Instead, I’m in favor of what Trevor Mundel said in the April 2012 issue of Nature Reviews Drug Discovery: it’s hard to figure out what is going to work until you take a drug into people, so take drugs into people and get some data before making the decision. Some people call this experimental medicine.
In other fields, this is called design thinking (IDEO), rapid prototyping, or lean startup. It’s an engineering mindset: when you can’t analytically solve a problem, you prototype and see what you get. It’s really empiricism, but I guess that word has a negative connotation in many circles.
5) Better Clinical Development
Clinical development is where the bulk of the spending in drug development is. We’re not going to move the needle on productivity without improving clinical development. I already touched on one aspect of clinical development that could be improved in section 1. And of course, many people are working on lowering costs and timelines by outsourcing to developing countries.
In addition, I think adaptive clinical studies will help improve clinical development significantly. I have a textbook on adaptive clinical trials that goes into more detail.
One additional way to improve productivity which is in theory attractive but in practice difficult is multiplexing. Multiplexing in clinical development is called factorial design.
When I was in college, I recall that labs would have Drosophila screening parties. Dozens of graduate students would come together from all over the world, sit down, and sort through millions of irradiated fruit flies. Each student would be looking for a particular mutant, and if anyone saw one with that mutation, he would give it to the right student. This way, they wouldn’t each have to sort through a million flies.
Drugs are a bit like irradiated fruit flies in that they sometimes have biological activity, but you don’t know which indication they will work in. At the same time, multiple trials are often being run for the same indication but for different drugs.
I know it’s an unorthodox suggestion to test multiple drugs in the same study, and it’s also discomfiting to think about putting a drug in multiple indications at the same time. And there are of course issues of IP, competing drug companies’ interests, and potential issues with interactions. However, the way we’re doing drug development, one drug in one study for one indication at a time is somewhat akin to a lone grad student screening a million fruit flies for one mutation. There has got to be a better way.
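The arithmetic behind factorial designs can be sketched in a few lines. Everything here is assumed for illustration: the drug names are hypothetical, the per-comparison sample size is made up, and the efficiency claim holds only if the drugs don’t interact:

```python
from itertools import product

drugs = ["drug_A", "drug_B"]  # hypothetical candidates

# A full 2x2 factorial design: one arm per yes/no combination of drugs.
arms = [
    dict(zip(drugs, combo))
    for combo in product((False, True), repeat=len(drugs))
]
# -> 4 arms: neither, A only, B only, A + B

PER_COMPARISON = 200  # assumed patients needed per two-arm comparison

# Separate development: one two-arm trial per drug.
separate_trials = len(drugs) * PER_COMPARISON

# Factorial trial: every patient contributes to every drug's comparison
# (ignoring interactions), so one cohort answers both questions.
factorial_trial = PER_COMPARISON

print(f"{len(arms)} arms; separate: {separate_trials} patients, "
      f"factorial: {factorial_trial} patients")
```

The caveat is the one raised above: this efficiency assumes the drugs don’t interact, and the interaction terms are exactly what a factorial analysis has to check before the shared cohort can be trusted.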