What We Get Wrong About Drug Development

Looking back at the history of drug development, and the direction it has taken over the last twenty years or so, I think there are some assumptions about the biology of drugs that we might want to question. Below are some thoughts about drug development productivity that I’ve collected over the last couple of years.

Absolute levels of drugs and molecules tell only part of the story.

The most common way to look at pharmacokinetics is to measure the plasma levels of the drug or molecule. One step beyond that, particularly for highly protein-bound drugs, you measure free plasma levels of the drug. These measurements might suffice in many instances, but they may be insufficient in more instances than we think.

In many systems, especially hormonal systems, the absolute level of the drug is not as important as the relative level. The effect of the drug may be affected substantially by the number of receptors, the sensitivity of those receptors, or other interacting factors.

Patients with insulin resistance represent a classic example of how absolute levels can be misleading. We are now comfortable with the knowledge that the concentration of insulin does not correlate with its effects. Type II diabetics are insulin resistant. But it was a shock to many scientists when it was first discovered that Type II diabetics had high, not low, levels of insulin. It is always counterintuitive to imagine that a lower level of a molecule or a drug can exert a more potent effect than a higher level, but this phenomenon can and does happen.

In many instances, especially with endocrine molecules, there is a cyclic variation in the level of the hormone. It is important to understand and measure whether the drug or molecule concentration has been rising or falling. It is also important to take into account how long the concentration of the drug or molecule has been at a certain level.

For example, both growth hormone and gonadotropin-releasing hormone (GnRH) cycle up and down. Chronically high levels of GnRH have the same effect as chronically low levels. Without cycling of GnRH levels, the effect of the hormone diminishes. That’s why GnRH agonists and antagonists have the same suppressive effect on prostate cancer.

In other instances, the level of the drug relative to the time of day may be important. 5 mg/ml at 3 am may be different from 5 mg/ml at 8 pm.

Finally, in some biological processes, the absolute level may not be nearly as important as the ratio of the molecule to another molecule. For example, in lupus patients, serum IL-12 levels are elevated. This would normally lead to a Th1 response. However, lupus patients exhibit a Th2 response, because what matters is not the absolute level of IL-12 so much as its ratio to the opposing cytokines. The opposing cytokine levels are even higher, so a level of IL-12 that would normally mean a Th1 response actually signifies Th2. Other biological processes depend on a ratio (such as the AMP/ATP ratio) rather than the absolute level of a molecule. There are many other examples of this phenomenon, including in gene expression, even though this is typically not how we think about levels of a drug or a target.
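
To make the point concrete, here is a toy illustration – invented numbers and a hypothetical threshold, not real immunology quantitation – of how a ratio-based readout can flip the interpretation of the same “elevated” absolute level:

```python
def predicted_polarization(il12, opposing, ratio_threshold=1.0):
    """Classify the expected response by the IL-12 : opposing-cytokine ratio."""
    return "Th1" if il12 / opposing > ratio_threshold else "Th2"

healthy = {"il12": 10, "opposing": 5}    # arbitrary units
lupus   = {"il12": 30, "opposing": 90}   # IL-12 is higher in absolute terms

print(predicted_polarization(**healthy))  # Th1 -- modest IL-12, but the ratio is high
print(predicted_polarization(**lupus))    # Th2 -- elevated IL-12, but the ratio is low
```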

Gradients and tissue levels of molecules can matter.

The absolute level of a drug also may not tell you where that drug is. Many biological processes rely on gradients, such as some VEGF-driven processes. Without a concentration gradient, the molecule may not exert its effects.

Also, serum levels of a drug may not yield helpful information about the level in the tissue, the cell, or the subcellular compartment.

Temporal hysteresis can be important and may be more common than we think.

We also tend to ignore temporal hysteresis, and maybe even hormesis and anti-hormesis. In other words, when we give a drug that blocks a biological effect, we expect it to result in less of that biological effect. When the opposite happens, we are usually surprised.

We call this effect tachyphylaxis or rebound, and we consider drugs that exhibit such an effect to be exceptions rather than the rule.

In fact, such paradoxical effects may be much more common than we suspect. We certainly see it with beta blockers in congestive heart failure. For a long time, beta blockers were contraindicated in heart failure because they are negatively inotropic, at least acutely. However, over the long term, beta blockers improve survival in CHF, perhaps by modulating receptor levels. In contrast, most inotropes that increase cardiac contraction paradoxically increase mortality in heart failure patients.

And long-acting beta agonists, though they are acutely bronchodilatory, appear to increase long-term mortality in patients with asthma.

We also know that many drugs (many antibodies that bind to a receptor, for example) have agonist and antagonist effects at different doses. We also know that receptor levels often change in response to chronic stimulation or suppression. The possibility that a drug could be an agonist in the short term and an antagonist in the long term, or vice versa, is biologically plausible. However, this is not a well-investigated area.

However, we know of several biologically plausible ways that this type of phenomenon can occur. For example, receptor levels are often upregulated or downregulated in response to stimulation: the more the stimulation, the lower the receptor levels may become. Also, many biological systems have built-in self-modulating features. For example, with some zymogens (which are inactive pro-enzymes), the “inactive” cleavage product can sometimes exert negative feedback on the activation cascade. (Some zymogens exert negative feedback before cleavage, so this can work in reverse as well.)
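
As a minimal sketch of the receptor-downregulation mechanism – a single made-up feedback equation with invented parameters, not any validated PK/PD model – consider the following:

```python
# Toy model: signaling = (endogenous tone + drug) x receptor density, and receptor
# density downregulates in proportion to signaling while slowly recovering toward
# baseline. Every number here is hypothetical, chosen only to illustrate the shape
# of the behavior.

K_DOWN, K_RECOVER = 0.15, 0.05   # hypothetical downregulation / resynthesis rates
ENDOGENOUS_TONE = 0.2            # hypothetical endogenous agonist level

def simulate(drug_level, days=60, dt=0.1, receptor=1.0):
    signaling_history = []
    for _ in range(int(days / dt)):
        signaling = (ENDOGENOUS_TONE + drug_level) * receptor
        # receptors downregulate in proportion to signaling and recover toward 1.0
        receptor += dt * (K_RECOVER * (1.0 - receptor) - K_DOWN * signaling)
        signaling_history.append(signaling)
    return signaling_history, receptor

untreated, _     = simulate(drug_level=0.0)                     # endogenous tone only
on_drug, r_final = simulate(drug_level=1.0)                     # chronic agonist
washout, _       = simulate(drug_level=0.0, receptor=r_final)   # drug withdrawn

print(f"untreated steady-state signaling: {untreated[-1]:.2f}")
print(f"signaling at the start of dosing: {on_drug[0]:.2f}   (large acute effect)")
print(f"signaling after 60 days of drug:  {on_drug[-1]:.2f}   (effect fades)")
print(f"signaling just after withdrawal:  {washout[0]:.2f}   (below the untreated level)")
```

In this cartoon the acute effect is large, the chronic effect fades, and signaling immediately after withdrawal dips below the untreated level – a crude picture of tachyphylaxis and rebound.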

With many drugs, the result of the clinical trial is not just neutral; it is the opposite of what was expected. While this can be due to chance, sometimes it may be due to paradoxical temporal hysteresis.

There are some who contend that hormesis, the phenomenon of a low dose of a compound causing beneficial effects while higher doses cause the opposite, is widespread. I’m not so sure about that, but temporal hormesis, the idea that a low dose of a toxin may have beneficial effects after a period of time once the body has responded to it, may be more common. Temporal anti-hormesis, where a drug that is beneficial in the short term may be detrimental in the long term after the body adjusts to it, may be more common still.

Biological phenomena are often not bidirectional

We are wired to think bidirectionally. If we increase compound A, and that makes compound B go up, then we expect that increasing A further will make B go up further, and that lowering A will make B go down. Or that if a mutation in a gene causes a disease, then removing that mutation will reverse the disease.

This is fallacious in many ways. I talked about hysteresis above, where there is path dependency in biological phenomena. But there are others. U-shaped dose-response curves are one example. High levels of GnRH, for example, will make it act as an antagonist.

A common example of this fallacy is seen with knockouts. There is a tendency to think that if something is important, then if you knock it out, something terrible will happen. And it often does – many mutations are lethal. If you knock out those genes, the animal dies at an embryonic stage.

But the reverse is not true. If you knock out a gene and nothing apparent happens, it doesn’t mean that the gene is unimportant or unnecessary. That is a fallacy. A key that opens all locks is important; a lock that can be opened by any key is not. The two are not the same.

Before “junk DNA” was recognized as being important, I read this argument about junk DNA. “There are organisms that have very little junk DNA. Therefore, it is not necessary, and must be unimportant.” It’s sort of like saying that because not all animals have wings, wings must not be important.

Under one set of circumstances – under normal development, say, or in one particular environment – the gene may not seem important. Take the G6PD gene. If you knock it out, you won’t find any obvious change in phenotype. The person will look normal, until he eats fava beans. Then his red blood cells will burst apart.

The key here is that the environment is always changing, and the repertoire of genes you have must be suited not just to the current environment but to any environment you may face during your life, and to the environments your progeny may face in theirs.

Microenvironments and nanoenvironments matter.

Because of the tremendous ability to isolate genes and to study them, most modern biologists have become molecular biologists. There are relatively few biologists who operate at a scale larger than molecules, and even fewer who operate at a scale larger than cells.

What we have forgotten is the importance of three-dimensional structures and spatial information. The microenvironment, such as the environment in a lymph node or around a tumor cell, can have very important biological effects. The nanoenvironment, such as which side of a cell is presented with a signal, or which subcellular compartment a protein docks in, can play a very important role as well.

Protein kinases, for example, appear to obtain their specificity partly by virtue of the fact that they localize to specific subcellular compartments.

Genes for secondary metabolites in plants are often physically next to each other on the chromosome and the proteins they encode are often linked together so the substrate can be physically passed on from one enzyme to the next.

It is probably important not just what a gene encodes, but where the gene is physically, where its RNA transcripts go, and where the translated proteins go, both within the cell and outside it.

In addition, there may be information stored in the three-dimensional spatial coordinates of various molecules within the cell and within tissues that is not readily discernible from DNA sequence information. This is accepted in neurology, where physical synapses are believed to hold information, and in certain areas of immunology and cardiology (lymph node structures and electrical circuits), but it is not as broadly investigated or accepted in other fields.

We probably understand a lot less about biology than we think we do.

In 1968, Gunther Stent, a well-regarded molecular biologist, wrote an article in Science. The theme of the article? That molecular biology, as a field, was mature – that there was little left to discover now that everything had been discovered (this was 1968, mind you, before restriction enzymes or ligases, before sequencing, before PCR, before RNAi, before microRNA, before epigenetics). He basically declared the end of molecular biology. Kind of like Fukuyama and the end of history, except more so.

I don’t know if it’s just biologists (among whom I count myself) or all types of scientists, but we always seem to think we know just about everything worth knowing. If you ask biologists how much we know about how cells work, many will say 70% or 90%.

Of course, every few years we discover the other “50%” of biology. Like siRNA. Like exosomes. Like discovering that proteins often don’t have a tertiary structure (see my post “Proteins Flop Around”).

Hence the current rush into translational medicine, which presumes that we know enough about biology to rationally translate it into clinical benefit. Don’t get me wrong, translational medicine is really promising and I am actually writing a textbook on translational medicine, but let’s not get ahead of ourselves here. I would be surprised if we knew more than a very small part of how cells work, but I am very optimistic that we will know more in the near future.

Interactions

Interactions refer to situations where one factor alone doesn’t cause an effect but in the presence of another factor it does. A classic example is cigarettes and matches. Cigarettes, without matches, do not cause cancer. Matches alone don’t cause cancer. Cigarettes lit by matches and then inhaled cause cancer.

Similarly, negative interactions hide causality. Someone hit by a bullet is at risk of dying. If he is wearing a bulletproof vest, then being hit by a bullet wouldn’t cause death.

The problem is that when you have thousands of genes, it is computationally difficult to test for interactions. In cases where several genes must work together, it is almost impossible to uncover causality unless you have a good biological pathway or a prior hypothesis.
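
A back-of-the-envelope sketch shows how fast this blows up; the gene count is the only “real” number here, everything else is just combinatorics and a hypothetical significance threshold:

```python
from math import comb

genes = 20_000              # roughly the number of human protein-coding genes
pairs = comb(genes, 2)      # two-gene interactions
triples = comb(genes, 3)    # three-gene interactions

print(f"pairwise tests:  {pairs:,}")    # ~200 million
print(f"three-way tests: {triples:,}")  # ~1.3 trillion
print(f"Bonferroni-corrected threshold for pairwise tests: {0.05 / pairs:.1e}")
```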

When we take a target-based approach, we often forget that the milieu of the cellular system is interconnected. We often ignore that there may be many other genes that can modulate the effect of the target gene. Protein X may only have the anticipated effect in the presence of protein Y, but we may not know that. A high level of a molecule may cause a disease in the presence of a high level of a second molecule, but not in its absence.

Redundancies

In addition, there are often homeostatic mechanisms that tend to push the equilibrium back to where it was. Work with chaperone proteins suggests that most organisms may carry many mutations. There seem to be backup systems that hide and compensate for these mutations, and there are often redundant mechanisms. Teleologically, this makes sense, since there is a clear advantage to having redundancy within the cell – it makes for a much less fragile system.

For more discussion on this, see my post on Reductionism, where I talk about reverse synthetic lethality, or as I called it, synthetic survival.

This is different from intercellular signaling systems. In those cases, there seems to be a lot less redundancy, and it seems easier to modulate those pathways.

Intercellular systems

When we look at the gene expression profile within a cell, we often assume that the proteins in the cell reflect that cell’s gene expression. With the discovery of exosomes, which can shuttle genetic material (including transposons and miRNAs) and proteins across cells, we need to look more broadly and entertain the possibility that many proteins and transcripts shuttle across cells, and that what is transcribed in one cell may be translated in another.

Non-DNA information

We, for the most part, subscribe to the orthodoxy that most biological information is stored and transmitted by DNA. It now appears that epigenetic information, such as methylation, may be very important for certain biological functions.

But beyond that, it is possible that there are additional, hitherto little-suspected mechanisms of information transmission. For example, it appears that the conformation of certain proteins, specifically amyloidogenic proteins, may store and transmit information across cells, and perhaps even across generations. Yeast cells appear to store information about the environment in amyloidogenic proteins that get passed on from one generation to the next. Exosomes may carry information coded in DNA, RNA, and proteins across cells and generations. They certainly seem to be involved in shuttling antigens between APCs and other immune cells. Most importantly, exosomes carrying proteins and immune-regulatory miRNA have been found in milk. Vertical transmission of information through milk, if true, may turn out to be only the first example of a class.

Look at host-pathogen interactions

If you look at drugs, and even at biology in general, we owe a lot to host-pathogen systems. The most sophisticated biological systems are those mediating the interactions between host, pathogen, and parasite.

For example, molecular biology is based largely on restriction enzymes, which are host defense systems. TAL proteins, which may turn out to be almost as revolutionary as restriction enzymes, come out of host-pathogen systems as well. RNA interference, and perhaps miRNA, appear to be host-pathogen defense systems or to have evolved from them.

Transposons, which are essentially parasitic elements, appear to have been co-opted as building blocks of the adaptive immune system, and perhaps as a way of building diversity into neurons in the brain as well.

More importantly for those of us developing drugs, the majority of our drugs come from natural products. And the majority of those natural products are plant “secondary metabolites.” Secondary metabolites are compounds that are not essential for the plant to live and grow. Most botanists believe secondary metabolites serve as defenses against parasites and herbivores: when plants are attacked by insects, they appear to upregulate secondary metabolites, and those metabolites are toxic to insects and herbivores. For example, tobacco is toxic to many insects. Chocolate is toxic to dogs. Essential oils from many spices (including the oils in Listerine) are both antibacterial and toxic to cats.

In other words, most drugs are pesticides. Digoxin comes from foxglove. Before digoxin, toad skins containing toxins were used to treat congestive heart failure. Metformin comes from French lilac. Cyclopamine, which blocks the Hedgehog pathway, was discovered because sheep that ate a certain plant bore lambs with only one eye. Coumadin was discovered the same way.

There is a tremendous amount of evolution that is focused and directed at host-pathogen systems. This manifests firstly as very complicated systems (such as the adaptive immune system, TAL proteins, and the CRISPR system), and secondly as high information content within those systems. Natural products can be seen as high-information-content molecules.

In fact, if we look at the history of biology, much of the early (and modern) work would not have been possible without highly evolved chemicals from nature. Elucidation of many biological systems, such as the citric acid cycle and the cytochrome system, would not have been possible without natural inhibitors used to dissect the pathways.

Natural products, such as plant-derived drugs, also tend to target biological choke points – parts of the animal’s physiology where redundancies are few. Evolutionarily this makes sense. This may be why the newer “targets” are less tractable to modern drug development. In general, if a biological system has never been targeted by a virus, bacterium, parasite, or plant, then the system is probably a poor target for a drug.

Beware of unintended consequences

In complex systems – in human societies, for example – it is often the case that you try to push the system one way and you end up having some other tangential effect, or sometimes the opposite effect of what you’re trying to accomplish.

Drug development is a complex endeavor. With any complex system, it is not always easy to predict what the unintended consequences will be. However, unlike most other endeavors, it takes so long for the effects of a decision to mature that it may be difficult to realize that unintended consequences have occurred.

Let me take two potential examples. The first is the practice of screening out drug candidates that are metabolized by, or induce, P450. The P450 system is a detoxification system that has evolved to do something very well – to recognize and detoxify biologically active molecules. Specifically, in herbivores and omnivores, it recognizes and degrades the pesticides in plants. Even modern humans, with largely domesticated diets, eat grams of natural pesticides every day. In other words, it’s a filter that has been optimized to select molecules that have the highest likelihood of being active drugs. (The other filter that’s highly evolved to detect drugs is the taste buds – dangerous chemicals like drugs are often bitter. The old medicinal chemists used to taste their new concoctions and pick out the bitter ones for testing.)

When we systematically weed out, during hit or lead selection, drugs that interact with the P450 system, we intend to reduce the likelihood of drug-drug interactions. At the same time, though, we may be reducing the likelihood that the drug we eventually develop will have a biological effect. So can one unintended consequence be that we are selecting out the very drug candidates most likely to succeed? We are certainly filtering out hydrophobic drugs, which may not be such a good thing.

The P450 system, because it is a detoxification system, metabolizes and detoxifies drugs. By screening for drugs unaffected by the P450 system, it is also possible we are selecting for molecules the body cannot detoxify, thereby potentially making the toxicity profile worse.

We are assuming that metabolism by P450 is independent of the likelihood of a molecule being biologically active, or at least that the benefit of removing the P450 interaction outweighs any deleterious effect on biological activity or on other factors that are important for a successful drug. These assumptions may not be correct.

The second is Lipinski’s hypothesis of fives. More commonly called the rule of five, I think a more accurate name is hypothesis of fives because it has not been proven to increase the ultimate (drug approval) success rate. Based on published data examining the validity of the rule, of which there is a lot less than you might expect, the hypothesis of fives seems quite good at selecting for drugs that are absorbed and therefore get past Phase I, and equally good or even better at selecting for drugs that are effective at getting to the liver and other tissues and causing significant toxicity. Wenlock published a paper in 2003 in J. Med. Chem. that is one of the few to actually compare the success rate of drugs against the characteristics Lipinski outlined. There seems to be a moderate correlation between molecular weight (and hence hydrophobicity) and success, but the H-bond acceptor/donor correlation is not very convincing at all.
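
For reference, here is a minimal sketch of the rule as it is usually stated – molecular weight ≤ 500, logP ≤ 5, hydrogen-bond donors ≤ 5, hydrogen-bond acceptors ≤ 10, with at most one violation tolerated – applied to a hypothetical candidate with made-up property values:

```python
def passes_rule_of_five(mol_weight, logp, h_donors, h_acceptors):
    violations = sum([
        mol_weight > 500,
        logp > 5,
        h_donors > 5,
        h_acceptors > 10,
    ])
    return violations <= 1   # Lipinski's formulation tolerates a single violation

# a hypothetical candidate with invented properties
print(passes_rule_of_five(mol_weight=480, logp=3.2, h_donors=2, h_acceptors=7))  # True
```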

The hypothesis of fives has been a boon to chemists because it lets them get their molecules into the clinic and past Phase I, but it’s not clear that it has helped much with the ultimate success rate. Possibly, it has made it harder to get a drug all the way to market. Once again, perhaps an example of unintended consequences.

Don’t get led astray by the name

This is a comment often made by seasoned drug developers: don’t get led astray by the name. We see a name like growth hormone and we think that the primary action of the molecule is growth. We see tumor necrosis factor and we think it shrinks tumors, when it may do the opposite. If TNF were called cardiac repair factor, would they have done a trial to block it in CHF?

Would it surprise you to know that earlier names for serotonin were thrombotonin, thrombocytin, and enteramine? Would it change your thoughts about what it did? Would it surprise you to know vascular endothelial growth factor was originally called vascular permeability factor?

It’s a natural tendency to believe names have something to do with function.

But unlike normal words, which evolve over time and generally come to reflect meaning, molecules are in many cases named for the first activity someone detects, and those names can be misleading. And as new functions are discovered, the scientific name often doesn’t change to reflect them.

Remember that diseases are intellectual constructs.

Diseases are set up by clinicians as categories. The categories change over time and, importantly, reflect the available therapies. Diseases are often separated by their responses to specific therapies. So don’t be afraid to split or lump diseases together when thinking about new drugs.

Single-strain mice are less predictive for drug development than you think

Remember that lab rodents are bred for studying physiology in the lab, and they’re genetically identical. Don’t forget that when you study 20 mice from the same strain, you’re testing the same animal 20 times. It is difficult to resist the temptation to conflate mouse models with real disease. Mice are not people. Or to be more accurate, mice that have a condition designed to mimic a disease are not people with the disease.
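
As a toy numerical sketch – all the numbers are invented – of why those 20 mice are statistically closer to one animal measured 20 times:

```python
import random
import statistics

random.seed(0)
GENETIC_SD, NOISE_SD = 0.8, 0.2      # hypothetical spreads, arbitrary units

strain_effect = random.gauss(1.0, GENETIC_SD)   # the one genotype the strain samples
inbred_mice = [strain_effect + random.gauss(0, NOISE_SD) for _ in range(20)]

reported_sem = statistics.stdev(inbred_mice) / 20 ** 0.5
print(f"mean response in this strain: {statistics.mean(inbred_mice):.2f}")
print(f"reported standard error:      {reported_sem:.2f}")   # looks very precise
print(f"unsampled genotype-level SD:  {GENETIC_SD:.2f}")      # the spread we never saw
```

The reported standard error reflects only within-strain noise; the genotype-to-genotype spread, which determines whether the result generalizes to an outbred population (let alone to people), never enters the experiment.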

Think like Darwin

Understand the evolutionary lineage of your target. If you perturb or target ancient molecules, you will find that you almost always get some biological effect, but it is often quite difficult to avoid unintended side effects. Most ancient molecules have been co-opted into serving multiple functions, especially functions that evolved from the same system. You may end up repeating the TeGenero experience.

For example, inflammation and immunity are closely linked, and most people would recognize that right away. However, some people may not know that in primitive animals and insects, the coagulation system and the immune system are one and the same. This is probably why so many factors involved in immune response overlap with factors involved in coagulation, and why it is difficult to fully understand one system without understanding the other.

As another example, serotonin is an ancient molecule used to signal satiety in mollusks and other animals very distant from humans. This means that serotonin’s role in appetite is likely very old. Typically, ancient systems that are shared across very disparate animals tend to have a central role in the biology of the functions they are involved in.

Look at natural experiments

If you have a target, the first thing you should do is see if there are natural mutants. If you don’t find any reports of the gene being mutated or deleted in people, that’s a bad sign. Typically, it means the gene is critical for survival, because almost every gene that is not vital has a case report or two of a human mutant in the literature. If the mutation has absolutely no effect, that’s a bad sign as well. What you want is a natural knockout with a phenotype that is consistent with your hypothesis.