The right road to drug discovery?
Fragment-based screening casts doubt on the Lipinski route.
Lead discovery is a high-risk endeavor. Several major pharmaceutical companies have acknowledged that they succeed in identifying a high-quality lead for a druggable protein target in only around one out of four attempts (1). Thus, although chemistry and screening throughputs have massively increased over the past decade, lead discovery productivity has not necessarily increased accordingly. This inability to identify multiple high-quality leads that are novel, tractable, and efficiently optimizable remains a key bottleneck in today's drug discovery environment.
Despite continued and unprecedented levels of investment in high-throughput screening (HTS) and combinatorial chemistry (combichem) technologies, these two techniques clearly do not provide solutions for all targets. Although most pharmaceutical companies have derived significant value from these technologies, there continues to be a gap in lead discovery productivity that the industry is seeking new methods to fill.
So, where might an increase in productivity be found? Some enhancement will probably come from further refining of the HTS and combichem approaches. Improvements in productivity also might come from the many innovative approaches now being developed, including knowledge-based technologies such as pharmacophore-based screening (2), virtual screening (3), and systems-based searches (4).
This article focuses on the concepts and precedents for fragment-based screening and explores the argument that the industry may currently be focused on making and screening the wrong types of compounds for lead generation.
The wrong compounds?
Many researchers say that compounds with good drug properties may not necessarily make the best leads for further optimization. That is, leadlike properties and druglike properties, although not mutually exclusive, are significantly different (5, 6).
Over the past decade, the industry has been active in defining druglike properties. The much-cited Lipinski rule of five (7) derives empirically from the vast amount of data that the industry has gathered on properties that maximize an oral drug candidate's probability of surviving development: molecular weight (MW) < 500, number of hydrogen bond donors < 5, number of hydrogen bond acceptors < 10, and ClogP < 5.
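These four cutoffs translate directly into a simple computational filter. A minimal sketch follows, using the inequalities exactly as stated above; the property values (MW, donor and acceptor counts, ClogP) are assumed to be precomputed by a separate descriptor-calculation step, which is not shown:

```python
def passes_rule_of_five(mw, h_donors, h_acceptors, clogp):
    """Check a compound against the Lipinski criteria as quoted in the text:
    MW < 500, H-bond donors < 5, H-bond acceptors < 10, ClogP < 5.
    All property values are assumed to be precomputed elsewhere."""
    return (mw < 500
            and h_donors < 5
            and h_acceptors < 10
            and clogp < 5)

# Illustrative values roughly matching a small oral drug such as aspirin
print(passes_rule_of_five(180.2, 1, 4, 1.2))   # True
# A large, lipophilic compound fails on MW
print(passes_rule_of_five(620.0, 2, 8, 4.5))   # False
```

In practice such a filter would sit at the front of a compound-selection pipeline, discarding candidates before any screening resources are spent on them.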
Although these rules are useful for assessing the risk profile of an oral drug candidate entering development, they do not necessarily define the properties of a good lead. Such druglike property rules, however, have been applied almost universally to the design and selection of compounds for lead discovery.
Rather than studying only druglike properties, Mike Hann and his colleagues at GlaxoSmithKline (Stevenage, U.K.) studied a set of more than 450 pairs of commercial drugs and their corresponding leads (5, 8). Thus, for the first time, a large body of data was analyzed from which the differences between historical druglike and leadlike properties could be derived.
On average, historical leads had lower MW, lower lipophilicity (ClogP), fewer aromatic rings, fewer hydrogen bond acceptors, and lower Andrews binding energy functions than the corresponding final drug. Other independent work also concluded that libraries consisting of compounds with MW = 100–350 and ClogP = 1–3 are highly likely to be greatly superior for finding leads to those comprising druglike compounds with higher MW and ClogP (6).
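The leadlike window cited above can be expressed as a filter in the same style as the rule-of-five check. This is only a sketch built from the two ranges quoted in the text (MW 100–350, ClogP 1–3); the other leadlike trends reported, such as fewer aromatic rings, are omitted because no numeric cutoffs are given:

```python
def is_leadlike(mw, clogp):
    """Leadlike window from the ranges cited in the text (ref. 6):
    MW between 100 and 350, ClogP between 1 and 3.
    Property values are assumed to be precomputed elsewhere."""
    return 100 <= mw <= 350 and 1 <= clogp <= 3

print(is_leadlike(250.0, 2.0))   # True: inside the leadlike window
print(is_leadlike(480.0, 4.5))   # False: druglike size and lipophilicity
```

Note that a compound can pass the rule of five yet fail this stricter window; that gap is precisely the article's point about screening the wrong compounds for lead generation.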
The MW and lipophilicity of initial leads typically increase during lead optimization. Thus, if the initial lead is already too druglike, the optimization needed to tailor the molecule to the new receptor or enzyme will likely yield a higher-MW, more lipophilic drug candidate, which may then no longer possess druglike properties. This suggests that when looking for leads, the Lipinski guidelines should be lowered, so that leads found by HTS leave more room for further property optimization.
Complexity and hit rate
This trend might in hindsight be intuitive, but the exponential severity of the fall in the number of ligand–receptor interaction matches as the lead's complexity increases is not. Additionally, making a large number of compounds via combichem tends to produce more complex compounds, which have a lower chance of matching the receptor as the number of interactions to be satisfied grows. This view is completely consistent with the observations of historical lead and drug pair differences discussed above.
A key aspect of molecular recognition is the probability that any one of a molecule's features is compatible with those of a designated binding site. Effective molecular recognition is essentially the matching of properties of a molecule with its binding pocket through complementarity of shape and electronic properties, such as charge and hydrophobicity. In their simplest form, these are localized recognition elements that are highly detrimental to binding if incorrectly matched but that are beneficial (or neutral) if correct.
Molecular recognition can be simply modeled in one dimension by representing the ligand and the binding site as sequences of positive and negative features that must align in a complementary fashion. This model can be used to explore the effect that increasing ligand complexity (as indicated by its number of features) has on its chance of matching a binding site of given complexity, by calculating the probability that a randomly chosen molecule matches the site. In this extreme model, a single mismatch is defined as sufficient to totally obviate binding.
The probability of finding a match decays exponentially as the size of the ligand increases, because as ligand complexity grows there are far more ways of obtaining a mismatch than a match. This stands in tension with the fact that, when a complex ligand does match, the observed affinity will be high. One interpretation of this model is that the industry, in trying to quickly identify high-affinity matches, has focused on screening complex druglike compounds, but that in doing so it has unwittingly screened compounds whose complexity leads on statistical grounds to low hit rates. A way out of this dilemma, then, is to initially screen simpler and more leadlike compounds that have a higher probability of efficiently binding (matching), even if they deliver less active starting points.
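The exponential decay can be illustrated with a toy Monte Carlo version of this one-dimensional model. The specific parameters here (a binding site of 12 random features, a ligand slid along every alignment, binding only on a perfect match) are illustrative assumptions, not values from the original study:

```python
import random

def match_probability(n_features, trials=20_000, site_len=12, seed=0):
    """Estimate the chance that a random +/- ligand of n_features matches
    (at any alignment) a random +/- binding site of site_len features.
    A single mismatched feature is taken to abolish binding entirely."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        site = [rng.choice('+-') for _ in range(site_len)]
        ligand = [rng.choice('+-') for _ in range(n_features)]
        # Slide the ligand along the site; any alignment with zero
        # mismatches counts as a binding event.
        for offset in range(site_len - n_features + 1):
            if all(ligand[i] == site[offset + i] for i in range(n_features)):
                hits += 1
                break
    return hits / trials

# Match probability falls steeply as ligand complexity grows
for n in (2, 4, 6, 8):
    print(n, match_probability(n))
```

Running this shows the probability dropping from near certainty for a two-feature ligand to a few percent for an eight-feature one, mirroring the statistical argument in the text.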
Consider a simple pyrimidine-based library with three variable points, each of which could be one of 100 different substituents. Testing every possible druglike molecule would require the production and screening of 10⁶ compounds. However, by adopting a fragment-based approach, only 300 would need to be made and screened to explore the same chemical space (Figure 2). This sampling efficiency is based on the additive nature of the fragments, as compared with the multiplicative enumeration needed for the larger druglike compounds.
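The additive-versus-multiplicative arithmetic above can be made explicit:

```python
# Sampling arithmetic for the three-point library described in the text:
# three variable positions, each drawn from 100 possible substituents.

n_positions = 3
n_substituents = 100

# Enumerating every full-sized druglike combination is multiplicative ...
full_library = n_substituents ** n_positions      # 100^3

# ... whereas screening each fragment position separately is additive.
fragment_screen = n_substituents * n_positions    # 3 x 100

print(full_library, fragment_screen)  # 1000000 300
```

The same chemical diversity is thus surveyed with a screening effort more than three orders of magnitude smaller, provided the fragment contributions are roughly independent.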
Results and prospects
In each case, the molecules derived were novel and thus unlikely to have been exactly represented in screening collections for HTS work. However, the strength of the fragment approach is the potential to use less complex starting points and some evidence of activity to work into areas of chemistry that have not been previously explored. Patentability (that is, novelty) continues to be a necessity for undertaking an expensive drug discovery campaign, and these methods present opportunities to explore chemical structures that are not already embedded in corporate collections or suppliers' catalogs.
Several new companies are basing at least part of their lead discovery strategy on the coupling of fragment-based screening to innovative assay technologies. Examples include Astex Technology (www.astex-technology.com), which uses high-throughput X-ray crystallography; Triad Therapeutics (www.triadthera.com), which uses NMR; and Graffinity (www.graffinity.com), which uses surface plasmon resonance.
Hits can be weak
So, will fragment-based screening prove to be a major breakthrough technology? Or will it, like many other technologies, not quite meet its early promise? Only results and time will tell, but with the rapid growth in the field it will not be long before we know what position fragment-based screening will assume in the armory of methods that will ultimately be needed to find tractable leads for every important target.
Robin Carr is vice president of drug discovery at Astex Technology (Cambridge, U.K.), and Mike Hann is director of computational and structural sciences at GlaxoSmithKline's Medical Research Centre (Stevenage, U.K.). Send your comments or questions regarding this article to email@example.com or the Editorial Office by fax at 202-776-8166 or by post at 1155 16th Street, NW; Washington, DC 20036.