Modern Drug Discovery, September 2001, Vol. 4, No. 9, pp 26–28, 30, 32.
Focus: Molecular Modeling
Feature Article
2001: A dock odyssey

RANDALL C. WILLIS

Computers allow researchers to design potential drugs from libraries of compounds that don’t exist in reality (yet).

Every day, hundreds of research journals contain articles about a new pathway or gene that supposedly provides all the answers for a given disease or infection. Whether the study probes the mechanism by which malarial parasites feed in the human bloodstream or identifies yet another gene involved in the pathogenesis of Alzheimer’s disease, most published researchers are confident that a solution is just around the corner. Ask any chief scientific officer at a pharmaceutical firm, however, and he or she will offer the sobering thought that the real work on the problem has just begun.

In trying to find low-molecular-weight compounds that combat disease, researchers have concentrated their efforts on two fronts, one of which involves experimental high-throughput screening (HTS) of large libraries of compounds (Figure 1). Several previous articles in Modern Drug Discovery have presented the ways in which advances in automation, assay development, and combinatorial chemistry have led to an explosion of HTS. Increased analytical sensitivity—approaching the detection of single molecules—and refinements in robotics precision have allowed throughput to increase dramatically, such that the 96-well plate has largely been replaced by the 384-well plate, which in turn will soon be replaced by the 1536-well plate. Similarly, improvements in both fluorophore chemistry and genomic analysis have pushed the development of microarrays from oligonucleotides into proteins and from answers of yes or no into answers of where, when, and by how much.

But these advances come with a hefty price tag. Precision robotics require advanced electronics and exacting tooling. Ever-expanding combinatorial libraries require equally prolific data management systems, which in turn rely on expanded computational power. Furthermore, although chemical assays can be relatively simple and have few components, biological assays, especially those that are cell-based, can be complex and suffer from a high rate of false positives. And we have yet to consider the physical storage space required for the vast arrays of equipment, supplies, and products. Finally, there is a certain irony in the fact that HTS technologies can be labor-intensive and require highly trained staff. These factors, and the failure of several products to proceed beyond Phase III trials, combine to make any successful product very expensive.

This cost, of course, has not meant the end of HTS. Instead, several research groups are concentrating on smaller subsets of problems and solutions, such as by focusing on one target family or optimizing compound libraries to minimize redundancy. Another angle of attack, however, is the use of virtual library screening and de novo drug design using computational molecular modeling methods.

As the structures of more potential drug targets are elucidated, the opportunity for computers to perform initial binding studies is increasing. By computationally docking a ligand to a protein, you sidestep assay complications such as compound solubility and reduce the need to maintain extensive physical compound libraries. In fact, some protocols screen libraries from outside sources and thereby eliminate the need for in-house resources. Furthermore, it may not be necessary to use the structure of the protein of interest as a docking template. Instead, the structure of a related or homologous protein may suffice to narrow the search for high-affinity ligands, which can then be tested and refined with more traditional methods.

Although the technology of docking a ligand to a protein has been around for 30 years, computational advances in the late 1980s and 1990s empowered the technology to reach new heights. Tables 1A and 1B list some of the available programs. Most of the programs use the same basic principles to calculate a bound structure, looking for sites of interaction between a ligand and a target (metaphorically described in Figure 2). These interactions can involve hydrogen bonds, covalent attachments, metal–ligand bonds, water molecules, ionic or dipole interactions, van der Waals contacts, or hydrophobic interactions.
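
To make the idea of “looking for sites of interaction” concrete, here is a minimal Python sketch that enumerates ligand–protein atom pairs and labels them by crude distance cutoffs. The atom record, the polar/nonpolar flag, and the cutoff values are illustrative assumptions, not any particular program’s data model; real docking codes use far richer atom typing and geometry (hydrogen-bond angles, for instance).

```python
import math

def classify_contacts(ligand, protein, hbond_cut=3.5, vdw_cut=4.5):
    """Label ligand-protein atom pairs by crude distance criteria.
    Each atom is (element, x, y, z, is_polar); polar-polar pairs within
    hbond_cut angstroms count as hydrogen bonds, and nonpolar-nonpolar
    pairs within vdw_cut count as hydrophobic/van der Waals contacts."""
    contacts = []
    for la in ligand:
        for pa in protein:
            d = math.dist(la[1:4], pa[1:4])
            if la[4] and pa[4] and d <= hbond_cut:
                contacts.append(("hbond", la[0], pa[0], round(d, 2)))
            elif not la[4] and not pa[4] and d <= vdw_cut:
                contacts.append(("hydrophobic", la[0], pa[0], round(d, 2)))
    return contacts

# Tiny usage example with made-up coordinates (in angstroms):
ligand = [("O", 1.0, 0.0, 0.0, True), ("C", 3.5, 1.0, 0.0, False)]
protein = [("N", 1.0, 3.2, 0.0, True), ("C", 4.0, 3.0, 0.0, False)]
print(classify_contacts(ligand, protein))
```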

In a recent paper (1), Didier Rognan and colleagues at ETH Zurich divided the techniques into several categories:

  • fast shape matching (e.g., DOCK and Eudock),
  • incremental construction (e.g., FlexX, Hammerhead, and SLIDE),
  • Tabu search (e.g., PRO_LEADS and SFDock),
  • genetic algorithms (e.g., GOLD, AutoDock, and Gambler),
  • Monte Carlo simulations (e.g., MCDock and QXP), and
  • distance geometry (e.g., Dockit).

Given the overall similarity of the programs, I will describe only a few of them.

Developed by Irwin Kuntz and colleagues at the University of California, San Francisco (UCSF), DOCK fits compounds to a target’s binding site such that three or more atoms interact with specific contact points on the binding site surface. The contact points are effectively spheres around atoms in the active site, and the type of interaction expected largely determines the radius of each sphere. The program quickly docks each compound from a library, whether proprietary or commercially available (see Table 2), and ranks the compounds according to a scoring function. In an effort to address the fact that few compounds are rigid structures, recent DOCK releases have allowed multiple ligand conformations. Combining DOCK with combinatorial chemistry, researchers at Northwestern University (Evanston, IL) and the University of Modena (Italy) identified a novel micromolar-level inhibitor of thymidylate synthase, an enzyme critical for DNA metabolism and a potential antimicrobial target (2).
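
The sphere-matching test itself is easy to caricature. The toy function below is not DOCK’s actual code; it simply accepts a ligand pose when at least three of its atoms fall inside precomputed contact spheres, which is the criterion described above. The sphere centers and radii are assumed inputs.

```python
import math

def matches_site(ligand_atoms, site_spheres, min_contacts=3):
    """Toy sphere-matching test: accept a pose when at least
    `min_contacts` ligand atoms each lie inside some contact sphere.
    ligand_atoms: list of (x, y, z) coordinates for one docked pose.
    site_spheres: list of ((x, y, z), radius) around active-site atoms."""
    hits = 0
    for atom in ligand_atoms:
        for center, radius in site_spheres:
            if math.dist(atom, center) <= radius:
                hits += 1
                break  # one matching sphere per atom is enough
    return hits >= min_contacts
```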

Rather than try to dock an entire compound at once, Hammerhead breaks a test compound into fragments carrying as few as two rotatable bonds (three linked atoms) and tries to dock the fragments into a binding site similar to that used by DOCK. Once it has established a computational beachhead, it incrementally adds the next fragment in the chain, optimizing its placement before joining it to the initial fragment and moving on to the next one. In a recent paper about Hammerhead, UCSF professor Ajay Jain and colleagues described the analysis of an imaginary compound with eight rotatable bonds (3). To dock the complete ligand and subsequently test the various conformations at each bond would require approximately 6500 (i.e., 3⁸) positions or alignments. Breaking the compound into three fragments of two rotatable bonds and aligning them incrementally, however, would require only 117 alignments, assuming two rounds of refinement using the five best-scoring partial alignments [i.e., (3 × 3²) + (5 × 3²) + (5 × 3²)].
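
The bookkeeping in Jain’s example is easy to verify. The short calculation below reproduces it, assuming three sampled conformations per rotatable bond (the figure of roughly 6500, i.e., 3⁸ = 6561, implies this); the first term covers placing the initial fragment, and each extension grows only the five best-scoring partial alignments.

```python
rotamers = 3  # assumed conformations sampled per rotatable bond

# Exhaustive docking: all 8 bonds varied at once.
exhaustive = rotamers ** 8                      # 6561, roughly 6500

# Incremental construction: place the first two-bond fragment, then
# extend twice from the 5 best-scoring partial alignments.
incremental = (3 * rotamers**2) + (5 * rotamers**2) + (5 * rotamers**2)

print(exhaustive, incremental)                  # 6561 117
```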

Programs like GOLD, which was developed by researchers at the Cambridge Crystallographic Data Centre (Cambridge, UK), GlaxoWellcome (Uxbridge, UK), and the University of Sheffield (Sheffield, UK), rely on genetic algorithms to dock ligands into a binding site. The algorithm creates a virtual population of possible docked structures and then puts the structures through rounds of mutation (changes in the side chains within a single structure) or breeding (exchanges of chains between structures) in the hopes of finding low-energy structures. GOLD also features an “island” function that isolates specific regions of the docked molecule from the rest of the system, rendering them unchangeable.
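
For readers unfamiliar with the approach, the skeleton below shows a generic steady-state genetic algorithm over a “chromosome” of torsion angles. It is a sketch of the technique, not GOLD’s implementation: GOLD’s encoding, operators, island function, and fitness function are all more elaborate, and the `score` argument here is a placeholder for a real docking score.

```python
import random

def genetic_dock(score, n_genes, pop_size=50, generations=200, mutate_p=0.1):
    """Generic steady-state genetic algorithm. A chromosome is a list of
    torsion angles in degrees (n_genes >= 2); `score` maps a chromosome
    to a fitness value, higher meaning a better (lower-energy) pose."""
    pop = [[random.uniform(0, 360) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Breeding: one-point crossover between two random parents.
        p1, p2 = random.sample(pop, 2)
        cut = random.randrange(1, n_genes)
        child = p1[:cut] + p2[cut:]
        # Mutation: randomly perturb individual torsions.
        for i in range(n_genes):
            if random.random() < mutate_p:
                child[i] = random.uniform(0, 360)
        # The child replaces the worst member if it scores better.
        worst = min(range(pop_size), key=lambda i: score(pop[i]))
        if score(child) > score(pop[worst]):
            pop[worst] = child
    return max(pop, key=score)

# Usage with a dummy fitness that favors torsions near 180 degrees:
best = genetic_dock(lambda c: -sum((a - 180) ** 2 for a in c), n_genes=8)
```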

Of course, some in the scientific community think that a dose of reality should be meted out with the marvel of technology. “Computational or molecular docking does not really decrease the amount of experimental work that one needs to do. It changes the focus,” says Nigel Richards, a chemist at the University of Florida (Gainesville). Richards combines computational chemistry with benchtop biochemistry to elucidate the functions of various protein systems and determine ways to inhibit their activities. “For example, you may do more experimental work to validate [your claims], so that, ultimately, you may make fewer compounds to get the active one that you need,” says Richards. In his opinion, the work you save in one area has to be paid back in another.

De novo design
One area of drug discovery that generates a lot of controversy is the rational creation of a compound based on proposed binding restraints, so-called de novo drug design. “There are two problems with using screening to find an initial lead compound followed by structure-based optimization of that compound,” wrote Diane Joseph-McCarthy of Wyeth Research (Cambridge, MA) recently (4). “If the initial compound does not already exist, it will never be found; and, in this process, a great deal of time and effort goes into refining a few lead compounds, and thereby many of the resulting drug candidates for a given target are chemically similar to one another.”

Like the basic docking algorithms, de novo design programs come in all shapes and sizes, but they essentially fall into two categories: energy-based and rules-based. Most energy-based methods rely on a GRID algorithm, which makes a three-dimensional grid within the active site and uses an energy function such as a molecular-mechanics force field to position the ligand’s functional groups or atoms.
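
A crude sketch of the grid idea follows: precompute a probe energy at every grid point in the active site, then score a pose by looking up one value per ligand atom. The single Lennard-Jones-like probe, the parameter values, and the nearest-point lookup are simplifying assumptions made for this sketch; programs such as GRID use empirical force-field terms for many different chemical probes.

```python
import numpy as np

def build_grid(protein_atoms, origin, spacing, shape, eps=0.1, sigma=3.4):
    """Precompute a Lennard-Jones-like probe energy at each grid point.
    protein_atoms: (N, 3) array of coordinates; origin: (3,) array for
    the grid corner; spacing in angstroms; shape: (nx, ny, nz)."""
    grid = np.zeros(shape)
    for ix in range(shape[0]):
        for iy in range(shape[1]):
            for iz in range(shape[2]):
                p = origin + spacing * np.array([ix, iy, iz])
                r = np.linalg.norm(protein_atoms - p, axis=1)
                r = np.clip(r, 0.5 * sigma, None)  # avoid the r -> 0 singularity
                grid[ix, iy, iz] = np.sum(
                    4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6))
    return grid

def pose_energy(grid, origin, spacing, ligand_atoms):
    """Score a pose by nearest-grid-point lookup, one term per ligand
    atom. Assumes every atom of the pose lies inside the grid."""
    idx = np.round((np.asarray(ligand_atoms) - origin) / spacing).astype(int)
    return sum(grid[tuple(i)] for i in idx)
```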

In contrast, the rules-based algorithm LUDI determines hydrogen bonding and lipophilic interactions by using a purely geometrical approach, recognizing that a given functional group can take one of several positions within an active site. Upon docking two or more small compounds or fragments, LUDI bridges the smaller components to make a new molecule with an affinity higher than that of each of the starting fragments. The problem with this technology comes in establishing the initial active site grid. According to Richards, even the slightest error can lead to significantly different results.
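
The bridging step can be caricatured as a purely geometric compatibility test: given the attachment atoms of two independently docked fragments, ask which linkers can span the gap. The linker table below is hypothetical, with rough reach ranges invented for illustration; LUDI’s actual fragment library and its geometric rules are far richer.

```python
import math

# Hypothetical linker table: name -> (min, max) span in angstroms.
LINKERS = {
    "-CH2-":      (2.3, 2.7),
    "-CH2CH2-":   (3.4, 4.0),
    "-C(=O)NH-":  (3.6, 4.3),
}

def bridgeable(attach_a, attach_b, linkers=LINKERS):
    """Return the linkers whose reach matches the distance between the
    attachment atoms (x, y, z) of two docked fragments."""
    d = math.dist(attach_a, attach_b)
    return [name for name, (lo, hi) in linkers.items() if lo <= d <= hi]

print(bridgeable((0.0, 0.0, 0.0), (3.8, 0.0, 0.0)))  # ['-CH2CH2-', '-C(=O)NH-']
```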

The final frontier
Regardless of the method by which you choose to dock molecules, however, the accuracy of the model comes down to the strength of the method’s scoring function. It is in this area that perhaps the most work has been done recently (5). Most scoring functions have been developed alongside docking algorithms and are based on empirical training sets. Thus, the accuracy of a docking model is based on how often the intermolecular interactions being examined have occurred in the structures that were used in training. This knowledge-based or statistical approach also allows the incorporation of factors, such as solvation and flexibility, for which more traditional theoretical methods such as force-field potentials cannot account.
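
One common formulation of such a knowledge-based score is an inverse-Boltzmann term, -ln(observed/expected), computed per interaction type from a training set of known complexes. The sketch below assumes the pair-type labels have already been extracted from the training structures and from a reference state, and it uses a crude floor for unseen pairs; published potentials are binned by distance and smoothed far more carefully.

```python
import math
from collections import Counter

def statistical_potential(observed_pairs, reference_pairs):
    """Per pair type, score = -ln(p_observed / p_reference). Pair types
    seen more often than chance in known complexes come out negative
    (favorable); rarer-than-chance pairs come out positive (penalized)."""
    obs, ref = Counter(observed_pairs), Counter(reference_pairs)
    n_obs, n_ref = len(observed_pairs), len(reference_pairs)
    scores = {}
    for pair, count in obs.items():
        p_obs = count / n_obs
        p_ref = ref.get(pair, 1) / n_ref  # crude floor for unseen pairs
        scores[pair] = -math.log(p_obs / p_ref)
    return scores
```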

The complexity of scoring algorithms ranges widely. Some systems use a simple heuristic scoring model of +1 for every “good” situation, such as when two hydrophilic moieties interact, and –1 for every “bad” situation, such as when a hydrophobic side chain approaches a hydrophilic patch. Other researchers view this method as too simplistic and have developed more complicated scoring functions. But, in Richards’ opinion, the simple algorithm is as good as the sophisticated one, and both leave much to be desired. “The scoring algorithms are so bad right now that it doesn’t matter how you score. At the end of the day,” he says, “there is no rational way of deciding if this score is better than that score.”
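
The +1/–1 scheme is simple enough to state in a few lines. Here is one literal reading of it, assuming each contact has already been reduced to a pair of polar/nonpolar flags:

```python
def heuristic_score(contacts):
    """The simple +1/-1 scheme: +1 when like interacts with like (e.g.,
    two hydrophilic moieties), -1 for a mismatch (e.g., a hydrophobic
    side chain against a hydrophilic patch). `contacts` is a list of
    (ligand_is_polar, protein_is_polar) boolean pairs."""
    return sum(1 if lp == pp else -1 for lp, pp in contacts)

print(heuristic_score([(True, True), (False, True), (False, False)]))  # 1
```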

Richards acknowledges that many biochemists can look at docked molecules and add personal (and accurate) intuition to the computational scoring function. Alternatively, he offers, it is possible to supplement the docking with a massive amount of experimental data, narrowing the options in the process. “But if you’re doing that,” he suggests, “you can ask the question, ‘Why did I do the calculation?’”

To illustrate his concern, Richards gives an example involving the interaction of two proteins with known structures. One protein contains a module that binds to peptides containing phosphotyrosine residues, while the other protein contains a domain with four tyrosine residues. Your challenge is to dock the two molecules together correctly. Computationally, you would add a phosphate group to the tyrosines and then try to associate each one with the binding domain.

“Every one of the four possible complexes will look right—sterically, complementary, electrostatically,” Richards offers. “The error in molecular mechanics [energy calculations] is about 10%, so you can’t tell the difference. So how do you use software to know which one of those complexes is right?” Without having performed at least some biological testing, it is almost impossible to determine which of the four tyrosine residues is the actual target of the binding domain.

What is becoming increasingly obvious in the world of drug discovery is that no single technology will provide all of the answers. The physical, chemical, and computational limits described in this article have slowed—and will continue to slow—progress, but concerted and collaborative efforts may just be the key to discoveries yet to come.

References

  1. Bissantz, C.; Folkers, G.; Rognan, D. J. Med. Chem. 2000, 43, 4759–4767.
  2. Tondi, D.; et al. Chem. Biol. 1999, 6, 319–331.
  3. Welch, W.; Ruppert, J.; Jain, A. N. Chem. Biol. 1996, 3, 449–462.
  4. Joseph-McCarthy, D. Pharmacol. Ther. 1999, 84, 179–191.
  5. Gohlke, H.; Klebe, G. Curr. Opin. Struct. Biol. 2001, 11, 231–235.

Further reading

  • Designing Bioactive Molecules: Three-Dimensional Techniques and Applications. Martin, Y. C., Willett, P., Eds.; American Chemical Society: Washington, DC, 1998.


Randall C. Willis is an assistant editor of Modern Drug Discovery. Send your comments or questions regarding this article to mdd@acs.org or the Editorial Office by fax at 202-776-8166 or by post at 1155 16th Street, NW; Washington, DC 20036.


Copyright © 2001 American Chemical Society