Modern Drug Discovery, May 2001, Vol. 4, No. 5, pp 32–34, 36, 39
Focus: High-Throughput Screening
Feature Article
Screening’s age of insecurity

DEBORAH J. AUSMAN

High-throughput screening hasn’t given us more drugs. Can better data management make a difference?

[Opening art: photo by Tony Fernandez]
Researchers at today’s biotechnology and pharmaceutical companies should be brimming with confidence. The data are fruitful and multiplying. Many crucial target systems have been well characterized, and druglike behavior is better understood. Besides this knowledge, we have the technology—the robust assays, automation, and detection methods—necessary to move from targets, to hits, to leads, to drugs.

Yet the pace of research remains frenetic, and the competitive landscape cutthroat. That’s because organizations have little to show for all of the innovations of the 1990s, at least in terms of new chemical entities. High-throughput screening (HTS), in particular, has not lived up to the expectations that greeted its initial adoption in the first half of the decade. HTS has matured as a field, but lead attrition rates remain high, and little time has been shaved off R&D pipelines. Screeners seem confident in their programs’ ability to generate data. But they are less sure whether they and their organizations are doing the best things with the data they generate.

“What we’ve entered now is an age of insecurity,” says Peter Hecht, senior vice president of discovery research operations at Tripos. “We have more than ever to choose from in terms of targets, compounds, and options for what we should be doing next. But we’re discovering that all the data around us isn’t making the answers come any easier.”

Maturation and industrialization
In the beginning, there was screening. Then came the 96-well plate and the concept of increasing throughput. The appeal was obvious: Why screen just a few compounds in an assay when you could screen whole plates at a time, particularly when a convenient new technique called combinatorial chemistry was letting you synthesize new compounds faster than ever before? By the mid-1990s, most pharmaceutical and biotechnology organizations had initiated HTS programs, and it was not long before the “ultra” modifier was added to the acronym.

Yet while screening’s modifiers have become increasingly superlative, emphasizing both speed and miniaturization, today’s screeners often reject these labels. “We don’t have a badge that says, ‘We are high throughput,’” points out Mike Snowden, head of the molecular discovery department at Glaxo Wellcome.

Snowden credits HTS’s active “conference culture” with perpetuating the notion that the technique is somehow set apart. In practice, Snowden and others say screening is screening, scaled according to the project at hand and balanced on a continuum with respect to quality and throughput. Says Bob Burrier, senior director of biochemical technologies at GelTex Pharmaceuticals, “We chose to focus on developing an infrastructure that would support our various projects, rather than a platform for a particular volume of screening. We have the ability to screen 100,000 data points a day, but we only do it when the project requires it.”

Such considerations demonstrate that screening has come of age—an industrial age in which high-throughput techniques are no longer adopted faddishly but applied as just one more tool in the biologist’s research repertoire. Much of screening’s maturation can be credited to experience, which has revealed the technique’s strengths as well as its limitations. But technological advances have also played an important role. Factors influencing screening’s maturation and industrialization include the following:

  • screening scientists having more experience managing the logistics of scaling assays to a high-throughput environment;
  • access to more sensitive detection devices, particularly the wide range of fluorescence techniques that are now commonplace even in primary screening;
  • standardization of the 96- and 384-well plate formats, which can now accommodate most assays. Rather than pursuing further miniaturization, most screening groups are looking at pooling or high-content screening as methods for increasing throughput;
  • better-engineered automation, with more options available both in terms of workstations and fully integrated automated systems; and
  • the emergence of software systems tailored specifically to the needs of screening scientists and their organizations.

Managing data: Then versus now
Screening’s industrialization, particularly the availability of robust data management tools, has freed screeners to adopt a more systematic, process-oriented mind-set toward the techniques. “If you’d asked me five years ago to name my crucial data management need, I’d have answered, ‘Just get the data into Excel, please,’” says Snowden. “Today, getting data out of the readers, processing it for percent inhibition, and updating Oracle tables are of little interest to anyone because we can all do it, ad nauseam. It’s what you do with the data in your database, how you mine it—that’s where the competitive advantage is.”

The need to pause and think more carefully about data explains why increasing throughput has become less of a priority even as screening technology has made it easier to achieve. In the early days of screening, most assays generated single data points, such as a percent-inhibition value. Not surprisingly, higher-throughput techniques are at their most productive when generating data that can be easily interpreted. In fact, today’s data management systems are often customized to identify and automatically advance compounds from a screening run possessing a particular percent-inhibition value.
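To make the mechanics concrete, here is a minimal sketch in Python of such an auto-advance rule. The data layout, compound IDs, control values, and the 50% cutoff are hypothetical illustrations, not any vendor’s actual system; percent inhibition is scaled against each plate’s uninhibited (negative) and fully inhibited (positive) control wells.

```python
# Minimal sketch of a percent-inhibition hit filter. Hypothetical data layout;
# real systems read these values from plate readers and corporate databases.

def percent_inhibition(signal: float, neg_mean: float, pos_mean: float) -> float:
    """Scale a raw well signal to 0-100% inhibition using plate-control means."""
    return 100.0 * (neg_mean - signal) / (neg_mean - pos_mean)

def advance_hits(wells, neg_mean, pos_mean, cutoff=50.0):
    """Return compound IDs whose percent inhibition meets or exceeds the cutoff."""
    return [cid for cid, signal in wells.items()
            if percent_inhibition(signal, neg_mean, pos_mean) >= cutoff]

# Example: raw reader counts for three compounds, plus plate-control means.
plate = {"CMPD-001": 410.0, "CMPD-002": 950.0, "CMPD-003": 180.0}
print(advance_hits(plate, neg_mean=1000.0, pos_mean=100.0))
# -> ['CMPD-001', 'CMPD-003']
```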

But the high-throughput model works less well as the data get more complex—and complexity comes in many forms. Modern assay techniques, such as fluorescence and pooling, generate multiple data points for a well that must be resolved before scientists can act on the results. Cell-based assays and other high-content screens also tend to generate data that are not readily mappable back to a standard database unit. And then there are the logistical concerns arising from the focus on projects rather than particular screening platforms. Compounds stored on 96-well plates, for example, may be run in 384-well plates, requiring some way to map back to the original compounds before scientists can begin comparing plates or deciding which compounds show the most promise in an assay.
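One widely used convention for that 96-to-384 mapping is interleaved quadrant stamping, in which four 96-well source plates fill alternating rows and columns of a single 384-well plate. The sketch below assumes that convention (a site using block stamping would need a different formula), and the plate numbering is hypothetical.

```python
# Sketch of tracing a 384-well position back to its source 96-well plate,
# assuming interleaved quadrant stamping: source plates 1-4 occupy the
# (even,even), (even,odd), (odd,even), and (odd,odd) row/column positions.

import string

def well_384_to_96(row: int, col: int) -> tuple:
    """Map a 0-indexed 384-well (row, col) to (source plate 1-4, 96-well name)."""
    quadrant = 2 * (row % 2) + (col % 2)   # which of the four source plates
    src_row, src_col = row // 2, col // 2  # position on that 96-well plate
    return quadrant + 1, f"{string.ascii_uppercase[src_row]}{src_col + 1}"

# Example: well B2 of the 384-well plate (row 1, col 1, 0-indexed) traces
# back to well A1 of source plate 4 under this convention.
print(well_384_to_96(1, 1))  # -> (4, 'A1')
```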

“As you generate more and more interrelated data points, you end up spending much more time interpreting that data,” explains Neil Carlson, a senior software engineer in the advanced discovery sciences group at Applied Biosystems. “The value of focusing purely on the volume of data produced drops as the amount of knowledge you gain from each data point increases.”

Screeners have confidence that they can generate data. They know they can put the results into a database, even those generated by newer detection devices or by more complicated assays. The bottlenecks—and the opportunities for creative solutions to speed the process—now center on two key questions: Are the data produced by screening any good? And how can the good data be used to make better, faster decisions?

Online, unseen QC

The emphasis on quality and process control—on tracking HTS data in real time during the course of a project—is a direct outgrowth of screening’s industrialization. Not that quality is a new concern for screening scientists, who have always had tools available to help them spot problems with an assay. But screeners today don’t just want to weed out the junk; they want to stop it before it starts.

“A long time ago, when screening was a manual process, you could set up your 50 plates to run, get a cup of tea, wait for the reader to read the plates, and then sit down to work out whether the experiment had failed or not,” says Snowden. But today, Snowden and colleague Chris Molloy, head of HTS information technology (IT) and automation, note that online, immediate analysis of assay performance saves organizations critical time and money. Glaxo Wellcome’s online quality control system monitors and examines plates as they come off robots. Potential problems can be identified immediately and the entire operation shut down to make necessary corrections. It is a process that evokes a manufacturing assembly line rather than a research laboratory, and Molloy points out that such online, unseen data management has changed the screening work flow by removing tedious tasks. “Screeners today can spend time on more cerebral problem solving rather than wading through the mass of high-quality data that doesn’t need their attention,” Molloy says.
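The article does not specify Glaxo Wellcome’s acceptance criteria, but a common plate-level statistic for this kind of online check is the Z′-factor (Zhang et al., J. Biomol. Screen., 1999), computed from each plate’s control wells; by convention, a Z′ above roughly 0.5 indicates a usable assay window. Below is a minimal sketch of the idea, with the plate layout, readings, and halt behavior assumed for illustration.

```python
# Minimal sketch of online plate QC. The Z'-factor measures the separation
# between a plate's positive and negative control wells; plates scoring below
# the threshold are flagged so the run can be halted and corrected.

from statistics import mean, stdev

def z_prime(pos: list, neg: list) -> float:
    """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    window = abs(mean(pos) - mean(neg))
    return 1.0 - 3.0 * (stdev(pos) + stdev(neg)) / window

def check_plate(plate_id: str, pos: list, neg: list, threshold: float = 0.5) -> bool:
    """Flag the plate (and, in a real system, pause the robot) if Z' is poor."""
    score = z_prime(pos, neg)
    status = "OK" if score >= threshold else "flagged -- halt run"
    print(f"{plate_id}: Z' = {score:.2f} ({status})")
    return score >= threshold

# Example with hypothetical control readings from one plate.
check_plate("PLATE-0042", pos=[95.0, 102.0, 98.0], neg=[8.0, 12.0, 10.0])
```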

The availability of commercial software designed for screening has helped companies implement their own unique data management tools. Glaxo Wellcome’s internal IT staff built the quality control functionality as a module to an underlying commercial system; they also created software for hit definition and cheminformatics. All three of the major suppliers of screening data management software claim to provide fully functional packages along with tools and consulting services for helping customers build custom solutions (see the “Software for screeners” box below). The reason is simple, according to GelTex’s Burrier. “For commercial software to be useful, it must not only work out of the box, but adapt easily. Drug discovery is arguably the most data-intensive industry out there, so we need products that will move with us as our needs change,” Burrier says.

Yet some companies dislike the overhead associated with out-of-the-box commercial applications. “We found that most commercial solutions forced us to follow a number of procedures to get the data into the database and knowledge back out of the database that just weren’t relevant to our existing processes,” says Applied Biosystems’ Carlson. Applied Biosystems chose to build its own visualization system for assessing screening performance, which enables a scientist to validate and analyze screens in just two hours rather than eight.

Software for screeners
In the early days of screening, home-grown software ruled the day. But in the past three years, vendor solutions have become more robust, giving organizations viable alternatives to building and maintaining systems themselves. In addition, many scientific software vendors have realized that tools supporting tasks such as cheminformatics integration and data visualization are essential, rather than peripheral, to screening. The following are a few of the providers and their products.

Screening data management
IDBS (www.idbs.co.uk): Offers ActivityBase, an integrated chemical and biological data management system, and two data analysis packages: XLfit, for curve-fitting, and the SARgen reporting tool.

MDL Information Systems, Inc. (www.mdli.com): Offers Assay Explorer for biological data management and a plate management system, Apex, which was released this April.

Pharmacopeia, Inc. (www.oxmol.com): Pharmacopeia acquired Oxford Molecular last year and with it two software packages: OMMM, a screening plate and data processing system, and the RS3 Discovery HTS biological data management system.

Other providers
Each of the vendors listed above provides complementary systems for integrating screening data with related data, particularly that from chemistry, toxicology, pharmacology, and genomics. They are joined in this endeavor by the following:

Spotfire, Inc. (www.spotfire.com): Released in April, Spotfire DecisionSite for Lead Discovery brings Spotfire’s patented data visualization tools into a single, Web-based environment for analyzing chemical and biological data.

Tripos, Inc. (www.tripos.com): Tripos is currently partnering with Pfizer and Bayer to build new informatics methods for analyzing chemical and biological data.

Integration expectations

Although data management systems have helped streamline many of the tasks essential to screening, they have been less successful at providing the context to assist medicinal chemists and others with interpreting and acting on screening results. The problem is not new, according to Trevor Heritage, senior vice president of discovery technology operations and marketing at Tripos. “Even when we were screening in tubes, we had trouble storing the metadata associated with each data point,” Heritage says. “Moving to 96-well plates made the problem worse, and now, at really high throughputs, it’s become nearly impossible.”

Lack of integration could be tolerated during screening’s infancy, when other data management and logistical issues had yet to be resolved. But with screening itself no longer a bottleneck, organizations can now afford to ask why screening has failed to jump-start stalled R&D pipelines. Jack Elands, vice president of marketing at IDBS, points out that disillusionment with HTS has led scientists back to another technique that was once viewed as a drug discovery panacea: rational drug design.

“In the early 1990s, computational chemistry was the technique that would lead to new drugs, and testing would simply confirm these new leads,” Elands recalls. “But when pure rational design didn’t pay off, the pendulum swung wildly the other way to blitz-screening of everything in a compound library. Today, both techniques have matured. Perhaps together, they will be able to achieve what they couldn’t separately.”

Better integration may be the solution, but it is not something that can be easily provided in a one-size-fits-all, out-of-the-box package. “The real questions that you need to answer during HTS are, ‘What do you want to test, and what do you want to make?’” says GelTex’s Burrier. Different companies answer these questions in different ways, and they are not interested in sharing their particular approach. As a result, organizations expect vendors to provide open systems that readily hook into the components that scientists might need to access.

Possible solutions typically involve rationalizing screening using available data from genomics; cheminformatics; and absorption, distribution, metabolism, and excretion studies. Outsourcing screening organizations, such as Applied Biosystems and Discovery Technologies, Ltd., have implemented integrated lead-finding processes to make their services more attractive to customers. Tripos recently patented a method of characterizing the structural diversity of large combinatorial libraries, which can aid screeners in selecting compounds for testing. And Glaxo Wellcome points to its in-house cheminformatics system, which lets scientists consider chemical information immediately after primary screening.

A security blanket?
HTS’s growth from fad to industrialized process has been mirrored by a change in mind-set. More data, it turns out, does not mean more answers—and it certainly does not translate automatically into more viable leads. “You can’t count on serendipity,” says Burrier. Whether an organization is intent on focused screening or backing out to run broader diversity screens, the emphasis today is on collecting good data and doing good things with the information. Only time will tell whether this increased attention to data management will help screening outgrow its age of insecurity.


Deborah J. Ausman is a freelance science writer living in Houston. Send your comments or questions regarding this article to mdd@acs.org or the Editorial Office by fax at 202-776-8166 or by post at 1155 16th Street, NW; Washington, DC 20036.
