June 2001
Vol. 4, No. 6, pp 21–22.
Clinical Trials Track

In the mind’s eye

For many drugs, efficacy must be determined by quantifying subjective patient assessments. Clinical research trials (especially at the Phase III level) are the means for determining the overall safety and efficacy of a new medication or therapeutic device; the Code of Federal Regulations outlines the rules that control such studies. A trial’s design—whether single- or double-blinded, placebo-controlled, or some other form—establishes only the parameters under which data collection will occur. (The components of a research protocol will be addressed in greater detail in future Clinical Trials Track departments.) Thus, questions of what data are collected and how those data are used to prove a drug’s effectiveness cannot be neglected.

When judging efficacy, it is impossible to completely separate clinical assessments from a study’s overall design, because assessments are of value only if the trial design is appropriate. Put another way, a series of assessments may indicate that drug X is effective in treating migraines as compared to a placebo, but it does not prove that X is as effective as a competitor’s currently marketed migraine drug or even a common headache remedy such as aspirin. (Without at least an incremental improvement over a marketed product or some additional novelty, such as providing a previously injectable-only medication in oral form, most pharmaceutical companies would not want to bear the cost of developing and marketing a new drug, even if FDA approval were assured from the outset.) Even so, it is useful to discuss the variety of clinical assessment methodologies, their appropriateness for specific patient populations, and their ability to indicate a drug’s efficacy when used in a properly designed trial.

Clinical Trials Web
Information on various psychological rating scales: www.priory.com/psych.htm, www.psychiatry.com. General information on clinical trials: www.centerwatch.com, www.fda.gov/cder.
Means of discovery

Clinical data contain numerous bits of demographic information, medical histories, outcomes of ongoing laboratory evaluations, concurrent medication lists, and adverse event citations. Although some of this information is used to assess the effectiveness of a new treatment, much of it is targeted toward determining the safety and side-effect profiles of the therapy under consideration. As a result, the “meat” of the efficacy data collected during a trial frequently comes from diagnostic tests, rating scales, patient diaries, and subjective patient and investigator/coordinator assessments. Therefore, much of the data are difficult to quantify; but when they are reviewed for all patients over the entire course of a trial, they provide a clear representation of whether a medication does what it is intended to do. However, as mentioned in the introduction, it is critical when interpreting data to view them in the context of the study design and to recognize that any conclusions drawn from them are only as accurate as the framework on which they are based. The FDA often rejects study claims because the conclusions drawn from the clinical data are not appropriate given the structure, depth, and breadth of the corresponding trial design.

Efficacy is evaluated most easily for drugs that produce directly measurable effects in test patients. For example, it is easy to determine whether an antihypertensive agent works by regularly tracking patients’ blood pressure over the course of a treatment period. It is similarly no great feat to establish the ability of a hypercholesterolemia drug to lower blood cholesterol levels. The difficulty, however, comes in trying to assess antipsychotic medications, antidepressant agents, and therapies designed to manage and reduce cancer pain. In these instances, there is no straightforward diagnostic test to accurately measure a patient’s feelings. Unfortunately, a significant portion of all new pharmaceutical products present this very evaluation challenge.

Subjective analyses

One means of tracking how a patient feels during a study is to use a patient diary, or a so-called diary card. These pamphlets or cards either ask patients carefully designed, pointed questions about their physical condition or ask them to rate their quality of life on various scales. In one approach, patients record how they feel using a numerical scale that they address at various times during the day for a period of several days. More commonly, patients use a graphical rather than a numerical scale. In both methods, though, a quantitative result is obtained from these subjective assessments.
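The article does not prescribe a particular scoring scheme, but the conversion of repeated subjective ratings into a single quantitative value can be sketched as follows. The 0–10 scale, the three-times-daily schedule, and the practice of averaging within a day are illustrative assumptions, not details from any specific trial.

```python
# Hypothetical sketch: reducing a patient's repeated diary ratings to one
# quantitative score per day. Scale bounds and schedule are assumptions.
from statistics import mean

def daily_score(ratings, scale_max=10):
    """Average one day's diary ratings, discarding out-of-range entries."""
    valid = [r for r in ratings if 0 <= r <= scale_max]
    if not valid:
        return None  # no usable entries recorded that day
    return mean(valid)

# A patient rates migraine pain three times a day (0 = no pain, 10 = worst).
diary = {
    "day1": [7, 5, 6],
    "day2": [4, 4, 3],
    "day3": [2, 1, 2],
}
scores = {day: daily_score(ratings) for day, ratings in diary.items()}
```

The per-day scores can then be pooled across all patients and visits for statistical comparison between treatment arms.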

The hazards of asking patients to maintain diaries are twofold: Patients must be relied upon to provide routine and consistent responses to the queries, and many patients tend to write more than is required. The former issue relates to the usefulness of diaries in different populations, while the latter involves data quality. In numerous trials, pharmaceutical companies have discovered that older and bedridden patients respond best to diaries, whereas patients suffering from mental illnesses such as schizophrenia are ill-suited for their use.

The bigger concern in the use of diary cards, however, is data quality. Specifically, problems occur because patients often include extraneous information, which may conflict with or confuse other data collected by the investigational site. For example, if a patient reports, during her regular clinic visit, that she had no adverse events since the last visit, but the coordinator later finds a mention of a headache scribbled in the margin of the diary, the event must be recorded even though no clear start date, stop date, severity, or possible self-treatment (e.g., “the patient took an aspirin”) is known.

An even more complicated situation occurs, for example, when a patient scribbles, “Sitting on the porch tonight, the stars seemed especially dim.” In this case, the investigational site must decide whether the comment is irrelevant or represents an adverse event. If there was a full moon that night, the stars probably did seem dimmer and the event is not an event at all; however, if the patient meant that her vision was impaired, the entry is significant. Sometimes, these sorts of issues can be resolved by talking to patients, but just as often they are not noticed until long after the fact, when a patient’s memory is of questionable reliability or the patient is lost to follow-up. Hence, the best-designed diaries leave participants with little room to elaborate on their responses or include extraneous information.

One final difficulty in the use of diary cards is that the study coordinator must transcribe patients’ responses into the study record, a process that at best is laborious and at worst introduces data errors. (The patient record is known as the “case report form” and, when it is complete, it constitutes the sum of all research data collected for a given patient. It is from this paperwork that the analysis is drawn.) Because pharmaceutical companies do not view the original diary cards, they must rely on the coordinators to accurately copy them into the record; site monitors verify the transcription.

Besides diaries, various psychological rating scales may be used to quantify a patient’s feelings. For example, many trials incorporate at least one standardized “quality-of-life” questionnaire. This assessment method requires that a trained investigator or coordinator ask patients a series of questions and rate their responses on a numerical scale. It is not uncommon for the patients to fill out some of the scales themselves, but if they do so, the coordinator must review the various responses to ensure completeness and accuracy. Other rating scales vary with the therapeutic area under investigation. The most common ones include the

  • positive and negative syndrome scale (PANSS),
  • brief psychiatric rating scale (BPRS),
  • SF-36 health survey questionnaire,
  • Hamilton depression rating scale (HAM-D), and
  • clinical global improvement scale (CGI).

The use of a rating scale introduces two major sources of error: inter- and intrarater reliability. All of the psychiatric scales are designed to accurately quantify a specific facet of a study participant’s mental state at the time of rating, but their accuracy depends on the skills of the test administrator. A problem obviously arises when similar patients are rated differently from site to site or at the same site, or when different people rate the same patients at various study visits.
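The article names the problem but not a metric for it. One standard statistic for quantifying interrater reliability (not mentioned in the original text) is Cohen’s kappa, which corrects the raw agreement between two raters for the agreement expected by chance. A minimal sketch, assuming two raters score the same patients on a categorical scale:

```python
# Illustrative computation of Cohen's kappa for two raters; the patient
# scores below are invented data, not results from any trial.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical scores."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of patients on whom the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters score ten patients as "improved" (1) or "not improved" (0).
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
kappa = cohens_kappa(a, b)
```

A kappa near 1 indicates near-perfect agreement between raters; a value near 0 indicates agreement no better than chance, which would flag exactly the site-to-site rating inconsistency described above.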

The sponsoring pharmaceutical companies generally address this problem in two ways. The first involves training: Usually during an investigator’s meeting, relevant personnel from each site are instructed in proper administration of the scales and are tested to verify their accuracy and consistency in doing so. Besides training, it is often required that an individual rater be assigned to a specific patient for the duration of that patient’s study involvement and that no other raters assess that participant if at all possible.

Effective or not?

The final judgment as to whether a medication is effective is based on more than just the sum of its clinical assessment parts. Drugs do not perform identically in men and women, or, for that matter, any two individuals. Just as their side effects may vary, so too does effectiveness. In addition, it is crucial to ask in what instance, or in comparison to what other treatment, a therapy is successful before extending any broad claims or seeking FDA approval.

The name of the game in a pharmaceutical assessment, however, is to combine qualitative and quantitative information into measurable data that can be easily analyzed statistically. Only then, with an appropriate sample size, a meaningful study design, and a comprehensive review of the research data collected from every investigative site, can a drug’s true usefulness be determined.

Cullen T. Vogelson is an assistant editor of Modern Drug Discovery. Send your comments or questions regarding this article to mdd@acs.org or the Editorial Office by fax at 202-776-8166 or by post at 1155 16th Street, NW; Washington, DC 20036.
