
THE PHARMACEUTICAL GOLDEN ERA: 1930-60

The middle third of the 20th century witnessed a blossoming of pharmaceutical invention, with breakthroughs in the development of synthetic vitamins, sulfonamides, antibiotics, hormones (thyroxine, oxytocin, corticosteroids, and others), psychotropics, antihistamines, and new vaccines. Several of these constituted entirely new classes of medicines. Deaths in infancy were cut in half, while maternal deaths from infections arising during childbirth declined by more than 90%. Illnesses such as tuberculosis, diphtheria, and pneumonia could be treated and cured for the first time in human history.

As in other domains, wartime support for research accelerated the development of certain therapies. Programs sponsored by the U.S. government focused on antimalarials, cortisone (which was thought to permit aviators to fly higher without blacking out), and, most especially, penicillin. The development of penicillin by 11 U.S. pharmaceutical companies under the oversight of the War Production Board gave U.S. firms a leading position after WWII. In the late 1940s, they produced over half of the world's pharmaceuticals and accounted for one-third of international trade in medicines.

CHIEF Harvey Washington Wiley, head of the Division of Chemistry of USDA, predecessor of FDA, is pictured here around 1899 with his technical staff behind him. Wiley's formal attire perhaps suggested his activity that day to stump for the latest in a long line of comprehensive federal food and drug bills. COURTESY OF FDA HISTORY OFFICE

Pharmaceutical firms in the U.S., Europe, and Japan expanded rapidly following the war with strong investments in research, development, and marketing. In this period of rapid growth in pharmaceutical research, companies expanded in-house R&D while continuing collaborations and consulting relationships with academic researchers. At the same time, the primary methods used for drug invention shifted radically between 1930 and 1960. With the advent of the antibiotic era, drug firms screened thousands of soil samples in a global search for antimicrobial agents. Antibiotics including streptomycin (Merck), chlortetracycline (Lederle), chloramphenicol (Parke-Davis), erythromycin (Abbott and Lilly), and tetracycline (Pfizer) gave companies the opportunity to extol the miracles of medical research to health professionals and consumers alike. Profits from the sale of antibiotics enabled companies to build campuslike research parks from which further breakthroughs were expected.

Within a short time, firms shifted their research focus from natural products to modified natural products to synthetic chemistry. Associated with this shift, new analytical techniques and instrumentation entered the research laboratory to aid in the determination of the molecular structures of antibiotics, steroids, and other potential medicines. X-ray crystallography, as well as ultraviolet and infrared spectroscopy, initiated a gradual shift from wet chemistry of solutions in beakers and test tubes to dry chemistry of minute samples and molecular models. As a result, chemists began to develop a good working knowledge of the relationships between molecular structure and bioactivity, making possible the first effective antipsychotics, tranquilizers, antidepressants, and antihistamines.

This period also saw the institution of safety regulations in the U.S. in the wake of the 1937 sulfanilamide incident. Tragedy struck when a scientist at S. E. Massengill used diethylene glycol, a sweet-tasting but toxic chemical, to prepare one of the then-new sulfa drugs in syrup form. Although chemists at the firm examined the appearance, flavor, and fragrance of their "elixir of sulfanilamide," they did not test it on animals or even review published literature on solvents. After more than 100 people--mostly children--died from taking the elixir, a public uproar prompted swift passage of the 1938 Food, Drug & Cosmetic Act. The law significantly expanded the Food & Drug Administration's authority over the marketing of new drugs. Officials were required to review preclinical and clinical test results and could block a drug's approval by requesting additional testing or by formally refusing to allow its marketing.

Regulations put in place in other countries during the first half of the 20th century were less stringent than those in the U.S. In Germany, for example, a wartime ban on new medicines (Stoffverordnung) was continued into the 1950s as a means for the health ministry to regulate pharmaceutical manufacturers. In England, the Therapeutic Substances Act was revised and consolidated in 1956, bringing more substances under government control and setting formal standards for their testing and manufacture. Similar laws in the U.S., European countries, and other nations around the world distinguished over-the-counter therapies from prescription drugs. This division, in turn, drove further specialization by the pharmaceutical industry into high-profit prescription-only medicines.

SCALE-UP Sir Alexander Fleming (front) in 1945 at a pharmaceutical production facility. COURTESY OF BRISTOL-MYERS SQUIBB

As part of their regulatory oversight, government officials promoted the double-blind, controlled clinical trial as the gold standard for testing new medicines on patients. For regulators, data from formal clinical trials narrowed the field of decision-making by characterizing drugs in terms of their safety (and, eventually, effectiveness) across large patient populations. In time, pharmaceutical companies found that such testing helped identify the populations most likely to purchase a new drug, generated information useful in marketing, and raised the entry costs for competing firms. Medical reformers and consumer protection advocates expected more rigorous premarket tests to prevent harmful or useless products from reaching consumers. Interestingly, despite national variation in regulations and clinical trial methods, in all countries the pharmaceutical industry retained responsibility for product testing. Thus, even as government regulatory agencies expanded their authority, they remained reliant on the industry to sponsor and oversee the vast majority of clinical trials.

Despite increasing regulation, the industry operated largely out of the public eye, in part because its marketing was oriented primarily to physicians. In most countries, companies advertised pharmaceuticals to physicians, physicians prescribed them to patients, and patients obtained the drugs from pharmacies. During the middle third of the 20th century, drug salesmanship was transformed into a professional service that educated physicians about new therapeutic options. At the same time, critics such as John Lear of the Saturday Review began exposing marketing practices in widely read articles such as "Taking the Miracle out of the Miracle Drugs" (Saturday Review, Jan. 3, 1959, page 35) and "Do We Need a Census of Worthless Drugs?" (Saturday Review, May 7, 1960, page 53). Nevertheless, one consequence of the post-WWII growth of social democracies and socialized medicine in Europe, and of the greater availability of private health insurance in the U.S., was that neither patients nor physicians paid close attention to drug prices in this period.

NEXT PAGE: Social reassessment, regulation, and growth: 1960-80

C&EN SPECIAL ISSUE: The Top Pharmaceuticals That Changed The World, Vol. 83, Issue 25 (6/20/05)