When viewed from the 21st century, the political bargain struck between the scientific community and the US government during the 1940s can perhaps seem idealistic and remote from modern-day policy concerns. Flushed with the success of their war work, scientists lobbied for the establishment of a semi-autonomous, decentralised system that would assure steady funding of basic and applied research through expansion of existing agencies and establishment of new ones.1-3 In return, they optimistically promised discoveries and innovations that would fill "millions of pay envelopes on a peacetime Saturday night".4
Underpinning the scheme was the political assumption that scientists would be trustworthy stewards of public money. They would play an active role in allocating funds and guiding research agendas, and would retain institutional-level autonomy over the management and conduct of projects. Allowing scientists, in effect, to "self-regulate" their own activities seemed a reasonable approach at the time. Faith in science was high; the image of scientists, positive. Americans probably reasoned that if one could not trust a scientist (in their minds undoubtedly imagining Albert Einstein or Marie Curie), then whom could one trust?
With such an arrangement, American taxpayers expected to reap considerable profit from the investment, even if some basic research proved irrelevant or unnecessary. Peace would be maintained; the economy would prosper. As progress rolled out of the laboratories in unending waves, the promise seemed fulfilled.
This fundamental expectation of integrity - rooted firmly in bureaucratic mechanisms to certify accountability - explains in part why scientific misconduct later became such a controversial political issue in the US. As Mark Frankel observes, scientists' professional autonomy is "not a right, but a privilege granted by society";5 it is negotiated on the assumption that scientists will monitor their colleagues' performance.6 The discovery, in the 1970s and 1980s, of faked and fabricated research in a few prestigious American institutions was disturbing enough to the politicians charged with oversight of the federal research system. When prominent scientists then reacted defensively to legislators' inquiries and even argued that the US Congress should "back off", lest scientific freedom be endangered, politicians, journalists, and the public reacted with dismay and disappointment. The subsequent years of squabbling over whether the "problem" consisted of only a "few bad apples" or a thoroughly "rotten barrel", as well as scientists' open questioning of federal authority to investigate allegations of fraud, furthered the erroneous impression that scientists in general cared little about integrity and only about preserving their autonomy.
How did the situation in the US shift from an atmosphere of such optimism and trust in the 1940s to today's expanding government regulation of research? How did a professional climate evolve in which accusations of unethical conduct can now trigger not just formal investigations and ruined careers but also years of litigation? This essay examines the particular circumstances and history of the controversy in the US, especially with regard to the biomedical sciences, and with attention to the lessons contained therein.