The US experience

A rancorous tone

In the US, the tone that characterised the struggle between scientists and the Federal Government on the issue of scientific misconduct was evident from the start. In 1981, during testimony in the first of over a dozen Congressional hearings, the opening witness, the President of the National Academy of Sciences, asserted that problems were rare - the product of "psychopathic behavior" originating in "temporarily deranged" minds - and called for Congress to focus its attention on increasing the funding for science rather than on these aberrant acts.7 The Chairman, then-Congressman Albert Gore, Jr, thanked him for this "soothing and reassuring testimony". At the conclusion of the hearing, however, Gore could not "avoid the conclusion that one reason for the persistence of this type of problem is the reluctance of people high in the science field to take these matters very seriously."8

This edgy tone characterised practically every step of the process until the first Federal Regulations were issued eight years later, in 1989. It is worth noting that although the regulations were federal, in that they applied equally to researchers and their institutions in every state, until the end of 2000 they officially applied only to research performed under grants from the Public Health Service (which includes the NIH) and the National Science Foundation (NSF), which funds much biomedical research. This was presumably because the cases of research misconduct that gripped the imagination of the media, the public, and their elected representatives were those involving physicians and clinical research. However, because it was hard for research institutions to operate with more than one set of rules governing misconduct, these early, restricted regulations had a widespread effect. Even later, as general consensus emerged on the handling of cases, on standards, and on a common, government-wide definition (which de facto covers all research conducted at US universities and hospitals), the process was marked by continuing rancour and heated - and often unsupportable - rhetoric. This aspect of the American experience seems worth understanding, as it may well be impeding progress elsewhere.

Most informed observers agree that serious scientific misconduct is probably quite rare in what is a large and successful scientific enterprise. If that is so, why does the debate about the definition and handling of misconduct arouse so much antagonism? Part of the reason is that hardworking and honest scientists resent the disproportionate attention the media give to occasional spectacular cases of malfeasance. Yet even those resigned to the media's emphasis on flashy bad news at the expense of workaday good news seem to have felt personally jeopardised, first by the original proposal to implement rules governing misconduct and then by any, and every, proposed change to those rules. A common element seemed to be apprehension that the rules would be unfairly applied to one's own solid science. Had the scientific community been deeply divided at any point over the definition and the response, this visceral fear would be easier to understand. What is so confusing is that the divisions have been only at the margins, and careful examination of the issues shows a consistent, remarkably high level of general and fundamental agreement throughout the implementation process.

History

In early 1993, we gave a brief account of the turbulent history of scientific misconduct in the US.9 Until 1989, universities and the Federal Government relied upon ad hoc efforts to respond to allegations of malfeasance, much as now seems to be the case in the UK. We described the widespread publicity that accompanied numerous cases in the 1980s and the consequent public perception that fraud was rife. We noted the reaction of the US Congress, which asserted that there was a major problem; and that of research institutions and scientists who, without providing evidence, countered that it was uncommon, and should be left to them to handle. Finally, we noted how the massive publicity and the mishandling of aggrieved whistle-blowers caused Congress to conclude that the institutions' track record was too spotty to maintain the public trust. Despite continuing resistance from scientists and their institutions, Congress insisted on some accountability and governmental oversight, predicated on the government's responsibility to oversee the proper use of tax dollars. The result was the requirement that the main federal funding agencies promulgate regulations and establish offices to provide a more systematic and structured response to allegations of malfeasance.

Definition

The definition adopted in 1989, under Congressional pressure, was "Fabrication, falsification, plagiarism or other practices that seriously deviate from those that are commonly accepted within the scientific community for proposing, conducting, or reporting research."10

This definition caused problems for many reasons. Everyone agreed that fabrication, falsification, and plagiarism were antithetical to good science. However, the phrase "... other practices that seriously deviate..." was immediately seized upon by the Federation of American Societies for Experimental Biology (FASEB), which argued that the clause could allow penalties to be applied to novel or breakthrough science, and which mobilised its members to remove the phrase completely and limit the definition to "FF&P" (fabrication, falsification, and plagiarism). As we noted,

Underlying the objections to the "other practices that seriously deviate..." clause is the fear that the vague language will result in application of a vague and misty standard of misconduct that cannot be known in advance. It seems fundamentally unfair to stigmatize someone for behavior they had no way of knowing was wrong. Unhappily, consideration of cases shows that some of the most egregious behaviors, abuse of confidentiality, for example, are not covered by the FF&P label. We cannot have a definition that implies that this sort of behavior is not wrong. Moreover, since we cannot possibly imagine every scenario in advance, the definition must ensure that perpetrators of acts that are deceptive and destructive to science are not exonerated. If they are, the public and our legislators, applying the standards of common sense, will rightly deride the outcome as nonsensical.9

Because the Office of Research Integrity (ORI) - the government office charged with oversight of scientific integrity within biomedicine, the research field overseen by the Public Health Service - never invoked the "other practices..." clause, while the other large scientific grant-awarding agency, the NSF, did, researchers funded by different government agencies effectively came to be covered by different definitions of research misconduct. In addition, ORI, in a move that was purely administrative and made to reduce the number and complexity of its formidable backlog of cases, announced that it would not take on cases of alleged plagiarism if the authors of a work had been coauthors together, not least because such cases proved singularly awkward to sort out. By definition, ORI asserted, all such cases fell into the category of authorship disputes and would not be examined for the elements of plagiarism. The NSF never instituted such a policy, and continued to examine cases in which students or co-workers alleged that their contributions had been appropriated by another without cause or attribution. A system in which some could have their complaints examined and others could not was unlikely to succeed for long.

Intent

Science is a risky enterprise, often requiring much trial and error. No one could possibly undertake scientific experiments if error were construed as misconduct. As Mishkin has pointed out, "misconduct" in legal parlance means a "wilful" transgression of some definite rule. Its synonyms are "misdemeanour, misdeed, misbehaviour, delinquency, impropriety, mismanagement, offense, but not negligence or carelessness."11 Distinguishing error from misconduct therefore requires a judgment about intent. Whilst scientists are often cowed by this necessity, citizens routinely make such judgments in other settings, most notably in our established criminal justice systems.

It is our opinion that this assessment should be made only at the time of adjudication, after the facts of a case of scientific misconduct have been determined, for example, "words were copied" or "no primary data have been produced." This sequential approach has two salutary effects. First, it reduces the potential for the factual determinations to be obscured by other considerations; the danger otherwise is that - as has frequently happened - a panel's sympathy for the accused ("he's too young, too old, meant well", etc.) interferes with a rigorous analysis of events. Second, it introduces proportionality into the response: what consequence, if any, should there be, in light of all the relevant circumstances? This question matters to the final sense, for participants and observers alike, of whether the process "worked".

The scientific dialogue model

Originally, the ORI tried to keep misconduct proceedings in the hands of scientists rather than lawyers. The "scientific dialogue model" it advanced soon came under criticism as unfair and flawed.9 Changes were made, and the standards for responding to allegations gradually became more structured and legalistic, so that results could withstand scrutiny from administrative tribunals. Defendants, faced with the loss of their livelihoods, hired lawyers to insist on their basic rights to fundamental fairness and due process, the most fundamental being the right to know, and to respond to, all evidence to be used against an accused. Unfortunately, these rights were all too easy to overlook while collegiality prevailed ("the scientific dialogue") and hard issues were not always faced directly or even-handedly.

The early 1990s: the heat increases

Despite these problems, and the heat they engendered, in February 1993, we concluded on a note of cautious optimism:

... practically everything to do with scientific misconduct is changing rapidly: the definition, the procedures, the law and our attitudes... It will take time to accumulate a body of experience (a case law, as it were) and to get it right. The challenge is to seize the opportunity, to capitalize on the wealth of accumulating information, and to focus on the long-term goals of strengthening science and the society that depends on it.9

Our optimism was premature. In 1994, despite more than 20 years of widely publicised cases of misconduct, more than a dozen congressional hearings, years of regulations born of congressional impatience with science (and layers of modifications to them), and the creation first of an Office of Scientific Integrity and then of an Office of Research Integrity, there remained widespread division and dismay. The definition was still hotly debated, as were the process owed to an accused scientist, the extent of federal oversight, how to protect whistle-blowers, and how to prevent misconduct. At the same time, in the early 1990s, several high-profile cases were decided against government science agencies and their ways of proceeding.12
