Development

Renée C. Fox and David J. Rothman both argued that bioethics began in the 1960s as a social and intellectual movement. The earliest concerns of bioethics were focused on acute ethical problems in research settings. Influenced by the U.S. civil rights movement, bioethical inquiry also exposed weaknesses in institutional arrangements that no longer adequately protected research subjects or patients (Fletcher). From its origins to the present, the bioethics movement has had two arms: (1) an interdisciplinary dialogue, known as bioethics, that became a new academic subdiscipline in the larger field of ethics; and (2) an agenda for institutional and social change to prevent abuses and enhance the values that guide decision making concerning research subjects and patients. Social changes in research settings to protect human subjects preceded such changes in patient-care settings by almost a decade.

The 1960s saw a number of widely publicized and much debated cases that brought to the fore the value-laden nature of clinical practice and the difficult choices posed, in part, by rapid advances in medical technology (Jonsen, 2000). The invention of a plastic arteriovenous shunt by an American physician, Belding H. Scribner, in 1960 made chronic hemodialysis possible and, simultaneously, created a profound ethical dilemma because there were far more patients in need of chronic hemodialysis than the Seattle Artificial Kidney Center could accommodate. This dilemma led to the establishment of the Admissions and Policy Committee, later infamously referred to as the "Seattle God Committee," which employed "social worth criteria" to select candidates for dialysis. Throughout the decade, successes in organ transplantation created similar ethical dilemmas related to resource allocation. In 1967 South African surgeon Christiaan Barnard's successful transplantation of a beating heart from a patient with "irreversibly fatal brain damage" raised serious ethical questions about the definition of death. In response, a committee at Harvard Medical School formulated a statement the following year that defined "brain death" (Jonsen, 2000).

If the ethical dilemmas raised by chronic hemodialysis and organ transplantation remained a bit removed from the lives of ordinary people, the 1970s were dominated by cases that clearly resonated with the general populace. In the racially charged climate of the early 1970s, the New York Times' 1972 exposé of the U.S. Public Health Service's forty-year Tuskegee Syphilis Study of the progression of untreated syphilis in African-American men powerfully demonstrated how social values, even disvalues such as racism, can dramatically affect "scientific" practice in clinical settings. The study, which ran from 1932 to 1972, enrolled 600 African-American men from Tuskegee, Alabama. All participants were told that they had "bad blood" and were in need of regular medical exams, including spinal taps. In exchange for these exams, participants were given transportation to and from the hospital, hot lunches, medical care, and free burial (upon the completion of an autopsy). Of the study participants, 200 did not have syphilis, while the other 400 were diagnosed with syphilis but were never told their diagnosis or treated for their disease (even after effective treatment became available) (Jonsen, 2000; Pence). In January 1973, less than a year after the Tuskegee exposé, the value-laden nature of clinical practice was again thrust into the public eye when the U.S. Supreme Court handed down its landmark decision in Roe v. Wade. In setting off a decades-long struggle over the morality and legality of abortion, the case also introduced extramedical notions such as "personhood," "viability," and "privacy" into the public debate.

Despite the significance of Tuskegee and Roe, no single case captured the public imagination or shaped the development of clinical ethics more than the tragedy of Karen Ann Quinlan did (Pence). Quinlan was a twenty-one-year-old patient at St. Clare's Hospital in Denville, New Jersey. Having lapsed into a coma in April 1975 as a result of the combined effects of alcohol, Valium, and, possibly, Librium, she was dependent on a respirator (ventilator) and was eventually deemed to be in a persistent vegetative state (sometimes referred to as being permanently unconscious). In addition to the respirator, Quinlan was dependent on the technological administration of nutrition and hydration through the use of a nasogastric (NG) tube (one that delivers food and water to the stomach through the nose). After months of anguished deliberation, Quinlan's parents, Julia and Joseph Quinlan, in consultation with their parish priest, decided to remove her from the respirator and let her die. The Quinlans' decision, however, was opposed by hospital officials on the grounds that to remove the patient's respirator support in order to let her die was euthanasia—the moral and legal equivalent of murder (Pence).

Though the New Jersey Supreme Court, in a 1976 ruling, ultimately supported the rights of the Quinlans to remove their daughter from the respirator, the tragedy of Karen Ann Quinlan had a dramatic impact on society and, in particular, on the rise of clinical ethics. Quinlan's dependence on a respirator and feeding tube came to symbolize, for many, "an oppressive medical technology, unnaturally prolonging dying" (Pence, p. 31). Once again, technological developments in medical science, this time the respirator and NG tube, had created new and difficult ethical dilemmas. Before the advent of respirators and feeding tubes, patients in Quinlan's situation simply died. There were no questions about "withholding" or "withdrawing" treatment, "active" or "passive" euthanasia, "ordinary" or "extraordinary" means, or who should be allowed to make life-and-death decisions and under what circumstances. If some people could not identify with chronic hemodialysis, organ transplantation, and the like, everyone could identify with the plight of Quinlan. Indeed, the New Jersey Supreme Court seemed to recognize this when it suggested that ethics committees be developed in hospitals so that future cases might be addressed before reaching the courts (In re Quinlan, 1976).

Not surprisingly, then, the 1970s saw the first clear growth of formal efforts in clinical ethics. Ethics committees began to be established in major hospitals. Scholars in bioethics increasingly taught new courses as faculty members of medical, nursing, and other professional schools. Bioethics scholars also served developing programs in the "medical humanities." In addition, some academic medical centers began to use bioethics and medical humanities scholars to offer ethics education and even ethics consultation in cases involving patients (Jonsen, 1980).

Throughout the 1980s difficult cases continued to spur the development of clinical ethics. In part because of the Quinlan case and a national debate on end-of-life decisions, 1980 saw the establishment of the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, which in 1983 issued its groundbreaking report, Deciding to Forego Life-Sustaining Treatment. The 1980s also saw the debate about withholding/withdrawing life-sustaining treatment extend to neonatal intensive care medicine with a series of hotly debated "Baby Doe" cases involving impaired newborns. The cases of Nancy Cruzan (Cruzan v. Director, 1990) and Elizabeth Bouvia (Bouvia v. Superior Court, 1986) raised additional ethical issues concerning end-of-life decisions and adults: Is artificially administered nutrition and hydration medical treatment? What evidentiary standard should be satisfied in making end-of-life decisions for formerly competent, but now incompetent, adults? Who is authorized to set such a standard? Does a competent adult have a right to refuse nutrition and hydration? Finally, the emergence of the HIV/AIDS epidemic raised a host of ethical issues that surfaced throughout the 1980s, including, but not limited to, concerns about: confidentiality and privacy; health professionals' duties to treat HIV-infected patients and duties to disclose their own HIV/AIDS status; duties to warn at-risk third parties; patient duties to disclose HIV/AIDS status to health providers; and mandatory testing for health professionals and others.

During the 1980s, several postgraduate training programs, some textbooks, and one journal declared that they addressed clinical ethics, a term that had not been used in the earlier bioethics movement. The practice of ethics consultation began to be defined in the early to mid-1980s (Fletcher, Quist, and Jonsen), and ethics committees multiplied in clinical settings to protect shared decision making with patients and family members.

With the Patient Self-Determination Act of 1991 and the stipulation of the Joint Commission on Accreditation of Healthcare Organizations (1993) that member institutions must have a "mechanism" for "the consideration of ethical issues arising in the care of patients and to provide education to caregivers and patients on ethical issues in health care" (R.1.1.6.1, p. 9), the importance of formal efforts in clinical ethics was given expression through regulatory requirements in the United States. These rules intensified the need for competence and leadership in clinical ethics. Partly in response to this, the 1990s saw efforts by groups in Canada and the United States to address standards for ethics consultants and consultation. From the mid- to late 1990s physician-assisted suicide and palliative care captured much of the clinical ethics debate, and the rise of managed care pushed organizational ethical issues into the clinical domain.

There can be little doubt that clinical ethics is becoming an established subdiscipline of the broader field of bioethics. Highly multidisciplinary, clinical ethics is pursued by clinicians—physicians, nurses, social workers, and other health professionals—as well as by those with backgrounds in the humanities (including philosophy, theology, history, and literature), social sciences (including sociology, anthropology, and public health), and law. By 2001 there were at least forty-seven academic institutions offering graduate training programs (including certificate and fellowship programs) in bioethics or medical humanities; a number had clinical ethics components; and several were specifically devoted to clinical ethics (Aulisio and Rothenberg). Despite the rapid increase in graduate training programs in bioethics and medical humanities, the vast majority of the people offering clinical ethics services at healthcare institutions have little or no formal education and training in clinical ethics (Aulisio, Arnold, and Youngner, 2003). This suggests a continued need for educational and training programs tailored specifically to this group.
