The Bioethics Mess
A critique of bioethics as it has developed in North America.
Crisis Magazine, May 2001
Reproduced with permission
Dianne N. Irving, M.A., Ph.D.
*
"Bioethics" -- the word sounds like old-fashioned
medical ethics applied to new medical technology.
It's the application of traditional philosophical or
theological principles to the moral dilemmas created
by, say, cloning or experimenting with new AIDS
drugs, right? Not really.
Like the word "bioethics" itself, which formally
dates only from the early 1970's, the philosophical
underpinnings of bioethics are completely different
from those that underlie traditional medical ethics.
Traditional medical ethics focuses on the
physician's duty to the individual patient, whose
life and welfare are always sacrosanct. The focus of
bioethics is fundamentally utilitarian, centered,
like other utilitarian disciplines, around
maximizing total human happiness.
Such factors as the feelings and preferences of
other people -- the parents of a child with severe
birth defects, the husband whose wife seems
permanently comatose, or even the doctor who decides
that an elderly Alzheimer's patient would be better
off dead -- along with the possible cost of
treatment to society, can weigh in against and
ultimately outbalance the afflicted person's needs.
Good-bye, Hippocrates; hello, Peter Singer. And
good-bye, especially to the Catholic understanding
of the sacredness of the life of each individual
human being.
Bioethics as understood and practiced
today was created by a congressional mandate in
1974. During the late 1960's and early 1970's, there
was an explosion of exposes of research abuses in
medicine, and also of ethical dilemmas created by
new life-prolonging technologies. There were reports
of patients enduring agonizing deaths, spending
their last days -- or even last weeks or months --
hooked up to mazes of tubes and impersonal machines.
Nursing homes and hospitals seemed to be overflowing
with the hopelessly ill apparently consuming scarce
medical resources. There were also revelations that
entire non-consenting populations -- orphans in
institutions, poor black men recruited by the
Tuskegee Institute, prisoners, the mentally ill,
residents of inner cities -- had been used as human
guinea pigs in government-sponsored medical
experiments. Aborted fetuses were rapidly becoming
prized biological materials for medical
investigation, raising serious moral questions. And
so, bioethics was formally "born."
In the late 1960's
and early 1970's, Senator Edward Kennedy, the
Massachusetts Democrat, and then-Senator Walter
Mondale, later to become vice president under
President Jimmy Carter, conducted hearings on many
of these abuses. The result was a piece of federal
legislation called the 1974 National Research Act.
It required the secretary of the Department of
Health, Education and Welfare (now Health and Human
Services, or HHS) to appoint a commission to
"identify the basic ethical principles" that the
federal government should use in resolving these
extraordinary dilemmas. Those "ethical principles"
were to be translated into practice as the basis of
federal regulations concerning the use of human
subjects in research.
In 1974, Caspar Weinberger,
President Gerald Ford's health, education, and
welfare secretary, appointed an eleven-member
commission that in 1978 issued a document called the
Belmont Report, which identified and defined three
ethical principles: respect for persons, justice,
and beneficence. To this day, those principles are
called "the Belmont principles," "principlism" for
short, or simply "bioethics." The Belmont principles
became the foundation for the guidelines that the
Office for Protection from Research Risks uses when
assessing the ethics of using human subjects in
research. They also underlie a host of other federal
regulations and guidelines for medical research, and
they have worked their way into the private sector
as well. Universities and hospitals routinely use --
or try to use -- the three principles when approving
research projects, deciding who qualifies for
certain medical treatments, and even who lives, who
dies, and who makes those decisions.
Thus, bioethics
is really a brand-new ethical theory, a brand-new
way of determining right and wrong. How did we get
there? How did it come about that our government and
its non-elected experts, rather than religious
leaders or even traditional philosophers, acquired
the power to define what is normatively ethical for
all Americans facing complex medical or scientific
issues?
A Short History of Medical Ethics
The discipline of medical ethics goes back to
ancient times, to the Greek physician Hippocrates
(about 460-370 B.C.), who was concerned about the
qualities of "the good physician," and the decorum
and deportment that a physician should display
toward patients. The good physician was, in
Hippocrates' view, a "virtuous physician," whose
duties included helping rather than harming the
sick, keeping patients' confidences, and refraining
from exploiting them monetarily or sexually.
Hippocrates' code of conduct strictly forbade
abortion and euthanasia. The paradigm of those duties
was the Hippocratic oath, which most medical schools
routinely administered to their graduates until
relatively recent times.
During the Middle Ages, a
more Christian and communal view of Hippocratic
medical ethics prevailed that required physicians to
present themselves to the public as "professionals"
and to show themselves as worthy of trust and
authority. Medicine became more than a
physician-patient relationship. Its practitioners
now had the sole privilege of educating, examining,
licensing, and disciplining other physicians, who
pledged themselves to use their skills to benefit
society at large as well as their own
patients.
Starting in the late 19th century, with the
rise of medical schools and teaching hospitals,
traditional Hippocratic ethics began to incorporate
new rules governing the behavior of physicians
toward each other. There developed what was called
an "ethics of competence," especially in the
practice of medicine in hospital settings. The
emphasis was now on extensive cooperation among
physicians and all the other professionals involved
in the care of patients. Accurate record-keeping and
written patient evaluations became the norm.
Physicians were supposed to inform their patients
about their diagnoses and courses of treatment and
not to exploit them for teaching purposes. Senior
doctors were not supposed to exploit junior doctors.
"Moral practice" was defined as "competent
practice," including the mastering of advances in
medical science.
After World War II, new medical
research and technologies began to complicate
patient care, thanks to massive federal funding of
the health sciences. The crucial bonds of the
physician-patient relationship were beginning to
fray. Traditional Hippocratic medicine was breaking
down rapidly, seemingly impotent in the face of
pressing new questions: Could one experiment on
dying patients to "benefit" other patients? How
should the growing intertwining of medical practice
and government, commerce, and technology be handled?
How should the benefits and burdens of medical
research be justly distributed, or scarce medical
resources allocated? Who should make these
decisions? Patients? Their families? Physicians?
Clergy? Experts?
The Conferences
Starting in the
1960's, there were a series of conferences around
the country on such issues as population control,
thought control, sterilization, cloning, artificial
insemination, and sperm banks. One of the first, the
"Great Issues of Conscience in Modern Medicine"
conference at Dartmouth College in 1960, hosted an
array of scientific and medical savants, including
the microbiologist Rene Dubos of the Rockefeller
Institute, the physician Sir George Pickering of
Oxford University, and Brock Chisholm, a leading
medical light of the World Health Organization,
together with such famous humanists as C. P. Snow
and Aldous Huxley.
The hottest topics were genetics
and eugenics. Dubos declared that the "prolongation
of the life of aged and ailing persons" and the
saving of lives of children with genetic defects --
two benefits of post-World War II advances in
medicine -- had created "the most difficult problem
of medical ethics we are likely to encounter within
the next decade." Geneticists worried that the gene
pool was becoming polluted because the early deaths
of people with serious abnormalities were now
preventable. The Nobel Prize-winning geneticist
Hermann Muller offered his own solution to that
problem: a bank of healthy sperm that, together with
"new techniques of reproduction," could prevent the
otherwise inevitable "degeneration of the race" that
might ensue thanks to medical advances that allowed
the defective to reproduce.
At another conference,
"Man and His Future," sponsored by the Ciba
Foundation in London in 1962, the luminaries
included Muller; Joshua Lederberg, winner of the
Nobel Prize in medicine; the geneticists J. B. S.
Haldane and Francis Crick; and the scientific
ethicist Jacob Bronowski. As at Dartmouth, concerns
about human evolution, eugenics, and population
control were primary. The biologist Sir Julian
Huxley declared, "Eventually, the prospect of
radical eugenic improvement could become one of the
mainsprings of man's evolutionary advance."
Huxley
proposed a genetic utopia that would include strict
government controls over physiological and
psychological processes, achieved largely by
pharmacological and genetic techniques. They would
include cloning and the deliberate provocation of
genetic mutations "to suit the human product for
special purposes as the world of the future."
Other
conferences of the 1960's delved further into the
implications of science for the modern world. One
was a series of Gustavus Adolphus Nobel meetings in
Minnesota in which many Nobel winners participated.
At the first of them, in 1965, whose theme was
"genetics and the Future of Man," the Nobel
physicist William Shockley presented his maverick
views on eugenics. He suggested that, since human
intelligence was largely genetically determined,
scientists should embark on serious efforts to raise
the human race's collective brainpower by various
means, including sterilization, cloning, and
artificial insemination.
Also evolving during this
time were new concepts of scientific and medical
ethics and the possible roles that professional
ethicists and theologians should play in the
critical debates over the new standards of right and
wrong. Most of the savants of the 1960's espoused a
then-fashionable ethical relativism, which raised
concerns among some theologians and philosophers
about the wisdom of allowing the scientific elite to
develop policies outside the constraints of
traditional ethical principles.
Some theologians,
such as the Christian ethicist Paul Ramsey,
persisted in proposing distinctly theological
principles and values to guide such deliberations.
Others, especially philosophers of the
reigning "analytic" school in America and Britain,
proposed that secular philosophical principles
should serve as the sole guidelines for public
policy. Some in that group, such as James Gustafson
of Emory University, argued for trying to reach a
"consensus" of society on medical ethics, rather
than looking to traditional norms.
The result was the
secularization of both theology and philosophy for
public policy purposes. For example, Reed College in
Portland, Oregon, sponsored a conference in 1966
titled, "The Sanctity of Life." It included a
lecture by the sociologist Edward Shils titled, "The
Secular Meaning of Sanctity of Life." Daniel
Callahan, later to found The Hastings Center, a
leading bioethics think tank, pressed for
formulation of a new normative medical ethic that
would be influenced solely by secular moral
philosophy. Most agreed with Gustafson's proposal
that "consensus" would be the method of achieving
that formulation. This sort of thinking would become
a major characteristic of the new field of bioethics
yet to come.
The Think Tanks
As the 1970's approached, the debates and their
participants moved from conferences at universities
to permanent think tanks. Callahan and William
Gaylin set up The Hastings Center outside New York
City in 1969. There, such pioneers of bioethics as
Dubos, Ramsey, Gustafson, Renee Fox, Arthur Caplan,
Robert Veatch, and even Mondale and the liberal
Catholic journalist Peter Steinfels held forth.
The
first "research groups" at The Hastings Center
addressed such issues as death and dying, behavior
control, genetic engineering, genetic counseling,
population control, and the conjunction of ethics
and public policy. In 1971, the first volume of the
Hastings Center Report appeared, a publication that
was to become a bible of secular bioethics, which
was just then acquiring its name. As Albert Jonsen,
a pioneer of bioethics who taught at the University
of Washington, noted in a 1998 book, The Birth of
Bioethics (Oxford), "The index of the Hastings
Center Report over the next years defined the range
of topics that were becoming bioethics and
constituted a roll call of the authors who would
become its proponents."
Under the leadership of the
Dutch fetal-development researcher Andre Hellegers,
the Kennedy Institute of Ethics (originally named
the Kennedy Center for the Study of Human
Reproduction and Development) opened at Georgetown
University in 1971. Its mission was to study the
ethical issues involved in reproductive research in
a Catholic context, if a generally liberal Catholic
one. Such scholars as the Rev. Richard McCormick, S.
J., a Catholic bioethicist of decidedly liberal
views, and later, Edmond Pellegrino, a more
traditionalist Catholic bioethicist, worked out of
the Kennedy Institute at various times.
Also in the
1970's, a Protestant counterpart to the Kennedy
Institute opened, the Institute on Human Values,
sponsored by the United Ministries in Education, a
partnership of the Methodist and Presbyterian
churches. Many of the conference participants of the
1960's and the think-tank scholars of the 1970's
were among those testifying before the Mondale and
Kennedy congressional hearings that led to the
passage of the National Research Act of 1974. Many
in this army of secular scholars also sat on the
committee that later issued the Belmont Report with
its three principles. Those scholars were the
midwives at the formal "birth of bioethics" that the
1974 act mandated. They were also the first formally
designated "bioethicists."
The three Belmont
principles -- respect for persons, justice, and
beneficence -- were supposedly derived from the
works of leading secular moral philosophers of the
18th, 19th, and 20th centuries, chiefly Kant, John
Stuart Mill, and John Rawls, a highly influential
Harvard University philosopher whose 1971 book, A
Theory of Justice, was a blueprint for certain
radically egalitarian legal and social theories of
the 1970's, such as affirmative action and wealth
redistribution.
Predictably, the new bioethics was
anything but systematic. The commission selectively
took bits and pieces from different and
contradictory ethical theories and rolled them up
into one ball. Furthermore, each of the three
principles of the new bioethics was prima facie: no
one principle could overrule either of the other two.
In dealing with real-life medical and scientific
problems, the bioethicist was supposed to
simultaneously reconcile the values of all three
principles.
Inevitably, theoretical cracks began to
form in the very foundation of this new bioethics
theory. In fact, because the Belmont principles were
derived from bits and pieces of fundamentally
contradictory philosophical systems, the result was
theoretical chaos. More problematically, when people
tried to apply the new theory to real patients in
medical and research settings, it didn't work
because, practically speaking, there was no way to
resolve the inherent conflicts among the three
principles.
Furthermore, while the Belmont Report
gave a nod to the traditional Hippocratic
understanding of beneficence as doing good for the
patient, it also included a second definition of
beneficence that was essentially utilitarian: doing
"good for society at large." The report even
declared that citizens have a "strong moral
obligation" to take part in experimental research
for the greater good of society. This obviously
contradicts the Hippocratic interpretation of
beneficence, and it also violates time-honored
international guidelines, such as the Nuremberg Code
and the Declaration of Helsinki, which bar
physicians from experimenting on their patients
unless it is for the patient's benefit.
The second
Belmont principle, justice, was also defined along
utilitarian lines, in terms of "fairness":
allocating the benefits and burdens of research
fairly across the social spectrum. This
Rawls-influenced definition is very different from
the classic Aristotelian definition of justice as
treating people fairly as individuals. Even the third
Belmont principle, respect for persons, ended up
serving utilitarian goals. Respect for persons is
supposed to be a Kantian notion, in which respect
for the individual is absolute.
But the Belmont
Report blurred that idea with Mill's utilitarian
views of personal autonomy. In Mill's view, only
"persons" -- that is, fully conscious, rational
adults capable of acting autonomously -- are defined
as moral agents with moral responsibilities.
However, those incapable of acting autonomously --
infants, the comatose, those with Alzheimer's --
became defined in bioethics theory as non-moral
agents and thus "non-persons" with no rights. It is
only a short step from this kind of reasoning to
that underlying Princeton ethicist Peter Singer's
"preference" utilitarianism, in which animals have
more rights than young children.
Breaking Ranks
Eventually, discontent began to smolder within the
brave new discipline. Even the founders of bioethics
have recently admitted that the Belmont principles
present grave problems as guidelines for physicians
and researchers. The Hastings Center's Callahan has
baldly conceded that after 25 years, bioethics
simply has not worked. The University of
Washington's Jonsen recently wrote that principlism
should now be regarded as "a sick patient in need of
a thorough diagnosis and prognosis." Gilbert
Meilaender, a Christian medical ethicist at
Valparaiso University, has noted "how easily the
[reality and worth of the individual human] soul can
be lost in bioethics."
Another reason for the
theoretical and practical chaos surrounding
bioethics these days is that almost anyone can be a
bioethicist. Few "professional" bioethics experts,
the doctors, researchers, and lawyers who sit on
hospital and government bioethics committees, have
academic degrees in the discipline, and even for
those few who do, there is no uniform or
standardized curriculum. Most professors of
bioethics don't know the historical and
philosophical roots of the subject they teach; the
courses vary from institution to institution; there
are no local, state, or national boards of
examination; and there are no real professional
standards. There is not even a professional code of
ethics for bioethicists.
Because of these criticisms,
many bioethicists now prefer to say that their field
is more a form of "public discourse" than an
academic discipline, a kind of "consensus ethics"
arrived at by democratic discussion rather than
formal principles. The problem with this line of
reasoning is that the ethical principles used in the
"discourse" are still the same bioethics
principles, and those who typically reach the
"consensus" are the bioethicists themselves, not the
patients, their families, or society at large, so
the process is not exactly neutral or democratic.
And if bioethics is just a "discourse," then why are
its practitioners regarded as "ethics
experts"?
Furthermore, the three principles of
bioethics -- respect for persons (now almost always
referred to as autonomy), justice, and beneficence
-- still pop up everywhere in the literature of a
myriad of public policymaking bodies with
jurisdiction over medical, social, and political
decisions.
The President's Commission for the Study
of Ethical Problems in Medicine and Biomedical and
Behavioral Research, created by Congress in 1978,
has cited the three principles in presumably
definitive reports on such wide-ranging
medical-moral issues as the definition of death;
informed consent; genetic screening and counseling;
regional and class differences in the availability
of health care; the use of life-sustaining
treatment; privacy and confidentiality; genetic
engineering; compensation for injured subjects;
whistle-blowing in research; and guidelines for the
institutional review boards set up by universities
for research on human subjects.
The National
Institutes of Health's 1988 Human Fetal Tissue
Transplant Conference, its 1994 Human Embryo
Research Panel, and the National Bioethics Advisory
Commission set up by President Bill Clinton in 1995
also cite the Belmont principles as norms in their
determinations of what is "ethical." The list of
bioethics-based government regulations and policies
is endless.
The principles of bioethics now also
pervade the "ethics" of other academic disciplines,
such as engineering and business. Many colleges,
universities, and medical schools require a course
in bioethics in order to graduate. Bioethics has
also heavily influenced legal and media ethics and
is even taught in high schools. Furthermore, the
principles of bioethics themselves have led to
radical consequences.
Peter Singer is teaching
undergraduates at Princeton that the killing of even
healthy human infants can be "ethical." Or ponder
the thought of Tristram Engelhardt, a bioethicist on
the faculty of the Baylor College of Medicine:
"Persons in the strict sense are moral agents who
are self-conscious, rational, and capable of free
choice and of having interests. This includes not
only normal adult humans, but possibly
extraterrestrials with similar powers."
Bioethicist
Dan Wikler of the World Health Organization has
declared, "The state of a nation's gene pool should
be subject to governmental policies rather than left
to the whim of individuals."
As bioethics supplants
traditional ethics before our very eyes, few seem to
question its underlying premises. But we should know
it for what it is: a form of extreme utilitarianism
in both its theoretical and practical forms. It
bears no relation to the patient-centered
Hippocratic ethics that for nearly 2,500 years
required physicians to treat every human being in
their care as worthy of respect, no matter how sick
or small or weak or disabled. It certainly bears no
relation to Catholic medical ethics, which continue
the Hippocratic tradition in light of church
teachings on moral law. And bioethics offers little
concrete guidance to physicians and scientists even
on its own terms. Perhaps one of these days, society
will come to grips with the moral and practical mess
that bioethics has created and come up with
something to replace it. This time society will
perhaps not rely so heavily on the self-proclaimed
scientific and moral experts.