ETHICS IN THE SCIENCE CURRICULUM

Jeffrey Kovac



In 1993 the distinguished physicist Freeman J. Dyson published an essay in The American Scholar entitled "Science in Trouble."1 In it he argued that science is in trouble on three levels: personal, local and global. On the personal level, the trouble centers on the issue of scientific integrity. On the local level, the trouble comes from public fear of the dangers that scientific experiments may pose to public health. On the global level, Dyson is concerned about the increasingly troubled relationship between science and society. Dyson's thoughtful essay is just one of many recent articles and books addressing ethical questions in science. This increasing concern with scientific ethics is part of what Michael Davis has termed "the ethics boom."2 In this article I will try to provide both a conceptual framework and a practical strategy for incorporating the teaching of scientific ethics into the curriculum. I take the view that ethics is an integral part of science and not a supplementary topic to be treated as an optional sidebar in a textbook.

We must distinguish three uses of the word "ethics." The first and most common meaning is ordinary morality, those more or less universal rules of behavior that we expect every rational person to obey. Ordinary morality governs our day-to-day behavior and includes such maxims as "don't kill" and "don't lie." The rules (and permissible exceptions) of ordinary morality are learned from parents, ministers, school teachers and others who influence us as we grow into adulthood. We can assume that most students enter college with a well-developed sense of ordinary morality.

The second meaning of ethics is ethical theory, a branch of philosophy. Ethical theory is the attempt to put ordinary morality into a systematic framework. It is the content of most courses in ethics taught in philosophy departments, where the ideas of the major ethical thinkers such as Aristotle, Kant and Mill are studied.

The third meaning, the one we are concerned with in this paper, is professional ethics. Professional ethics comprises the special rules of conduct adhered to by those engaged in pursuits ordinarily called professions, such as law, medicine, engineering and science. Professional ethics is specific: legal ethics applies only to lawyers (and no one else); scientific ethics applies only to scientists. Professional ethics is consistent with ordinary morality, but goes beyond it. Professional ethics governs the interactions among professionals and between professionals and society. In many cases it requires a higher standard of conduct than is expected of ordinary people. To understand professional ethics, it is necessary to study carefully the concept of a profession.

A profession derives from two bargains or contracts: one internal and one external. The internal bargain governs the interactions among members of the profession, while the external bargain defines the relationship of the profession to society. (Dyson's three levels of trouble in science -- personal, local and global -- are another way of describing these bargains.) Both, however, are based on an ideal of service around which the profession is organized.3 For lawyers the ideal is justice under law. For physicians the ideal is curing the sick, protecting patients from disease and easing the pain of the dying. As Michael Davis has argued, these moral ideals go beyond the requirements of ordinary morality. Since science is not a client-oriented profession like law or medicine, the underlying moral ideal is more abstract. A powerful definition of the moral ideal of science can be found in Jacob Bronowski's book Science and Human Values: the "habit of truth."4 Science is the dispassionate search for the understanding of nature, for what John Ziman has called "reliable knowledge."5 Further, scientific truth is considered to be of intrinsic value, independent of its applicability. While science does lead to useful products and inventions, such applications are only secondary to the search for what Einstein called "the secrets of the old one."6

To attain the ideal of service, the profession must agree on an internal code of practice and negotiate its relationship with society. The internal bargain consists of several parts: standards of education and training, a formal or informal certification or licensing procedure, and a code of practice that might include a formal code of ethics. In science the standards of training have evolved over the years, but currently an earned doctorate from a reputable university is the usual requirement. Scientists without a doctorate, however, can be recognized after publication of credible research in refereed journals. Formal certification is not common in science, so recognition comes from accomplishments rather than a professional license.

There have been a number of attempts to formulate the internal code of practice of science. Perhaps the most famous is that of Robert K. Merton.7 Merton identified four principles of scientific practice:

(1) Universalism: Truth claims must be evaluated using preestablished impersonal criteria.

(2) Communism: (Other authors have preferred the term communality.) This is the obligation of public disclosure of scientific findings. In Ziman's terminology, science is public knowledge.8

(3) Disinterestedness: The advancement of science is more important than the personal interests of the individual scientist. (Steven Shapin has recently written a fascinating historical study of the importance of disinterestedness in the development of early science as exemplified by the role of Robert Boyle in the Royal Society of London.9)

(4) Organized skepticism: All scientific truth is provisional and must be judged based only on the evidence at hand. Scientific conclusions are always open to question. (This is similar to Popper's famous principle of falsifiability.10)

Merton's list has been modified and expanded by later writers to include such ideas as objectivity, honesty, tolerance, doubt of certitude, selflessness, individualism, rationality and emotional neutrality.11 While there is no consensus on a detailed list of norms, the moral ideal underlying the norms is widely accepted by scientists. A nice statement of this ideal was given by Richard Feynman:

That is the idea we all hope you have learned in studying science in school -- we never explicitly say what this is, but just hope you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty -- a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid -- not only what you think is right about it; other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked -- to make sure the other fellow can tell they have been eliminated. Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can -- if you know anything at all wrong, or possibly wrong -- to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as all that agree with it. In summary, the idea is to try to give all the information to help others to judge the value of your contribution, not just the information that leads to judgment in one particular direction or another.12

In addition to these norms there are more specific research practices that can vary depending on the discipline. Some of the criteria that distinguish "good physics" are quite different in kind from those that distinguish "good biology." Learning the techniques and standards of research in a particular discipline is a major part of the graduate education of a scientist. It is what Thomas Kuhn has called normal science.13

It is important to recognize that the internal code of science has evolved over time. While the broad principles of the code go back to the early days of the Royal Society of London, specific details of scientific practice have changed significantly since Boyle and Newton.14 Therefore, recent charges of scientific fraud directed at historical personages should be regarded with some skepticism.

The external bargain addresses the relationship of the profession to society. In general, the profession lays claim to a body of specialized knowledge and skill not easily attainable by the majority of people. In return for a monopoly on the practice of those skills, the profession agrees to use them to serve society and to render professional judgment when asked. For some professions such as law, medicine and engineering the bargain with society is highly structured; parts are even written into law. For science the agreement is more informal. A brief historical sketch will help to clarify the relationship of science and society.

Perhaps the first agreement between science and government came with the establishment of the Royal Society of London by Charles II. The Royal Society was given the right to publish without censorship and to pursue the new specialty of natural philosophy. In return, the Royal Society was to avoid the study of politics, morality and religion. In the words of Robert Hooke, the "Business and Design" of the society was "To improve the knowledge of natural things, and all useful Arts, Manufactures, Mechanics, Practices, Engynes, and Inventions by Experiments (not meddling with Divinity, Metaphysics, Moralls, Politicks, Grammar, Rhetoric or Logick)."15 The gentlemen who founded the Royal Society established the standards for scientific practice. The central question was: what should be considered scientific truth? Robert Boyle was a central figure in this development.16

There was an interesting tension in the early Royal Society between what we would now call pure and applied science, a tension exemplified by Boyle and Hooke. While Boyle was the paradigm of the Christian gentleman scientist, Hooke, the curator of experiments, was considered the "greatest mechanick this day in the world."17 Boyle's disinterested gentility contrasted sharply with Hooke's protection of his patent rights. This same tension between the scientific ideal of open communication and personal economic gain remains an important contemporary issue in professional ethics in science.

While the practical aspects of science have always been important, the so-called German model of pure research was the dominant theme in the development of science in the U.S. Science in the universities and research institutes was pursued for its own sake. Practical applications were certain to follow as the secrets of nature were revealed. American science could point to outstanding examples of the practical utility of pure science, for example the work of Irving Langmuir at General Electric and John Bardeen at Bell Laboratories.

The Second World War changed the nature of research in America forever. The Manhattan Project and the development of radar showed how science, with generous government support, could make significant accomplishments in a short time. The new bargain between science and society was outlined in two postwar reports: Vannevar Bush's Science: The Endless Frontier (1945)18 and John R. Steelman's Science and Public Policy (1947).19 These reports led to our current system of research funding centered on the National Science Foundation.

The essence of this bargain can be summarized in a few words: "Government promises to fund the basic science that peer reviewers find most worthy of support, and scientists promise in return that the research will be performed well and honestly and will provide a steady stream of discoveries that can be translated into new products, medicines or weapons."20 While there has been considerable recent discussion of the suitability of this form of the contract,21 essentially all practicing scientists would agree with this statement of the bargain between science and society.

Both the internal and external bargains, like all human bargains, depend crucially on trust. Members of the profession must trust each other to follow the professional code and society must trust that scientists are performing their work with integrity, particularly as it affects public health and safety. Dyson's three levels of trouble in science all stem from the breakdown of trust. A particularly good statement of the importance of trust in science was given by Arnold S. Relman, Editor of the New England Journal of Medicine:

It seems paradoxical that scientific research, in many ways the most questioning and skeptical of human activities, should be dependent on personal trust. But the fact is that without trust the research enterprise could not function.22

Steven Shapin discusses the role of trust in science in A Social History of Truth.23

Trust among scientists and between science and society has eroded over the past ten years. Reports of real or alleged scientific misconduct appear regularly in the popular and scientific press.24 While there is controversy over how widespread ethical problems in science are,25 concern in the scientific community is significant. In 1992 the National Academy of Sciences issued a comprehensive report entitled Responsible Science: Ensuring the Integrity of the Research Process.26 This report contains a number of recommendations to ensure integrity in the research process. From my perspective the most important is Recommendation 2:

Scientists and research institutions should integrate into their curricula educational programs that foster faculty and student awareness of concerns related to the integrity of the research process.27

In other words, we need to teach scientific ethics.

Traditionally, scientific ethics has been taught informally, by example or in the context of actual issues that arise in the conduct of research. The quotation from Richard Feynman given earlier is an illustration. In the smaller, slower-paced scientific enterprise of earlier times this informal method worked well. Contemporary science is bigger, faster-paced and filled with more complicated issues. Research groups are larger, so faculty advisors and students have less personal contact. Economic pressures are enormous. Contemporary science is expensive, so the grant-writing process is hugely important and time-consuming. In addition, there are increasing opportunities and pressures for the commercialization of research, particularly in the emerging biotechnology industry. The scientific work force is much more diverse, so it is not safe to assume that all scientists share a common value base. Media and government scrutiny of science is intense. All these factors suggest that a systematic approach to the teaching of scientific ethics is needed.

When I discuss the teaching of scientific ethics with my colleagues, I hear two major objections: (1) Professional ethics is best taught in the research group, and (2) You can't teach ethics; either people are moral or they are not. To the first objection I would respond that the research group may be the best place to teach scientific ethics but that one cannot guarantee either that it will be taught there or that it will be taught well. I was lucky to have research advisors and mentors who had high standards of professional conduct and provided me with excellent role models. Not everyone is so fortunate. Such an important aspect of professional education cannot be left to chance. The second objection reveals a confusion between ordinary morality and professional ethics. I would agree that a fundamentally immoral person cannot be transformed by instruction in professional ethics. On the other hand, even if students come to college as fairly sophisticated moral decision makers in their day-to-day lives, they probably are not skilled at making decisions regarding professional ethics. As I argued above, the two are different.

Science is filled with ethical decisions. Many decisions that on the surface seem purely technical also involve professional ethics. Some examples include:

(1) Discarding a data point: Data points are discarded in many experimental investigations for legitimate reasons such as contamination of the sample, improper functioning of an instrument or procedural errors. An important component of professional judgment in science is the ability to know when an experiment has worked properly. At the frontiers of research the signal may not be easily distinguishable from the noise.28 On the other hand, there are a number of historical examples of scientists whose expectations of the outcome of an investigation caused them to misinterpret data, either discarding relevant data or retaining incorrect measurements.29 (A sketch of one statistical rejection rule follows this list.)

(2) Writing a scientific article or report: The scientific article is not a dispassionate recording of the details of an investigation; it is an argument.30,31 An author is faced with many decisions. Who should be a co-author? Which results should be included? How should they be presented? How should the inevitable "loose ends" be treated? How should potential objections be handled? What prior work should be cited? These are all questions in professional ethics.

(3) Laboratory practices and safety: Experimental science is full of danger. A working scientist has to make decisions concerning the potential hazards of a particular experiment and adopt reasonable safety precautions. Are the potential health and safety risks of a laboratory procedure acceptable? These decisions have an ethical component.
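
One way to make example (1) concrete is a statistical rejection rule. Chauvenet's criterion, which is not mentioned above and is offered here only as an illustration, discards a measurement only when a sample of the given size would be expected to contain fewer than half a point that deviant. A minimal Python sketch, assuming roughly normal scatter (the function name and example data are hypothetical):

    import math

    def chauvenet_flags(data):
        """Return the points that fail Chauvenet's criterion.

        A point is flagged when the expected number of equally deviant
        points in a sample of this size is less than 0.5. Flagged points
        should be examined, reported and justified, never silently deleted.
        """
        n = len(data)
        mean = sum(data) / n
        std = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
        flagged = []
        for x in data:
            z = abs(x - mean) / std
            # Two-tailed probability of a deviation at least this large.
            prob = math.erfc(z / math.sqrt(2))
            if n * prob < 0.5:
                flagged.append(x)
        return flagged

    # Ten replicate measurements with one suspicious value.
    readings = [9.79, 9.81, 9.80, 9.82, 9.78, 9.81, 9.80, 9.79, 10.42, 9.80]
    print(chauvenet_flags(readings))  # [10.42]

Whatever one thinks of Chauvenet's particular threshold, the design choice carries the ethical point: the function returns the suspect values for the scientist to examine and disclose, rather than handing back a silently "cleaned" data set.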

I have only given three examples; there are many others. Questions of professional ethics arise quite naturally in science courses. Ordinarily, they are treated only as technical issues; the ethical component is ignored. It is important for science faculty to broaden their perspectives and explicitly recognize that decisions in science are both technical and ethical questions.

There are at least four benefits to the teaching of scientific ethics.32

(1) Moral sensitivity: Students will be better able to identify ethical issues and, as professionals, less likely to make a poor decision out of ignorance.

(2) Knowledge of relevant standards: As ethical questions are considered students will learn the relevant professional standards. Since many of those standards in science are unwritten, discussion of ethical questions will help clarify them.

(3) Skill in ethical decision-making: Moral decision making must be learned. One way that parents teach their children to make moral decisions is by allowing them to make decisions and then reviewing their consequences. Another way is to point out moral questions when they arise and discuss possible courses of action. As children grow and mature they are able to deal with more and more complicated ethical situations. The aspiring professional needs the same kind of training in professional ethics.

(4) Improved will-power: Often the most ethical course of action is difficult; peer and societal pressure can be immense. By education and example faculty can help students develop the courage to act ethically when they want to.

While it is possible to isolate the teaching of professional ethics in a separate course, it is better to integrate ethics into the curriculum, embedding it in the context where the issues naturally arise.33 There are a number of ways to do this. First is the "ethics moment." While developing the regular course material, the instructor can use professional ethics as part of the motivation for understanding a concept or technique. A simple example is the concept of significant figures in measurement, an early topic in many introductory science courses. Students are taught the rules for the proper reporting and interpretation of experimental measurements. It is easy to point out that reporting too many significant figures is a form of lying and that it might have adverse consequences: other workers could rely on the incorrectly claimed precision in a crucial design that might then fail. (A brief sketch of precision-matched reporting follows this paragraph.) A variant on the "ethics moment" is the general use of history. The story of how a scientific idea was developed can provide examples of both good and bad professional behavior. Good materials on the history of science are increasingly available.
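
To make the significant-figures example concrete, here is a minimal Python sketch of the common convention that a reported value should claim no more decimal places than its uncertainty supports; the function name and the one-significant-figure default are illustrative choices, not a standard interface:

    import math

    def report_measurement(value, uncertainty, sig_figs=1):
        """Format a measurement so its stated precision matches its
        uncertainty: round the uncertainty to sig_figs significant
        figures, then round the value to the same decimal place.
        """
        if uncertainty <= 0:
            raise ValueError("uncertainty must be positive")
        # Decimal place of the last significant digit of the uncertainty.
        decimals = sig_figs - 1 - math.floor(math.log10(uncertainty))
        value = round(value, decimals)
        uncertainty = round(uncertainty, decimals)
        width = max(decimals, 0)
        return f"{value:.{width}f} +/- {uncertainty:.{width}f}"

    # A mean of 9.80665 with a standard error of 0.0347 is honestly
    # reported as 9.81 +/- 0.03, not as 9.80665 +/- 0.0347.
    print(report_measurement(9.80665, 0.0347))

Printing every digit the calculator produces asserts a precision the experiment never achieved, which is exactly the small dishonesty the "ethics moment" is meant to expose.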

A second method of integrating ethics into the science curriculum is the "ethics problem set." Students can be asked to identify and comment on questions of professional ethics that arise in assigned problems. For example, a typical assignment in a course in organic chemistry is the design of a synthetic route to a particular product. Students could be asked to identify safety and environmental issues in their proposed synthesis and discuss how to handle them.

Perhaps the best method is the use of case studies. Case studies are hypothetical, but realistic, situations that raise issues in professional ethics. Collections of cases in science and engineering have recently become available.34 The best cases are ones that students can identify with because they can see the ethical intricacies. Other people's moral problems are easy to solve; your own are always more complicated. Case studies are best used in class discussion where a variety of perspectives can be heard, but they can also be used as a basis for writing assignments. Case studies can be integrated into the curriculum or provide the basis for a separate seminar in professional ethics.

Whatever the pedagogical methods, the key is to make professional ethics a prominent theme in science education. Students must understand that questions of professional ethics arise every day in the life of a working scientist. They must learn how to recognize and deal with ethical questions. Ethical decision making can and must be learned. The alternative is the continuing decline of trust both within the profession and in society. Science is too important to contemporary society for us to allow this to happen.


Acknowledgments: I am grateful to the Camille and Henry Dreyfus Foundation and to the College of Arts and Sciences of the University of Tennessee, Knoxville, for financial support of my work in scientific ethics. Many of the ideas in this article have been developed and clarified in discussions with Michael Davis, Don Gotterbarn, Roger Jones, Priscilla A. Frase, Donna Walter Sherwood and Susan Davis Kovac.


REFERENCES

1. F. J. Dyson, "Science in Trouble," The American Scholar, 1993, 62, 513-522.

2. M. Davis, "The Ethics Boom: What and Why." The Centennial Review, 1990, 34, 163-186.

3. M. Davis, "The Moral Authority of a Professional Code," in J. R. Pennock and J. W. Chapman, eds., Authority Revisited: Nomos XXIX, New York and London: New York University Press, 1987.

4. J. Bronowski, Science and Human Values, Revised Edition, New York: Harper Torchbooks, 1956.

5. J. Ziman, Reliable Knowledge, Cambridge: Cambridge University Press, 1978.

6. quoted in Einstein: A Centenary Volume, A. P. French, ed., Cambridge: Harvard University Press, 1979, p. 275.

7. R. K. Merton, "The Normative Structure of Science," in R. K. Merton, The Sociology of Science, Chicago: University of Chicago Press, 1973.

8. J. M. Ziman, Public Knowledge, Cambridge: Cambridge University Press, 1968.

9. S. Shapin, A Social History of Truth, Chicago: University of Chicago Press, 1994.

10. K. Popper, The Logic of Scientific Discovery, New York: Harper and Row, 1965.

11. A. Cournand and M. Meyer, "The Scientist's Code," Minerva, 1976, 14, 79-96; B. Barber, Science and the Social Order, New York: Free Press, 1952; H. Zuckerman, "Deviant Behavior and Social Control," in E. Sagarin, ed., Deviance and Social Control, Beverly Hills: Sage, 1977.

12. R. P. Feynman, Surely You're Joking, Mr. Feynman!, New York: Norton, 1985, p. 341.

13. T. S. Kuhn, The Structure of Scientific Revolutions, Chicago: University of Chicago Press, 1962.

14. G. Holton, "On Doing One's Damnedest: The Evolution of Trust in Scientific Findings," in D. H. Guston and K. Keniston, eds., The Fragile Contract, Cambridge: MIT Press, 1994. See also S. Shapin, note 9.

15. R. N. Proctor, Value-Free Science, Cambridge: Harvard University Press, 1991.

16. S. Shapin, see note 9.

17. S. Shapin, "Who Was Robert Hooke?" in M. Hunter and S. Schaffer, eds., Robert Hooke: New Studies, Woodbridge, Suffolk: Boydell Press, 1989, pp. 253-285.

18. V. Bush, Science: The Endless Frontier, Washington, DC: National Science Foundation, 1990 (1945).

19. J. R. Steelman, Science and Public Policy, The President's Scientific Research Board, Washington, DC: U.S. Government Printing Office, 1947.

20. D. H. Guston and K. Keniston, "Introduction: The Social Contract for Science," in D. H. Guston and K. Keniston, eds., The Fragile Contract: University Science and the Federal Government, Cambridge: MIT Press, 1994, p. 2.

21. For example, G. E. Brown, Jr., "The Objectivity Crisis," American Journal of Physics, 1992, 60, 779-781.

22. quoted as the epigraph in C. Djerassi, Cantor's Dilemma, New York: Penguin, 1989.

23. S. Shapin, see reference 9.

24. See, for example, W. Broad and N. Wade, Betrayers of the Truth: Fraud and Deceit in the Halls of Science, New York: Simon and Schuster, 1982; M. C. LaFollette, Stealing Into Print, Berkeley and Los Angeles: University of California Press, 1992; R. Bell, Impure Science, New York: J. Wiley and Sons, 1992; P. S. Zurer, "Misconduct in Research," Chemical and Engineering News, 1987, 65, 10-17.

25. For two contrary positions see J. P. Swazey, M. S. Anderson and K. S. Lewis, "Ethical Problems in Academic Research," American Scientist, 1993, 81, 542-553; and D. Goodstein, "Scientific Fraud," The American Scholar, 1991, 60, 505-515.

26. Panel on Scientific Responsibility and the Conduct of Research, Responsible Science, Washington, DC: National Academy Press, 1992 (Volume I), 1993 (Volume II).

27. Responsible Science, Volume I, p. 13, see reference 26.

28. G. Holton, "Sub-electrons, presuppositions, and the Millikan-Ehrenhaft dispute," in G. Holton, The Scientific Imagination, Cambridge: Cambridge University Press, 1978.

29. For example, F. Franks, Polywater, Cambridge: MIT Press, 1981; F. Close, Too Hot to Handle: The Race for Cold Fusion, Princeton: Princeton University Press, 1991; M. J. Nye, "N-Rays: An episode in the history and psychology of science," Historical Studies in the Physical Sciences, 1980, 11, 125-156.

30. R. Hoffmann, "Under the Surface of the Chemical Article," Angew. Chem. Int. Ed. Engl., 1988, 27, 1593.

31. D. Locke, Science as Writing, New Haven: Yale University Press, 1992.

32. M. Davis, "Ethics in Engineering and Technology: What to Teach and Why," Journal of the Tennessee Academy of Science, 1995, 70, 55-57.

33. M. Davis, "Ethics Across the Curriculum: Teaching Professional Responsibility in Technical Courses," Teaching Philosophy, 1993, 16, 205-235; B. P. Coppola and D. H. Smith, "A Case for Ethics," Journal of Chemical Education, 1996, 73, 33-34; J. Kovac, "Scientific Ethics in Chemical Education," Journal of Chemical Education, in press.

34. For example, J. Kovac, The Ethical Chemist, Knoxville: The University of Tennessee, 1995; C. E. Harris, Jr., M. S. Pritchard, and M. J. Rabins, Engineering Ethics: Concepts and Cases, Belmont, CA: Wadsworth, 1995; R. L. Penslar, Research Ethics: Cases and Materials, Bloomington: Indiana University Press, 1995; M. J. Bebeau, Moral Reasoning in Scientific Research, Bloomington, IN: Poynter Center for the Study of Ethics and American Institutions, 1995.


Copyright Jeffrey Kovac
