Reed magazine, Spring 2006

Life in Venice

Should We or Shouldn’t We?

Arthur Caplan directs the Center for Bioethics at the University of Pennsylvania School of Medicine. Reed magazine asked him to reflect on the risks and challenges posed by current efforts to create artificial life in the laboratory.

Since no one has created artificial life yet, should we be evaluating the bioethical issues at this point? It is crucial to begin the debate now, even before knowing what specific entity might be made. Crossing into this area will be so startling, so momentous, and so socially unnerving that the prospect of doing so demands proactive ethical, theological, and scientific discussion.

Some bioethicists have raised concerns over the safety and environmental risks of creating modified life forms using genetic engineering, i.e., inserting DNA from one species into another, or combining two species to create a chimera. Are the same concerns present with artificial life research? The risks are mostly the same. Each raises monumental ethical challenges. But creating new life forms is a quantum leap psychologically from the other issues, even if those pose more real-world risks. And there are ethical questions besides safety. Should there be a social sign-off before the line is crossed in creating artificial life? Is the very act inherently one that ought to be off limits as too unnatural and too Promethean? What sort of access will individuals have to the plans and processes used? How can such work be controlled or regulated? Will patent law apply, at what point in the process, and why?

Since artificial life might be autonomous of human control once it is created, should precautions be taken to make sure that it does not harm human life? I'm not sure such precautions can be taken, but every effort must be made to ensure that artificial life forms cannot readily or easily spread into humans, plants, or animals. Using harmless viruses or microbes for various scientific purposes, as is done today, is a good model for what limits and restrictions ought to be placed on the creation of synthetic microorganisms.

Can a distinction be made between a top-down and a bottom-up approach to creating artificial life, in terms of potential risk? It is almost impossible to know which strategy would create the "riskiest" life forms. Philosophically, to strip down and rebuild a novel life form from existing parts, while amazing and creative, will not be seen as quite the achievement that creating a life form from scratch would be. This points to the interesting problem that the very definition of who has created a living thing will be part of the debate. Is making a synthetic virus creating a new life form or not, since viruses are seen as basically parasites?

Do scientists have an ethical obligation to exhaustively test an artificial life form's potential impact on the environment and existing life before employing it? Scientists should submit their plans to a group or body to review them. That group should have both scientific and ethical expertise. It should be national or even international, not local. And it should be transparent. That group should insist that early work be done in strict biological confinement until the properties and powers of new life forms are understood, as was done in the earliest days of genetic engineering of microorganisms after the 1975 Asilomar conference.

Are international agreements to limit research into artificial life, or subject it to peer review, likely to work? Right now we don’t really have any serious mechanism or forum to enforce international standards. In the United States, there is almost no accountability for those who want to undertake this type of research, if only private money is used.

Does the potential use of new life forms for medical purposes in the human body present unique challenges? Any introduction of viable new life forms into the human body creates special moral concern just by dint of their newness. I think the best analogy is xenografting, the use of animal parts in humans, where the risks of uncontrolled spread and possible contamination via contact, sex, or breathing have led to calls for severe restrictions on human application.

Creating an artificial life form could be seen as eliminating one more arena ascribed to God. Should the scientific community consider the ethical or social consequences of that? They had better. Right now, society is very wary of science in many nations, including the United States. If the public does not think scientists can be trusted or will act in a responsible manner, there could be a drying up of public funding and a huge regulatory backlash against science. If embryo research and cloning have anything to teach us, it is that the scientific community had best be fully prepared and engaged to talk to the public and laymen about these issues, lest there be a repeat of the public policy fiasco that followed the cloning of Dolly.

Might artificial life forms have rights, for instance, not to be used to remediate human-caused environmental problems, or not to be harmed or killed? No single-cell entities can have rights. Rights require self-awareness and the possibility of some form of mental life. Creating life forms to help solve human problems is absolutely ethical and commendable, but safety, risk, and cost need to be taken very, very seriously in deciding who makes what, and when, and where it is used.

If the release of artificial life into the environment were determined to pose a remote potential to endanger existing life–human or non-human–might that be grounds for halting the research worldwide? Yes, even a remote potential could be a reason not to proceed. The issue would be: What are the benefits? Just having a life form, and being the first to make it, would not justify much in the way of risk. So release would have to be done on the basis of a very important goal or purpose.

How should society balance the speculative potential benefit of artificial life (for instance, to cure disease or reverse global warming) against the speculative potential danger of widespread release into the environment? Speculative danger counts heavily until real benefits are very, very likely from the new experimentation. Most experimentation in health care, medicine, and environmental science does not work. The failure rate is enormous. So the burden is on those who would create risk–even remote speculative risk–to show the likely return from taking that risk.
