Student Blogger - Think of the Consequences
Imagine you are the newly appointed director of the Centers for Disease Control. You are reviewing a report suggesting that a new vaccine is likely to prevent thousands of deaths in an impending viral epidemic. However, vaccines like this one have had problems in the past, causing many recipients to become severely ill. You call your two most trusted advisors, Ben and Kent, into your office, explain your problem, and say, “Look, fellas, I’m not going to play politics here. I want to do the right thing.”
Ben quickly chimes in, “This is an easy one. You only need to look at the total social benefits weighed against the total social costs. Avoiding thousands of deaths sounds like a lot of benefit, so I say release the vaccine.”
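Ben’s advice is, at bottom, an expected-value comparison. As a purely illustrative sketch of that aggregative reasoning (every number below is hypothetical, invented for the example, and not drawn from the post):

```python
# Hypothetical illustration of Ben's aggregative (cost-benefit) reasoning.
# All figures are invented for the sake of the example.

p_outbreak = 0.8            # assumed chance the epidemic materializes
deaths_prevented = 5000     # deaths the vaccine would avert in an outbreak
p_severe_reaction = 0.001   # assumed chance a recipient is severely harmed
recipients = 1_000_000      # people who would be vaccinated
harm_weight = 0.01          # weight of one severe illness relative to one death

expected_benefit = p_outbreak * deaths_prevented
expected_cost = p_severe_reaction * recipients * harm_weight

# Ben releases the vaccine iff expected benefits exceed expected costs.
release = expected_benefit > expected_cost
print(expected_benefit, expected_cost, release)
```

Everything interesting in Ben’s position is hidden in the inputs: where the probabilities come from, and how a severe illness is weighed against a death, which is exactly the kind of valuation Kent resists.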
Kent interjects, “Wait. Don’t you remember your Hippocratic Oath? You have a moral duty to do no harm. That has to be your guiding principle.”
This exchange leaves you in a bit of a muddle. Should you start by aggregating harms and benefits or should you start from an acknowledgment of a duty to respect the inviolability of each individual affected by your policy choice? What did Kent actually advise you to do?
Stanford Law School’s Barbara Fried is skeptical that any answer derived from a duty-based theory like Kent’s can get off the ground. Fried presented her argument from a forthcoming book on risk regulation at the Law and Philosophy Workshop.
Fried starts by noting that Kent-type answers tend to assess the violation of a purported duty only after that duty has been breached. If you toss a box from the second story of a warehouse and it harmlessly hits the ground, no harm, no foul. If, however, your box-toss injures an innocent pedestrian on the sidewalk below, you have harmed another person through your action, and you have violated your duty not to harm. Under this interpretation of moral duty, how can one decide whether it is morally permissible to engage in box-tossing before seeing where the box lands?

Fried suggests that there are two ways to view such a duty from a before-the-fact perspective. The first interpretation is that the duty requires you not to engage in any activity that could possibly cause harm (a sort of moral one-percent doctrine). Alternatively, it might require only that you refrain from activity that is certain to cause harm. While the first suggestion would foreclose almost any activity (driving your car creates some risk to other drivers and pedestrians; watching television imposes some minimal risk of fire to your neighbors), the second would allow almost anything (even insanely reckless actions might fail to harm anyone). Any intermediate account of the duty not to harm in before-the-fact situations of uncertainty (due care or proportionality, for example) implements an aggregative approach, in which the costs and benefits of a planned course of action are compared and balanced. By itself, the principle that one has a duty not to harm, where the resulting fact of harm determines the moral permissibility of the conduct, cannot provide robust guidance for deciding how to act.
What may be apparent already is that Fried operates from an intuition that it cannot be useful to morally condemn someone for an action we would not have condemned at the time she decided to act. Moral judgments are of no value if they operate in a way that prevents them from guiding our conduct. [This drives much of her later critique of various extra-rational feelings that feed the intuitions driving the canon of hypothetical scenarios intended to display the problems with an aggregative framework.] While this concern for guidance does limit her critique to the realm of non-intentional harms (neither crimes nor intentional torts fall within its scope), this area includes the vast majority of decisions people of good faith face every day.
One line of questioning at the workshop focused on the fact that a purely aggregative decision-making process can lead to results that challenge our moral intuitions as well. When setting a speed limit, we might find it inappropriate to let Bill Gates pay to drive far faster than everyone else, or we might find it objectionable to allow a faster speed limit in a poor neighborhood than in a wealthy one. We find these results objectionable because they do not give enough weight to a duty of equal treatment. So, while a duty-based process, without more, might fail to adequately guide decisions, a purely aggregative process, without any reference to duties owed to individuals, might fail to adequately guide decision-making in the same way.
Fried agreed with the last point, but distinguished between the choice to aggregate and the choice of a particular aggregative procedure. Where legitimate interests conflict with one another (the right to walk through town with no risk of being hit by a vehicle, and the right to drive one’s car to the supermarket), Fried suggested that we generally have no choice but to balance those interests against each other in some form (for Fried, this balancing just is aggregation). But how we balance them—what aggregative procedure we use—depends on how we value them. For most people, fairness concerns would rule out, for example, an unmediated willingness-to-pay criterion like the one that drives the problems in the two speed-limit examples above. People’s notions of fairness will almost certainly involve duty-based principles (as one participant noted, even Bentham required equality). Fried’s point is that drawing on duties to fashion an aggregative procedure is dramatically different from invoking them to deny the moral permissibility of any form of aggregation.
Related to this line of questioning was the problem of attempting to aggregate harms that will simply not convert to a common currency (that is, harms that are deeply incommensurable). This is certainly a deep problem, but it seems to cut against both duty- and aggregation-grounded theories. If one has a duty not to harm, then ostensibly this applies to causing death as much as to causing a person to lose her job. Both are harms. Perhaps the duty not to harm can be further refined to place these harms in some lexical order, but if this is possible, it would seem that we are admitting the necessity (if not the possibility) of some type of rough-and-ready relative valuation. If instead, we say the two harms are incommensurable and can never be compared when opposed, then we are left without a decision-guiding principle under either theory.
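The “lexical order” move mentioned above can be made concrete: rank harm types by severity and compare outcomes category by category, consulting a lower-ranked harm only to break ties. A minimal sketch, where the harm categories and counts are entirely hypothetical:

```python
# Hypothetical sketch of lexically ordered harms: outcomes are compared on
# deaths first; job losses matter only when death counts are tied.

def harm_key(outcome):
    # Python compares tuples lexicographically, which mirrors the
    # lexical priority of harms the post describes.
    return (outcome["deaths"], outcome["jobs_lost"])

option_a = {"deaths": 2, "jobs_lost": 0}
option_b = {"deaths": 1, "jobs_lost": 10_000}

# Under lexical ordering, option_b is less bad despite far more job losses.
less_bad = min(option_a, option_b, key=harm_key)
print(less_bad)
```

Note that even this scheme presupposes that deaths and job losses can be ranked against each other at all, which is precisely the rough-and-ready relative valuation the paragraph above says a refined duty would have to admit.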
In another vein, one participant suggested that we might actually be concerned with a duty of reciprocity. Fried suggested that reciprocity was certainly important when we consider the compensation of harmed parties. But compensation can be usefully conceived as a second-stage concern, distinct from the first stage, where we determine the permissibility of a certain species of conduct. For example, we might create a strict liability regime where we expend no energy assigning blame for some conduct and simply require actors to compensate those they harm through their permissible conduct. This is a distributional concern that might also create incentives to behave in a socially beneficial way by forcing actors to internalize the costs of their activities. The way our compensation regimes affect incentives is a fully valid concern for policymakers, and our tort system often uses victim compensation in this way (the negligence standard is one example, where liability and wrongfulness coincide), but compensation is not inevitably tied to the question of the moral permissibility of risky conduct.
For Fried, to oppose the use of any aggregative procedure (as we might interpret Kent to be advising) is to commit oneself to a one-percent doctrine: if bad consequences make the conduct that produced them wrongful after the fact, then before the fact any risk of harm forbids acting. Policymakers’ knowledge that they will be blamed in the event of any high-visibility bad outcome short-circuits reasonable policy decision-making under risk. If there is a way to make a non-aggregative risk regulation policy operational, Fried doesn’t think she’s heard it yet.
Next time: 11/23 Professor Eric Posner’s “Human Welfare, Not Human Rights”