Resolved: It is morally permissible to kill one innocent person to save the lives of more innocent people.

For the affirmative, especially one adopting a utilitarian framework, this resolution raises several problems.
1. The problem of application.
Let's borrow a line of thinking from rule utilitarianism. If the resolution were adopted as a moral principle, what sorts of behavior could we expect as a result? Perhaps the greatest harm is not to dignity, or to life, but to the very fabric of society, as individuals, assured that such a tradeoff is morally permissible, take actions they might otherwise have avoided. Combine this with human fallibility, and we have a recipe for vigilantism, tragic accidents, and all kinds of slippery slopes.
2. The problem of agency.
a. Is the agent of action a person, a government, a society, an ideal moral agent, or some other entity?
b. Is the morality of the action different if the agent is different? For example, does a hospital's ethics board, which sometimes has to make similar decisions, bear a different level of moral culpability than an individual faced with the same choice? What about a democratically elected government?
c. Is the "problem of application" different if the agent is different?
3. The problem of calculation.
This problem can be thrown in a utilitarian's face: how do we calculate the relative value of different lives? The resolution gives no word as to the length or quality of the lives being weighed in the moral scales. For instance, a utilitarian might say that trading a child's life for two retirees' is a bad bargain. The child still has (potentially) eighty years of happiness left, while the two gerontocrats share a good twenty between them. However, since all we know is the quantity of people--one versus an indeterminate many--all hedonic bets are off. The negative cannot let the affirmative presume that utilitarian math is simple.
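For the curious, here is that hedonic calculus as a toy calculation (every number is an illustrative assumption; the resolution itself supplies none of them):

    # A toy hedonic calculus: score each option by expected remaining
    # life-years of happiness. All figures are illustrative assumptions.
    child_years = 80            # assumed years left for the child
    retiree_years = [10, 10]    # assumed years left for each retiree

    save_child = child_years            # 80 life-years preserved
    save_retirees = sum(retiree_years)  # 20 life-years preserved

    print(f"Save the child:    {save_child} life-years")
    print(f"Save the retirees: {save_retirees} life-years")
    # By raw headcount, the two retirees win (2 > 1); by life-years,
    # the child wins (80 > 20). The resolution gives only the headcount.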
4. The problem of context.
Does context reign supreme over the simple question of a moral calculus? Do different contexts, in other words, lead us to different answers? Consider, for example, capital punishment, which may not come to mind on a first reading of the resolution, but which has its own connection to it.
Suppose that one out of every thousand persons executed is actually innocent, and presume that this is the inescapable, hard fact of any imperfect human institution. Suppose also that capital punishment has a real deterrent effect--that executing a hundred murderers each year saves at least two lives, in two ways: it prevents one free citizen from going through with a planned killing, and it stops one potential reoffender from murdering a second victim after escape, pardon, or release on parole. (These estimates are conservative on purpose; according to some theorists, each execution may deter an average of eighteen murders.)
Thus, our moral calculus: every ten years, on average, one innocent person is killed by the state, as is necessary to prevent the deaths of twenty other innocents.
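To make the bookkeeping explicit, here is the arithmetic behind that calculus in a few lines of Python (the inputs are the stipulations above, not empirical claims):

    # Capital punishment calculus, using the stipulated numbers.
    executions_per_year = 100
    innocent_rate = 1 / 1000    # one innocent per thousand executed
    lives_saved_per_year = 2    # one deterred killing + one prevented reoffense
    years = 10

    innocents_executed = executions_per_year * innocent_rate * years  # 1.0
    innocents_saved = lives_saved_per_year * years                    # 20

    print(f"Over {years} years: {innocents_executed:.0f} innocent executed, "
          f"{innocents_saved} innocents saved")
    # One innocent killed for every twenty innocents saved: exactly the
    # tradeoff the resolution asks the affirmative to endorse.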
Must an affirmative thus admit that capital punishment is morally permissible under the resolution? Does it matter that the state does not intend to kill an innocent, but merely accepts the killing as a probable consequence of its deterrent strategy?
In what other ways might the resolution be contextualized? Embryonic stem cell research? Abortion? Preventing genocide?
5. The problem of scale.
Affirming the resolution means, potentially, arguing that one innocent should be killed to prevent the deaths of two innocents; after all, the word "more" has no inherent scale, and means at minimum two. However, is this limited reading fair? An affirmative might argue that societies consider these sorts of calculations only when two conditions hold: there is a situation that brings us to the moral brink (such as the "ticking time bomb scenario," mentioned in comments to this post), and there is no conceivable superior option. In a sense, this is a way to skirt the problem of application raised above, and to avoid sliding down the slope to moral anarchy. However, the negative can hold a hard, literal line: the resolution specifies no conditions (see the problem of context, above), and any attempt to make the reading more "reasonable" or general is a conditional affirmation.
6. The problems I haven't considered.
If you think of another, raise it in the comments, or post a solution to one of the problems above.
Regarding the problems of calculation and application, it seems that the easiest way to defend utilitarianism would be to limit the scope to the number of people on each side. Would it be possible to do this by using a dual criterion of Act Utilitarianism and the Veil of Ignorance? It seems like this would make it only logical to save the several, since there is a higher chance of being part of that group than of being the individual. Also, would the Veil of Ignorance provide a form of deontological justification for the Aff?
Also, regarding the problem of application, one could argue that the resolution only deals with cases where it is certain to save more lives. (This is shown by the absence of any qualifying words, such as "probably.") This certainty removes the possibility of error, making the identity of the agent irrelevant. It could also make the slippery slope arguments of rule utilitarianism non-topical.
I'm not yet sure about using the Veil of Ignorance with this resolution. After all, if we're talking about losing one life to save more than one, your odds, even behind the veil, are better--at least 2/3--that you're not the innocent who dies. The math is incontrovertible.
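For the record, here's a quick sketch of those odds (a toy calculation, assuming each person involved is equally likely, behind the veil, to be the one killed):

    # Veil of ignorance: one person killed, k >= 2 saved, so n = k + 1
    # people total, each (by assumption) equally likely to be the victim.
    for saved in [2, 5, 100]:
        n = saved + 1
        print(f"{saved} saved: P(not the victim) = {(n - 1) / n:.3f}")
    # Even at the minimum reading ("more" = 2), survival odds are 2/3,
    # and they only improve as the number saved grows.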
Regarding your defense in the second paragraph, it's definitely a reasonable reading of the resolution. One could also raise the issue of intent versus results, although, paradoxically, that usually ends up leading to a deontological standard.
Who's going to post Hobbesian cases to his blog soon?!?
This guy.
@okiedebater and Jim-
I thought Rawls used the Veil of Ignorance to show utilitarianism is wrong, in that utilitarianism is promoted by those who would benefit from the transaction. Behind the veil, you could end up as the one who gets killed, rather than as one of the beneficiaries who would obviously want the one to be killed.
For the question of agency, how could I craft an observation specifying that the agent who does the killing is neither the one who is killed nor one of the beneficiaries of the killing?
Ann O'nymous, I don't know that you'd have to worry about ruling out noble suicide. Certainly, one can argue that, say, tackling a suicide bomber off of a subway platform, sending the two of you to your death, is a noble and morally permissible way to save the lives of those still standing on the platform.
The problem for the affirmative, though, is that the resolution doesn't rule out homicide. Promoting self-sacrifice on its own is conditional affirmation.
I just don't see how the observations you mentioned can be made, without reading far too much into the resolution.
Does the affirmative just have to prove that in one situation it is okay to kill one innocent to save more innocents, or do they have to prove that it is always okay to kill the one?
I'm pretty sure that the affirmative only has to prove it to be morally permissible most of the time: it is against the rules to place a categorical burden on the aff, but a purely conditional aff wouldn't prove the resolution true.