Sep 11, 2008

moral agency and the permissible killing of an innocent

A reader writes,
I was wondering if I could pose a question that doesn't seem to be getting talked about very much right now...

The resolution for September/October doesn't specify an agent, one who will be doing the killing of the innocent. From this, can you fiat anything concerning who the agent might be? Is there an agent or is this an absolute kind of deal? Because if it means anyone can kill an innocent to (in their eyes) save more innocents, then I see some serious problems for Aff. Is it possible to say that the government, or society, or something to that effect is the agent? Then that opens up several more VC options than just utilitarianism. If the government is the agent, you could use the harm principle, the veil of ignorance, those kinds of things...
I don't think either side can presume a specific agent of action. The verb phrase "to kill" is simply not agent-specific. However, since one isn't specified, the Neg can raise that particular fact, either in CX or in a Resolutional Analysis, and use it to launch several lines of attack.

1. One attack depends on what is known as "role morality." When a person adopts a specific role--doctor, soldier, lawyer--they also adopt a specific code of ethics. In certain situations, for example, lawyers are required to keep a client's secrets, even if that information could, say, convict someone else. (This is called "attorney-client privilege.") The Neg can attack by claiming that the resolution cannot be applied across all (or even most) roles, and the Aff is stuck conditionally affirming.

2. Governments, according to some theorists, act according to different moral rules--or are not beholden to moral rules at all. Since the resolution doesn't specify an agent, the affirmative might be caught conditionally affirming if their V/C doesn't apply to a governmental entity. As an example, consider a situation like 9/11, where the president has ordered fighter jets scrambled to intercept an airliner full of innocents, an airliner about to be used as a weapon against (potentially) even more innocents. Should the president have the authority to order the plane shot down--to kill innocents to save more? Is the president's moral calculus different from any average citizen's? If not, why not? More generally, consider a program of inoculation against measles. If 1 in a billion people will die from the injection, but it will save 30,000 children from death, is the government permitted to make the vaccination mandatory?

3. One Neg argument could be based on a slippery slope scenario: if we affirm, we grant citizens or governments carte blanche when it comes to terrifying moral decisions. Even if the moral rules are the same for governments and regular folks, though, each kind of agent has its own check against sliding down the slope.

a. Governments codify rules in the form of laws, and (potentially) have a deliberative nature that mitigates the slippery slope. In fact, we usually expect a governmental check against individuals taking others' lives into their own hands.

b. On the other hand, individuals, unlike governments, feel the sting of guilt, which has its own preventive effect. Even if the Aff argues that killing an innocent to save others is morally permissible, the choice to act in that situation is no less difficult for the person making it--and will likely haunt that person for years to come. As a result, people will never make that decision willy-nilly. (If the Neg attempts to bring up a depraved sociopath, the Affirmative should gently remind all present that the Aff is required to affirm the resolution as a "general principle," not to defend it against any conceivable exception. Besides, sociopaths aren't bound by moral rules anyhow, so they're a problem on either side of the resolution.)

4. So, what's an Aff to do? As my reader suggests, it's important to consider whether your V/C truly can apply to all agents in the vast majority of situations where the resolution might be instantiated.

5. As an aside, rule utilitarianism is likely better at covering individuals and governments than act utilitarianism. I leave it to the reader to determine why that might be.

These are disjointed thoughts that could use refinement, or maybe a good bashing. Have at 'em in the comments.

3 comments:

Anonymous said...

just some arbitrary thoughts as I was reading through this:

1. good point for "role morality", but most people don't have these specialized roles that would make a difference (doc, lawyer, etc.), so I don't think the point holds

2. most judges (and debaters) would say the pres. is justified in doing that, which would lead the aff to say that a gov.'s moral calculus is the same as a citizen's
also...the problem w/ the measles ex. (imo) is that the death of the 1/billion is an unintended consequence, while the resolution implies intent (KILL)-this leads to Double Effect neg case

3. I don't understand this at all...

also...if u decide to bring up moral agency, how can u credibly contend that a certain group of agents is a majority??

Jim Anderson said...

1. It's not about whether "most people" have the role. The question really is, who, mostly, makes these kinds of decisions? Average citizens? I'd argue that in the vast majority of cases, it's people in specific roles, or members of governments. There won't be many--if any--times in your life when you'll have to choose the life of several innocents over one. But military commanders, doctors, and policymakers make those choices all the time.

Also, considering the government: its officers take an oath that (conceivably) binds them to obligations that standard moral agents do not have. So, whether the president's actions are judged from a role perspective or a government-actor perspective, they're judged differently nonetheless.

2. "Kill" does not imply intent. "Murder" implies intent. Depending on how the arguments shake down, I'd depend that point vigorously. You can be killed by a train without the train "intending" to kill you. You can be killed by a drug overdose without intending to kill yourself.

3. It's intended to rebut "slippery slope" objections--arguments along the lines of "if we affirm, we create a horrible world where innocents may die at any time, simply because people thought they were saving more lives." Or, worse, that the resolution devalues human dignity to the point of undermining morality as a whole.

Anonymous said...

I see...
one thing:
i guess the word intent was the wrong word to use there
when the resolution says kill...to save others, it is saying that the killing is a means to the end of saving the others--which by double effect is bad (right?)
which is why the measles ex. doesn't hold (and the simplest of the trolley problems)