"That's fallacious."
In debates, these callouts are ubiquitous. In controversial discourse on topics such as abortion, the death penalty, or even veganism, pointing to "ad hominem" and "appeals to emotion" is a common tool. If you can use logic to point out a contradiction in your opponent's ethical stance, you've effectively won over the audience.
For example, if we are debating veganism and I try to maintain all of the following points:
- Causing unnecessary suffering is unethical
- Eating factory farmed meat causes unnecessary suffering
- Eating factory farmed meat is not unethical
You'd likely call out the logical inconsistency and even disprove me with propositional logic.
Let:
- S = "Causing unnecessary suffering"
- M = "Eating factory farmed meat"
- U = "Is unethical"
Premise 1 asserts: S → U
Premise 2 asserts: M → S
Premise 3 asserts: ¬(M → U)
By transitivity of premises 1 and 2, we now conclude M → U.
This conclusion directly contradicts ¬(M → U) from premise 3, so this disproves my argument.
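If you want to see the inconsistency mechanically, here is a minimal sketch of my own (not part of the original argument) that treats S, M, and U as ordinary true/false propositions and brute-forces every assignment, checking whether all three premises can hold at once:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material implication: "a -> b" is false only when a is true and b is false.
    return (not a) or b

# Enumerate every truth assignment to S, M, U and keep the ones
# where all three premises hold at the same time.
consistent_worlds = [
    (S, M, U)
    for S, M, U in product([True, False], repeat=3)
    if implies(S, U)          # Premise 1: S -> U
    and implies(M, S)         # Premise 2: M -> S
    and not implies(M, U)     # Premise 3: not (M -> U)
]

print(consistent_worlds)  # prints [] -- no assignment satisfies all three premises
```

The empty list is the formal version of the callout: if these sentences really were truth claims, at least one of them would have to go.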
Or does it?
This essay will explore attempts in philosophy to create objective ethical truths, the history of emotivism, and the nature of moral argument.
Ethical Frameworks
The foundation of ethics is found in the question "What should we do?"
One of the most popular ways to answer that question is with deontology. Deontology determines the morality of an action by asking "did they act within duties and principles?" Instead of focusing on whether the consequences of the action are desirable or not, deontological ethics looks at following rules. Simplifying heavily, the key principle is
Act only according to a principle where you'd be okay with everyone following it.
Duties are rules that, when broken, are always morally wrong. For example:
- Do not murder
- Do not lie
- Keep your promises
In deontology, even if breaking a rule leads to a "better" result, it's immoral regardless. Deontology is incredibly appealing because it gives you clarity. If you have to make a decision, you avoid calculating possible scenarios and act within simple rules. One of the most powerful attributes of deontology is its consistency. The rules apply to everyone all the time. You can't pick and choose which rules to follow, and there is comfort in knowing that your actions are considered "objectively right".
However, the glaring issues are easy to see. The strength of simply following clear rules becomes a weakness when following a rule produces an undesirable consequence. For example, suppose you follow the duty "I shall not lie" unconditionally, and you find yourself harboring an innocent family of refugees hiding from a tyrannical regime. A soldier knocks on your door and asks, "Are there any people hiding here?" By duty, you must answer truthfully, and if you do, you expose the family. As a deontologist, you'd directly cause their capture, and likely their death.
Hearing this common argument, my first thought was "well, of course following broadly worded rules would result in situations like these." Why couldn't we just amend the duties to be more specifically worded, such as "I shall not lie, unless it saves lives"? But in that case, you've crossed into utilitarianism, because now the outcome matters.
Utilitarianism abandons the duty/principle based philosophy. In short, the central idea is
Act to maximize overall utility.
Utility is the overall well-being produced by an action. Whether it be pleasure, happiness, satisfaction, or the avoidance of something displeasurable, the ends justify the means. Originally, the question was "What would maximize utility right now?" This mindset is called act utilitarianism.
Problems emerge pretty quickly. Imagine that you own a subway system. Each person who approaches your station weighs two choices: paying or evading the fare. Reasonably, every single one could say, "If I evade the fare, I maximize utility because I save a lot (the entire fare), while the subway owner only loses a little (one rider out of thousands)."
If every single rider made this decision, no one would pay your fare. The cause of your subway's downfall is that utility in this situation depends on coordinated behavior. Each individual act appears to gain the rider more than it costs you, but the aggregated effect of thousands of riders is a collapsing system.
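To make the aggregation effect concrete, here is a toy calculation of my own; the fare, ridership, and operating cost are invented numbers purely for illustration:

```python
FARE = 3.00                 # illustrative fare per ride
RIDERS = 10_000             # illustrative number of daily riders
OPERATING_COST = 25_000.00  # illustrative daily cost of keeping the system running

def describe(evaders: int) -> None:
    revenue = FARE * (RIDERS - evaders)
    status = "keeps running" if revenue >= OPERATING_COST else "collapses"
    print(f"{evaders:>6} evaders: revenue ${revenue:>9,.2f} -> the system {status}")

describe(1)       # one evader: they save $3.00, revenue barely moves
describe(RIDERS)  # everyone reasons the same way: revenue hits $0 and the system collapses
```

Each act of evasion looks utility-positive on its own; it is only when everyone applies the same reasoning that the losses swamp the gains.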
From this problem, rule utilitarianism emerges. Instead of asking "what would maximize utility right now?", you ask "what rule, if everyone followed it, would maximize overall utility?" It's similar to deontology, but the rules themselves can include exceptions justified by consequences. For example, this solves the fare problem: if everyone followed the rule "don't pay the fare", overall utility would fall. It can also solve deontology's problem by building outcomes into the rules, like "I shall not lie, unless it saves lives."
This obviously has its own issues as well. What about scenarios where breaking the rule would maximize utility? For example, "don't torture anyone" sounds like a rule that would maximize overall utility. But if you needed to torture a bomber to reveal the location of bombs he planted throughout the city, which would save the lives of thousands, why wouldn't you? The justification for the rule is that it maximizes utility most of the time, but breaking the rule would maximize utility in this specific moment. If you follow this logic, rule utilitarianism collapses back into act utilitarianism, since both appeal to the same ultimate justification: maximizing utility.
Ethical Emotivism
Ethical emotivism is distinct from the frameworks previously mentioned because it doesn't attempt to answer "what should we do?" Instead, it is a claim about the nature of moral statements themselves. David Hume lays the groundwork for this idea in his 1751 book An Enquiry Concerning the Principles of Morals:
"While we are ignorant whether a man were aggressor or not, how can we determine whether the person who killed him be criminal or innocent? But after every circumstance, every relation is known, the understanding has no further room to operate, nor any object on which it could employ itself. The approbation or blame which then ensues, cannot be the work of the judgement, but of the heart; and is not a speculative proposition or affirmation, but an active feeling or sentiment."
Hume states that in a homicide, reason gives you the facts, but facts alone cannot produce moral judgement. Moral judgement is inherently emotion-driven; therefore, ethics is rooted in sentiment and not reason.
A.J. Ayer, in his book Language, Truth and Logic, pushes that idea into something closer to what emotivism looks like today.
"The presence of an ethical symbol in a proposition adds nothing to its factual content. Thus if I say to someone, "You acted wrongly in stealing that money," I am not stating anything more than if I had simply said, "You stole that money." In adding that this action is wrong I am not making any further statement about it. I am simply evincing my moral disapproval of it."
Emotivism rejects the idea that an action can be objectively "morally wrong". Moral statements have no truth value; they cannot be true or false. Saying "Murder is wrong" expresses something equivalent to "Boo murder!"
C.L. Stevenson's work expands upon Ayer's ideas. On top of a moral statement being an expression of emotion, it's also an attempt to influence. Therefore "Murder is wrong" to Stevenson expresses not only "I disapprove of murder" but "You should share my disapproval" as well.
Additionally, Stevenson explains why moral disagreement makes sense. If moral claims were purely expressions of emotion, it would seem pointless to argue about them. Stevenson answers that moral disagreement involves both attitudes and beliefs: two people may share values but disagree on facts, share facts but disagree on values, or disagree on both. With that in mind, moral arguments aren't pointless; they're attempts to appeal to shared facts and values in order to influence others.
Stevenson then distinguishes two ways to analyze moral statements.
The first pattern has two parts: a declaration of the speaker's attitude and a command to mirror it. The command, or imperative, cannot be proved; any "reason" merely elaborates on or supports it. For example, in the moral statement "I condemn factory farming, you should do so too", "you should do so too" is the imperative. If you ask the speaker "why should I do so too", they would likely answer with something along the lines of "because it causes suffering in sentient beings". The "reason" they gave neither proves nor disproves the imperative; it only makes it more compelling, depending on how much you care about suffering in sentient beings.
The second pattern of analysis is about rules and principles. Instead of interpreting a moral statement as "I hate this, you should hate it too", it's justified like "I hate this because it has property P", where P is some general attribute of an action, like "increases overall happiness", "causes unnecessary suffering", or "violates consent". In this interpretation, the speaker isn't judging the action directly but evaluating it according to a general rule. For example, saying "Abortion is wrong" might mean "Abortion kills a potential life". Unpacking a second-pattern statement leads back to a first-pattern one: "I disapprove of things that kill potential life, you should do so as well."
With these analyses in mind, Stevenson categorizes moral argument into three main methods.
The first method of argument is logical consistency. If an imperative is backed by "reasons", you check for contradiction. For example, if someone makes three statements:
- "All participants of war are unethical"
- "My nation is a participant of war"
- "My nation is not unethical"
One of these statements must be false; the speaker must retract one or else be guilty of logical inconsistency.
The second is the rational psychological method: correcting factual errors rather than pointing out logical inconsistencies. Modifying the previous example, say that an observer sees citizens of my nation gathering weapons. They hold that all participants of war are unethical and, concluding from the weapon-gathering behavior, that my nation is a participant of war. If I reveal that the citizens were gathering weapons for a theatre production and that we were not going to war, that would be a rational psychological form of moral argumentation.
The third is the non-rational psychological method: using tactics to psychologically influence the audience. Instead of logic, you shift the listener's attitude with emotionally charged language. With the war example, a non-rational psychological form of moral argumentation could look something like "Imagine if all nations started collecting weapons! Clearly this nation is betraying the collective peace by preparing for violence!"
Very often, people consider the third method of argumentation to be dishonest, as it relies on influence through emotion instead of logic. However, I'd like to reconsider the system of logic itself.
Epistemology
Observe the Flat Earth community. They have a sizeable number of members, who all believe that the Earth is flat. You, from outside the community, can watch this rejection of science and call it out as "false". How do you know that it's false? Empirical observation.
In argument, a common platform must be present for any meaningful discussion to occur. If one person believes that the world they see around them is "true", but the other believes that a malicious demon is manipulating their senses and trapping them inside a Matrix-like system, the conversation is practically dead. When neither can agree on what is "true", discourse is useless. Engaging in conversation with others implies that you accept the assumption that whatever is empirically observed is effectively true. Not metaphysically true, but "true" within the bounds of the discussion.
Back to the Flat Earth example: the community is making a truth claim, "The Earth is flat." With empirical observation, we can disprove that claim and establish as fact that the Earth is roughly spherical. With rationality, we can argue and objectively disprove the Flat Earth community. Rationality, in this context, is a system of logic that, given one empirical observation, can accurately predict another.
Over two thousand years ago, Eratosthenes observed that at noon in Syene, the Sun was straight overhead and cast no shadows. But at the same moment in Alexandria, a stick cast a shadow. The difference in shadows could only have happened if the surface was curved. By measuring the shadow's angle and the distance between the cities, he calculated Earth's circumference with an error of only 1-2%. Today, we can verify this reasoning directly, using satellites and lasers to map Earth's geometry.
Eratosthenes's logic was "correct" because he took an observation (the different shadow angles) and predicted another (that the Earth is spherical with a particular circumference). The only true measure of rationality is how well it grounds itself in our observations.
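The arithmetic is easy to reproduce. The sketch below uses the commonly cited figures rather than anything from this essay: a shadow angle of about 7.2° at Alexandria, roughly 5,000 stadia between the cities, and a stadion of about 157.5 m (one of several modern estimates):

```python
ANGLE_DEG = 7.2             # shadow angle at Alexandria (commonly cited figure)
DISTANCE_STADIA = 5_000     # Syene-to-Alexandria distance (commonly cited figure)
METERS_PER_STADION = 157.5  # one modern estimate of the stadion's length

# The 7.2 degree angle is the slice of a full circle separating the two cities,
# so the whole circumference is the distance scaled up by 360 / angle.
circumference_stadia = DISTANCE_STADIA * 360 / ANGLE_DEG
circumference_km = circumference_stadia * METERS_PER_STADION / 1000

print(f"{circumference_stadia:,.0f} stadia ≈ {circumference_km:,.0f} km")
# 250,000 stadia ≈ 39,375 km, versus a modern value of roughly 40,075 km
```

With that stadion length the error comes out under 2%, which is the sense in which his reasoning "grounds itself" in observations we can still make today.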
Syllogisms, perhaps the most famous form of rational argument, come in three parts:
- Major premise: A general rule
- Minor premise: A specific case
- Conclusion: What must follow if both premises are true
If the logic is valid but the conclusion is not true, then at least one of the premises must be untrue. If both premises are true, then the conclusion must be true. Syllogisms work rationally because throughout history, being able to correctly predict future events has been evolutionarily advantageous. Imagine an early human scavenging for food:
- All humans who eat red mushrooms die shortly after
- I am a human
- Therefore, if I eat this red mushroom, I will die shortly after
Realistically, early humans used inductive probability-based logic instead of syllogisms. This example is only for the purposes of this essay.
This was crucial to passing your genetic information on. If you were an early mammal that couldn't detect danger after observing the two premises, you'd last one generation and your genome would end there.
Epistemological argument has an important purpose: observable facts should be universal. If someone outright denies a claim that you believe to be fact, it threatens the integrity of your worldview. Correcting false claims supports the idea of a centralized, objective truth.
Even if the Flat Earthers ignore your attempts to persuade them with science, you can still walk away knowing for a fact that the Earth is not flat. If they conceded that "were the Earth flat, ships would grow smaller uniformly as they get farther from shore" and observed with their own eyes that "ships disappear bottom half first", yet still argued that "the Earth is flat", their argument is illogical. It is impossible for all three statements to be true.
Inconsistency
Of Stevenson's three methods of moral argumentation, the first, pointing out logical inconsistency, seems to be a concrete method of disproof, just like in the Flat Earther example.
This is not the case. With the Flat Earthers, you can point to physically observable facts to support your truth claims. Under emotivism, we've established that moral statements aren't truth claims at all. Returning to the veganism example from the beginning, the structure makes it look as if there is a contradiction. The premises
- Causing unnecessary suffering is unethical
- Eating factory farmed meat causes unnecessary suffering
look like they logically necessitate the statement "Eating factory farmed meat is unethical." But what do they actually prove? The structure mirrors the Flat Earth disproof, but since none of the statements are truth claims, the conclusion isn't forced.
Since each premise is an emotional expression, the only reason the conclusion "Eating factory farmed meat is not unethical" would ever have to be abandoned is not that it's "objectively false" but that some people find inconsistency psychologically uncomfortable.
This makes Stevenson's first method of moral argument fundamentally the same as the others: not an epistemological tool for truth seeking, but another method of influencing the emotional attitudes of others.
The important distinction to make in argument is that epistemological claims can be checked against observation. Ethical claims cannot. Pointing out a moral contradiction is a way to convince others that one argument is less trustworthy when neither argument is truer than the other.
This is an early essay and the presentation is rough. I'm publishing this to maintain momentum but future essays will develop more nuance. Take this essay with a colossal grain of salt.
People treat logical fallacies in ethical debate the same way they treat irrationality in epistemological discussion. I don't have a problem with this being used in rhetoric, but it isn't inherently more honest than purely using emotional language to influence the audience. To conclude, C.L. Stevenson's three methods of moral argumentation are all tools to influence a final emotional attitude.
That is all, and until next time, I am out.