The term "critical thinking" gets thrown around a lot in schools, but children are never sat down and explicitly taught how to think. Logic needs to be introduced in primary school and reaffirmed throughout middle and high school (secondary school in the UK). Knowing how to think logically is far more important than knowing how to calculate the area of a circle, how volcanoes work, or how to use a Bunsen burner. And teaching it shouldn't be politically controversial, because there's an important distinction between telling kids what to think and teaching them how to think. Logic is all about how to think. That's something we should all want others to do well.
The point of teaching formal, symbolic logic starting at a young age is not so that kids, teens, and young adults become good at truth tables. The point is that they'll internalize logic like any other concept. The pattern-recognition part of their brain will automatically recognize valid arguments when they see them. It will also recognize invalid forms of argument and logical fallacies without consciously doing any heavy lifting. That's where the most value is in teaching logic.
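To make the truth-table idea concrete, here's a minimal Python sketch (the helper names `implies` and `is_valid` are my own illustrations, not standard library functions) that checks an argument form's validity by brute-forcing every truth assignment:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

def is_valid(premises, conclusion, num_vars):
    """A form is valid iff every truth assignment that makes all the
    premises true also makes the conclusion true."""
    for values in product([True, False], repeat=num_vars):
        if all(prem(*values) for prem in premises) and not conclusion(*values):
            return False  # found a counterexample row
    return True

# Modus ponens: from (P -> Q) and P, conclude Q.
modus_ponens = is_valid(
    premises=[lambda p, q: implies(p, q), lambda p, q: p],
    conclusion=lambda p, q: q,
    num_vars=2,
)
print(modus_ponens)  # True
```

This is exactly what filling out a truth table by hand does; the goal of early training is that the pattern becomes automatic rather than something you compute each time.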
When I studied philosophy in community college, I remember there was an art student. He had a great personality and was a very likeable person. Whenever he got called on to answer a question, though, he was never able to produce the right answer. It was clear to me that he had never learned how to think logically. I wondered what it must be like to be a young adult never having learned that. There are also plenty of functioning older adults out there who never learned how to think logically. To be clear, studying formal logic isn't a prerequisite for logical thought. What I find to be the case with nearly everyone without training in formal logic is that they have an intuitive sense of how to reason, but there are important pieces of the puzzle they're missing. That's what I'm going to focus on in this post: the things that those without experience in formal logic get confused about. In my posts, I try not to assume prior knowledge, so I'm going to explain a bit about logic before I get to those missing pieces. If you're already familiar with logic, feel free to skip ahead.
Logic is the study of rules of inference. Rules of inference allow you to draw conclusions based on premises. In other words, starting with a statement A, you can conclude statement B. For example, "the earth is round" is a true statement. Therefore "the earth is round or up is down" is also a true statement. In fact, I could replace the statement "up is down" with any proposition Z, and "the earth is round or Z" would still be true. I used the rule of inference called "addition" to draw my conclusion, so I'm guaranteed that it's true no matter what Z is. I can apply another rule of inference to get "humans have 3 legs, therefore either the earth is round or up is down". That is also a true statement. It sounds strange because the normal way of understanding "therefore" is as a causal relationship. In this context, it's a strictly logical implication, not a causal one: an implication with a true consequent is true regardless of its antecedent. Despite how strange it sounds, "humans have 3 legs, therefore either the earth is round or up is down" logically follows from "the earth is round".
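Both inferences above can be checked mechanically. Here's a minimal Python sketch (the `implies` helper and variable names are my own illustration of material implication, not from any library):

```python
def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

earth_is_round = True
up_is_down = False
humans_have_3_legs = False

# Addition (disjunction introduction): from a true statement,
# "that statement or Z" is true no matter what Z is.
disjunction = earth_is_round or up_is_down
print(disjunction)  # True

# Material implication: because the consequent is true, the whole
# implication is true regardless of the (false) antecedent.
print(implies(humans_have_3_legs, earth_is_round or up_is_down))  # True
```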
To test your skills in logic, I suggest trying out some logic puzzles such as Knights and Knaves. If you get really ambitious, you can try your hand at The Hardest Logic Puzzle Ever.
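For a taste of how such puzzles yield to systematic reasoning, here's a small Python sketch of one classic Knights and Knaves puzzle (the specific puzzle and names are my own illustration, not from the linked collections). Knights always tell the truth, knaves always lie, and we brute-force every possible assignment:

```python
from itertools import product

# Puzzle: A says, "Both of us are knaves."
# Keep only the assignments consistent with A's statement:
# the statement is true exactly when A is a knight.
solutions = []
for a_knight, b_knight in product([True, False], repeat=2):
    statement = (not a_knight) and (not b_knight)  # "both of us are knaves"
    if statement == a_knight:
        solutions.append((a_knight, b_knight))

print(solutions)  # [(False, True)] -> A is a knave, B is a knight
```

Note the unique solution: A can't be a knight (a knight can't truthfully call himself a knave), so A is a lying knave, which makes B a knight.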
Now that I've talked about what logic is, I want to talk about some of the important aspects of logic people commonly get confused about.
There are two ways to disprove an argument. The first is showing that one of the premises is false. The other is showing that the structure of the argument is invalid. People are used to thinking of arguments in terms of "arguments for" and "arguments against", which is why it's easy to get confused here. It's the attitude of "there are some good arguments for a proposition and some good arguments against it, and it's my job to weigh the pros and cons". But in logic, an argument is either sound or unsound. Soundness means that the premises are true and the form is valid. If the conclusion of an argument derives from valid rules of inference applied to the premises, then the only way to disprove the argument is to show one of the premises is false. If all the premises are true and the form is valid, then the argument is sound and the conclusion is true. There are no "arguments for" and "arguments against", no "maybe it's wrong some other way". There's no two ways about it. No ifs, ands, or buts. If an argument is sound, the conclusion necessarily follows.
A logical fallacy is an error in reasoning. It can be formal or informal. Formal fallacies have to do with the structure of an argument; if an argument has bad structure, it is invalid. Informal fallacies have to do with the content of an argument. In my experience, it's rarer for people to commit formal fallacies. This is because there are so many more ways to commit informal fallacies than formal ones. There are only a few ways to structure an argument improperly, but there are virtually endless ways to get the content wrong, since the content can be anything at all. Take a look at yourlogicalfallacyis.com. It's good to become familiar with informal fallacies by name and be able to call them out in real time. To challenge yourself, try doing that during a live presidential debate. There are so many logical fallacies in those that it's impossible to keep up, at least for me.
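Formal fallacies are easy to expose mechanically. Here's a short Python sketch (the `implies` helper is my own illustration) that finds the counterexample row for affirming the consequent, the invalid form "if P then Q; Q; therefore P":

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Search for rows where both premises hold but the conclusion fails.
counterexamples = [
    (p, q) for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]
print(counterexamples)  # [(False, True)]: premises true, conclusion false
```

One counterexample row is all it takes: the form is invalid no matter what content you pour into P and Q.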
The thing people get confused about when they're unfamiliar with logical fallacies is that they think fallacies are a minor problem for an argument, similar to the "arguments for" and "arguments against" I talked about earlier. They see the fallacy as the "argument against" part. That's completely the wrong way to think about logical fallacies. The presence of a single logical fallacy in an argument means that argument is toast. A logical fallacy is not a "counterpoint" to an argument; it fully invalidates the argument. An entirely new argument is needed to prove the conclusion.
It's important that you get it right if you do call out a fallacy. I often see people calling out fallacies that aren't really there. The tendency by amateur logicians to call out fallacies that aren't there might actually be just as prevalent as the tendency for amateur debaters to commit logical fallacies. That's why practice at recognizing fallacies is key. I'm not going to mention every logical fallacy, just the ones I perceive as the most common. I'll start with the fallacy fallacy.
Sometimes people think invalidating an argument by pointing out a logical fallacy disproves its conclusion. This is known as the fallacy fallacy. A conclusion is like a destination you want to reach. Premises are where you begin. And an argument is the pathway from the premises to the conclusion. There are many different paths you can take from origin to destination. Just because one path doesn't work, that doesn't mean other paths can't. In other words, true statements can be defended with false logic. Perhaps the argument is bad because the premises are faulty; in that case, you need to find alternative premises to make your argument. The other case is that the logic is invalid: the form is wrong. In that case, you can keep your premises, but you need to fix the form. In the worst case, your argument is unsalvageable and you need to use different premises and different rules of inference to get to your conclusion. But just because you can't make an argument for a conclusion doesn't mean the conclusion is false. Even if no one on earth can make a sound argument for a conclusion, that doesn't mean the conclusion is false.
The burden of proof is the obligation to supply evidence for a claim. The reason it's "guilty or not guilty" instead of "guilty or innocent" is that the prosecution bears the burden of proving guilt. You are innocent until proven guilty; the null hypothesis is innocence. But the concept of the burden of proof applies far outside the courtroom. It's important in philosophy, and it often gets misused. The shifting-the-burden-of-proof fallacy occurs when someone makes a claim and then, when you demand evidence, demands that you prove the opposite. See the gumball analogy for further explanation.
In some cases, it may not even be possible to provide evidence to disprove a claim, but that doesn't mean the claim is true. See the Church of the Flying Spaghetti Monster and Russell's Teapot. In debates about the existence of god, shifting the burden of proof is an extremely common fallacy committed by theists: "You can't prove god doesn't exist!" Crucially, the burden of proof lies on the one making the claim. If I claim "There is a god", I have the burden of proving it. If I claim "There are no gods", then I have the burden to prove that. If I claim "There are probably gods", then I have the burden of proving that there are probably gods. If I claim "It's possible for a god to exist", then I have to somehow prove that it's possible, that there's a greater-than-zero chance of it. So on and so forth for every claim.
The term "evidence" in this context isn't limited to hard, physical evidence. In The Simulation Argument, Nick Bostrom argues that at least one of three propositions is very likely true, one of which is that we are almost certainly living in a simulation, despite not referencing any direct physical evidence of a simulated universe. It would be hard to say what direct evidence of a simulated universe would even look like. His paper doesn't depend on many external observable facts about the physical universe either. The assumptions he does rely on are fairly uncontroversial, which makes his strong result all the more surprising. It just goes to show there are many ways to meet the burden of proof for a claim, not all of them relying on hard physical evidence.
There are several ways people get confused over the argumentum ad hominem. The ad hominem fallacy is a logical fallacy where you attempt to refute someone's argument by attacking their character. If you attack someone's character, that might harm their credibility. But, a person's credibility has nothing to do with the logical soundness of their argument. Soundness of an argument depends only upon the truth of the premises and the validity of the argument. I'm not saying credibility isn't important. It is. Credibility may influence your willingness to believe claims made by someone, but that's a separate issue. Your willingness to believe someone also bears no relation to the soundness of their argument or the truth of the claim they're making. The soundness of a logical argument is independent of the reputation of the person making it.
Yet another way people misunderstand the ad hominem fallacy is they think it's equivalent to being mean or sarcastic in an argument. An ad hominem fallacy occurs when someone attempts to disprove your argument by attacking you personally. If they attack you personally and disprove your argument separately, that's not an ad hominem fallacy. That's just them being rude. Yes it would be nice if people were compassionate to others all the time, but being rude in a debate doesn't count as a fallacy. See examples in the ad hominem fallacy fallacy.
Tu quoque translates to "you too!". It's also known as the appeal to hypocrisy and whataboutism. This one is most often used in political debates between candidates to attack each other's credibility and (seemingly) invalidate their opponent's argument. The idea is that if you can call someone a hypocrite, that invalidates their argument. Obviously it doesn't. We've been over that: the only ways to defeat an argument are showing the premises to be false or the structure to be invalid. It might be a good strategy for "winning" a debate as judged by laypeople with no training in logic, but calling someone a hypocrite does nothing against their argument, even if they are in fact a hypocrite. I've never heard it explicitly said that someone's argument is wrong because they are a hypocrite, only implied. This goes back to a person's credibility being irrelevant to the truth of their argument.
There are several logical fallacies which fall into the category of what I call "bad heuristics". They are substitutes for using logic to make up your own mind.
The bandwagon fallacy is the most widely known of these bad heuristics. It's the assumption that because many people believe something, it must be true. There may be an evolutionary/psychological pressure to conform to what everyone else believes, since conformity is perceived as the safest option. Several studies have shown that if you put a test subject in a group where the majority believes something, even if it's completely irrational, the subject will often just go along with it. Roughly 85% of the world's population believes in some form of god or gods, depending on how you ask the question. There's no evidence for the existence of any gods, so their beliefs are unfounded. In other words, just going along with what everyone else believes is a bad heuristic.
The genetic fallacy is committed whenever someone says something is good or true because it comes from a certain source. Take the media, for example. While it is certainly true that some news sources are more reliable than others, the truth of an argument doesn't change depending on which news source makes it. This doesn't diminish the importance of having reliable sources of news. As a personal example, when Nick Bostrom releases a new paper, I make an educated guess that I'll find it interesting based on his previous work being interesting. But the new paper won't be interesting because all his previous work is; it will be interesting because of the contents of the paper. If you pick good sources of information, then that's actually not a bad heuristic for truth. You will only end up with a bad heuristic if you pick bad sources of information, such as Facecrook. Just remember that the source of information has no bearing on the truth of the information. I'm really beating this point to death, but it bears repeating: the only determining factors for the soundness of an argument are the truth of the premises and the validity of the argument.
The appeal to nature fallacy happens when someone says something is good, just, or ideal because it's "natural". There are two problems with that. For one, everything that happens is natural, because we live in the natural world. But let's entertain the fallacy for a moment and define "natural" as things that aren't products of human intelligence. By that definition, coronavirus is natural. Natural disasters are natural; it's even in the name. Lots of horrible things are natural. "Unnaturalness" is often used to argue against homosexuality, yet other species of primate also show homosexual behavior, so homosexuality is natural even in non-human animals. To sum up, appealing to nature is a bad heuristic: it's hard to define what counts as natural, many things everyone agrees are natural are not good, and things people call "artificial", such as vaccines, are often good.
Taking the "middle ground" position between two extremes is probably an even poorer heuristic for truth than the bandwagon. People think taking the middle ground means they're unbiased. They think having an "extreme" position means you are heavily biased. They perceive the middle ground as "balanced" and fair. But how do you go about deciding where the extremes are? Popular opinion? If that's the case, then we're back at the bandwagon fallacy. If you have some other way of determining the extremes, then what is it exactly? What if the middle ground is ambiguous?
Being unbiased doesn't mean you take the middle ground on every issue. Taking the middle ground just means you're using another poor proxy for the truth because you can't be bothered to think for yourself. There's no reason to think the truth has to lie somewhere in the middle.
Personal experiences and isolated examples can be bad heuristics for interpreting the world around you if you extrapolate them beyond their application. I sometimes use personal experiences as examples of things I already know to be true, but I'm not saying those things are true because of my personal experiences. Quantitative scientific measures are more accurate than personal experiences because they are based on lots of data collected in a controlled way, as opposed to individual isolated examples collected over a single human lifetime. To sum up this section: trust the statistics. That's not to say they can't be wrong, but they carry numerical weight. As the saying goes, numbers don't lie.
That wraps it up for the bad heuristics. Now we can move from fallacies to something new: the limits of logic. This is more of a surprising result about logic that people aren't aware of rather than something people get confused about.
After you become really good at doing any type of logic, whether computational, mathematical, inductive, or deductive, you'll eventually wonder whether it's possible for logic to prove everything. It turns out it's not. And we can prove it... using logic. Mind-bending, right? But before we talk about that, I'll have to explain axioms. If you're already familiar with axioms, feel free to skip ahead.
I've said that for an argument to be sound, the premises must be true. But how do we know the premises are true? We could make another argument to prove each premise, but then we'd have the same problem we started with: we'd have to prove each of the premises used in the arguments for our original premises. It's an infinite regress. To resolve it, we need a starting point: an axiom. An axiom is a proposition that is taken for granted, assumed to be true without justification. There are various ideas about which axioms one should accept. Typically they are kept as simple as possible. For example, take a look at the logical absolutes: the Law of Identity, the Law of Non-Contradiction, and the Law of the Excluded Middle.
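In two-valued classical logic, the three absolutes can be verified exhaustively over both truth values. A minimal Python sketch (my own illustration; it assumes classical two-valued logic):

```python
# Law of Identity:         A is A
# Law of Non-Contradiction: not (A and not A)
# Law of Excluded Middle:   A or not A
laws_hold = all(
    (p == p) and not (p and not p) and (p or not p)
    for p in (True, False)
)
print(laws_hold)  # True
```

Of course, checking the laws inside a system that already assumes them is circular; that's exactly why they're taken as axioms rather than proved.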
In the early 1920s, the famous German mathematician David Hilbert put forward a proposal calling for the axiomatization of mathematics. He wanted all mathematical truths to be reducible to an agreed-upon set of axioms, such that every true statement could be proved but no false statement could be. In 1931, one of the most significant logicians in history, Kurt Gödel, showed that no consistent set of axioms is capable of proving all truths about the arithmetic of the natural numbers. See Gödel's Incompleteness Theorems. Gödel used mathematical logic to show that there are some places mathematical logic cannot go. Boiled down, he proved that logic cannot prove everything. The same is true in computing; see the Halting Problem. The essence of the trick, no matter which logic you're talking about, seems to be finding a way to encode the liar paradox in the system. A prerequisite for that is somehow getting the logical system to talk about itself. It's a fascinating theorem, and I recommend that anyone interested look at it in more depth.
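The self-reference trick behind the Halting Problem can be sketched in a few lines of Python. Here `guess_halts` is a hypothetical stand-in for a claimed halting decider (no real one can exist; this naive version just predicts "halts" for everything). The point is that any candidate decider is defeated by a program built to do the opposite of its own prediction:

```python
def guess_halts(program):
    """A stand-in for a claimed halting decider (hypothetical: the
    Halting Problem says no correct implementation can exist)."""
    return True  # naively predict that every program halts

def trouble():
    # Do the opposite of whatever the decider predicts about trouble itself.
    if guess_halts(trouble):
        while True:  # decider said "halts", so loop forever
            pass
    return           # decider said "loops", so halt immediately

# Without ever running trouble(), we can see the decider must be wrong:
prediction = guess_halts(trouble)
print(prediction)  # True -- yet by construction trouble() would never halt
```

Swap in any other candidate for `guess_halts` and `trouble` flips its behavior to contradict it, which is the diagonal argument in miniature.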
That's all I've got for this post. I think I've packed in a lot of information and good examples to research. Even if you never formally study logic, I believe reading this post gives you a sense of what logic is all about and how to recognize some common informal fallacies and misunderstandings. I tried to include plenty of useful external links. This post barely scratches the surface, though. For some readers, just scratching the surface is good enough. But for all I know, the next Gödel might be reading this. In 2011, an anonymous 4chan user made a key breakthrough on a long-standing math problem about superpermutations. If that doesn't show that cleverness can come from anywhere, I don't know what does.
I hope you enjoyed the post. If there's anything you think I should have covered here, or that I should talk about in the future, let me know.
Unless otherwise noted, the content of this site is licensed under CC BY-SA 4.0.
© 2019-2021 Nicholas Johnson