Look Inward First

I just happened upon this great post over at Tim Ferriss’ blog. You know I’m all about looking inward and working with what we have. This is a guest article written by Ryan Holiday, someone heretofore unknown to me, and it drills directly into something that underlies everything in this blog – we really do need to take the time to understand who we are and what we want if we stand any chance at all of finding sustained peace and happiness.

A quote…

Montaigne once used the analogy of a man with a bow and arrow to illustrate the importance of meditation and analysis. You have to know what you’re aiming for before it is even worth bothering with the process of preparing the bow, nocking the arrow and letting go. Our projects, he said, “go astray because they are not addressed to a target.” The idea is that an intimate knowledge of ourselves makes it possible (and easier!) to know what we need to do on a daily basis. He advised us to meditate on our lives in general, in order to properly arrange our day to day actions.

Good stuff.  Helps to remind us to focus on what matters.  Thanks to Ryan and Tim for that.

Have Popper, Don’t Need Quine

I’ve been reading Stuart Kauffman’s latest, Reinventing the Sacred, and it’s chock-full of mind-bending ideas. I’m planning to write a review soon, so I won’t go further than to bring up one issue I have with his thesis – a nit, but one worth exploring, if only as a good philosophical hand-waving exercise.

Kauffman argues that the reductionist approach to the natural world is seriously limited – there are many phenomena that are beyond our notions of modern law, and no matter how much we discover, over as many centuries as we can imagine, we’ll never come up with laws that can predict how some (actually, a great many) events will unfold. Unfortunately, says Kauffman, as modern science has been under what he calls the Galilean spell for decades (or more), this truth has been hidden from view. The Galilean spell is the idea that all things in the world are explicable in scientific terms, even if we do not yet have the knowledge to recognize or grasp those explanations. One culprit, accused of prolonging the spell, is Karl Popper.

The argument goes that scientists have, for quite a long time, gravitated toward Popper’s Critical Rationalism as the basis of their quest for truth in the natural world. After all, he is credited with the notion of falsification – the idea that the only good theory is one that has withstood attempts to disprove it (the more the better). Now, if we are to believe Kauffman that reductionism isn’t all it’s cracked up to be (and I do), then clinging to falsification as a sound methodological approach to science inhibits our ability to “see” what is right before our eyes. Suppose the falsifying evidence we cite is faulty itself. If we buy it, and decide that the hypothesis of interest is false, then we have failed in our search for truth. Not the best method then, right?

I should pause and state, for those unfamiliar with my work, that Popper’s approach to science and truth is fully ingrained in my thought processes. I agree with him on most things, and I have found his insights immensely useful in life. So to hear Kauffman, one of the true heroes of modern science in my book, criticizing Popper is unsettling, to say the least. But I give him his due and hear him out. (To be fair, this is a passing mention in the book. But we Popperians die hard, I suppose.)

Kauffman prefers the Harvard philosopher W.V.O. Quine (dude, get a first name), who gives us holism in science. The idea is that, in searching for truth, the thing we really do (which implicitly works, apparently) is “provisionally alter those statements of fact or other laws that minimally alter our worldview.” So, rather than simply accept falsifying evidence (suspect as it is), we weigh the bigger picture – including the so-called falsifying evidence – and decide where we come down on the matter of the hypothesis in question. That makes sense to me, but I have a hard time seeing where that’s any great advancement over Popper.

You see, Popper’s whole critical rationalism concept is based upon three ideas:

  1. Practical action requires us to choose between more or less definite alternatives – theories, if you will.
  2. You can never be sure that any given theory is correct. This comes from Kant and Hume almost directly.
  3. You can, however, rationally prefer one theory over another. This is Popper’s big contribution to logically acceptable truth seeking.

So, in that context, Quine’s (and Kauffman’s) issue is that rationally preferring one theory over another does not take into account enough variables to be reliable. We may lose the forest for the trees. I disagree.

The key to internalizing Popper (for me) is grasping the relativistic stance of preference. To prefer something implies that there are multiple things and that they relate to one another (or to a separate topic) in some discernible way. In other words, some are better than others. Popper, in good form, did not attempt to prescribe how that preference should be given, at least not exactly. He gave guidelines, which have been stretched by the likes of Quine and Kauffman, to invent the need for a separate practice called holism.

The most important guideline is the notion that we should prefer that which has withstood rational scrutiny over that which has not – thus, falsification emerges as a value in assessing preference. And like all values in the real world, lots of factors go into determining how a specific instance relates to them. Popper is just saying, “I’ll fare better acting upon an alternative (or theory) that has been put to the test over one that has not.” That says nothing whatsoever about what goes into the testing.

From Popper’s “The Problem of Induction” (Section X),

Let us forget momentarily about what theories we ‘use’ or ‘choose’ or ‘base’ our practical actions on, and consider only the proposal or decision (to do X; not to do X; to do nothing; or so on). Such a proposal can, we hope, be rationally criticized; and if we are rational agents we will want it to survive, if possible, the most testing criticism we can muster. But such criticism will freely make use of the best tested scientific theories in our possession. Consequently, any proposal that ignored these theories (where they are relevant, I need hardly add) will collapse under criticism. Should any proposal remain, it will be rational to adopt it.

So Popper is saying that we have to consider our competing theories in the context of everything we know. There’s your holism right there.

Quine’s big example of critical rationalism’s limitation is as follows: if I believe the Earth is flat and you believe it is round, we can devise a seemingly falsifying test. We’ll watch a ship sail over the horizon, and if the hull disappears before the sails, we’ll know the Earth is not flat. But, says Quine, what if the ship sank? We may come up with many other tests, but every time, I will be able to doubt the evidence against my assertion that the Earth is flat. Thus, critical rationalism fails in helping us logically discern a fairly recognizable truth. Not so fast.

Remember that Popper’s second idea is that we can never be certain that a theory is correct. The reason is, very simply, that our abilities to either conceive of a proper test or accurately assess the results of said test are often too limited. As I said, this comes from Kant. So Quine is using Kant against Popper, when Popper’s entire concept is based upon the very same ideas. I don’t know exactly what you call that, but whatever it is, it’s pretty lame reasoning.

Of course, Popper would be the first to say that any piece of evidence can (and should) be doubted. But note that he insists that we must take into account the best tested theories in making our value judgements, our assessments of preference. It’s all relative. We are to take the full picture and weigh all our options, just as Quine recommends. If the assertions we make in rejecting seemingly falsifying evidence consequently require some “non-minimal” alteration of our worldview, then we are rationally justified in discounting them, especially with regard to our other evidence and other theories.

My point is that the Popperian approach to truth is holistic at its core. Quine would stretch the idea that we prefer tested theories over untested theories to mean that we limit our evaluations to actual experimental data and that we don’t scrutinize our evidence. Popper said no such thing; quite the contrary.

Critical rationalism does not, therefore, in any way, preclude accepting the limitations of reductionism. To be sure, Kauffman is right on about the Galilean spell and the blinders it has placed on much of modern scientific inquiry. But the blame – even for only prolonging it – cannot be placed on Popper’s shoulders.

Incidentally, I (as yet) know little about Quine, but I’m hoping holism was not his signature contribution to philosophy. That would be like inventing diet water.

Disclaimer. I am not a credentialed philosopher, and as such, I am fully aware that I may be way off on this. However, as Popper says, “I may be wrong and you may be right, and by an effort, we may get nearer to the truth.”

Logical Fallacies Cheat Sheet

Print this out and carry it around with you.  Any time someone expresses a belief that seems a bit off, run through the list.  I’ll bet that in most every case, they’re falling prey to one or more of the fallacies listed here.  And, if you dare, reflect on your own beliefs.  I bet a fallacy or two will reveal itself.  Then what?

Are you going to throw the list away and forget about the whole thing?  No judgements here.  Just keep in mind that denying reality doesn’t make it go away.  It never lets up, so you will see it again.  Hopefully, it won’t hurt too bad.

(BTW – I got these from the website of a small college in Tennessee called Carson-Newman.)

~~~~~~~~~~~~~~~~~~~

There are basically four kinds of logical fallacies – fallacies of relevance, component fallacies, fallacies of ambiguity, and fallacies of omission.  The list is organized accordingly.

FALLACIES OF RELEVANCE: These fallacies appeal to evidence or examples that are not relevant to the argument at hand.

Appeal to Force (Argumentum Ad Baculum or the “Might-Makes-Right” Fallacy): This argument uses force, the threat of force, or some other unpleasant backlash to make the audience accept a conclusion. It commonly appears as a last resort when evidence or rational arguments fail to convince a reader. If the debate is about whether or not 2+2=4, an opponent’s argument that he will smash your nose in if you don’t agree with his claim doesn’t change the truth of an issue. Logically, this consideration has nothing to do with the points under consideration. The fallacy is not limited to threats of violence, however. The fallacy includes threats of any unpleasant backlash–financial, professional, and so on. Example: “Superintendent, you should cut the school budget by $16,000. I need not remind you that past school boards have fired superintendents who cannot keep down costs.” While intimidation may force the superintendent to conform, it does not convince him that the choice to cut the budget was the most beneficial for the school or community. Lobbyists use this method when they remind legislators that they represent so many thousand votes in the legislators’ constituencies and threaten to throw the politician out of office if he doesn’t vote the way they want. Teachers use this method if they state that students should hold the same political or philosophical position as the teachers or risk failing the class. Note that it isn’t a logical fallacy, however, to assert that students must fulfill certain requirements in the course or risk failing the class!

Genetic Fallacy: The genetic fallacy is the claim that an idea, product, or person must be untrustworthy because of its racial, geographic, or ethnic origin. “That car can’t possibly be any good! It was made in Japan!” Or, “Why should I listen to her argument? She comes from California, and we all know those people are flakes.” Or, “Ha! I’m not reading that book. It was published in Tennessee, and we know all Tennessee folk are hillbillies and rednecks!” This type of fallacy is closely related to the fallacy of argumentum ad hominem or personal attack, appearing immediately below.

Personal Attack (Argumentum Ad Hominem, literally, “argument toward the man.” Also called “Poisoning the Well”): Attacking or praising the people who make an argument, rather than discussing the argument itself. This practice is fallacious because the personal character of an individual is logically irrelevant to the truth or falseness of the argument itself. The statement “2+2=4” is true regardless of whether it is stated by criminals, congressmen, or pastors. There are two subcategories:

(1) Abusive: To argue that proposals, assertions, or arguments must be false or dangerous because they originate with atheists, Christians, Communists, capitalists, the John Birch Society, Catholics, anti-Catholics, racists, anti-racists, feminists, misogynists (or any other group) is fallacious. This persuasion comes from irrational psychological transference rather than from an appeal to evidence or logic concerning the issue at hand. This is similar to the genetic fallacy, and only an anti-intellectual would argue otherwise.

(2) Circumstantial: To argue that an opponent should accept an argument because of circumstances in his or her life. If one’s adversary is a clergyman, suggesting that he should accept a particular argument because not to do so would be incompatible with the scriptures is such a fallacy. To argue that, because the reader is a Republican or Democrat, she must vote for a specific measure is likewise a circumstantial fallacy. The opponent’s special circumstances have no control over the truth of a specific contention. This is also similar to the genetic fallacy in some ways. If you are a college student who wants to learn rational thought, you simply must avoid circumstantial fallacies.

Argumentum ad Populum (Literally “Argument to the People”): Using an appeal to popular assent, often by arousing the feelings and enthusiasm of the multitude rather than building an argument. It is a favorite device with the propagandist, the demagogue, and the advertiser. An example of this type of argument is Shakespeare’s version of Mark Antony’s funeral oration for Julius Caesar. There are three basic approaches:

(1) Bandwagon Approach: “Everybody is doing it.” This argumentum ad populum asserts that, since the majority of people believes an argument or chooses a particular course of action, the argument must be true, or the course of action must be followed, or the decision must be the best choice. For instance, “85% of consumers purchase IBM computers rather than Macintosh; all those people can’t be wrong. IBM must make the best computers.” Popular acceptance of any argument does not prove it to be valid, nor does popular use of any product necessarily prove it is the best one. After all, 85% of people may once have thought planet earth was flat, but that majority’s belief didn’t mean the earth really was flat when they believed it! Keep this in mind, and remember that everybody should avoid this type of logical fallacy.

(2) Patriotic Approach: “Draping oneself in the flag.” This argument asserts that a certain stance is true or correct because it is somehow patriotic, and that those who disagree are unpatriotic. It overlaps with pathos and argumentum ad hominem to a certain extent. The best way to spot it is to look for emotionally charged terms like Americanism, rugged individualism, motherhood, patriotism, godless communism, etc. A true American would never use this approach. And a truly free man will exercise his American right to drink beer, since beer belongs in this great country of ours.

(3) Snob Approach: This type of argumentum ad populum doesn’t assert “everybody is doing it,” but rather that “all the best people are doing it.” For instance, “Any true intellectual would recognize the necessity for studying logical fallacies.” The implication is that anyone who fails to recognize the truth of the author’s assertion is not an intellectual, and thus the reader had best recognize that necessity.

In all three of these examples, the rhetorician does not supply evidence that an argument is true; he merely makes assertions about people who agree or disagree with the argument.

Appeal to Tradition (Argumentum Ad Traditio): This line of thought asserts that a premise must be true because people have always believed it or done it. Alternatively, it may conclude that the premise has always worked in the past and will thus always work in the future: “Jefferson City has kept its urban growth boundary at six miles for the past thirty years. That has been good enough for thirty years, so why should we change it now? If it ain’t broke, don’t fix it.” Such an argument is appealing in that it seems to be common sense, but it ignores important questions. Might an alternative policy work even better than the old one? Are there drawbacks to that long-standing policy? Are circumstances changing from the way they were thirty years ago?

Appeal to Improper Authority (Argumentum Ad Verecundiam, literally “argument from modesty”): An appeal to an improper authority, such as a famous person or a source that may not be reliable. This fallacy attempts to capitalize upon feelings of respect or familiarity with a famous individual. It is not fallacious to refer to an admitted authority if the individual’s expertise is within a strict field of knowledge. On the other hand, to cite Einstein to settle an argument about education or economics is fallacious. To cite Darwin, an authority on biology, on religious matters is fallacious. To cite Cardinal Spellman on legal problems is fallacious. The worst offenders usually involve movie stars and psychic hotlines. A subcategory is the Appeal to Biased Authority. In this sort of appeal, the authority is one who actually is knowledgeable on the matter, but one who may have professional or personal motivations that render his professional judgment suspect: for instance, “To determine whether fraternities are beneficial to this campus, we interviewed all the frat presidents.” Or again, “To find out whether or not sludge-mining really is endangering the Tuskogee salamander’s breeding grounds, we interviewed the supervisors of the sludge-mines, who declared there is no problem.” Indeed, it is important to get “both viewpoints” on an argument, but basing a substantial part of your argument on a source that has personal, professional, or financial interests at stake may lead to biased arguments.

Appeal to Emotion (Argumentum Ad Misericordiam, literally, “argument from pity”): An emotional appeal concerning what should be a logical issue during a debate. While pathos generally works to reinforce a reader’s sense of duty or outrage at some abuse, if a writer tries to use emotion merely for the sake of getting the reader to accept what should be a logical conclusion, the argument is a fallacy. For example, in the 1880s, prosecutors in a Virginia court presented overwhelming proof that a boy was guilty of murdering his parents with an ax. The defense presented a “not-guilty” plea on the grounds that the boy was now an orphan, with no one to look after his interests if the court was not lenient. This appeal to emotion obviously seems misplaced, and the argument is irrelevant to the question of whether or not he did the crime.

COMPONENT FALLACIES: Component fallacies are errors in inductive and deductive reasoning or in syllogistic terms that fail to overlap.

Begging the Question (also called Petitio Principii, this term is sometimes used interchangeably with Circular Reasoning): If writers assume as evidence for their argument the very conclusion they are attempting to prove, they engage in the fallacy of begging the question. The most common form of this fallacy is when the first claim is initially loaded with the very conclusion one has yet to prove. For instance, suppose a particular student group states, “Useless courses like English 101 should be dropped from the college’s curriculum.” The members of the student group then immediately move on in the argument, illustrating that spending money on a useless course is something nobody wants. Yes, we all agree that spending money on useless courses is a bad thing. However, those students never did prove that English 101 was itself a useless course–they merely “begged the question” and moved on to the next “safe” part of the argument, skipping over the part that’s the real controversy, the heart of the matter, the most important component. Begging the question is often hidden in the form of a complex question (see below).

Circular Reasoning is closely related to begging the question. Often the writer using this fallacy takes one idea and phrases it in two statements. The assertions differ sufficiently to obscure the fact that the same proposition occurs as both a premise and a conclusion. The speaker or author then tries to “prove” his or her assertion by merely repeating it in different words. Richard Whately wrote in Elements of Logic (London 1826): “To allow every man unbounded freedom of speech must always be on the whole, advantageous to the state; for it is highly conducive to the interest of the community that each individual should enjoy a liberty perfectly unlimited of expressing his sentiments.” Obviously the premise is not logically irrelevant to the conclusion, for if the premise is true the conclusion must also be true. It is, however, logically irrelevant in proving the conclusion. In the example, the author is repeating the same point in different words, and then attempting to “prove” the first assertion with the second one. A more complex but equally fallacious type of circular reasoning is to create a circular chain of reasoning like this one: “God exists.” “How do you know that God exists?” “The Bible says so.” “Why should I believe the Bible?” “Because it’s the inspired word of God.”

The so-called “final proof” relies on unproven evidence set forth initially as the subject of debate. Basically, the argument goes in an endless circle, with each step of the argument relying on a previous one, which in turn relies on the first argument yet to be proven. Surely God deserves a more intelligible argument than the circular reasoning proposed in this example!

Hasty Generalization (Dicto Simpliciter, also called “Jumping to Conclusions,” “Converse Accident”): Mistaken use of inductive reasoning when there are too few samples to prove a point. Example: “Susan failed Biology 101. Herman failed Biology 101. Egbert failed Biology 101. I therefore conclude that most students who take Biology 101 will fail it.” In understanding and characterizing general situations, a logician cannot normally examine every single example. However, the examples used in inductive reasoning should be typical of the problem or situation at hand. Maybe Susan, Herman, and Egbert are exceptionally poor students. Maybe they were sick and missed too many lectures that term to pass. If a logician wants to make the case that most students will fail Biology 101, she should (a) get a very large sample–at least one larger than three–or (b) if that isn’t possible, she will need to go out of her way to prove to the reader that her three samples are somehow representative of the norm. If a logician considers only exceptional or dramatic cases and generalizes a rule that fits these alone, the author commits the fallacy of hasty generalization.

One common type of hasty generalization is the Fallacy of Accident. This error occurs when one applies a general rule to a particular case when accidental circumstances render the general rule inapplicable. For example, in Plato’s Republic, Plato finds an exception to the general rule that one should return what one has borrowed: “Suppose that a friend when in his right mind has deposited arms with me and asks for them when he is not in his right mind. Ought I to give the weapons back to him? No one would say that I ought or that I should be right in doing so. . . .” What is true in general may not be true universally and without qualification. So remember, generalizations are bad. All of them. Every single last one. Except, of course, for those that are not.

Another common example of this fallacy is the misleading statistic. Suppose an individual argues that women must be incompetent drivers, and he points out that last Tuesday at the Department of Motor Vehicles, 50% of the women who took the driving test failed. That would seem to be compelling evidence from the way the statistic is set forth. However, if only two women took the test that day, the results would be far less clear-cut. Incidentally, the cartoon Dilbert makes much of an incompetent manager who cannot perceive misleading statistics. He does a statistical study of when employees call in sick and cannot come to work during the five-day work week. He becomes furious to learn that 40% of office “sick-days” occur on Mondays (20%) and Fridays (20%)–just in time to create a three-day weekend. Suspecting fraud, he decides to punish his workers. The irony, of course, is that these two days compose 40% of a five day work week, so the numbers are completely average. Similar nonsense emerges when parents or teachers complain that “50% of students perform at or below the national average on standardized tests in mathematics and verbal aptitude.” Of course they do! The very nature of an average implies that!
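(If you want to see the Dilbert math spelled out, here’s a quick sanity check in Python – the 10,000 samples and the day labels are made up purely for illustration, not anything from the strip. If sick days land uniformly at random across a five-day week, Monday and Friday together should account for roughly 2/5 of them, i.e., about 40% – exactly the figure the manager finds so suspicious.)

    import random

    # Purely illustrative: scatter hypothetical sick days uniformly over a
    # five-day work week and measure the share that falls on Monday or Friday.
    # Under a uniform spread we expect roughly 2/5 = 40%.
    random.seed(0)
    days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
    sick_days = [random.choice(days) for _ in range(10_000)]

    mon_fri = sum(1 for day in sick_days if day in ("Mon", "Fri"))
    print(f"Share of sick days on Mon/Fri: {mon_fri / len(sick_days):.1%}")  # ~40%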

False Cause: This fallacy establishes a cause/effect relationship that does not exist. There are various Latin names for various analyses of the fallacy. The two most common include these types:

(1) Non Causa Pro Causa (Literally, “Not the cause for a cause”): A general, catch-all category for mistaking a false cause of an event for the real cause.

(2) Post Hoc, Ergo Propter Hoc (Literally: “After this, therefore because of this”): This type of false cause occurs when the writer mistakenly assumes that, because the first event preceded the second event, it must mean the first event caused the later one. Sometimes it does, but sometimes it doesn’t. It is the honest writer’s job to establish clearly that connection rather than merely assert it exists. Example: “A black cat crossed my path at noon. An hour later, my mother had a heart-attack. Because the first event occurred earlier, it must have caused the bad luck later.” This is how superstitions begin.

The most common examples are arguments that viewing a particular movie or show, or listening to a particular type of music “caused” the listener to perform an antisocial act–to snort coke, shoot classmates, or take up a life of crime. These may be potential suspects for the cause, but the mere fact that an individual did these acts and subsequently behaved in a certain way does not yet conclusively rule out other causes. Perhaps the listener had an abusive home-life or school-life, suffered from a chemical imbalance leading to depression and paranoia, or made a bad choice in his companions. Other potential causes must be examined before asserting that one event or circumstance alone earlier in time caused an event or behavior later. For more information, see correlation and causation.

Irrelevant Conclusion (Ignoratio Elenchi): This fallacy occurs when a rhetorician adapts an argument purporting to establish a particular conclusion and directs it to prove a different conclusion. For example, when a particular proposal for housing legislation is under consideration, a legislator may argue that decent housing for all people is desirable. Everyone, presumably, will agree. However, the question at hand concerns a particular measure. The question really isn’t, “Is it good to have decent housing?” The question really is, “Will this particular measure actually provide it or is there a better alternative?” This type of fallacy is a common one in student papers when students use a shared assumption–such as the fact that decent housing is a desirable thing to have–and then spend the bulk of their essays focused on that fact rather than the real question at issue. It’s similar to begging the question, above.

One of the most common forms of Ignoratio Elenchi is the “Red Herring.” A red herring is a deliberate attempt to change the subject or divert the argument from the real question at issue to some side-point; for instance, “Senator Jones should not be held accountable for cheating on his income tax. After all, there are other senators who have done far worse things.” Another example: “I should not pay a fine for reckless driving. There are many other people on the street who are dangerous criminals and rapists, and the police should be chasing them, not harassing a decent tax-paying citizen like me.” Certainly, worse criminals do exist, but that is another issue! The questions at hand are (1) did the speaker drive recklessly and (2) should he pay a fine for it?

Another similar example of the red herring is the fallacy known as Tu Quoque (Latin for “And you too!”), which asserts that the advice or argument must be false simply because the person presenting the advice doesn’t follow it herself. For instance, “Reverend Jeremias claims that theft is wrong, but how can theft be wrong if Jeremias himself admits he stole objects when he was a child?”

Straw Man Argument: A subtype of the red herring, this fallacy includes any lame attempt to “prove” an argument by overstating, exaggerating, or over-simplifying the arguments of the opposing side. Such an approach is building a straw man argument. The name comes from the idea of a boxer or fighter who meticulously fashions a false opponent out of straw, like a scarecrow, and then easily knocks it over in the ring before his admiring audience. His “victory” is a hollow mockery, of course, because the straw-stuffed opponent is incapable of fighting back. When a writer makes a cartoon-like caricature of the opposing argument, ignoring the real or subtle points of contention, and then proceeds to knock down each “fake” point one-by-one, he has created a straw man argument.

For instance, one speaker might be engaged in a debate concerning welfare. The opponent argues, “Tennessee should increase funding to unemployed single mothers during the first year after childbirth because they need sufficient money to provide medical care for their newborn children.” The second speaker retorts, “My opponent believes that some parasites who don’t work should get a free ride from the tax money of hard-working honest citizens. I’ll show you why he’s wrong . . .” In this example, the second speaker is engaging in a straw man strategy, distorting the opposition’s statement about medical care for newborn children into an oversimplified form so he can more easily appear to “win.” However, the second speaker is only defeating a dummy-argument rather than honestly engaging in the real nuances of the debate.

Non Sequitur (literally, “It does not follow”): A non sequitur is any argument that does not follow from the previous statements. Usually what happened is that the writer leaped from A to B and then jumped to D, leaving out step C of an argument she thought through in her head, but did not put down on paper. The phrase is applicable in general to any type of logical fallacy, but logicians use the term particularly in reference to syllogistic errors such as the undistributed middle term, non causa pro causa, and ignoratio elenchi. A common example would be an argument along these lines: “Giving up our nuclear arsenal in the 1980s weakened the United States’ military. Giving up nuclear weaponry also weakened China in the 1990s. For this reason, it is wrong to try to outlaw pistols and rifles in the United States today.” There’s obviously a step or two missing here.

The “Slippery Slope” Fallacy (also called “The Camel’s Nose Fallacy”) is a non sequitur in which the speaker argues that, once the first step is undertaken, a second or third step will inevitably follow, much like the way one step on a slippery incline will cause a person to fall and slide all the way to the bottom. It is also called “the Camel’s Nose Fallacy” because of the image of a sheik who let his camel stick its nose into his tent on a cold night. The idea is that the sheik is afraid to let the camel stick its nose into the tent because once the beast sticks in its nose, it will inevitably stick in its head, and then its neck, and eventually its whole body. However, this sort of thinking does not allow for any possibility of stopping the process. It simply assumes that, once the nose is in, the rest must follow–that the sheik can’t stop the progression once it has begun–and thus the argument is a logical fallacy. For instance, one might argue, “If we allow the government to infringe upon our right to privacy on the Internet, it will then feel free to infringe upon our privacy on the telephone. After that, FBI agents will be reading our mail. Then they will be placing cameras in our houses. We must not let any governmental agency interfere with our Internet communications, or privacy will completely vanish in the United States.” Such thinking is fallacious; no logical proof has been provided yet that infringement in one area will necessarily lead to infringement in another, no more than a person buying a single can of Coca-Cola in a grocery store would indicate the person will inevitably go on to buy every item available in the store, helpless to stop herself. So remember to avoid the slippery slope fallacy; once you use one, you may find yourself using more and more logical fallacies.

Either/Or Fallacy (also called “the Black-and-White Fallacy” and “False Dilemma”): This fallacy occurs when a writer builds an argument upon the assumption that there are only two choices or possible outcomes when actually there are several. Outcomes are seldom so simple. This fallacy most frequently appears in connection to sweeping generalizations: “Either we must ban X or the American way of life will collapse.” “We go to war with Canada, or else Canada will eventually grow in population and overwhelm the United States.” “Either you drink Burpsy Cola, or you will have no friends and no social life.” Either you must avoid either/or fallacies, or everyone will think you are foolish.

Faulty Analogy: Relying only on comparisons to prove a point rather than arguing deductively and inductively. For example, “education is like cake; a small amount tastes sweet, but eat too much and your teeth will rot out. Likewise, more than two years of education is bad for a student.” The analogy is only acceptable to the degree a reader thinks that education is similar to cake. As you can see, faulty analogies are like flimsy wood, and just as no carpenter would build a house out of flimsy wood, no writer should ever construct an argument out of flimsy material.

Undistributed Middle Term: A specific type of error in deductive reasoning in which the minor premise and the major premise of a syllogism might or might not overlap. Consider these two examples: (1) “All reptiles are cold-blooded. All snakes are reptiles. All snakes are cold-blooded.” In the first example, the middle term “reptiles” is distributed: the premise “all reptiles are cold-blooded” covers every reptile, so it properly links “snakes” with “things-that-are-cold-blooded.” (2) “All snails are cold-blooded. All snakes are cold-blooded. All snails are snakes.” In the second example, the middle term is “cold-blooded,” and neither premise says anything about all cold-blooded things, so nothing links “snails” with “snakes.” It is an undistributed middle term. Sometimes, equivocation (see below) leads to an undistributed middle term.
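(For the set-minded, here’s one way to picture the difference – a small Python sketch with made-up animals, not anything from the original handout. In the valid form, the middle term nests one category inside another, so the conclusion cannot fail; in the undistributed form, both premises only say that each group sits somewhere inside “cold-blooded,” and the toy sets supply a counterexample to the conclusion.)

    # Toy sets, purely for illustration
    snakes = {"cobra", "python"}
    snails = {"garden snail"}
    reptiles = snakes | {"iguana"}        # all snakes are reptiles
    cold_blooded = reptiles | snails      # reptiles and snails are all cold-blooded

    # Valid syllogism: the middle term "reptiles" links the premises.
    print(reptiles <= cold_blooded, snakes <= reptiles)    # premises hold: True True
    print(snakes <= cold_blooded)                          # conclusion holds: True

    # Undistributed middle: both premises only place a group inside cold_blooded.
    print(snails <= cold_blooded, snakes <= cold_blooded)  # premises hold: True True
    print(snails <= snakes)                                # conclusion fails: False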

FALLACIES OF AMBIGUITY: These errors occur with ambiguous words or phrases, the meanings of which shift and change in the course of discussion. Such more or less subtle changes can render arguments fallacious.

Equivocation: Using a word in a different way than the author used it in the original premise, or changing definitions halfway through a discussion. When we use the same word or phrase in different senses within one line of argument, we commit the fallacy of equivocation. Consider this example: “Plato says the end of a thing is its perfection; I say that death is the end of life; hence, death is the perfection of life.” Here the word end means “goal” in Plato’s usage, but it means “last event” or “termination” in the author’s second usage. Clearly, the speaker is twisting Plato’s meaning of the word to draw a very different conclusion. Compare with amphiboly, below.

Amphiboly (from the Greek word “indeterminate”): This fallacy is similar to equivocation. Here, the ambiguity results from grammatical construction. A statement may be true according to one interpretation of how each word functions in a sentence and false according to another. When a premise works with an interpretation that is true, but the conclusion uses the secondary “false” interpretation, we have the fallacy of amphiboly on our hands. In the command, “Save soap and waste paper,” the amphibolous use of “waste” results in the problem of determining whether “waste” functions as a verb or as an adjective.

Composition: This fallacy is a result of reasoning from the properties of the parts of the whole to the properties of the whole itself–it is an inductive error. Such an argument might hold that, because every individual part of a large tractor is lightweight, the entire machine also must be lightweight. This fallacy is similar to Hasty Generalization (see above), but it focuses on parts of a single whole rather than using too few examples to create a categorical generalization. Also compare it with Division (see below).

Division: This fallacy is the reverse of composition. It is the misapplication of deductive reasoning. One fallacy of division argues falsely that what is true of the whole must be true of individual parts. Such an argument notes that, “Microtech is a company with great influence in the California legislature. Egbert Smith works at Microtech. He must have great influence in the California legislature.” This is not necessarily true. Egbert might work as a graveyard shift security guard or as the copy-machine repairman at Microtech–positions requiring little interaction with the California legislature. Another fallacy of division attributes the properties of the whole to the individual member of the whole: “Sunsurf is a company that sells environmentally safe products. Susan Jones is a worker at Sunsurf. She must be an environmentally minded individual.” (Perhaps she is motivated by money alone?)

FALLACIES OF OMISSION: These errors occur because the logician leaves out necessary material in an argument or misdirects the audience away from the missing information.

Stacking the Deck: In this fallacy, the speaker “stacks the deck” in her favor by ignoring examples that disprove the point, and listing only those examples that support her case. This fallacy is closely related to hasty generalization, but the term usually implies deliberate deception rather than an accidental logical error. Contrast it with the straw man argument.

Argument from the Negative: Arguing from the negative asserts that, since one position is untenable, the opposite stance must be true. This fallacy is often used interchangeably with Argumentum Ad Ignorantiam (listed below) and the either/or fallacy (listed above). For instance, one might mistakenly argue that, since the Newtonian theory of physics is not one hundred percent accurate, Einstein’s theory of relativity must be true. Perhaps not. Perhaps the theories of quantum mechanics are more accurate, and Einstein’s theory is flawed. Perhaps they are all wrong. Disproving an opponent’s argument does not necessarily mean your own argument must be true automatically, no more than disproving your opponent’s assertion that 2+2=5 would automatically mean your argument that 2+2=7 must be the correct one.

Appeal to a Lack of Evidence (Argumentum Ad Ignorantiam, literally “Argument from Ignorance”): Appealing to a lack of information to prove a point, or arguing that, since the opposition cannot disprove a claim, the opposite stance must be true. An example of such an argument is the assertion that ghosts must exist because no one has been able to prove that they do not exist. Logicians know this is a logical fallacy because no competing argument has yet revealed itself.

Hypothesis Contrary to Fact (Argumentum Ad Speculum): Trying to prove something in the real world by using imaginary examples alone, or asserting that, if hypothetically X had occurred, Y would have been the result. For instance, suppose an individual asserts that if Einstein had been aborted in utero, the world would never have learned about relativity, or that if Monet had been trained as a butcher rather than going to college, the impressionistic movement would have never influenced modern art. Such hypotheses are misleading lines of argument because it is often possible that some other individual would have solved the relativistic equations or introduced an impressionistic art style. The speculation might make an interesting thought-experiment, but it is simply useless when it comes to actually proving anything about the real world. A common example is the idea that one “owes” her success to another individual who taught her. For instance, “You owe me part of your increased salary. If I hadn’t taught you how to recognize logical fallacies, you would be flipping hamburgers at McDonald’s for minimum wage right now instead of taking in hundreds of thousands of dollars as a lawyer.” Perhaps. But perhaps the audience would have learned about logical fallacies elsewhere, so the hypothetical situation described is meaningless.

Complex Question (Also called the “Loaded Question”): Phrasing a question or statement in such a way as to imply another unproven statement is true without evidence or discussion. This fallacy often overlaps with begging the question (above), since it also presupposes a definite answer to a previous, unstated question. For instance, if I were to ask you “Have you stopped taking drugs yet?” my hidden supposition is that you have been taking drugs. Such a question cannot be answered with a simple yes or no answer. It is not a simple question but consists of several questions rolled into one. In this case the unstated question is, “Have you taken drugs in the past?” followed by, “If you have taken drugs in the past, have you stopped taking them now?” In cross-examination, a lawyer might ask a flustered witness, “Where did you hide the evidence?” or “When did you stop beating your wife?” The intelligent procedure when faced with such a question is to analyze its component parts. If one answers or discusses the prior, implicit question first, the explicit question may dissolve.

Complex questions appear in written argument frequently. A student might write, “Why is private development of resources so much more efficient than any public control?” The rhetorical question leads directly into his next argument. However, an observant reader may disagree, recognizing the prior, implicit question remains unaddressed. That question is, of course, whether private development of resources really is more efficient in all cases, a point which the author is skipping entirely and merely assuming to be true without discussion.

Contradictory Premises: Establishing a premise in such a way that it contradicts another, earlier premise. For instance, “If God can do anything, he can make a stone so heavy that he can’t lift it.” The first premise establishes a deity that has the irresistible capacity to move other objects. The second premise establishes an immovable object impervious to any movement. If a being capable of moving anything exists, then by definition the immovable object cannot exist, and vice versa.

So there you have them – every major fallacy known to logic.  Now go and think clearly.

The Endangered Ability To Think Logically

My fellow Americans, we’re in deep trouble.  Some of it is our fault; some of it isn’t.  It’s our fault because those of us who know better are content in our own little worlds to let things proceed on their current course.  But mostly, the problem that afflicts us today is a manifestation of how our species does business.  Our world has changed dramatically in the last few decades, and our genes are unprepared, to say the least.  The problem I am referring to is the endangered ability to think logically.

As Thomas Sowell tells us in today’s column, which is entitled “Are Facts Obsolete?”,

Those who are in the business of teaching the young, whether in the  public schools or on college campuses, too often see this not as a responsibility to pass on what is known but as an opportunity to indoctrinate students with their own beliefs. Many “educators” and the gurus who indoctrinated them actively disparage “mere facts,” which they say you can get from an almanac or encyclopedia.

The net result is a student population that does not even know enough to know what needs to be looked up, much less how to analyze facts, so as to test opposing beliefs — as distinguished from how to gather information to support a preconceived notion that happens to be fashionable in the schools and colleges.

Yet people are considered to be “educated” after they have spent so many years in ivy-covered buildings, absorbing the preconceptions that prevail there.

This is a symptom of the larger problem.  Logic does not come pre-installed in the human mind.  If it ever gets installed, it has to be done deliberately.  The default human mind, the one with no foundation in logic, has no preference for facts.  Indeed, the human mind is about expediency, which often sits at odds with reality.  Of course, as we are a social species, so long as “the group” is in on the con, all is well.  That is, until the group runs off a cliff, which we are apt to do if something isn’t done…and soon.

But how to teach logic to people in a soundbite world?  How do you retrain a modern human mind (adult or child) to be skeptical, to begin with premises, and to objectively and properly analyze arguments?  This requires an investment in time, which seems to be the last thing people are willing to give up, especially if doing so might jeopardize the fabricated reality that feels oh-so-good.  There’s TV to be watched.  There are video game bad guys to be blown up.  It was not always so.

Back before the media was ubiquitous, people (at least some people) longed for new things to read.  The rate at which they consumed information was considerably faster than the rate at which they received new material.  So they took the time to read long discussions of various issues, and they read them multiple times.  As they discussed what they read with one another, logic was their best friend.  They could dissect the points made and argue them on their merits (or lack thereof).  Of course, this was around the turn of the 20th century.  A lot has changed.

The sport of argument is almost dead.  It was slain by that irritating little meme that people have a right not to be offended.  Yes, political correctness has all but killed logical, constructive discourse in this country.  Now you can’t make an argument that affirmative action hurts the people it is supposed to help without being labeled a racist.  This is because some people stand to lose a great deal if you’re right.  I guess it has always been so – the powerful have always been able to muzzle the powerless when their words rang a little too true.

But now, muzzles are easy to come by and are fitted routinely by people whose influence has no discernible justification.  Shouldn’t I be able to mount a logical argument in the marketplace of ideas and not be vilified for the implications of the conclusions I reach?  I should, but that would require the masses to have a foundation in logic.  It would require them to know that there is a right way and a wrong way to come by belief.  It would require them to know that, so long as the argument is not ad hominem (against the man), it should be allowed, even if it isn’t pretty.

I wish I could snap my fingers and live in a world dominated by truly rational thinkers.  I often wonder what that world would be like.  I wonder if I’d be in the majority.  Yes, I think rationally, but I’m not naive enough to believe that I’m rational all the time.  Would I be one of those fringe people who went irrational when things didn’t go his way?  I hope not.  I’d count on my knowledge of logical fallacies to keep myself honest.  Hey, maybe that’s how I can help out with this problem.

Knowing all the major logical fallacies is an excellent way to check your mind against irrationality.  If you pull them out and peruse them in the context of your beliefs, you’ll often find that you’ve bought into something illogically.  Then, knowing that it is almost always best to be on the side of logic, you can begin the process of changing what you believe.  I’ve done this more than a few times over the years.  It’s not always pleasant, but few things worthwhile are.

So, click here for your lesson on logical fallacies.  Don’t say I never gave you anything.

Post-Modern Or Grasping At Straws?

I’ve been neck deep in philosophy of late, getting to know some of the most twisted minds of the last two hundred years. Stephen Hicks, Professor of Philosophy at Rockford College, Illinois (named, I think, after the cheekiest of all TV private investigators, Jim Rockford), wrote a book called Explaining Postmodernism: Skepticism and Socialism from Rousseau to Foucault. As the title suggests, the author traces postmodernism (that is, intellectual douchebaggery) from its departure from modernist thought to the present. It’s highly informative, with an unexpected twist or two, but ultimately I found it to be much ado about nothing.

First a twist – here’s a quiz. True or False: the nuttiest of today’s lefty academics are ideologically derived from Immanuel Kant. Most (including myself as recently as a week ago) would respond with a resounding NO. Kant, after all, is heralded as one of the key Enlightenment thinkers, right? Right…and wrong. Although Kant did a lot for reason in terms of advocating its usefulness in establishing logical relationships between entities, he dealt it a devastating blow in saying that reason could never get us in touch with reality.

The Kantian view is that reality, at least what we think of as reality, is something fabricated entirely by our minds. He was enough of a realist to believe that there is some kind of absolute truth, but he believed that our minds are simply incapable of getting anywhere near it. Instead, we create reality according to the constructs and limitations of our grey matter. Space and time do not really exist; we create them. Reading this did not shock me – I’ve known for a long time that Kant saw limits to reason, and that he, along with David Hume, had officially abandoned it by the end of his life. However, I was shocked to learn that guys like Hegel, Schopenhauer, Kierkegaard, Nietzsche, and Heidegger all used Kantian anti-reason as a jumping-off point for their ravings, and that those ravings eventually became the basis for American (and much of European) leftist thinking.

Perhaps I should make a point here. Hicks’ objective, I assume (he never quite says), is to help us understand what informs the mindset of so many of the wackos in our midst, especially those who are pervasive in academia. Ostensibly, once we get this, we can construct arguments (or at least responses) that will be more satisfying than being frozen like a deer in headlights at the sheer lunacy of what comes out of their mouths. On this, I think he’s reaching, but only because this never happens to me, and because he’s giving most liberals far too much credit. First a little more background – I’ll lay out postmodernism’s main tenets and then tie them to contemporary liberal perspectives. (To be clear, my use of the term ‘liberal’ is meant to refer to a modern liberal, like say Barbara Boxer, not a classical liberal, like say Milton Friedman.)

  1. In terms of metaphysics (that is, what is reality?), the postmodernist is strictly anti-realist, which is to say that there is no such thing. Everything is a construct of the human mind. Somehow, these crazies have concluded that we live in The Matrix, but without the Matrix. The modern liberal embraces this wholeheartedly. They refuse to deal in fact and reality. To them, humanity can be perfected and all men are good, if only the systems that organize them were right.
  2. In terms of epistemology (that is, how do we know what we know?), the postmodernist believes in social subjectivism, which is to say it’s all good. Whatever and however you want to come by knowledge is just fine, since you’re creating reality in your head anyway. Here are the seeds of multiculturalism. If any way of approaching the world is as good as any other, then no culture is better than any other. Hence, the PCification of society.
  3. In terms of human nature, the postmodernist believes we are the results of social construction, which is to say that our social and cultural environment creates whatever nature we may have. Again, this is the liberal’s battlecry against exploitative capitalism, gender socialization, racism, blah, blah, blah.
  4. In terms of ethics (that is, who or what is the arbiter of right and wrong?), the postmodernist is a collectivist, which is to say that the individual is always secondary to the group, which can be defined by race, nationality, sex, or religion. Liberals think in terms of groups and abhor those who put the needs and desires of individuals ahead of them.
  5. In terms of politics and economics, the postmodernist is a socialist, which is to say, dumbass.

We can get at this one indirectly by noticing that our society has become more and more socialist over the centuries since 1776, and it has been the liberals, almost exclusively, who have made it so. We can also get at it directly by noting that most lefty causes are joined by communist and socialist groups right alongside the likes of George Soros and Michael Moore. (Anyone checked out Camp Casey lately?)

So there you have it, the breakdown of the postmodernist mentality and its modern liberal cousin. One might wonder how it is that I disagree with Hicks when I seem to have validated his primary thesis. Fair enough. Here’s the deal – Hicks’ main argument is that people today who exhibit these thought processes are direct cognitive descendants of the aforementioned philosophers. Though he focuses on four contemporary and well-known postmodernists (Derrida, Rorty, Foucault, and Lyotard), the implication is that most leftists have this philosophical pedigree coursing through their veins. This is where we part ways.

There is a thread that runs all the way from Immanuel Kant to Ted Kennedy, and it isn’t a shared philosophical contemplation and subsequent conclusion. It is very simple – none of these people had or have the stomach for reality. It truly is that simple. We don’t need to put on our propeller hats and get down and dirty with Kierkegaard to recognize that, across the board, from postmodernist philosopher to modern-day politician, the mindset is the same – if reality doesn’t look like I want it to, I will deny its existence.

Indeed, in the second preface to his Critique of Pure Reason, Kant asserts, “I have therefore found it necessary to deny knowledge in order to make room for faith.” Boom! There you have it – liberalism in a nutshell. (Yes, I realize that libs aren’t heavy into Jesus. I’m talking about the notion of abandoning reality for something you like more.) There are interesting things that flow from this. For starters, if there is no reality and all knowledge is subjective, then there is no such thing as truth. That’s right. So while we pound our fists on the table about facts and honesty, the anti-realist liberal is calculating truth (or what we think of as truth) as a matter of convenience.

You see, as long as realists are in power, they will bash anti-realists over the head with it, and though there really is no reality, getting bashed over the head with faux-reality still doesn’t feel good. Sooo…the answer is to snatch power from the hands of realists, and rhetoric is the most powerful tool for doing so. You getting the picture here? I see folks, usually conservatives, getting so wound up over the dishonesty of liberals, but what they fail to realize is that the libs are playing a completely different game. It’s not about being right (there’s no such thing, remember); it’s about power. Plain and simple.

The problem is that too many people, though they most assuredly do not know it, buy into Kant’s (or Hegel’s, to be precise) ideas about reality – namely, that it doesn’t really exist. I have often wondered what Kant would have said if Bill and Ted had brought him back instead of Sigmund Freud. Given that science has advanced to the point that we can be pretty darned sure about reality until we get down to the quantum level, I wonder if Kant would have been able to find middle ground in his thinking. To him, it was either that the real world gives its impression to the human mind or the mind gives its impression to the real world. When faced with those choices, it’s easy to see how he concluded as he did. In any case, I am a realist, so I acknowledge that we have what we have – some folks deal in reality, and some don’t. Unlike Stephen Hicks, I don’t believe that most of those people have any philosophical basis for their approach. I think, in the immortal spirit of Nicholson’s character in A Few Good Men, they just can’t handle the truth. They’re not postmodern; they’re just grasping at straws.

BTW – I’m not back, I’m still on hiatus. Really – don’t get your hopes up. This one just couldn’t wait.

The Rational Morality Debate

A recent post led to a fairly extensive thread that wandered into the subject of morality. At issue is whether morality can be rationally conceived, and whether it really makes that big of a difference. I think it can be and that it makes all the difference. Our welcoming wench, Alice, however, has finally got my number. Or does she…

Alice: Chris. You believe that there is a right way to proceed. You believe that free markets are always better than collective schemes. You believe that the only reason Hitler emerged is because of the Treaty of Versailles. You think the only way to insure having a good marriage is to move in together and have a trial run at it. You have a much clearer vision than I do.

I believe in the ebb and flow method, that there is rarely a clear path to anywhere and it is all of the myriad influences which are present which will produce the outcome. I believe in accidents. I think when things work out well, such as the formation of the United States, it’s an accident. Something which happened because of a confluence of events, not because of one or even a few men.

When you say it that way, my position sounds so Type A. More explanation is apparently needed. Perhaps a story.

I know a guy who has a brother. In his house growing up, parental discipline was pretty much non-existent. Nevertheless, both he and his brother have turned out fine – good jobs, family, stability, etc. But it turns out that his two sisters are majorly messed up. There were never any consequences for doing stupid things when they were growing up, and they are both now literally incapable of living responsible lives. They sign leases and break them. They buy cars on credit and end up having to have their parents pay for them. One even has two kids that are now being raised by my friend’s brother. It’s tragic.

There’s no question as to the cause of these girls’ misfortunes. Their parents simply failed them. They should have recognized that, though successful, well-adjusted people *sometimes* emerge from consequenceless homes, too often they do not. They ebbed when they should have flowed.

My position is not about some delusional prediction about what happens every time you do this or that – that would be quite contrary to my Kantian view of the universe – uncertainty is the starting point of all thought. It’s about probabilities and the stakes of mistakes.

I do happen to believe that free markets are always better than collective ones, but only because there has never been an example of a collective one that led to prosperity without coercion. I believe there are lots of Hitlers lying around this planet, and that the Treaty of Versailles created the conditions necessary for one to obtain absolute power. I don’t think the only way to have a good marriage is to move in together first. I believe that moving in together ahead of time dramatically increases the chances of the couple, should they end up marrying, going the distance happily. It’s an extended interview process – how is it that interpersonal due diligence is so anathema to you? Is that rational?

In all of these areas, I believe the actions that are taken, based upon the prevailing morality – the person in question’s measure of right and wrong – have important ramifications for how things unfold later on.

This is no different than wearing a helmet when riding a motorcycle – if the goal (the moral) is to stay healthy, and you can assume there’s a reasonable chance you’ll wreck (your fault or not), and you can assume that hitting your head at speed will be disastrous to your health, then a helmet is the obvious choice. It’s not about a clear vision. It’s about being informed and having an idea where you want to go in life.
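
To put the point in concrete terms, here’s a toy expected-cost version of the helmet choice. Every number in it is made up for illustration – the logic, not the figures, is the point:

```python
# A toy expected-cost comparison for the helmet example.
# All numbers below are assumptions chosen purely for illustration.

p_crash = 0.01                       # chance of wrecking on a given ride
cost_crash_bare_head = 1_000_000     # catastrophic head injury (arbitrary cost units)
cost_crash_with_helmet = 50_000      # serious but survivable injury
cost_of_wearing_helmet = 10          # discomfort, hassle, expense

expected_cost_bare = p_crash * cost_crash_bare_head
expected_cost_helmet = p_crash * cost_crash_with_helmet + cost_of_wearing_helmet

print(f"Expected cost without a helmet: {expected_cost_bare:,.0f}")
print(f"Expected cost with a helmet:    {expected_cost_helmet:,.0f}")
# Even at a 1% crash probability, the lopsided stakes make the helmet the
# obvious choice - which is the whole point: it's about probabilities and
# the cost of mistakes, not certainty.
```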

So my point in all of this is to say that it is possible to rationally conceive of our view of right and wrong, and that doing so is essential because our choices and actions have larger consequences than we often imagine. And in a society increasingly obsessed with instant gratification, awareness of this is that much more critical.

And lest I ignore an important historical sidebar, Alice also has this to say:

Take the Treaty of Versailles for instance. That begot the Marshall plan. It wasn’t invented out of the blue, it came about because people saw that punishing the loser didn’t work too well.

This is what I mean by ebb and flow. People are only smart in retrospect. We ain’t psychic.

More proof to my point. The aspects of the Treaty of Versailles that caused the problems that created WW2 were the punitive ones – the ones that forced Germany to accept full responsibility for the war, the ones that forced Germany to pay exorbitant reparations, the ones that forced Germany to relinquish colonies and territories. None of these were present in Woodrow Wilson’s Fourteen Points, which was the US model for the Treaty.

In fact, Clemenceau (the French guy, for the historically challenged) and Wilson were quite at odds through the entire process of establishing the Treaty, heatedly so. The French, having been severely ravaged by the war, and because Clemenceau was a bulldog of magnificent proportions, won out in the end. Nevertheless, someone did know better than to do the Germans as they were done, and that someone was the leader of what has become the greatest nation on this planet. He was enlightened, in a sense, which means he understood enough about humans to know that the French need for revenge would end up coming back to bite them, and maybe everyone else. His morality and his knowledge of his species were the guide to his vision. Several million people would be alive today were it not for an ebb when there should have been a flow.

Lastly, in response to my assertion that individual human action has been one of the most dramatic forces that have shaped human history, specifically my statement that without George Washington, there would be no USA, Alice comes back with this:

…to that I would say, no King George, no USA. If England had acted differently and had been in a different financial position and had not imposed such heavy taxation, it is unlikely the colonists would have agreed to revolt against the mother country.

See, it was the confluence of events. AKA, an accident.

No, it was not an accident. You’re quite right that King George’s oppression of the colonies was the catalyst for the Revolutionary War, but his attitudes and actions were not accidental, not by a long shot. They were a direct result of his morality, which was based upon the inherent absolute power of the monarchy and the obligation of all English peoples to bow to it. It is well known that there were those in Britain who recommended just cutting us loose. George would have none of it – his pride and his vision of how things should have been (his morality) were being challenged. He, too, ebbed when he should have flowed. It was the widespread dissemination of Enlightenment ideas by people like Thomas Paine and John Locke that alerted the masses to his error. Just as Thomas Paine risked death by writing Common Sense, so did the colonial army in defying and clashing with the British, and both because of their morality, the one that was rationally conceived by a new generation of intellectuals.

At every step of the way through life, there are choices to be made, forks in the road. Each path corresponds to a ripple through the future – some are big, some are small. It is our morality that guides us in choosing a path, which means it is incumbent upon us to conceive of our morality methodically through the use of reason. More importantly, it is incumbent upon us to reject moralistic ideals that do not stand up to rational scrutiny (read: dogmatic morality). This is a lynchpin in the enlightened modern mind.

(Sorry for picking on you, Alice, but we simply outgrew the comments area. This is an important and clarifying difference of opinion, and if anyone can take it, I know you can.)

You Gotta Have Faith

Original Post (with many many comments)
In response to yesterday’s post, Freedomslave came back with an interesting comment, and I think it warrants a post of its own.

Now I hate bible thumpers as much as the next guy, and I don’t go to church (except on Christmas). But the one thing I know for sure is that you have to have faith. Your faith might be that when a species hits a point in its evolution, the DNA mutates and evolves into a higher form of life. Just like the bible thumper, you need a certain amount of faith to believe that, especially with all the inconclusive DNA evidence that now exists and the lack of fossil evidence to verify it.

You have to have faith. With this, I wholeheartedly agree. This wasn’t always the case. I used to believe that faith is a crutch, kind of a get out of jail free card for when reality doesn’t go your way. In a lot of ways, I still believe this. I don’t subscribe to the notion that just because many big questions are still unanswered we have to use faith to believe in something. It’s like we’re saying we can’t get by without embracing some worldview, and our only options are all debatable as to their merit. This is simply false. We can do very well in life without buying into big-picture concepts that don’t add up logically. But it requires us to put aside our inherent need to explain our surroundings.

I’ve talked before about the evolution of hope and despair. The gist of the concept is that our minds have a built-in ability to assess our environment in terms of whether or not it bodes well for our plans, which in caveman days were simple – survive long enough to reproduce. Situations that bode well generate hope, which keeps us clocked in and active. Situations that look bad generate despair, which prompts us to explore our options and do something different. But before hope and despair can do their jobs, our minds have to make that assessment. Thinking about the hostile environment of ancient times, it’s clear that decisions had to be made – if you stood too long weighing every little option, bad things could (and often did) happen. Statistically speaking, then as now, it is almost always better to do something than nothing when your life is on the line. Thus emerges our need to explain our world.

But our modern world, as this blog routinely espouses, is nothing like that of our cave-dwelling ancestors. Indecision isn’t the perilous circumstance it once was. We have the benefit of nearly assured safety, and we have easy access to food and shelter. Nevertheless, the genes that make our minds are still cranking out models that insist upon satisfied curiosity. This, I am convinced, is why people buy into all manner of odd ideas. Anything to feel certain. And the concept of faith has been so sanctified that it offers the perfect excuse to settle on whatever floats your boat. I would argue, however, that faith isn’t all it’s cracked up to be, at least not most of the time.

For the most part, faith is exactly as I have always seen it – an excuse to believe whatever makes you feel best. In that case, it’s a fast path to intellectual laziness. If something requires faith to believe in it, isn’t it worth asking why having faith suddenly makes it believable? What’s that old saying, usually attributed to Lincoln: “If you say a dog’s tail is a leg, how many legs does he have? Most people answer five, but it’s four. Just saying a tail is a leg doesn’t make it so.” Or something like that. Anyhow, reality is what it is. In my book, there’s never anything to be gained by denying it. But…but…but.

As I said, I am actually now on board with the whole faith thing. I have been for two or three years now, but the only thing I have faith in is reason. As it happens, there’s really no other way. You see, reason will only get you so far. You can be the master of all masters at logical deduction and still reason will fail you. It will fail you when you get to the land of quarks and leptons. At the subatomic level, there’s no way to really measure what’s going on, and this makes all the difference when you’re trying to use reason to prove the world is as we think it is.

Think about how many physics equations use time as a variable. But what is time? Or, better yet, what is a second? We just assume that our standard units of measurement make sense, but do they? By definition, a second is 9,192,631,770 periods of the radiation from the transition between the two hyperfine levels of the ground state of a cesium-133 atom. Fair enough. But how can we tell a cesium-133 atom from a cesium-132 atom? We certainly can’t pick the former out of a subatomic lineup. We use statistics and probabilities to tell them apart. Aye, there’s the rub. We’re guessing. Our guesses are good, mind you, but we’re guessing nonetheless. So here we are, faithless, relying upon reason to guide us in our estimation of everything, and we can’t even get the most basic things right. This is where faith earns its stripes.
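
For what it’s worth, the arithmetic side of that definition is trivial – it’s the counting that’s hard. A little sketch of my own, nothing out of any metrology text:

```python
# One second is defined as 9,192,631,770 periods of the radiation from the
# cesium-133 hyperfine transition. Given a (perfect, idealized) count of
# those cycles, converting to seconds is just division.

CS133_HYPERFINE_HZ = 9_192_631_770  # cycles per second, by definition

def elapsed_seconds(cycles_counted: int) -> float:
    """Convert a count of cesium hyperfine cycles into elapsed seconds."""
    return cycles_counted / CS133_HYPERFINE_HZ

print(elapsed_seconds(9_192_631_770))   # 1.0
print(elapsed_seconds(27_577_895_310))  # 3.0
# The uncertainty I'm pointing at lives entirely in the counting and the
# measurement, not in this arithmetic.
```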

If I must have faith, and it appears that I must, it has to be solely in the notion that reason will not fail me – in the notion that logic holds up even under the most dire of circumstances, so long as I don’t expect too much of it. In the end, it was Karl Popper who helped me with this (me and David Hume, although Hume was long dead when Popper came along).

David Hume worried so much about his problem of induction that he ended up rejecting rationalism altogether. His hang-up was founded on the idea that even though something (like the sun rising) has happened for 1000 days, it is illogical to suppose that it will happen on the 1001st. Since we’re only privy to part of the truth of this world, we could have been wrong lo those 1000 days. Tomorrow, things could change, so it doesn’t make sense to make predictions. Ergo, rationalism doesn’t work. (Given Hume’s popularity in the old days, it’s no surprise that there was a decidedly anti-rational movement that succeeded his death and the Enlightenment. I believe they call it Romanticism. Yes, critics, the French Revolution might have also had something to do with it.) Popper, however, having seen the mental cancer that was irrationalism, took a different approach.

Popper conceded from the outset that making predictions based upon some supposed certainty that was obtained by way of reason was illogical. He acknowledged that certainty, in itself, is unattainable, but he also acknowledged that we have to do something in life. So we use reason to evaluate our alternatives and we choose the best one. In that way, we don’t ask too much of it, and we keep ourselves as tuned into reality as possible. The only thing required is a healthy faith in reason. That’s where I am these days.

I rejoice in the mystery of our world. I’m thrilled to know that there will always be things to be curious about. I’m thrilled to know that there’s always a chance that something big and heretofore established will come crumbling down in the face of new evidence. I also watch car chases – maybe it’s me. In any case, my explanation for this world is simple – it’s all explainable (not explained, but explainable). It’s up to us to chip away at it so that we can keep handing what we learn down through the generations. I really don’t need anything more than that, and I firmly believe that most people, if they’d take a deep breath and give it a try, wouldn’t either.

Books That Will Make You Think Differently About Yourself

The concept behind this site is fairly simple. Our genes are controlling us a lot more than we think they are, but this is not a bad news story. We can, if we understand what our genes are up to, take control and live according to our rationally conceived objectives in life. This is not an idea that I have come up with on my own (though I may be one of its most ardent proponents). I’ve just grabbed onto it because I think it is the key to getting the most out of our time here. If we know that emotions are the brain’s rapid response system, and we know that they evolved to react in certain ways to certain situations (social situations, in particular), then we have a leg up in the quest to think when circumstances require thought more than emotion. That, alone, I am convinced, would elevate the general happiness to levels that have never before been seen in mankind’s history. To that end, I’d like to propose the creation of a book list, an enlightened caveman curriculum, if you will.

Let me first draw some lines in the sand. There are countless books that can be said to enlighten humanity – the dictionary comes to mind – so we need some criteria for books that will fit properly into this. The first is this: a book on this list must deal directly with human nature. It may be based in science, such as genetics, or any other field of study that is represented on accredited college campuses. Anthropologists and archaeologists have learned a great deal about who we are as a species, so it makes sense to include their efforts in our pursuit of enlightenment.

Second, the book must invoke concepts about human nature in a prescriptive way. That is to say, it isn’t good enough to say that genes are selfish, which means our elaborate lives are the happenstance result of replicators replicating. (So The Selfish Gene, great as it is, is out.) The book has to say what the science and/or anthropology and/or archaeology prescribes for those of us looking for direction in life. We need to be able to practically apply what the academics have discovered.

I’ll start by adding three books that have been particularly meaningful to me, and I’d ask that suggestions to the list adhere to the same general format – tell what the background information is, and then tell what is prescribed, and how it benefits mankind. Over time, hopefully, we’ll have a nice list of books that all add credence and weight to the theme of this site. Of course, in the spirit of intellectual rigor, I’d welcome any recommendations of books that contradict the enlightened caveman concept.
These books are listed in no particular order.

  1. Mean Genes: From Sex to Money to Food: Taming Our Primal Instincts
    by Terry Burnham, Jay Phelan
    From the introduction:
    Our brains have been designed by genetic evolution. Once we understand that design, it is no longer surprising that we experience tensions in our marriages, that our waistlines are bigger than we’d like, and that Big Macs are tastier than brown rice. To understand ourselves and our world, we need to look not to Sigmund Freud but rather to Charles Darwin.

    The authors then go on to address the following list of topics: debt, getting fat, drugs, taking risks, greed, gender differences, beauty, infidelity, family, friends, and foes. In each case, they detail the ancient genetic strategies that are manifesting themselves in behavior and social phenomena today, and then they explain what shifts in thought are implied by the information if we are to improve our lives.

    I must admit that I was in a pretty solid state of panic when I read the introduction to this book. I was thinking that these guys had basically beat me to the punch. Fortunately, as I read on, I realized that there really isn’t very much overlap between my book and theirs. Yes, we’re both working off the same general premise. However, my book is far less tactical. I’m focused on changing the way we think from the inside out – by starting with how we think of ourselves and what matters in life and then moving on to how we think about our fellow man – all for the sole purpose of bringing happiness to our lives.

    Burnham and Phelan, however, call their book a manual for the mind, and I have to agree with them. For example, they explain that in ancestral times, it made sense to eat when food was available. Therefore, we are now a species that eats far more than it needs when food is plentiful (as it is in first-world countries). That means we have to consciously endeavor to control our intake of food. If we do not, we’ll routinely find ourselves letting our belts out. Think of how many people in this country don’t know this. The mass awareness of little tidbits like this could prolong and improve the lives of countless people. There are many, many others in this book.

  2. Consilience: The Unity of Knowledge
    by Edward O. Wilson.
    From Chapter 6: The Mind
    All that has been learned empirically about evolution in general and mental processes in particular suggests that the brain is a machine assembled not to understand itself, but to survive. Because these two ends are basically different, the mind unaided by factual knowledge from science sees the world only in little pieces. It throws a spotlight on those portions of the world it must know in order to live to the next day, and surrenders the rest to darkness.

    Wilson’s book is about reconsidering the way we teach and pursue knowledge. He argues that our schools break subjects apart (math, English, biology, etc.) for somewhat arbitrary reasons and that this works against the design of the mind, which is more comfortable with holistic approaches to learning. Consilience, he says, is “…literally a ‘jumping together’ of knowledge by the linking of facts and fact-based theory across disciplines to create a common groundwork of explanation.” The idea is that we shouldn’t restrict ourselves to applying what we learn from computer-based neural networks to building better computer systems. We should ask what other phenomena could be better understood by what we know about these inanimate but elegant systems. It’s about synthesis, and this, to me, calls for a mental paradigm shift.

    Wilson asserts that the value of consilience is not something that can be proven from first principles or by logical deduction. Its value is self-evident, as it has been chiefly responsible for most of the progress of our species. I can vouch for that in my own life. Any time I learn something new, I automatically ponder what this new information could bring to other things I’ve wondered about. The Heisenberg Uncertainty Principle, for example, has so many other applications that counting them would be tough, and I thank Wilson for helping me think differently, about myself and the world around me.

  3. The Science of Good and Evil: Why People Cheat, Gossip, Care, Share, and Follow the Golden Rule
    by Michael Shermer
    From the Prologue:
    Ultimate questions about social and moral behavior, while considerably more challenging [than questions about hunger and sex], must nevertheless be subjected to an evolutionary analysis. There is a science dedicated specifically to this subject called evolutionary ethics, founded by Charles Darwin a century and a half ago and continuing as a vigorous field of study and debate today. Evolutionary ethics is a subdivision of a larger science called evolutionary psychology, which attempts a scientific study of all social and psychological human behavior. The fundamental premise of these sciences is that human behavior evolved over the course of hundreds of thousands of years during our stint as hominid hunter gatherers, as well as over the course of millions of years as primates, and tens of millions of years as mammals.

    In this book, Shermer takes aim at morality and ethics by arguing that humans came by the two long before religion or any codified social rules existed. In Chapter 5, “Can We Be Good Without God?”, he addresses head-on how we can rationally arrive at morality and be anchored to it as tightly (and rightly) as any religious person is to his or her morality. Throughout the book, the author calls upon all sorts of academic information, from evolutionary psychology to anthropology to sociology, to make his points. And aside from the obvious benefits of seeing our tendency toward piety for what it is, he also brings out a really useful concept: using fuzzy logic to think differently about issues.

    Shermer makes the point that the human tendency to dichotomize, to think something is either this way or that, must be guarded against, because life is simply not black and white. Better to think in terms of fractions (see the sketch just below). For example, at any given moment, I may be 20% altruistic and 80% non-altruistic (selfish). Though, on balance, I come off selfish at that time, it is incorrect to say that I am a selfish person. The situation may have called for selfishness. The bottom line is that circumstances have a lot to do with our morality. Being able to see people and ideas as shades of grey helps us to avoid moral absolutes that generally lead to division between people. This is a worthwhile message, to say the least.
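
A minimal sketch of what thinking in fractions might look like – my own toy illustration, not anything out of Shermer’s book:

```python
# Represent each situation as a degree of altruism between 0.0 and 1.0
# instead of a yes/no label. The overall picture is a shade of grey.
# The situations and scores below are invented for illustration.

situations = {
    "shared credit with a coworker": 0.9,
    "kept the best seat for myself": 0.2,
    "donated anonymously": 1.0,
    "haggled hard over a used car": 0.3,
}

average_altruism = sum(situations.values()) / len(situations)
print(f"Average altruism: {average_altruism:.0%}")  # a fraction, not a verdict

# The dichotomized version throws that nuance away:
label = "altruistic" if average_altruism >= 0.5 else "selfish"
print(f"Forced into black and white: {label}")
```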

So there you have it – three books that I think contribute to the enlightened caveman movement. There are more, but not too many, not to my knowledge. That’s why I’m doing this. I’ll finish my contributions in later posts. For now, I hope to learn about all the great books I’ve never heard of, books that will bolster my belief that here lies something big, something important.

What Is Consciousness? A Trip Into The Mind

Original Post (with comments)
I’m not trying to be a scientist. I’m really not. I’ve just read widely on scientific topics, particularly those related to evolution, the brain, and thinking, and over the years I’ve come to my own interpretation of, you might say, the gestalt of the mind. It’s sort of a general feel for the physicality of it and how layers of abstraction are built upon that, a feel for its evolutionary history and the infrastructure it begat, and a feel for how all that translates into a wide swath of common behavior patterns. That probably sounds as arrogant and sure as possible. We’re inside my head right now, so bear with me. I’ll admit that if there are original ideas in my vision, they are the kind of originality you attribute to an editor. Nevertheless, if I’m being honest, my aim here is to prove that my intuition is right. I really want it to be.

But I know that about myself. I’m conscious of it, and because of that, I’ve taken steps to insulate my curiosity from my bias. That’s why I’ve chosen critical rationalism as my method. I recognize up front that I can’t prove that I’m right, that I don’t have all the facts, and that my emotions could be, try as I might, confounding my conclusions. So I write; I throw out hypotheses and the evidence, shoddy as it may be at times, that I have for them. As time goes on, this gestalt is becoming clearer and clearer, which only means that I understand it enough to articulate it. I write more. The whole time, I’m hoping that people will come along and adjudicate my accuracy. (Of course, I’m hoping with arms drawn to my chest and clenched fists that it works out for me. That’d be great. I’d feel smart, or better yet, smarter.) Nevertheless, I have committed myself to finding out, one way or another, if I’m right. I figure the worst that can happen is that I’ll make a few adjustments and still end up with the satisfaction of feeling like I have a holistic, almost unifying, understanding of something seriously elusive.

The preceding two paragraphs just played out on a giant movie screen in my mind. And, as if experiencing a good movie, I was engrossed. I still am. And, like a movie, a lot of other things were and are going on that were and are escaping my attention. Interestingly, in thinking about the things that have been escaping my attention, I all of a sudden start noticing them. The sound of the heater. The visual flicker of the TV on mute. The sighs of my dog as he makes one of his countless tiny adjustments. The smell of the fireplace that still hasn’t been used this winter. My attention is flittering back and forth between the thoughts flowing from my fingertips and the surroundings I am still writing about. Scene after scene on a giant movie screen in my head. And this movie screen is, in my view, the key to consciousness.

I feel intrepid in this domain of consciousness, mainly because no one knows for sure what’s going on. In short, I like my chances on this. If I take the knowledge I’ve gleaned from Stuart Kauffman’s work in At Home In The Universe (self-organization theory), apply it to the physical function of neural networks and to the structural organization of the brain, and then infuse all that into Daniel Dennett’s Consciousness Explained, I come up with the following explanation.

Neural networks are the building blocks of mental organs. Some mental organs we share with other animals. They operate in the lower, simpler levels of abstraction, near our brain stem, serving to facilitate our basic survival and reproductive success. Examples would be autonomic body functions and basic emotions, such as love, fear, anger, sadness, and jealousy. These emotions are not feelings in the usual sense. They are physiological responses that elicit particular behaviors. Imagine that the mind is in a steady state when it is calm and nothing out of the ordinary is perturbing it. Then, when something happens that requires a physical response, like, say, a tiger approaching, these simple programs, these emotions, induce physiological reactions, which prompt the impulse to assuage them, to get back to a steady state. Each physiological reaction elicits its own physical response. The collection of these programs is sufficient to keep us alive and reproducing.

They’re instinctive. Over eons of time, however, these survival programs have been co-opted and abstracted (via self-organization) into higher and higher levels of complexity, levels that call upon more and more information in their execution processes. The higher-level networks are larger, more distributed, both vertically (in and out of lower levels and higher levels) and horizontally (pulling from a wider and wider body of data). They contain our cognitive programs and our complex emotions, and they store vast networks of information. The complex programs make it possible to override the basic programs, sometimes temporarily, just long enough to deliberate for a bit, sometimes permanently, allowing us to adopt a different course of action altogether. The networks at this level also enable the use of logic and rationality. Then, and this is the best part, at the very top (figuratively speaking), all of these networks of networks self-assemble into the giant movie screen. Consciousness is upon us.
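
Here’s a toy sketch of that layering – purely my own illustration of the idea, not a claim about how neurons are actually wired: a fast low-level program recommends an action, and a slower, context-aware program gets a chance to veto it.

```python
# Low-level program: fast, instinctive, always running.
def low_level_fear(stimulus: str) -> str:
    return "flee" if "tiger" in stimulus else "steady state"

# High-level program: slower, pulls in more context, can override the reflex.
def high_level_override(stimulus: str, reflex: str) -> str:
    if reflex == "flee" and "behind glass at the zoo" in stimulus:
        return "stay and watch"  # deliberation overrides the basic program
    return reflex

stimulus = "a tiger behind glass at the zoo"
reflex = low_level_fear(stimulus)
action = high_level_override(stimulus, reflex)
print(f"reflex: {reflex} -> action: {action}")
```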

The movie, however, is really a gigantic closed-circuit TV. It’s as if a wide-angle camera is mounted at the very top of this vast sea of neural networks in our brain, some of which are tightly coupled so as to resemble distinct entities (organs, you might say), while others, the majority, are stretched across multiple organs, serving as organs themselves. Interspersed throughout are countless relational and hierarchical databases of information. But the camera can only see so deep.

It doesn’t have access to the lowest levels, to the simplest of programs. Its view is limited to the upper reaches of abstraction, where complex thought and emotions reside. Of course, the lower levels can manifest themselves in the upper levels (such as when we notice a loud sound), seeing as how they’re all connected, but the low-level data is edited at that point. The important thing is that where the camera is pointed is the result of a contest between competing information networks and the organs that exploit them.

Hordes of the complex programs below are shouting for their chance to be on camera. They’re always shouting. They’re always executing their programs at the top of their voices. These mental organs are yelling out the input they’re receiving and the conclusions they’ve reached, which are often perceived as recommended courses of action. The heater is vying for my attention, and it has just gotten it. “The heater makes a low hum: think about my body temperature, think about the temp in the baby’s room, do nothing.” Before this, it was my concern for the words ahead that dominated the camera’s lens. Its recommendation: read back over the last paragraph…

I’m back.

As I was saying, as the camera scans the networks below, it is drawn to the loudest network, and an interesting thing happens when the camera focuses on a particular network or set of networks – the shouting there intensifies. That means that when the camera latches onto a network, it is held captive, if you will – the network stays on the screen until something louder pulls the camera away. That something might be a cognitive program that is ruminating over some past memories, or it might be the reverberations of a low-level emotional program that has perceived an itch on the arm. Whatever wins the competition gets screen time and the consideration of its conclusions and recommendations. It is the existence of the screen, the camera, and what passes through it that constitutes consciousness.
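
To make the shouting-for-the-camera idea concrete, here’s a toy simulation – entirely my own sketch, not anything out of Dennett or Kauffman. Every module shouts with a fluctuating salience, the camera swings to the loudest, and whatever holds the camera gets a temporary boost until something louder pulls it away.

```python
import random

# Baseline "loudness" of a few competing mental modules (invented values).
modules = {
    "the next sentence I'm writing": 5.0,
    "the heater's low hum": 1.0,
    "the dog sighing": 0.8,
    "an itch on the arm": 0.5,
}

ATTENTION_BOOST = 2.0  # being on screen makes a module shout louder
attended = None

for step in range(10):
    # Each module's shout fluctuates from moment to moment.
    shouts = {name: base + random.uniform(-1.5, 1.5) for name, base in modules.items()}
    if attended is not None:
        shouts[attended] += ATTENTION_BOOST  # held captive, for now
    attended = max(shouts, key=shouts.get)   # the camera swings to the loudest
    print(f"step {step}: camera on '{attended}'")
```

Run it a few times and the camera mostly sticks to the writing, with the heater or the itch occasionally stealing a frame – which is about what my evening feels like.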

The beautiful thing is what happens when an amazing idea flashes across the screen – I can control the camera. I can control the camera! Free will is born. Now the conscious awareness, the camera, has turned to a remote spot in the data grid, that which corresponds to the concept of the self. High level programs instantly begin connecting to this new network, factoring the notion of self (including its newly discovered ability to control what appears on the screen) into their routines, into their conclusions, and into their recommendations for action. Suddenly, with free will at the helm, and a mind imbued with the awareness of self, the camera comes off of auto-pilot. The content on the movie screen becomes a matter of choice. But even then, the recommendations on the screen may not control the actions taken.

There are still low-level programs at work. They’re there all the time, perceiving, processing, and executing, just as they have in humans for countless centuries. And a key attribute of them is that they work very fast, so fast that they regularly spur us into action long before we realize why we’re acting or exactly what we’re doing. If a beautiful, sexy girl walks past a straight 16-year-old boy, his eyes will saccade their way over her time and again before he ever actually thinks to stare at her. His low-level programs are doing their job. If he’s absorbed in a conversation, he may not even notice her, at least not consciously. His mind, however, knows she’s there. Similarly, if an intruder crashes through my door, it will not be free will driving my bus. Before the shape of his face ever passes over my movie screen, my body will be reacting. I will effectively be on auto-pilot, at least for a few seconds. But as the situation resolves, free will will once again take the helm, slowly but surely.

This is my conceptualization of the human mind, from neural network to consciousness. This is what pushes me insistently away from dualism. This is what makes me believe that understanding our lowest-level emotions, by aiming the camera wherever they manifest themselves, is the key to harnessing and managing them. This is why I believe that enlightening the caveman is both necessary and possible. Our basic emotions – our fear, our quest for status, our affinity for cooperation (read: concurrence), and our sex strategies – have the advantage. They spur us to action while they’re below the level of consciousness, under the radar of awareness, unless we either inadvertently develop high-level programs that override their recommendations or we deliberately scan the visible networks for evidence of their influence and override them.

An example of the former would be a priest taking a vow of chastity. Even if he has no concept of human evolution and the sexual programming that resides down near his brain stem, the high-level programming that corresponds to his commitment to the cloth could easily suppress his response to a lovely female parishioner. (Unless he’s a…nevermind.) An example of the latter would be a sky-diver standing in the door of a plane. He realizes that it is perfectly rational to be afraid. He is aware of his elevated heart rate and sweaty palms, and he knows why they’re there. But he reasons that his parachute is safe and his training has prepared him, so he jumps. He deliberately overrides his lower-level survival programming.

There are two takeaways from this.

The first is that culture can tune our high-level programming, even if we never know it’s happening. School for young children does exactly this. There is no reason for this tuning to ever pass across a child’s movie screen. The more “cultured” the child becomes, the less the basic survival programs govern his or her actions. The reverse is also true. Children who are not instructed on how to be human beings in a modern world become almost cartoon-like caricatures of our cave-dwelling ancestors. You can see it on any busy playground.

The second thing, the important thing, is that the conscious intent to override basic emotional programming is extremely powerful. If we turn our camera upon our concept of self, and it includes an understanding of what is happening down below, on our screen flashes the idea that we can control much more than we ever knew – bringing more detail to the picture and a longer list of available options regarding action and inaction. This is a good news story. Nothing is determined. We’re in charge. If we do not exercise this power, we leave our fate in the hands of our genetic heritage. But if we do, our genetic heritage becomes irrelevant.

The clock just passed across my movie screen. Recommendation: publish and crash.

I’m Feeling Your Pain – An Intro to Concurrence

Original Post (with comments)
Perhaps the most regularly recurring theme in this blog is the interplay between the quest for status and the human tendency to cooperate (both genetically driven) and our modern environment in leading to the behaviors we engage in and witness every day. That humans learned to cooperate is taken as a bit of an axiom in the study of hominid history, but something has been nagging at me for a while, and I’m just now getting to the point where I can articulate what I’ve come up with.

What if there is a genetically driven motivation that is larger than reciprocal altruism? I think there is. What if reciprocal altruism is just one manifestation (albeit a very critical one) of a heretofore elusive, but grand aspect of human nature? I think it is. This aspect of human nature is what I’ll call the need for concurrence.

Concurrence, in its grandest form, is perfect empathy. It is being able to mentally and emotionally relate to another person in a very deep way. It’s feeling someone else’s pain. It’s a profound connection between two people. Suppose the adaptation that Mother Nature found was an inherent desire to concur with other humans, and a consequence of reaching this deep emotional connection was the emergence of informal rules regarding favors done and favors owed. And lots more…but let’s back up for a moment.

In evolution, it’s always interesting to ponder the intermediates. In this case, we can imagine hominids like Australopithecus, who were not known for being big cooperators, and Homo sapiens, and we can wonder how natural selection bridged the gap. Did our species just suddenly start cooperating, or did something happen before that? If I’m right about concurrence, then something did.
If we know that hominids who banded together to share resources and divide up duties fared better than hominids who did not, is it not reasonable to wonder what kind of primary emotion would produce that tendency for groups to come together? (When I talk about primary emotions, I’m talking about the ones you read about in books by Michael Gazzaniga and Joseph LeDoux, the basic emotional programs, like fear and the quest for status, that underlie our more complex emotions, like anger and jealousy.)

From what I’ve read, the answer would be the emotional tendency to cooperate. But I have a hard time imagining how that would work. Not that there’s anything wrong with that – there’s a lot I can’t imagine. However, I do not have a hard time imagining the emergence of a genetically-driven emotional drive to connect with another human. The cooperation part would simply be the fortuitous result, the one that natural selection seized upon, resulting in the reign of the human animal on earth.

So let’s suppose, just for fun, that I’m right, that there is an inherent human need for concurrence. Just think of how much it explains. Reciprocal altruism is only the tip of the iceberg. Concurrence could explain all sorts of social phenomena like, for example, that elated feeling at a rock concert when the whole place is glued to the same moment.

If the need for concurrence is a primary emotion, then it, like the others, is executed in different ways in different situations. In one-on-one situations, it can be seen as the pursuit of the direct emotional connection. In crowds, it can be seen as swimming in the same direction as the school, so to speak. Who can deny the visceral good feeling that comes from being in a crowd where everyone is focused on the same wonderful thing? If concurrence is real, then it explains that feeling – we’re pulled toward situations like that and we feel immense gratification when we encounter one. I know many people, and I am one of them, who appreciate big events (concerts, sporting events, etc.) for this reason every bit as much as for the name on the ticket. To be part of a happening, where everyone, for a short period of time, is concurring. To be part of a shared experience where a mass of individuals has been transformed into a collective entity, one that shows no signs of dissension in the ranks. This is human stuff. We are but moths to the flame.

But, as this blog vigilantly asserts, our primary emotions were not designed for this modern world. This means that, like status, concurrence has its downsides. Consider two teenage girls who are best friends. The desire, no, the need, for concurrence overrides the truth in many situations. If both girls are a bit heavy and are insecure about it, they can achieve deep concurrence by propping each other up with compliments to the contrary. Even though they know that the answer to, “Do these jeans make me look fat?” is, “No, your large ass makes you look fat,” they respond with, “No! They’re like totally cute.” The point is that, just as the quest for status often causes us to cut high-status people slack while we criticize low-status people, concurrence can distort truth when it is ill-advised in social situations. And on a larger scale, on the crowd scale, it can cause us to buy into fanatical causes.

For those for whom one-on-one interpersonal concurrence is hard to find, causes can act as a good surrogate. The feeling of swimming in the same direction as the school is like a hundred small-scale concurrences adding up to the effect of a deep one-on-one concurrence. (See Eric Hoffer’s The True Believer.) The need for this distributed emotional connection, which, in this case, is the need to belong, trumps all else, logic and rationality included.

I’m just getting my arms around this idea and where I can take it, so I’ll stop here and come back with more as it develops. But I can’t help thinking that this will be the topic of my next book. The applications of this concept are mind-boggling. And even if it isn’t true, even if the whole thing is nonsense, it’ll be a great exercise to find that out. Thoughts?