Wednesday, December 2, 2009

Brain of Theseus

Often, students and academics joke about the problems in their fields that keep them awake at night. Usually, I think, these are questions of philosophy, but I've known people in other disciplines to be struck by insomnia while wondering about an archaeological site or the meaning of a poem. For a while, I've been wondering about the meaning of this next problem. But until recently, I hadn't thought about it exactly this way. Now I'm not entirely convinced I'll ever sleep again. Convenient, as exams start next week and I have 25 pages of papers on ethics and psychology to write.

The Ship of Theseus
The ship of Theseus is an old philosophical problem of identity that goes like this: Suppose there's a ship, and every year, one plank of it is replaced. If the ship is made of, say, 200 planks, then in 200 years, we're faced with a problem: Is the ship which remains the same ship, or is it a different ship? And if it's different, at which point did it become different? Which plank was the last straw?

This is sort of fun, but it didn't become really interesting and terrifying until I remembered the popular claim that the body's cells are replaced over a certain number of years (7? 12? I can never remember). Neurons themselves mostly last a lifetime, but the molecules that make them up are continuously swapped out, so the point survives: our brains are ships of Theseus. The problem is that we perceive our consciousness as being continuous. However, the mind is the brain. So the answer has to be that our consciousness is somehow connected to the structure and organization of the neurons. My identity is the exact shape of my brain. I maintain a constant, or at least continuous, personality because my brain is continuous across time, regardless of the replacement of its individual components.

Which brings us to an even bigger problem, as raised by my fantastic philosophy prof, Dr. Davies: If we took each old plank of the ship as it was being replaced and put it in a warehouse and then, 200 years later, reassembled the ship of Theseus, we would have two ships of exactly identical make. And yet they would not be the same ship. So which one would be the ship of Theseus, the one rebuilt from the original planks or the one that's been continuously sailed for those 200 years?

I mentioned my issue with neurons to Prof Davies, who responded simply: "Well that's easy, you aren't the same person from moment to moment." That's nice, and obviously he's correct, but that doesn't change the fact that I have the sensation of continuous consciousness.

Then I started thinking about science fiction. Imagine we could teleport. We would hop into a booth, which would scan our entire body (and, probably, the trillions of bacteria that rival our own cells in number) and then transmit that data to another booth, which would decode it, fabricate a body of exactly the same structure, and initiate CPR to get the heart started. The beta body, possessing the exact same brain as the alpha body, would experience what? The memories and experiences and feelings leading right up until transportation, and then everything afterward. In short, it would feel like waking up from a nap (on the other side of the universe). The beta body would have the sensation of continuous consciousness. What happens to the alpha body, then? If it disappears, it dies. The consciousness of alpha is not transferred to beta. Consciousness can't be transferred in any meaningful way. You've essentially cloned the body and mind.
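If it helps, here's the distinction the teleporter trades on, sketched in Python (the Person class and its fields are my own toy stand-in for a body scan, nothing more): a deep copy compares equal to the original while remaining a different object.

```python
import copy

# Toy stand-in for a scanned body: its state is everything the booth records.
class Person:
    def __init__(self, memories):
        self.memories = memories

    def __eq__(self, other):
        # "Same person" in the structural sense: identical scanned state.
        return self.memories == other.memories

alpha = Person(["first day of school", "stepping into the booth"])
beta = copy.deepcopy(alpha)  # what the far booth fabricates

print(alpha == beta)  # True  -- exactly the same structure
print(alpha is beta)  # False -- two distinct objects all the same
```

Equality of structure and identity of the thing itself come apart, which is exactly the alpha/beta problem.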

But if consciousness can't be transferred, then how is consciousness transferred from my waking self yesterday to my waking self today, across the divide of all of 4 hours of sleep I got last night (and an all-nighter tonight and tomorrow night, hooray)? The obvious answer, I think, is that it isn't. I'm not the same person who existed yesterday. His memories were gifted to me when I woke to the sound of my damn alarm this morning, but his consciousness ceased to exist when he went to sleep. In other words, we die each night when we go to sleep and are reanimated each morning. But I'm not the one being reanimated; it's a beta or gamma or however many iterations of the Greek alphabet are required after nearly 22 years of nightly sleep. When I go to sleep tonight, that's it. I'm dead.

I hope this doesn't trouble you as much as it troubles me.

Saturday, November 28, 2009

Postscript, a note on style

I wrote this for my Medicine & Ethics class. We have been reading a series of books on genetic engineering and posthumanism, most recently Cyborgs and Barbie Dolls by Toffoletti. I knew that if I continued to just flat-out disagree with every book we've read in class so far (and considering how many of them are religious in nature, it's safe to say I would have), he'd nail me again on my grade. So I knew I had to agree with the book to get an A. But I also really needed to say my piece about this filth. So I wrote a postscript to attach to my paper explaining why the whole thing was nonsense, then never turned it in. Then I sat down and actually wrote an essay about the book, which did get me an A.

The funny thing was that most of this is based on a book called Fashionable Nonsense, which I brought into class one day and laid on my desk. When my professor walked by, he commented, "That's a good book."

"Yeah, I got it out for Toffoletti."
"That's good to know," he said.

Then later I saw him in office hours and discussed it with him further. I got him to apologize for calling Dawkins and Dennett "throwaway names." He also suggested that I write my final paper on mandatory euthanasia for the elderly. I promise you, when that paper is finished it'll be up here. I expect it to be glorious.


Anyway, here's the post-script. Sorry for lacking the citation on Karl Popper. I can't find the damn book. At any rate, he's famous for saying that. I can't imagine anyone would disagree that that was one of his major ideas.


----

Postscript, a note on style

While it is of course strictly irrelevant to the content of her work, something needs to be said about Toffoletti’s writing style in Cyborgs and Barbie Dolls (2007). From her reverent treatment of Jean Baudrillard, we know that Toffoletti is a product of the late-20th-century French intellectual schools which seem to believe, or even outright profess, that, as Peter Medawar put it, “…thoughts which are confused and tortuous by reason of their profundity are most appropriately expressed in prose that is deliberately unclear” (Dawkins 2003). This certainly applies to Baudrillard, and Toffoletti does the reader no favor by extensively quoting his ideas without providing particularly lucid explanations of them.

But this style of writing, whose practitioners appear to believe that function follows form, stands in contrast with nearly every other kind of academic writing in the following way: in other areas of academia, when a concept is particularly difficult, the prudent writer does her best to simplify it as much as possible. Bits of jargon from the field in question are given clear definitions at the outset; allusions, similes, and metaphors are made to areas already familiar to the reader, and in such a way as to elucidate rather than obscure the topic under consideration; and examples are given as much context as is necessary to understand their significance. Toffoletti fails in all of these respects.

One might claim that Toffoletti is on firm ground, since she’s writing in a style similar to that of the works on which she bases her ideas. However, those writers are by no means blameless. Alan Sokal and Jean Bricmont’s (1997) Intellectual Impostures (published in the US as Fashionable Nonsense) is an examination of the use of scientific terminology and jargon within postmodern writing. Jean Baudrillard, from whom Toffoletti draws the theoretical groundwork of her book, is given an entire chapter in which his writing is exposed as having at best a reckless disregard for the meaning of scientific concepts and at worst the intent to deliberately mislead the reader, or to prevent the reader from being able to disprove, or even comment on, the veracity of his statements. As a logical fallacy, this is known as either “Argument by Prestigious Jargon” or “Argument by Gibberish (Bafflement).” I personally, as a writer, find these to be particularly heinous fallacies, though opinions on that vary, I’m sure.

As Richard Dawkins wrote in his review of the book,

But it’s tough on the reader. No doubt there exist thoughts so profound that most of us will not understand the language in which they are expressed. And no doubt there is also language designed to be unintelligible in order to conceal an absence of honest thought. But how are we to tell the difference? (Dawkins 2003)

Exactly. Toffoletti is writing a decade after large portions of her intellectual hero’s work were exposed as intellectually fraudulent. Given the enormous cultural and academic impact of Sokal and Bricmont’s exposé of Baudrillard et al., we must assume that Toffoletti is familiar with their work, and with the fact that her obscurantist writing style is now closely associated with academic charlatanism. Additionally, we know from her lucid and intelligent introduction and conclusion that she is perfectly capable of writing clear sentences which get her meaning across. After closely examining Baudrillard’s use of scientific and mathematical terminology, Sokal and Bricmont characterize one passage as “trite observations about sociology or history” (Dawkins 2003). If Toffoletti is making similarly trite claims, claims which could be expressed rather simply (as she does in the introduction), then one is left to conclude that she is deliberately obscuring meaning.

At this point one must speculate why a writer hoping to reach an audience might do this. Dawkins (2003) thinks that perhaps writers like this “have ambitions to succeed in academic life.” This makes sense. It is ‘publish or perish,’ after all.

Two other hypotheses come to mind. The first is that these writers use obscurantist language in order to display their own erudition. I hope that isn’t the case, as it would mean a significant portion of academics are behaving like children. In any case, most people opening a book by a prominent academic for the first time begin by giving her the benefit of the doubt: if she managed to get a Ph.D., she must be at least reasonably intelligent.

The second hypothesis is my own: these writers deliberately fashion their works so that they are impossible to refute. Take Toffoletti, who writes that simulations have become more real to us than the realities they are meant to represent. In a simple and clear scientific paper, it would be easy to test this hypothesis using empirical methods. Instead, Toffoletti writes a book filled with obscurantist jargon based on the work of Baudrillard, notorious for his contradictions and tautological statements. Why? Because the ideas in her book can then be applied to all instances and in all cases. As Karl Popper wrote (citation!!!!), any hypothesis must be open to falsification. Hypotheses that cannot be falsified (he lists psychoanalysis, among others; Jacques Lacan, I’m looking at you) can also never be accepted (or ‘proved,’ insofar as that’s possible within the philosophy of science). Toffoletti has written a book nearly immune to refutation, or at least based on theories which fit that description. A hypothesis which is immune to refutation is no hypothesis at all, but we wouldn’t even know it, because it is still too difficult to parse her obscurantist prose.

But independent of the possible veracity of Toffoletti’s claims is the problem of Toffoletti’s writing style. It is patently unclear, obscurantist, and full of references that are obscure, or at least unfamiliar to most readers. The result is a book with the appearance of being erudite, polished, and airtight, but one that instead leaves a lingering question, an echo of the one voiced originally by Sokal and Bricmont, then later by Dawkins: Is the emperor wearing invisible clothes, or is he just naked?

Works cited

Dawkins, R. (2003). Postmodernism disrobed. In A devil’s chaplain (pp. 47–53). Boston: Mariner Books.

Popper, K (date)

Sokal, A., & Bricmont, J. (1997). Fashionable nonsense: Postmodern intellectuals’ abuse of science. Retrieved from http://www.physics.nyu.edu/faculty/sokal/book_american_v2d_typeset_preface+chap1.pdf

Toffoletti, K. (2007). Cyborgs and Barbie dolls: Feminism, popular culture, and the posthuman body. London: I.B. Tauris & Co Ltd.

Thursday, September 17, 2009

Sociobiology Vs. Evolutionary Psychology

In class two days ago, my professor drew two flowcharts to illustrate the differences between traditional sociobiology as created by E.O. Wilson in the late 1970s versus Evolutionary Psychology as it is understood today.

Here's the Sociobio flowchart:
This is a pretty clear flowchart in my mind, and it can account for the behavior of most animals, except for the most cognitively developed apes, plus probably dolphins, whales, and elephants.
Evolutionary psychology, on the other hand, must take cognition into account, among many other things. (The cognitive revolution was happening at about the same time as the creation of Sociobiology, and so couldn't have had much effect on Sociobiology's construction or on its application to humans.)

Here's Evo Psych:

I felt a sort of religious awe when I saw this flowchart, which seems to account for and map out nearly every aspect of psychology (excepting sensation and perception, and probably personality theory). What I particularly like about this chart is that it shuts up two major criticisms of Evolutionary Psychology:
  1. The classic anthropologist's critique, which says that Evolutionary Psychology is reductive of human behavior. This flowchart is hardly reductive, especially compared to the sociobiological map. I firmly believe that when anthropologists critique evolutionary psychology, they're actually critiquing Sociobiology and aren't aware of the major differences.
  2. It also quiets Lickliter's and Honeycutt's critique, which says that EP does not account for possible changes relating to human development. Clearly this is mistaken, as there has always been Evolutionary Developmental Psychology, and this chart lays that out.
Anyway, I was so inspired by these charts that I simply had to share them with the world.

Stephen Jay Gould

I know, I know: de mortuis nihil nisi bonum. But I couldn't resist.

Edit: About an hour after I posted this, my professor reminisced about the time he met Stephen Jay Gould, saying, "He was a dick. He just had his own agenda and everyone else had to get out of the way. He wasn't interested in discussing it or taking questions." I feel less bad about making fun of dead people now.

Wednesday, September 16, 2009

Habermas’ use of Free Will in The Future of Human Nature

Sorry about the radio silence, guys. It seems that summer was less conducive to real scholarly work than I had hoped it might be when I created this blog.

I have a more light-hearted work coming soon, hopefully to be up within the next few weeks.

Anyway, this one comes from my Medicine & Ethics class. I just turned it in this morning, and I'm sincerely hoping that my professor never finds this blog. It would be easy to prove that I turned it in to him before I posted it here, but I'd still have to explain why I'm such a huge nerd.

Jürgen Habermas, in case I did not make this clear enough in my paper, wrote a book to explain why he thinks genetic modification of humans is wrong. Essentially, he believes that if a baby's attributes are chosen--hair color, eye color, not at risk for childhood leukemia, you name it--then that child's free will has been taken away and ergo that child can never, ever lead a moral life. I, er, take issue with this opinion here.

---

Habermas’ use of Free Will in The Future of Human Nature

In his book The Future of Human Nature, Jürgen Habermas constructs an argument for where we as a society might draw the line regarding our impending ability to manipulate the genes and phenotypes of potential humans. For Habermas, that line is governed by whether or not one preserves the autonomy of the potentially modified future humans. Habermas’ appeal to autonomy is grounded in the work of Kierkegaard, and it is strongly tied to Kierkegaard’s ideas about Free Will. However, Kierkegaard died roughly a century before modern neuroscience and psychology were able to bring in the relevant data. This forces us to ask whether the evidence that modern psychology has gathered regarding the concept of Free Will is in line with, or in contradiction to, Habermas’ usage.

For Habermas, the answer to the question of how to live an ethical life is derived from Kierkegaard, who he says answered the question “with a postmetaphysical concept of ‘being-able-to-be-oneself’” (Habermas, 2003, 5). Though there may be other concerns beyond ‘being-able-to-be-oneself,’ they cannot be considered if one does not, in the first place, have one’s autonomy. Habermas’ other main distinction, between things which are grown and things which are made, is related to this. For Habermas, things which are grown maintain their autonomy, whereas things which are made have no control over the determination of their lives and cannot be considered autonomous, nor by extension capable of living an ethical life. If a parent made any changes to the natural process of infant development, they would be taking away that autonomy and therefore condemning the child to be incapable of living an ethical life. When Habermas writes about autonomy or an ability to be oneself, he is writing about the ability to live one’s life free from determinism, capable of making decisions and acting on them without influence from outside factors. What he fails to acknowledge, or even mention, is that there has been considerable debate about whether such an ability exists, which calls into question one of the major premises from which he constructs his argument.

Before moving into the scientific evidence against the existence of Free Will, and given that Kierkegaard himself was a theist, it is helpful to look at the religious history of the Free Will debate. St. Augustine fully developed the concept as we now understand it, largely as a solution to the problem of theodicy (Bargh, 2008). If God is all-powerful, all-knowing, and all-loving, then one way to get him off the hook, so to speak, for all the evil in the world is to say that the Free Will of those to whom evil happens accounts for evil’s existence. In other words, that they deserve it. Not all Christian theologians have followed this line, however. After compiling the writings of Martin Luther, John Calvin, and Jonathan Edwards, David G. Myers (2008) notes that there are three mandatory limitations on Free Will within Christianity. Namely, it cannot violate God’s foreknowledge (i.e., nothing can be done without His implicit consent, ergo all Will is limited), it cannot violate God’s sovereignty (i.e., Will cannot be Free because that would imply that God’s plans are dependent upon our decisions), nor can it violate God’s grace (because every good we are capable of doing comes through God in the first place). Given these dilemmas, it is remarkable that the concept of Free Will has persisted among theists, and yet it has.

Having come far since its formal beginnings in the late 19th century, Psychology is now in a position to comment empirically on the existence of many concepts from the Philosophy of Mind, including Free Will and Determinism. John Bargh (2008) wrote that what is truly surprising about the consistently deterministic findings of psychology is not that they seem to contradict Free Will, but “Instead, the surprise comes from the continuing overarching assumption of the field regarding the primacy of conscious will” (Bargh, 2008, 128). He notes that every other science takes a deterministic universe for granted, and that psychologists are still routinely surprised by deterministic results is embarrassing. He identifies three sources from which our actions are determined. Firstly, there are genetic determinants of behavior. Given the certainty of evolution through natural selection, it would be absurd to assume that behavior is not also governed by genes. This connection is obvious in non-human animals, but its implications are being borne out in humans as well. Secondly, cultural determinants exist to guide us where genetic determinants would be too slow-acting to be beneficial. Thus aspects of our lives, such as language, are governed by the culture in which a child is raised rather than by a conscious will to learn a specific language or to adopt local customs. And finally, the sum of all the experiences of a person’s life, that is, individual learning, adds an even finer level of constraint on what a person’s actions must be.

The most important research to date on Free Will is Libet’s (1985) work on the neurophysiology of voluntary action. Libet set up three different timers which measured when a participant performed an action, when they chose to perform that action, and when their brain began preparing to perform that action. Libet found that his participants’ brains got ready to perform the assigned action between 0.3 and 0.5 seconds before the participants had the conscious experience of deciding to perform it. The significance of this finding cannot be overstated. Up to half a second before anyone ever feels like having made a decision, one’s brain has already made that decision on one’s behalf. With respect to our actions, it seems that our experience of Free Will is a latecomer.
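To make the logic of the three timers concrete, here is a toy sketch in Python. The trial numbers are invented, chosen only to fall in the 0.3 to 0.5 second range Libet reported; this is an illustration of the comparison, not his data or method.

```python
import random

# Toy model of a Libet-style trial. Times are in milliseconds,
# relative to the moment of action at t = 0. All values are invented.
def run_trial():
    rp_onset = -random.uniform(550, 700)       # readiness potential begins
    felt_decision = -random.uniform(150, 250)  # reported moment of "deciding"
    return rp_onset, felt_decision

trials = [run_trial() for _ in range(1000)]
mean_lead = sum(fd - rp for rp, fd in trials) / len(trials)
print(f"Brain activity leads the felt decision by ~{mean_lead:.0f} ms on average")
```

The point is the ordering: the readiness potential systematically precedes the reported decision, which precedes the act.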

Habermas’ argument, then, which is so firmly grounded on the autonomy of unborn children, loses its relevance entirely. Habermas’ main worry is that if a person were to determine in any way the genotype or phenotype of an unborn child, that person would be condemning the child to a determined life, and therefore to a life in which the child cannot conform to Kierkegaard’s outdated notion of the good life. However, the empirical evidence makes clear that the child’s genotype and phenotype are determined regardless. Whether or not a child’s development is tampered with makes no difference with respect to its own self-determination. Having removed the fallacious premise which supported Habermas’ conclusions, a new line must be drawn for the ethical treatment of proto-humans, a line which is consistent with modern data.

-----

Bargh, J. A. (2008). Free will is un-natural. In J. Baer, J. Kaufman, & R. Baumeister (Eds.), Are we free? Psychology and free will (pp. 128–154). Oxford University Press.

Habermas, J. (2003). The future of human nature. Cambridge: Polity.

Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8, 529–566.

Myers, D. G. (2008). Determined and free. In J. Baer, J. Kaufman, & R. Baumeister (Eds.), Are we free? Psychology and free will (pp. 32–43). Oxford University Press.

Monday, July 27, 2009

Where I'm coming from

For a first entry, I'd like to give an idea of what this blog is about, in addition to where I'm coming from as a writer. I've decided to include an essay I wrote for a class called Human Nature last semester.

Before that, a little about me: I'm a psychology major and anthropology minor at a small, public, liberal arts university. I'm about to be a senior, and my major interest is in human evolution and behavior.

This blog is intended to be a collection of essays on topics that seem to me to pertain to just those things. I'll probably also include some things which will stretch those boundaries, but that shouldn't be a big deal. By "probably" I mean, "I already have plans to..."

So by way of introduction, here's an essay I wrote on my perspective about knowledge, science, morality, and our place in the world:


Introduction

Apparently alone among the entirety of life on planet Earth, Homo sapiens is capable of asking fundamental questions about its own origin, its nature, and the origins and nature of the world around it. Given that our humble beginnings are exactly the same as those of our houseplants, it is actually pretty impressive that we are able to ask those questions, and even more impressive that we have established relatively certain answers to them. But I’m getting ahead of myself. We do not know any of this for sure; not anything, not yet.

Method

Before any of the most important questions anyone can ask can be addressed, the means through which that knowledge will be attained must be selected. As far as I can see, there are four options: Instinct, Faith, Rationalism, and Empiricism. For some kinds of knowledge, instinct can get us pretty far. Surprisingly far, in fact. Instinct will let us know whether or not a painting is aesthetically pleasing, whether food tastes good, whether or not to make a decision. Ap Dijksterhuis’ research[1] into unconscious decision making seems to indicate that, at least for males, unconsciously made decisions frequently are better (when there is an objective measure) than consciously thought-out decisions. Evolutionary Psychology has provided some more insights here. For example, food aversions in pregnant women have been shown to be strongest against foods that contain harmful teratogens.[2] In some cases, it seems that trusting one’s gut feeling can be fairly beneficial. But it doesn’t get us very far with the fundamental questions about the nature of the universe. For example, it might feel like the Sun revolves around the Earth. We’re very limited here.

Our next option is faith. We can turn to religion to answer questions about where we came from, why we’re here, how we should behave, and where we’re going. Religion usually has answers to all of these, which would be a world of benefit, but something should be gnawing at the back of our minds. Our gut feeling should say that there is something wrong here. What faith lacks is any reason to accept one of its answers over another. Religion as a discipline has a staggering lack of agreement, and the only thing that causes a religious person to accept their religion’s answers is that they were born into them. Even when a person converts to a religion later in life, they do not do so for reasons of faith. They choose a religion because it seems to make sense to them, because they have a gut feeling that it is right. Converts are appealing to intuition, not faith. Only those who are born into a religion accept it purely on faith. When looking for the truth about the universe, though, we need to find one truth by which we can understand everything. Religion is inadequate because we would become lost in a sea of possible answers, each with the same amount of justification. Rationalism might just get us closer to this truth.

In a nutshell, one might describe rationalism as, “well, we’ll sit around and think about it.” This is pretty useless by itself, but with a set of rules and formalities to the logic, a considerable amount of progress can be made. Even within the formal rules, though, two lines of argument can arrive at contradictory propositions, and there is a further possibility that the rules of logic will not disqualify either of them. At this point, we find ourselves back where we were with religion. How do we pick one or the other? We need one answer, and choosing what to believe is never going to lead to a satisfying result. What we need is solid evidence, real-world proof.

The only answer is Empiricism. Through direct observations of reality and the establishment of concrete causal relationships between phenomena, we can be as sure as possible that what we are describing is the truth about ourselves and the world around us. Empiricism lacks the major flaw of faith and rationalism: there is a principle whereby one hypothesis is more likely to be correct than any of the rest. The principle of parsimony, a sine qua non of the scientific method, states that all other things being equal, the simplest solution is the correct one. Without this principle, there is nothing to recommend one hypothesis as more plausible than the next, and it would become a logical necessity to rule out invisible fairies, in fact an infinite number of possible explanations, for every effect being observed before a plausible explanation can be tested. In this way, empiricists can pick the most likely two or three hypotheses, test them, and determine if they hold up to observation. Clearly this is the best choice of all of the methods of determining the truth about the universe, as it is less susceptible to bias than the rest, shows real relationships better than any other method, and has a means through which it can determine just one theory about the nature of the universe. There is a bias here in that the only hypotheses that can be offered are limited by our humanity and by our culture. But these hypotheses cannot rightly be ruled out by any of the three previously discussed methods. Only empiricism can solve empiricism’s problems.

The Researcher

But we have another problem, summed up nicely by the old joke: “What did one snowman say to the other?”[3] Our ability to gain knowledge about anything, especially ourselves, is moderated by the main instrument we use to gather that knowledge: ourselves. As long as our senses are capable of picking up ‘noise,’ as long as we can be fooled by sensory experiences of things that are not really there, or by things like apophenia, our chief instrument is imperfect.

I’ve often heard physicists say something along these lines: “Have you ever tried to explain mathematics to your dog? It could be that the nature of the universe is as entirely beyond our grasp as mathematics is beyond the grasp of dogs.” I find this to be an enormous cop-out. For one thing, the analogy is terrible. A dog is not capable of being taught mathematics simply because a dog is not capable of understanding the language with which one might teach it mathematics. That’s a very prejudicial way to start any kind of instruction. Even more than being a bad analogy, it’s a completely needless speculation. It is entirely possible that we will never understand the whole truth about everything, but to couch that in terms of inabilities due to human nature is to potentiate a self-fulfilling prophecy.

A useful example here is Richard Dawkins’ notion of the ‘middle world.’ Humanity has evolved to live within the world it inhabits, which is somewhere between the most ‘micro’ possible world (among elementary particles, or perhaps among strings?) and the most ‘macro’ possible world (among galaxy clusters, perhaps?). Therefore our reasoning abilities are most suited for use in this middle world. But that by no means implies that we are incapable of understanding the very large and the very small. It just means that doing so will be more difficult than understanding the medium-sized.

And we have learned quite a lot about the micro and the macro, in spite of our limitations. And though what we have learned about the behavior of the very large is not presently compatible with the behavior of the very small, there has been a lot of progress toward a Grand Unified Theory. It is entirely possible that the GUT will never be fully realized, but the trends in the history of science, and particularly of physics, suggest that the GUT is right around the corner.

And in a sense, it almost does not really matter whether what we are learning is the absolute, perfect truth about the nature of ourselves and the universe. Any sufficiently satisfactory answer, any answer that cannot be bested by another answer, is really going to be good enough. If there is a problem with our theory of everything, we’ll search for a better one, a more refined explanation, à la Kuhn’s model of paradigm shifts. If a better theory of everything comes along, we’ll adopt it. For the time being, the possibility that ours might not be right, that there is possibly more truth than what we can experience, is more meaningless speculation. If we cannot experience it as a flaw in our system, then as far as we are concerned it does not exist.

For example, and as a matter of intellectual honesty, I cannot rule out that God exists behind all the reality that we do experience. But if He does, then He is not a part of reality and is therefore incapable of influencing it. He might as well not exist. As will be demonstrated later, my consciousness is entirely physical. If there is something supernatural[4] out there, it cannot in any way influence what is natural and material. If God exists, there cannot be evidence for him in the sense that his existence will never have an effect on anything we can experience. His existence, having no bearing on our lives, is useless and irrelevant.

In this way, we can dismiss any speculation of the existence of something we cannot comprehend. If we can experience it, it might be difficult to comprehend. If we cannot experience it, then to us it does not exist.

So what do we know about the nature of the universe and ourselves? What has empiricism taught us?

The Universe

We have learned that everything that can be known and understood started with the big bang. Anything that might have existed before the big bang is by definition incapable of leaving direct evidence behind. We can (and do) speculate as to what might have existed before the big bang, but this is in the realm of metaphysics, not science, because it cannot be empirically tested or observed.

The big bang scattered matter more or less evenly throughout the universe. Gravity acted on all the uneven parts and pulled them together into larger and larger clumps, until those clumps of matter became so dense and so hot that they began to fuse atoms into larger atoms. Stars are born. The stars churn away for millions or billions of years until taking one of a few paths, depending on mass: they can shrink and start fusing larger atoms into even larger atoms, they can implode and become black holes, or they can go supernova and scatter everything they have made across the universe. The big bang produced essentially only hydrogen and helium, with a trace of lithium. Nearly everything else on the periodic table was forged in stars and scattered by those smaller bangs, including nearly every little bit of you.

The matter scattered across the universe is present in the next generations of stars as they form from dense, hot clumps. Sometimes smaller clumps are too far from the center of the collapsing cloud to be included in the star, but close enough to be caught in orbit. Eventually these smaller clumps can become planets. And depending on the size of the planet and the size of the star, there is a narrow belt of distances from that star within which a planet can support life as we know it.

Life

We have learned that in the right conditions, with the presence of the right chemicals and a little bit of energy, the basic fundamental building blocks of life can form. These can grow in complexity until at some point a replicator is born. The replicator is capable of producing another copy of itself, with occasional imperfections. The vast majority of the time, these imperfections prevent the replicator from copying itself as successfully as its brethren. Occasionally, one of these imperfections actually improves the replicator's ability to reproduce itself, and such an imperfection will soon become common among the replicators.
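That last step can be caricatured in a few lines of code. This is a minimal sketch with invented numbers for the copying-error rate and its effects, not a model of any real chemistry:

```python
import random

# A replicator is reduced to a single fitness value in [0, 1]: the
# probability that it manages a second copy of itself this generation.
def next_generation(population, cap=1000):
    offspring = []
    for fitness in population:
        copies = 1 + (random.random() < fitness)  # better copiers copy more
        for _ in range(copies):
            child = fitness
            if random.random() < 0.01:          # rare copying error
                child += random.gauss(0, 0.05)  # a random tweak, sometimes helpful
            offspring.append(min(max(child, 0.0), 1.0))
    return random.sample(offspring, min(len(offspring), cap))  # limited resources

population = [0.5] * 1000
for _ in range(200):
    population = next_generation(population)
print(f"mean fitness after 200 generations: {sum(population)/len(population):.2f}")
```

Nothing chooses the helpful errors in advance; lineages that happen to copy better simply come to dominate the limited pool.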

One of the most necessary results of these imperfections is the creation of a more refined, more complicated apparatus for the reproduction process. In this way, cells are created, which in turn beget more complicated cells, which beget groups of cells which will evolve in function and quantity and complexity to become the bodies of the plants and animals and fungi and all the life we can see. Even more life exists that we cannot see. Depending on whom one asks, the bacterial cells in and on a human body outnumber the human cells, by older estimates as much as ten to one. If you took away everything that was human, there would be a human-shaped blob of bacteria left standing in its place. [5]

Life is everywhere, and evolution accounts for not just its physiology but also its behavior. Even our cultural actions are biological in origin, since culture acquisition evolved as a means of adapting in real time to specific environmental parameters. A recent study showed that songbird calls, the structure of which is thought to be culturally transmitted, will develop the same way over generations even when that transmission is cut off.[6] When looking for the most ultimate cause of human and animal behavior, we must point to evolution.

This explanation for our existence seems to bother a lot of people, who often, in addition to simply misunderstanding the nature of evolution, disqualify it as being based on chance or think that the design hypothesis is more elegant. The truth is that evolution is based on chance. The problem is that the people whom this bothers have no understanding of statistics. Evolution has been at work on this planet for a long, long time, and the number of replicators that have existed is enormous. Within this enormous sample, even if a beneficial mutation is fantastically unlikely in any single copying event, the odds that one occurs somewhere are overwhelming. Evolution is based on chance, and the chances are very, very high.
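The arithmetic behind that claim, with invented but deliberately pessimistic numbers: if a beneficial mutation occurs in one copying event per billion, then across a quadrillion copying events the chance that at least one occurs is indistinguishable from certainty.

```python
import math

p = 1e-9  # chance of a beneficial mutation per copying event (invented, pessimistic)
n = 1e15  # number of copying events (invented, and surely a vast undercount)

# P(at least one) = 1 - (1 - p)^n, computed in log space to avoid underflow
p_at_least_one = -math.expm1(n * math.log1p(-p))
print(p_at_least_one)  # 1.0, i.e. certainty to within floating-point precision
```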

The design problem is based on another misunderstanding. Proponents of design argue that the complexity of life on this planet demands a designer as its cause. But consider this: if the human genome were converted from base 4 to binary, it would contain less information than the software I’m using to write this, Microsoft Office, which was designed. Because evolution works by tiny movements, its results are much more elegant than things which were designed. In addition, 99% of the species that ever existed are now extinct. To paraphrase Sam Harris: this fact alone would seem to rule out intelligent design.[7] Evolution is not the uncomfortable hypothesis here. To think that humans were designed by such an inept creator should make us far more uncomfortable.
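The back-of-the-envelope version of that genome comparison (the genome length is a standard figure; that a circa-2009 Office install runs to gigabytes is my assumption):

```python
base_pairs = 3.2e9            # approximate length of the human genome
bits = base_pairs * 2         # base 4 -> 2 bits per base pair
megabytes = bits / 8 / 1e6
print(f"{megabytes:.0f} MB")  # ~800 MB, under a multi-gigabyte office suite
```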

But from humans all the way back to the big bang, two uncomfortable truths are certain: everything that has happened has simply happened, without intentionality and without purpose. The big bang, the creation of life…these things happened without reason. By extension, our lives exist without reason.

Determinism

But before I address this fully, there is another uncomfortable truth which demands our attention: nearly everything that happens, down to the smallest possible detail, has been determined by the state of the entirety of the universe in the moment immediately before it. The regress goes all the way back to the big bang. Therefore, from the most ultimate (as opposed to proximate) perspective possible, every single question can be answered: “because of the initial conditions of the universe and the random placement of matter at the moment immediately after the big bang.”
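A toy picture of that regress: in an elementary cellular automaton, every row follows mechanically from the row before it, so every question about row 200 has the same ultimate answer, namely row 0. (Rule 30 is just a standard textbook rule, my choice for illustration.)

```python
# Elementary cellular automaton: each cell's next state is fully determined
# by its three-cell neighborhood in the previous row.
RULE = 30

def step(row):
    n = len(row)
    return [
        (RULE >> (4 * row[(i - 1) % n] + 2 * row[i] + row[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 40 + [1] + [0] * 40  # the "initial conditions of the universe"
for _ in range(20):
    print("".join(".#"[cell] for cell in row))
    row = step(row)
```

Run it twice and you get the same twenty rows, character for character; nothing in any later row is explainable except by pointing back to the first.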

That regress includes every action undertaken by human beings. Free will as an origin of human or nonhuman action is not even a plausible hypothesis. If every brain state is caused by the positions of potassium and sodium ions in neurons in the instant before that brain state, there is no room for the possibility that any brain state has anything but a chemical cause. The idea that the conscious part of the brain is an un-caused cause of other brain states would require that brain chemistry change without a physical cause. What proponents of free will are asking for is that a sensory experience (consciousness) have a material effect on the brain. Essentially, when one invokes free will, one is invoking magic. We can rule that out as a highly implausible hypothesis, and psychological research agrees that free will is not possible in the slightest.[8]

Determinism is the rule of law in all situations in the middle world and in the macro world. In the most ‘micro’ world, however, indeterminism is the rule. Quantum physics demonstrates that a particle can behave in any number of possible ways, and assigns each a probability. This is not a practical failing but a theoretical rule. It’s not that we are unable to determine which outcome will occur; it’s that, by definition, there is only a range of probable outcomes. Quantum indeterminism has often been invoked as a loophole for the existence of free will in humans, but this gets it wrong on two counts. First, the more particles there are connected and interacting with one another, the smaller the likelihood of any improbable-but-possible outcome. It’s possible that if you held your hand up against a wall right now, it could go through the wall. But due to the number of particles involved, it will not happen.[9] In the human brain there are unthinkably many atoms at work, so the likelihood of the whole system acting with quantum-style indeterminism is diminished accordingly. The other reason why the “quantum free will” hypothesis is ridiculous is that even if quantum indeterminism were at play in human behavior, what is being described is not free will: it is randomness. Our options are determinism or randomness, but not free will. There is not a lot to suggest randomness as the source of human behavior, so we are stuck with determinism.
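The joint-probability point fits in two lines (the per-particle probability is invented; the collapse with scale is the point):

```python
p = 0.01  # invented per-particle chance of some quantum fluke
for n in [1, 10, 100, 1000]:
    print(n, p ** n)  # the joint probability vanishes as particles multiply
```

By n = 1000 the product underflows to 0.0 in floating point, which is rather the point.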

This seems to bother a lot of people, who set great store by their ability to control their lives. They can hardly be blamed. Most people are insecure enough to think that if they cannot control everything, then everything will go horribly for them. They often find their attempts to control the world around them to be very frustrating. I have a better solution: don’t bother. When you realize that what happened did so because no other outcome was possible, when you realize that your own actions are beyond your control, it allows you to sit back and enjoy the ride.

Meaning

As stated previously, the fact that all these determined happenings occurred without an initial purpose makes people uncomfortable, perhaps even more so than the realization that none of their actions are of their conscious origin. I, on the other hand, consider this quite liberating. It’s a similar sensation to realizing there’s no God: it feels a little overwhelming at first, but then you realize that it doesn’t matter what you do. Just as atheism leads to a life free of the fear of sinning (which can be traumatizing, since much sin is thought crime and, as demonstrated, we cannot control our thoughts), the meaninglessness of life frees us from the obligation to do anything. If life had a purpose, we would be more or less required to fulfill it. But it doesn’t. We can allow our lives to go in whichever way they were determined to go, without stressing over the feeling that something else ought to happen. So what should happen?

Morality

If life is meaningless, and if we have no control over our actions, then what does that say about morality and the rightness of our actions? From here, we go back to evolution.

Evolutionary psychology has demonstrated that behaviors which benefit those closest to us can be beneficial to us as well. Even though the gene is selfishly motivated, altruistic-seeming behavior toward others is not entirely out of the question. Prisoner’s dilemma games demonstrate that peaceful cooperation with others is usually the best way to get what’s best for ourselves in the long run, as the sketch below shows.
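Here is that logic in a toy iterated prisoner’s dilemma, using the conventional payoffs (3 each for mutual cooperation, 1 each for mutual defection, 5 and 0 when a defector exploits a cooperator):

```python
# Iterated prisoner's dilemma with the conventional payoff matrix.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy whatever the opponent did last.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(p1, p2, rounds=200):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)  # each player sees the other's history
        a, b = PAYOFF[(m1, m2)]
        s1, s2 = s1 + a, s2 + b
        h1.append(m1)
        h2.append(m2)
    return s1, s2

print("TFT  vs TFT: ", play(tit_for_tat, tit_for_tat))     # (600, 600)
print("TFT  vs AllD:", play(tit_for_tat, always_defect))   # (199, 204)
print("AllD vs AllD:", play(always_defect, always_defect)) # (200, 200)
```

Two cooperators end up far ahead of two defectors over the long run. Still, this is somewhat limited. Are we only ethical to those in our immediate family?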

It turns out the answer is no. Evolution has not encoded us with a surefire way of telling who is related to us and who isn’t. It has only given us approximations. One of those approximations is proximity itself. By being close to and familiar with those around us, we have created a larger surrogate family. This idea is called the “expanding circle,” and it’s still expanding. As our world becomes more globally integrated, people feel more compelled to be altruistic to those in need all over the world. The future of morality actually seems quite bright. The more integrated our society is, the less immorality we will experience. And this seems to be the case: in his studies of the history of violence, Steven Pinker has concluded that violence worldwide has been decreasing dramatically as the world has become more globally oriented. [10]

Conclusion

We’ve come so far. Through empiricism, we’ve learned so much about the nature of the universe and about the origins of our own behaviors. We’ve even ruled out that anything we’re too limited to experience could be worthwhile. I can’t help but feel optimistic about what I’ve discovered about the nature of the world: There is no God, there is no purpose, and we have no control over our actions. We may not have free will, but we have other freedoms: We’re free from expectation. We’re free from frustration at lack of control. On top of all this, the evidence suggests that people can only be getting nicer and nicer to each other. At this point, it shouldn’t matter if life has meaning and there is no free will. We’re better off without them. Considering how far we’ve come, the future is looking pretty bright.



[1] See “Think Different: The Merits of Unconscious Thought in Preference Development and Decision Making” (2004) and “A Theory of Unconscious Thought” with Loran Nordgren (2006)

[2] See, “Pregnancy Sickness as Adaptation: A Deterrent to Maternal Ingestion of Teratogens,” (1992) by Margie Profet in “The Adapted Mind”

[3] “Smells like carrots!”

[4] Definition: Does not exist

[6] http://www.wired.com/wiredscience/2009/05/songbirdculture/

[7] In Letter to a Christian Nation, 2006.

[8] See “Free Will is unnatural” by John A Bargh (2008) in “Psychology and Free Will.”

[9] A musing on the nature of infinity: If you put your hand up against a wall and pushed for an infinite amount of time, you most assuredly would get through the wall eventually. Good luck!

[10] See http://www.youtube.com/watch?v=ramBFRt1Uzk