Empathizing with Evil: Learning from Elliot Rodger without Forgiving

Last week, I wrote an article on a conflict of my moral inclinations catalyzed by the UCSB shooting and Elliot Rodger.  I questioned how we balance various fundamental rights against the risks that allowing those rights might precipitate.

Elliot Rodger, his actions and the justification he believed he had for those actions, raise another question: how do we think about someone who does such terrible things?  This is a question both of morality and strategy.  Morality insofar as we should ask, “What is the right way to think of such a person?”  Strategy insofar as we believe that our conception of a person—the legacy that we allow him or her to have in the aftermath of such an act—plays a crucial role in the ongoing trend of mass murders.  Fortunately, the morally right way to understand such a person is also the most strategically advantageous tack for diminishing the likelihood of future similar acts.

I have often championed empathy as the most admirable emotional practice.  However, empathizing with evil is too often confused with supporting it, and this conflation deserves clarification.

What Elliot Rodger did was a terrible thing.  He killed innocent people because he felt slighted by women—both specific women and the sex in the abstract.  Women, he believed, refused to have sex with him because they were attracted to thugs and idiots.  There are awful people who not only empathize with his pain, but also cheer his actions.  Men who say things like, “Media doesn’t acknowledge the majority of males’ contentment with current sexual dystopia… It’s all about HATING WOMEN.”  Elliot was not alone in his point of view.  He represents a small but real segment of society that views life as fundamentally unfair because women are attracted to some men and not to others.

There is also a larger group of people, some of whom could reasonably be described as mainstream, who say they can understand his hurt and that, while it does not excuse his actions, it’s an unfortunate circumstance that he was in.  There is a very big problem here—not with the attempt at understanding per se, but with how lazily that attempt is undertaken.

It is unfortunate that someone should feel rejection, especially perpetually so.  But empathizing with this specific emotion should not be confused with feeling bad for Elliot, nor should it be mistaken for an understanding of his actions.  Elliot’s actions did not come from rejection alone, but from a sense of rejection in combination with an extreme sense of entitlement: it was not just that Elliot felt women didn’t like him, it was that he felt he was being denied affection and intimacy that he was owed by women.

The unavoidable logical implication of this belief is that women wrong men when they are not attracted to them.  If every Elliot Rodger feels wronged by the women who reject him, then the collective argument is that women should be attracted to each and every man who desires them.  How can this be true unless one believes that every woman’s existence is justified by her ability to please men?  This is objectification in the strongest and most despicable sense.

As far as I can tell, Elliot’s objectification was exacerbated by a profound narcissism.  Not only are women hurting him by failing to please him (and therefore not fulfilling their role), but, as a result of his pain, other people deserve to suffer.  The narcissism becomes all the more stark when we realize that Elliot also hated the men who succeeded sexually.  Suddenly his objectification of women is made clear for what it is: not an honest (if deranged) belief in their inferiority, but an act of mental contortion to dehumanize anyone who makes him feel bad about himself.  And they deserve that role because of how they hurt him.  And eventually, he decided, they deserved to die.

Some might rightly call what I laid out above an act of (admittedly, amateur) psychoanalysis.  I would call it empathy.  I started from a point of commonality between Elliot and myself: I could understand his sense of hurt and rejection, because I have experienced it myself.  Then I tried to pick apart where his argument and actions deviated from any motivation I have experienced.  I wondered, “If we have both felt this hurt, why did he want to kill people and yet I have never felt that drive?”  I discovered he felt he was owed a blissful life.

We might start another thread of exploration to understand where his extreme narcissism came from.  I don’t know nearly enough about him to do that justice.  But anyone who tries to point out how Elliot’s actions were in some way understandable because they have felt similar rejection, or because they believe women are attracted to certain types of men, should be fully aware of exactly what they are agreeing to.  It is not just a sense of hurt that drives someone to do what Elliot did.  Stopping there is sloppy reasoning and downright dangerous.

Which brings me to the strategic advantage in this exercise of empathetic understanding.  Most people hear about an atrocity like Elliot’s, call him a monster, and implicitly argue that anyone who contends otherwise is fraternizing with the enemy.  The appeal of the label is clear: a monster is an abomination, whereas a person exists among us and acts for reasons.  If Elliot were a monster, then his victims were killed by pure evil, like in the storybooks we read growing up.  If he is a person, then maybe his victims are in some way complicit.

I reject this dichotomy outright.  Striving to understand a person who commits terrible acts does not mitigate his responsibility for those actions.  Too often the media conflates these notions: “He played violent videogames, so those are the real culprit.”  We do not need to dismantle personal responsibility in order to arrive at helpful lessons.  We are, all of us, little more than the combined influences of all our past experiences, but we are still the ones who are responsible for what we do.

I do not refuse to call someone like Elliot a monster because I want to protect his memory from harm.  I refuse to call him a monster because I believe that, if any good can come from such terrible incidents, it should be understanding what causes these things to happen, so that we can strive to prevent them in the future.  By empathizing with Elliot, I was able to dissect his ‘great manifesto’ into what it actually was: a deranged justification for extreme objectification rooted in narcissism.  It is the one weapon we have against those who would flock to Elliot’s banner, like the young man on a message board Elliot frequented, who said, “he would have had a boring […] life then died of cancer […] without ever leaving a mark […] he is famous 4 ever now.”

When condemned as a monster, Elliot becomes a martyr to those who would agree with him, to those who revel in feeling like the world just doesn’t understand, that some day we will see the truth.  When we try to understand him we can both strive to create a world that does not nurture beliefs like his and mitigate his martyrdom by revealing his grandiose arguments for what they really are.

And yet I must admit, researching Elliot and the community that supports him was not easy.  Both for my own sanity, and to impart hope in the face of this hatred, I share with you the words of soul and jazz poet Gil Scott-Heron—a shower for the soul after crawling through these moral sewers: “To give more than birth to me, but life to me […] God bless you mama, and thank you. […] My life has been guided by women, but because of them, I am a man.”

Mass Shootings and Terrorist Attacks

I was recently thinking about the latest tragedy to be added to the too-long list of mass killings in our nation’s history.  I had just finished reading this Washington Post article about the desperate plea of a grieving father to a nation that seems indifferent to these repeated atrocities.  I shared in his outrage and, had I been present, would have joined in the chant of “Not one more!”

I don’t much care for guns.  I think it represents a collective lunacy that a sizable portion of our nation’s populace thinks the right of citizens to buy assault weapons (without having to wait too long to take them home and shoot them) should outweigh even a single preventable human death.  I think the notion of protecting gun rights to safeguard our ability to overthrow a tyrannical government is little more than childish and embarrassing.  “Some of our nation’s people will live considerably shorter lives than they otherwise would (and many who care about them will have their lives ruined) because I think that there’s a chance we might fuck up this country enough that there won’t be any way to make it better besides killing a lot of people.”

I’m being glib because I want to highlight just how seriously I took the other side of this argument.  Then I began playing the philosopher and sought out any inconsistencies in my morality.

I have repeatedly argued against government programs that appear to ignore Constitutional restrictions and overstep sacrosanct boundaries of privacy and freedom in the name of guaranteeing our safety from a terrorist threat.  During these debates, I’ve often cited Benjamin Franklin, who said, “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”  It seems a truly paranoid fear that drives an increasingly securitized surveillance nation, an ongoing national hysteria that has allowed for the militarization of our police and intelligence forces.

I am forced to confront a conflict in my beliefs.  The very same argument that I find so repulsive in the instance of gun rights is the argument I would make in the case of protecting ourselves against terrorists: we should not give up necessary and inviolable freedoms because a psychotic few threaten the safety of a statistically minuscule portion of our citizenry.

Now, one might argue that, statistically speaking, one threat is more realistic than the other.  The terrorist threat is worldwide, and the United States government believes it has a mandate to combat terrorism across the globe.  According to the U.S. State Department, there were 6,771 terrorist attacks worldwide in 2012, resulting in 11,000 deaths and 21,600 injuries.  Given a global population of 7.1 billion in 2012, the likelihood of being killed or injured in a terrorist attack was .00046%.  In the same year, there were 16 mass shootings in the United States, with 151 deaths or injuries.  Given a 2012 US population of 313 million, the odds of a US citizen being killed or injured in a mass shooting were .000048%.  In other words, a person (admittedly, anywhere in the world) was roughly 10 times more likely to be killed or injured in a terrorist attack than a US citizen was to be killed or injured in a mass shooting.
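For transparency, here is the back-of-the-envelope arithmetic behind those percentages, written out as a quick Python sketch that takes the casualty figures above as given:

    # Rough odds of being killed or injured, using the 2012 figures cited above.
    terror_casualties = 11_000 + 21_600      # worldwide deaths + injuries (State Dept.)
    world_population = 7_100_000_000

    shooting_casualties = 151                # US mass-shooting deaths + injuries
    us_population = 313_000_000

    terror_odds = terror_casualties / world_population * 100
    shooting_odds = shooting_casualties / us_population * 100

    print(f"Terrorism, worldwide: {terror_odds:.5f}%")     # ~0.00046%
    print(f"US mass shootings:    {shooting_odds:.6f}%")   # ~0.000048%
    print(f"Ratio: {terror_odds / shooting_odds:.1f}x")    # ~9.5x, i.e. roughly 10 times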

Maybe you think that it is not our responsibility to prevent terrorism worldwide, so we should only consider domestic terrorist attacks.  Run the numbers for domestic attacks since 2000 and they don’t come out all that differently.  Maybe you think that crunching these numbers to distinguish the moral permissibility of these two cases is silly.  Maybe, but what we’re talking about in each instance isn’t all that different, so it can’t be some sort of consequentialist tradeoff between human rights and human lives that is doing the work.

Maybe it is the nature of the rights.  I do not see the value in having a gun, especially the sort of weapon that allows for indiscriminate killing and destruction.  I do see the value in ensuring that an overreaching government cannot know everything about its citizenry.  But isn’t that the same fear that drives Second Amendment enthusiasts?  That a government will become too powerful if certain safeguards are not maintained to prevent it?  In one, those safeguards are a Constitutional protection of privacy rights and freedoms of association, travel, and speech.  In the other, those safeguards are weapons designed to prevent a militarized tyranny.

I have to admit, I am at a loss to distinguish between the two in a way that doesn’t devolve into my own personal preference and cultural upbringing.  As bizarre as it sounds, I do not think an argument against gun rights can rest on the balancing of freedom against the safety of our citizens from mass killings (or else we are forced to accept the marginalization of liberties to ‘protect’ us from terrorists).  If we insist on that type of argument, it must rest on the ~30,000 gun deaths that happen in the United States every year (a .009% chance of dying from such an incident).  As shocking as a mass killing is, it cannot be the moral driver of our ban—it is only a psychological stimulus for debate.  But even arguing from gun deaths in the abstract devolves into a statistical balancing act.  Why is .0004% not a permissible threshold for marginalizing certain liberties, but .009% is?  How likely does a threat have to be before it warrants certain sacrifices?  How do we prioritize certain liberties over others?
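To make that balancing act concrete, here is the same arithmetic once more, taking the ~30,000 figure as given:

    # ~30,000 US gun deaths per year against a 2012 population of 313 million.
    annual_gun_death_odds = 30_000 / 313_000_000 * 100
    print(f"{annual_gun_death_odds:.4f}%")   # ~0.0096%, the ".009%" cited above
    # That is roughly 20x the worldwide terrorism odds computed earlier.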

I still have an instinctual moral compass that tells me a society that allows for guns (especially guns designed for highly efficient killing) is a misguided society.  And yet I am coming up short in finding a consistent argument for my beliefs.  Maybe consistency isn’t that important.  Maybe hypocrisy in morality is in some ways inescapable.  I’d rather not accept that, though.  As hard as it is to swallow, I think a ban on guns must come from a notion that guns are themselves a moral wrong, that the killing of another human being may never be justified.  This is a much harder argument to make.  And an argument for another day.

Free will is probably an illusion, but you should be really happy about that.

I’ve been thinking about free will and fate lately.  What can we control?  What can’t we control?  What mistakes are we destined to make?  Why do we repeat them?  It’s one of those maddening quagmires that people who find themselves habitually spinning their wheels should probably avoid. And yet here we are.

In most philosophical discussion on the subject, these questions fall under the category of ‘determinism’—in other words, what is determined?  Most introductions to determinism explain it in very mechanistic terms, using something like billiard balls: ball A is struck in such a way that it hits ball B, which must then move in just the right direction to strike ball C, and so on through ball D, etc.

The physical picture gets muddled when we introduce conscious beings: most notably, people.  It’s harder to see what determinism means for us.  If we grant that the mind is just the brain (the thoughts and moods we experience are neurons and chemical interactions), then there are, at some level, physical explanations for the entirety of phenomena commonly experienced as thought.  Subscribing to a theory of deterministic effects on conscious beings—let’s call it psychological determinism—means that every interaction, stimulus, and influence a person experiences serves to structure her brain in such a way that she will act in a specific way given a certain set of circumstances.  Drawn to its strongest conclusions, this sort of determinism suggests that we have no real control over our actions and the choices we make.  In other words, that we have no free will.  Our brain is simply the end result of (truly) countless individual events tracing all the way back to whatever origin led to our existence.  Fate.

This is probably concerning.  It forces us to question our understanding of what an individual is, whether our choices are illusions, and whether we should be responsible for the actions we take.  Neuroscientist and philosopher Sam Harris published a book on the topic, suitably titled Free Will, in which he combines physiological revelations (such as EEG studies demonstrating that the brain initiates the activity underlying a ‘decision’ as much as 300 milliseconds before we are consciously aware of making it) with philosophical inference.  As Harris argues, “Free will is an illusion. Our wills are simply not of our own making. Thoughts and intentions emerge from background causes of which we are unaware and over which we exert no conscious control.”

In the New York Times review of the book, Daniel Menaker suggests, “However correct Harris’s position may be — and I believe that his basic thesis must indeed be correct — it seems to me a sadder truth than he wants to realize.”  Menaker is concerned with what these revelations mean for the notion of humanness: what is character or bravery if I am not the origin of my actions?  What does that ‘I’ even refer to in the context of such a revelation?  This is not an uncommon concern.

I’d like to look at things through a different lens, though.  If each of us is not responsible for our decisions (in the way we would commonly conceive of responsibility) because they were predetermined by our genetics and the previous interactions that forged our personality, then the totality of human choices and actions rests on causal chains, and those chains share a common causal ancestor.  This shared origin is an incredibly strong catalyst for a communitarian approach to society, for viewing ourselves as part of ‘humanity’ in the strongest possible sense.

Granted, our understanding of what it means to be an ‘individual’ may have to change a bit, because each identity is entirely crafted by a combination of genetics and the experiences shaped by the influences of those around us.  But I see the revelation as uplifting, not as a sad truth.  We are still unique.  There is no one with your genetic makeup who has experienced the same chain of influences.  The possibility of what you bring to the table is unknowable in such a complex system, so the mystery of life is as real as ever.

Yet accepting this slight shift in our understanding of what we are reinvigorates our sense of commonality in a beautiful way, one that should absolutely not be seen as harming the value of the ‘self’.  Just as individual cells have their own purpose yet become necessary parts of something far more complex when viewed in the context of a human being, so too should we view persons in the context of humanity.

And this answers so many ethical questions.  It explains why we have obligations to other members of our society for reasons other than ‘immutable truths’ or, for example, a merely resigned preference over the state of nature.  It gives meaning to the notion of humanity, because it describes us as beholden to a true commonality in the same way most major religions do.  It should, I think, inspire universal gratitude.  We could not be who we are without the contributions of those before us and around us.  These revelations, viewed together, guide us to an incredibly comforting discovery: the noblest truths for how we should act come from what we are.

Good things don’t come to those who wait. You have to claim them.

My friend Anastasia recently shared an article with me on “Why you shouldn’t settle in your 20s”, with the addendum that “this is the reason everyone hates our generation.”  The central claim of the article, as far as I can gather, is that significant pressure is placed on people in their twenties (especially young women) to settle down and find a partner, but that this paradigm is outdated due to societal changes and the advancement of women.

That may be a reasonable argument, but the path the author traverses from there meanders so far afield from this understandable starting point that it begins to feel like her initial claim is little more than a college-educated rationalization for self-absorption.  Which is, I think, what Anastasia was getting at.  Plus, as my friend Carolina pointed out, the author goes by the name of ‘itskalesbitches’, which isn’t really a good sign.

The author—Kaleigh—argues that we cannot find the one we love without knowing ourselves.  I agree one hundred percent.  About a year and a half ago I wrote in this blog that “to say “I love you” is to make a declaration of one’s own identity” because the nature of love is a recognition of another as possessing values that reflect our own sense of self.

Which is why I found the jump Kaleigh makes from there so confusing.  Get blitzed, she suggests, even if you have work; sleep with the hot guy or girl across the bar, even if you don’t know his or her name.  “Why date one person when you could date five?” she asks.  I assume Kaleigh intends these activities to provide us insight in our path to self-discovery: we need to make the most of life in our youth, have the most varied experiences, in order to figure out who we are.

I have several problems with this assertion and its tenuous relationship to Kaleigh’s main claim.  First, it assumes that self-exploration is best achieved in a very specific way: through a confined set of experience types that conveniently map onto what attractive people in their twenties have been doing for a long time.  She portrays these experiences as offering variety, in presumed opposition to the confining nature of a single partner.  Sure, literally speaking, in terms of numbers of partners there is more variety.  But Kaleigh misses two crucial points.

First, Kaleigh assumes that this breadth of experience offers more opportunity for self-exploration than traditional relationships do.  I can’t really understand how there is much opportunity for variety in partying with the fairly homogeneous types of people that any set of nightclubs in Los Angeles, Vegas, or New York might offer.  If it’s really about discovering new things about yourself by understanding a world you don’t know, how does spending weekend after weekend at the Gansevoort or SkyBar accomplish this?

For the sake of argument, let’s grant that there is variety here, even within a group of twenty-something socio-economic cousins.  I still cannot see how these superficial interactions provide insight into who we are or who we should become.  I have had a handful of formative experiences in my life and, while almost all of them were catalyzed by other people, I honestly cannot think of a single one that was brought about by a superficial relationship.  I’ve never had a one-night stand that made me think at all differently about the way I view the world, or made me understand some flaw in myself, or something I was proud of.  They have never made me grow as a person. 

In fact, I’d say the collection of hookups in my life has done little more than stunt my self-exploration by preoccupying me with meaningless ego-pats at the expense of true discovery.  In contrast, I’ve had relationships (both romantic and platonic) that have fundamentally altered the way in which I think about my life and the world around me; because they weren’t about ego, they were about mutual discovery and exploration through challenge.  But that only ever came after my guard was down, after I had opened myself to the possibility that this other person could teach me something valuable about myself.

This notion of hookups-and-ego brings us to the second problem with Kaleigh’s argument: the mindset it will invariably engender in those who follow it cannot help but reinforce a self-absorption that has become the scarlet letter of our generation.  Kale herself argues “our 20s are meant to be our selfish years.”  What?  Says who?  When you conceive of yourself as entitled to live selfishly, your ability to live empathetically (surely a key ingredient in having a successful relationship) may very well atrophy.  When your romantic explorations become more about you exploring you than about your partner, they cease to be romantic at all. 

This mindset relegates another person to an accomplishment, or even just ‘an experience worth having’, which eviscerates the human capacity for meaningful interaction, because everything loops back to a focus on the self.  If this is the experiential playground meant to help us discover who we are and train us for adulthood, then we cannot help but create adults whose capacity to prioritize others (a family, for instance) has been amputated by habitual relegation to the background.

Assuming this isn’t just coming from an entitled girl looking to feel okay about that entitlement, I’d like to explore what other catalysts might have driven this philosophy.  (And I don’t mean that flippantly.  I really hope there’s more to it.)

She asks us to “notice how the divorce rate in our parent’s generation is the highest it has ever been,” and I’d suggest that this is the real source of her apparent apathy to romance in the present tense: it is scary, doomed to failure, and the potential source of a great deal of heartache.  Much better to put it off until we are real adults.  In other words, it isn’t actually apathy—and it certainly isn’t a new form of romantic idealism: it’s fear.

But it’s a largely baseless fear.  While it’s true that, by and large, the divorce rate is higher for men and women who marry at a younger age than for those who marry later, I’m always floored that everyone assumes a necessary causal link: that people who marry later have stronger marriages.  There is absolutely no reason this divergence in rates could not instead be caused by a greater willingness among twenty-somethings to break off a marriage that is broken, since they have so much more time to start over.

Why did the divorce rate increase so much over the last fifty or one hundred years?  Because people were getting married younger?  No, they had always been getting married at a young age.  The divorce rate increased because it became more socially acceptable to get divorced, whereas in the past the status quo had been to endure a failed marriage for social reasons.  But here’s the crucial fact that Kaleigh neglects in her ‘statistical analysis’: the divorce rate for college-educated women who have their own source of income and marry at age twenty-five is less than twenty percent.  That’s the very group Kaleigh is talking to, and she’s totally misleading them about the reality of the statistics.

While her data about divorce rates and the causal assumptions she makes regarding that data might be off, Kaleigh is right about one thing:  we grew up in a generation whose parents didn’t endure broken marriages, and as a result we have seen that love is not a fairytale, and often ends in heartache.  Even those whose parents stayed together cannot help but be affected by osmotic pressures from our peers’ collective sense of hurt.  But if this knowledge has indeed bred a fear of commitment, then that fear has caused us to tack too far in the opposite direction.  This brings me to by far my biggest problem with Kale’s argument: it ignores the possibility that such a philosophy of life will have negative consequences on our ability to recognize the very opportunities it purports to prioritize.

If our mindset is that we’re too young to meet ‘the one’, then we risk ignoring an ideal opportunity when it comes around because ‘now just isn’t the right time’.  A fear of missing out on this ‘experiential’ path prescribed to us in our twenties could end up making us miss out on completely different, amazing opportunities.  Clinical psychologist Meg Jay, who specializes in the psychological trends of modern twenty-somethings, argues in her book The Defining Decade that our 20s are the most important time in our lives for planning careers and forging important relationships.  Jay claims that the conceit that ‘thirty is the new twenty’ has trivialized what is actually the most transformative period by “robbing us of our urgency”.  People who think they have a decade to do whatever they want will essentially procrastinate, and they won’t make crucial advances in establishing footholds in their chosen career paths, or in finding the right partner.

Of the hundreds of twenty-somethings Jay has worked with, she describes how, time after time, those patients who followed something akin to Kaleigh’s prescribed life path felt they had wasted a great deal of their lives, were nowhere near where they wanted to be in their careers, and had simply chosen whomever they happened to be with when all of their peers started getting married.  That reality paints a far less rosy portrait than the future Kaleigh seems to imagine for herself.

None of this is to suggest we need to settle down right away.  I agree with Kaleigh: we should absolutely refrain from living our lives in accordance with societal pressures—either those that tell us to marry at a young age, or those that tell us we cannot be ready for a healthy marriage in our youth.  What I am saying is that we should not mark off a time of our life as somehow not counting.  Moreover, we should recognize that true exploration cannot come from setting out on a path we have defined for ourselves: new discoveries don’t come from following your five-year plan.  Be open to opportunities when they make themselves available.  Even if it isn’t what everyone tells you to do.

On Our Generation, Ke$ha, and the French Existentialists (I Can’t Believe I’m Typing This)

I was listening to Ke$ha the other day (yeah, I’m not embarrassed—want to fight about it?), and it got me thinking about something that I’ve been puzzling over for a while.  That’s right.  Philosophical treatise on quandaries posed by Ke$ha.  Blasting off.

While superficially preoccupied with good times at the club, Ke$ha’s lyrics share a peculiar conceit, one that speaks to my generation at a surprisingly deep level.  They highlight the way we have chosen to address a recurring existential question.

Writing in the midst of World War II, French philosopher Albert Camus claimed that philosophy must concern itself with only one question.  He wrote, “There is but one truly serious philosophical problem, and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.”  Camus was an existentialist (though he himself resisted the label); that is, he was concerned with how the individual should view life and find meaning in a world that has no intrinsic purpose.  This was the ‘absurd condition’.

So, what the hell does Ke$ha have to do with this? (And, for the record, I get bitter and resentful every time I find myself begrudgingly typing that goddamn dollar sign into her name.)  Her lyrics are emblematic in voicing my generation’s answer to Camus’ existential worry.  Yes, I’m serious.

Let’s take a look at some of her lines.  In “We R Who We R” (Jesus Christ, Ke$ha, I’m trying to maintain some credibility here…can’t we use real words?), she sings, “Tonight we’re going hard, just like the world is ours.”  In “Die Young”, she autotunetastically suggests that we “make the most of the night, like we’re gonna die young.”  In “C’mon”, she spits mad fire with (sorry, I ran out of ways to say ‘sing’), “I wanna stay up all night.  I wanna just screw around.  I don’t wanna think about what’s gonna be after this.  I wanna just live right now.”

There’s an ambivalence here, a carefree “I don’t give a shit, I just want to party” overtone.  Of course, Ke$ha isn’t alone.  Miley is there with her (“It’s our party, we can do what we want”), alongside countless other pop stars.  A lot of people will tell you that pop and club singers write from this point of view because it speaks to their target audience: teens and young adults who want to feel rebellious against their parents and the establishment.  That is such a goddamn copout.

These lyrics don’t just contain apathy toward responsibility; they question the prospect that there is a meaning in life greater than enjoying the moment; there is a desperate abandon to them that represents our own desperation.  The generations of the 50s and early 60s struggled through their absurd condition via suburbia.  The late 60s and 70s, in response to Vietnam and the Cold War, sought meaning in ‘free love’—but this freedom in love was purposeful, insofar as it represented a powerful declaration of identity, not apathy.  Post-9/11, post-2008 market crash, our generation has dealt with its existential funk by, well, not dealing with it.

Our absurd condition is real, visceral, and yet largely ignored.  On the one hand, we equate living life to the fullest with ‘going hard’ and ‘screwing around’, while simultaneously admitting that we ‘don’t wanna think about what’s gonna be after this’ and, as Miley reminds us, that ‘we can’t stop’.  Of course we can’t, because what would be left?

Don’t believe me?  Let’s look at one final (admittedly long) lyric from Ke$ha’s song “Crazy Kids”: “This is all we’ve got and then it’s gone (you call us the crazy ones).  But we gonna keep on dancing ‘till the dawn.  ‘Cause you know the party never ends, and tomorrow we gonna do it again.  We the ones that play hard, that live hard, that love hard, we light up the dawn.” 

Ke$ha places herself in conflict with those who ‘call us the crazy ones’—presumably those who would suggest that staying out late, blacking out and hooking up with random strangers does not a meaningful life make.  But even in so doing, she acknowledges that it is only through necessity that we find ourselves habitually chasing these good times: it’s all we’ve got, and then it’s gone.

So what’s the problem?  In a meaningless world, why not cling to what good times we can find?  All memories are fleeting and end in death anyway (sorry guys, just channeling the French here), so why not make the most of it?

Well, that’s the problem, actually: the implicit claim that this is ‘making the most of it’.  Ke$ha calls it ‘living hard’, and she’s far from the only modern pop star to use the term to describe this sentiment.  Nor is this cultural conceit present only in music.  It pervades our culture, especially for those of us in our teens and twenties.  We brag about how drunk we got at last night’s party—I certainly texted my friend to tell him I had had over twenty drinks on New Year’s Eve (sorry, mom).  We post pictures of our late nights and tell our friends about our random hookups.  We feel like we are somehow missing out if we are not part of this culture.  We feel bad if we had a quiet night in and see all of our friends posting raging pictures on Instagram.

But I don’t think this feeling is just about being cool.  I think, in some deep, subconscious recess, it’s about whether or not this is a good answer to Camus’ existential question.  Anyone who feels bad when they secretly wonder why they don’t love raging as much as their friends do is, I think, asking themselves what is wrong with them that they aren’t making the most of their youth.  But is ‘living hard’ making the most of life?  I think the answer is no, and that’s the problem.

In The Myth of Sisyphus, Camus retells an ancient Greek story about a man who is punished by the gods with eternal labor: he will forever be forced to push a giant boulder up a mountain, only to have it roll back down again.  For Camus, Sisyphus is the absurd hero: he is representative of all men in that he must labor in full knowledge that his work is meaningless.  Sisyphus’ tragic fate is that he knows he will have to start all over again; for the rest of us, it is the knowledge that daily toil will only end in death anyway.  So why struggle?  Why not just live hard?

Camus’ answer is, I think, the right one.  No matter what fate the gods may place on him, Sisyphus is still the master of the way in which he endures his struggle.  In the end, Camus concludes, “The struggle itself toward the heights is enough to fill a man’s heart. One must imagine Sisyphus happy.”  We find joy in life because the struggle of life is itself meaningful and, at times, joyous.  But this is not the same as living hard.  The struggle of life is crucial, and living hard is—as all of the lyrics admit—an escape from the struggle, a denial of the struggle’s meaning and beauty.

What is the struggle?  Putting oneself out there.  In friendships, in careers, in loves.  Being willing to try and fail, and to get up and try again.  There can be no risk (and hence no reward) in a morality that professes that “only God can judge ya, so forget the haters”.  The pleasures and relationships such a mentality breeds are necessarily fleeting.  But they also do not allow for disappointment, and that is an attractive siren song.  There is no meaningful sense of rejection in a failed hookup attempt; no fear of loss from one denial when a sea of attractive, anonymous possibilities presents ample opportunity.  Besides, he was just a hater.  Besides, she was just a slut.  But even if there is less to lose, there is certainly less to gain as well.

Which is not to say we have to give up drinking, or smoking, or reckless abandon.  Occasional acts of self-destruction can provide a helpful sense of freedom.  But this is the freedom of the suicide, as I like to think Camus would call it.  It is an escape from the existential struggle of humanity, not a meaningful confrontation with it.

We should not strive for ‘living hard’, and nobody who fails to see its appeal should feel bad about that.  It is a distraction from a life that can, at times, be too much.  And that’s totally okay.  We need breaks.  Sisyphus’ walk back down the mountain, his brief respite, is necessary too.  But let’s stop thinking it’s what we should be doing to have a full, young life.

The Problem is Choice

A few exceedingly kind people have asked to read the undergraduate honors thesis I recently completed.  While I’m not exactly sure why someone would put himself through reading some admittedly dry philosophy, I’d certainly be happy knowing someone read the damn thing without being on my committee.  So, here it is.

Fuck Boston?

I originally named my blog “Reason and the Beast” to reflect my desire to bridge a gap I perceived between academic philosophy and… everything else.  But today I want to employ ‘the beast’ part of the title as an excuse to go on a bit of a rant and, well, unleash the beast.  I’m sitting on the train right now, and I’ve come across an article on Gawker.

When I first opened Hamilton Nolan’s article ‘Fuck Boston’, I was anticipating an ironic mockery of a great city, that maybe lambasted us for yet another sports win, but ultimately arrived at a cute little cathartic admission of brotherhood between rival cities (as far as I can tell, Nolan lives in New York).  Even as I got further in—to the “Fuck your undeserved underdog attitude” and “Fuck your tendency to claim all of Irish immigrant culture as your own” bits—I was hoping, hoping, that the article was going to take a clever turn, a wink and a nod between a writer and his readership. 

If Nolan ever meant to get there, his ride must have gotten sideswiped on I-90 by a Masshole or two, because as far as I can tell, the article finished burning in a ditch, covered in petrol.  And to be clear, I mean that figuratively, in the sort of way that implies, “You’re a bad, uncreative writer, a sorry excuse for a journalist and the sort of comedian I expect to see performing at an open mic in the Southborough Denny’s on a Thursday afternoon.”  I hope that came across.  Writing can be so difficult.  Nolan certainly knows what I’m talking about.

I used to be understated about my love for Boston: I just silently enjoyed the sparkling Charles on a sunny autumn day, or walked Newbury Street without making a single smug comment about its quaint and eclectic collection of shops, free of the breakneck hustle of Fifth Avenue.  But you know what?  I’m pissed now.  So I’m going to channel that aggression into the really angry love this city is famous for.  And since you insisted on making a comment about our accents (and because I’ve always wanted to pull a Good Will Hunting), I’m going to write the next bit in a Boston accent.

Why do I fahkin’ love Bahston?  Fah ev’ry reasahn ya—yeah, okay, this was a bad idea.  Just do me a favor and read the next bit in a Boston accent.

I love Boston for every reason you hate it.  I love it because the weather makes no fucking sense; because we have blizzards in April and I occasionally have to wear a t-shirt in November.  I love it because we still think we’re the underdogs after winning three Super Bowl titles, three World Series championships, and a Stanley Cup, all in the last ten years.  I’d mention the Celtics championship win, but that almost seems silly for a team that has seventeen under its belt.  I love Boston because of our confusing mixture of intellectualism and boisterousness.

I love Boston because we are making unparalleled strides in scientific research, engineering and medicine; because our absurd number of incredible hospitals are a beacon of hope for so many sick people.  I love it because I’m proud of the fact that so many of the world’s best ideas have come and continue to come from this little city of 600,000 people.  From breakthroughs in embryonic stem cell research, to the social network that dominates way too much of our time.  From Robert Frost’s poetry to Matt Damon’s shaky-camera action movies to John Rawls’ A Theory of Justice.  (Utah, you can keep Mitt.)

I love Boston because we understand that freedom-fighting and terrorism are not the same thing, and not just because throwing British people’s tea into Boston Harbor is pretty damn ironic.  We were a city that started a revolution, that sparked a fire, which, for the first time, burned bright the truth that people are inviolable creatures whose innate characteristics demand certain rights and liberties.  And we did it without using a guillotine.  So, no, I’m not going to give it up.  Two hundred years later, we are a city that came together in the face of inhuman anger and a mutated, xenophobic idealism.

Fuck Boston?  You, good sir, are an asshole.  (I’m italicizing that, dear reader, because I really want you to lean into it.  Really feel the force of it.  The guy who stole my slice of Nochs a few nights ago was an asshole; the ignorant prick who jumps on the bandwagon of irrational bitterness for a city I love is an asshole.  To give a bit more context, Bashar al-Assad—the Syrian dictator who allegedly used sarin gas on his own rebelling populace—is an asshole.  Style is a crucial part of getting your point across.)

As for the people who tweeted about how the next Boston bombing should be at Fenway, or how we only won the World Series because of the Marathon Bombings (you know, those explosions of molten shrapnel and flesh-searing heat that indiscriminately injured 264 people, eviscerating limbs, devastating families, and ending lives…those things): I just can’t.  I can’t respond because I can’t fathom.  Nolan’s got an endearing stupidity going for him, so I can have fun with that.  But this…

Instead, I’m going to quote from the speech Harvard president Drew Faust gave at my graduation last spring.  Describing the incredible reactions of bystanders at the Boston Marathon finish line, Faust said:

Amid the calamity, there appeared streams of people running toward the chaos, toward the explosions. The first responders — police, firefighters, the National Guard; the raft of doctors, nurses, and EMTs; the trauma surgeon who had just completed the Marathon and “rushed in” by heading straight on to the operating room at MGH. The volunteers, the bystanders — women, men, young and old — running toward the unknown, risking their own safety to see if they could help. […]

Not everyone is prepared to run toward an explosion. But each of you is exquisitely suited, and urgently needed, for something. […]

Go where you are needed. Run toward life.

For all the things I’ve described about Boston, I think this is the part of this city that makes me proudest.  Thankfully, there isn’t always a senseless catastrophe that requires these beautiful acts of heroism and sacrifice, but Boston has always been a city running toward.  From our sports fans to our researchers to our drivers with an oddly urgent desire to get wherever the hell they’re going. 

Maybe it’s this determination, this enthusiasm for life, that pisses everyone off so much.  I’m okay with that.  Boston, keep running toward.

Oh, and if you see him, tell Hamilton Nolan I said, “Fuck you too.”

An Ignorant Troll?

Ann Coulter recently did an AMA on Reddit where she was generally pretty offensive, dodged most of the questions of any substance, or otherwise just touted her own greatness.  As a result, there wasn’t much to comment on from a philosophical/logical perspective, but there was one question and answer that really demonstrated (for me, anyway) the degree to which it’s problematic when people either (a) distort the truth to achieve a vision of reality in keeping with their political ideology, or (b) talk authoritatively when they have no fucking clue what they’re talking about.  I think that’s like the second time I’ve sworn on this blog, but what can I say, Ann, you bring out the worst in me.

Okay, so here’s the passage in question:

Do you believe in the separation of Church and State? If not, how can you determine which religion is the correct basis for laws?

AnnCoulter_:

Are you Ed McMahon trying to pitch me a softball? it’s not only not “explicitly” there, it’s not “implicitly” there either. Lots of states had established religions during after the passage of the 1st amt, which says “CONGRESS shall make no law respecting an establishment of religion.” I.e. congress could neither establish a religion, nor interfere with the states doing so. Read it again (or I should say, for the first time.)

So, Ann’s version of history is sort of right, in the way that you’re sort of telling the truth when you tell your teacher you had to hand in your paper late because your grandfather died… but neglect to mention he bled out storming the beaches of Normandy in 1944.  Truths: (1) There were indeed states that had religious establishments that were more or less official to various degrees when the Bill of Rights was passed. (2) The text she listed is an accurate representation of the relevant portion of the 1st Amendment.  (3) The 1st Amendment, upon ratification of the Bill of Rights, applied only to the federal government.

Okay, that’s all well and good, but logical arguments are not sound unless their premises are true and complete.  See, Ann sort of glossed over the rest of US history and Constitutional law between 1791 and 2013.  There was this thing called the Civil War and the passage of the 13th and 14th Amendments.  The 13th Amendment barred slavery, but when it became clear that most of the southern states were going to make life as hard as possible on the newly freed black population, the 14th Amendment became necessary.  Here’s what the relevant part of the 14th Amendment says:

No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.

There’s a lot of history to the 14th Amendment’s jurisprudence in US courts and, since I’m not yet a lawyer, I’m not going to speak authoritatively on subjects I don’t have expertise in (perhaps Ann should follow my lead on this).  But the highlights are fairly straightforward: the phrase “nor shall any State deprive any person of life, liberty, or property without due process of law” has come to be known as the Due Process Clause.  It applies to the states, not the federal government, and a long line of Supreme Court cases has interpreted the Due Process Clause of the 14th Amendment to incorporate the fundamental rights of (most of) the Bill of Rights, binding state governments as well as the federal government.

And this isn’t, like, novel or particularly academic; it’s kind of straightforward.  It’s why state and local police have to abide by the 4th Amendment’s protection against unreasonable search and seizure.  It’s why the Supreme Court is able to overturn what they deem to be unreasonable firearm regulations enacted by state legislatures.  It’s out there and part of society.

Ever wonder where the phrase ‘separation of church and state’ came from?  It’s not in the 1st Amendment.  It traces to Thomas Jefferson’s 1802 letter to the Danbury Baptists, and it entered constitutional law through the opinion in Everson v. Board of Education, the Supreme Court case decided in 1947 that incorporated the 1st Amendment’s ‘establishment clause’ (the bit about religion) to apply to the states as a result of the 14th Amendment.  Now, Ann can feel free to disagree with this.  I’m sure a large part of the country does, and that’s their prerogative.  But as it currently stands, Everson is still good law in the United States, and it most certainly happened (sixty. years. ago.).  So either learn about it if you’re going to be a condescending prat, or stop twisting history to fit your message.

So, I don’t know.  Is Ann ignorant or just a troll?  If she really didn’t know this stuff, I hope she has a good researcher for her books.  Even if Ann has never taken an intro to constitutional law class (which would be odd for a political commentator and ‘political theory’ author), it would take about 10 minutes on Wikipedia to find it all. 

But then again, I probably just have a liberal bias.

Healthcare Hypocrisy

Throughout the lead-up to and duration of the government shutdown, I’ve been thinking about the motivations of those whom I believe to be responsible for the debacle (e.g. FreedomWorks and similar groups, who published a Blueprint to Defund Obamacare, the talking points of which comprise many of the sound bites you’re likely to hear from the vocal Republicans and Tea Party Patriots who voted for the shutdown).  I thought I’d briefly describe what I view their stated philosophy to be, and why it (i) should be unappealing to their constituents and (ii) demonstrates internal conflict in their normative framework (if you’re feeling like this is just a particularly flowery way of calling them hypocrites, well, you’re not wrong).  I’m not going to get into the practical stuff—the more cynical side of why this is really happening—I just want to poke holes in the moral claims being made, because it’s fun…and simultaneously sad that nobody seems to want to talk about it on the national stage.

The simplest way to state the far right’s stated goal is this: we must shrink the size of government because large government interferes with liberty.  The standard response from the left has usually been that the right wants large government in certain areas (e.g. military spending, national surveillance, abortion bans, gay marriage and substance control) that map onto their own moral views about what is good; they just want government to stay out of the way in other areas (e.g. environmental protection, banking regulation, mandated healthcare coverage, and gun control).  It seems haphazard, one might suggest, to claim that moral legislation is justified on the grounds of ‘sanctity of life’ in the case of abortion, but that preserving the inherent value of human life by ensuring all people have access to care when they get sick is an overreach of government authority.  It seems convenient, one might suggest, that the regulations which would be most expensive to businesses are the same instances in which the government has supposedly overstepped its bounds (e.g. environmental protection, banking regulation, and mandated healthcare coverage).

In order to quash the counterarguments from the left, the right has championed the moral value of personal responsibility: you want to be free not just because it feels good, but because it’s the righteous man’s burden to be responsible for his actions.  This is a great way to keep a hold of constituents who start to question the reality of the dream they’ve been fighting for—any feeling you might have that this selective application of ‘liberty’ is not all it’s cracked up to be stems from an embarrassing weakness on your part; if you feel like you’re being taken advantage of, it’s only because you want a handout that doesn’t belong to you; if you work hard, you can achieve the American Dream.  As the line often attributed to Steinbeck goes, “Socialism never took root in America because the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires.”

So what’s the problem here?  Isn’t there something to be said for personal responsibility?  Absolutely.  It’s not that personal responsibility doesn’t have value, it’s that the conservative right has falsely portrayed personal responsibility as existing in necessary tension with empathy.  One particularly hilarious example of this was Fox News anchor Megyn Kelly’s 2011 tirade about the value of maternity leave.  As Jon Stewart so eloquently put it, “I just had a baby and found out [government-mandated] maternity leave strengthens society.  But since I still have a job, unemployment benefits are clearly socialism.”  One begins to wonder if ‘personal responsibility’ isn’t just a more palatable way of saying, “I got mine.”

Here’s the thing, though: you can believe that liberty has value, that people should be responsible for their own destiny, and still believe that a state is a community that benefits from mutual participation and protection.  There’s a really great example of people who live out this belief every day, and it has almost universal American appeal: this is why we have a military.  The US armed forces are a group of people who put their lives at risk to ensure that other people for whom they would otherwise not be responsible can live safely.  A person can be responsible for the safety of his fellow countrymen and still believe in the value of personal responsibility.  Sometimes a plane crashes into a center of commerce, and it’s our responsibility as a nation to come together, help those who were immediately affected by the tragedy, and do everything we possibly can to make sure it never happens again.

But the vast majority of the time it’s not a plane piloted by suicide bombers, it’s an unexpected cancer or a diagnosis of heart disease, followed closely by unemployment.  If someone can give me a good reason why that’s any different, I’ll buy them a drink.

With God on Our Side

Two days after the tornado that ravaged Moore, Oklahoma, had dissipated, the traditional debates had already returned to the forefront.  The liberal message clamored for further disaster relief and raised concerns about climate change, while conservative pundits focused on the tragedy, trying to avoid the pageantry of politics on this landmine of a topic.  Ignoring that wise instinct, Senator Jim Inhofe, a Republican from Oklahoma, suggested that federal tornado relief was not the same as federal hurricane relief—which he had opposed in the aftermath of Hurricane Sandy, supposedly because of the pork included in the bill.

Another perennial thread unspooled in Facebook posts, blogs and op-eds (even though most of the major media outlets recognized it for the quagmire that it is): the Oklahoma tornado, which killed 24 people, including 9 children, was part of God’s plan.  Just as Hurricane Sandy had been before it.  Just as the 2011 earthquake that rocked Japan and crippled the Fukushima nuclear plant, killing nearly 16,000, had been before that.  Just as the 2004 tsunami, which devastated the coasts of the Indian Ocean, killing over 230,000 people, had been before that.

In one particularly misguided debate, an anonymous prophet clarified his position—that it was okay to assist those who had survived the tornado, even if it was part of God’s plan—by suggesting that “the ones he wanted to die are dead”.

After the comment made my stomach turn, it reminded me of one of my favorite songs—“With God on Our Side” by Bob Dylan.  In this seven-minute, haunting folksong, Dylan describes growing up in 1950s America, where he was taught ‘that the land that he lived in had God on its side’.  Through several verses, he chronicles an array of regrettable events in American history: from the slaughter of Native Americans, to the American Civil War, to World War II, the invention of chemical and nuclear weapons, and Vietnam.  In each snapshot, Dylan explains a recurring sentiment, which he best summarized while describing the confusion he felt learning about World War I as a child: “The reason for fighting, I never did get—but I learned to accept it, accept it with pride; for you don’t count the dead when God’s on your side.”

After I listened to Dylan and cooled off a bit, convinced again that there are reasonable and beautiful ideas in the world, my inner philosopher took over.  I started thinking about the logic behind the idea that God has a master plan for all of humankind, and that this master plan can involve the suffering of those who might otherwise not deserve such treatment.

Giving up oneself, or offering another, for the purpose of a greater good, usually with the intention that some harm occur, is generally termed a ‘sacrifice’.  Jesus is doubtless the most famous and universally admired sacrifice (except by Ayn Rand).

But we must keep in mind that, at some level, he sacrificed himself—he was in on it, as it were.  Offering up another as a sacrifice without consulting them is generally seen as uncool and/or barbaric.  So I find myself irked when someone tells me a loved one or an innocent child was called back to heaven as part of God’s plan, because he needed another angel.  (That, and the idea that God never gives us more than we can handle.  I am reminded of Tig Notaro’s now-famous standup comedy performance in which, after describing how, in the span of six months, she had developed pneumonia and Clostridium difficile, been diagnosed with breast cancer, lost her mother, and been left by her girlfriend, she imagines God watching her from above saying, “You know, I really think she can handle a bit more.”)

Okay, so I don’t find this particular belief and consolation appealing.  It doesn’t work for me, but it does for some people.  And that’s fine.  It doesn’t, in and of itself, have any major internal contradictions.  But the idea of forced sacrifice, where the sacrificed party isn’t part of the decision, almost certainly belongs to a school of philosophy known as consequentialism.  Basically, the idea here is that the moral worth of an action is determined by the value of the consequences it brings about.  Killing someone might be permissible if it saves five lives.  Sending a tornado through a town in Oklahoma might be permissible if it is part of a grand plan we cannot comprehend.  This theory stands in stark contrast to non-consequentialist theories (if the name didn’t give that away), where other factors—like the inherent worth and inviolability of human beings—are considered.  Non-consequentialists generally don’t believe you can kill someone to save five people.  The person has a right not to be harmed in such a way.

Now, one might believe that the scope of this plan is limited: that God works through nature and miracles, but not through men.  This would certainly help deal with free-will concerns, but I don’t think this particular strain of Christianity can make use of this expedient.  I return to Dylan’s “With God on Our Side”: that God made America the greatest country on earth is certainly a mainstream theme in American circles of discourse.  But America is a nation of people, founded by people, through a declaration of war against the British Empire and (later) a codified set of laws established among former colonies through fierce negotiations.  Perhaps God guided their actions, but the actions and laws of people created America.

These two views—that God has a plan that makes sacrificing some acceptable, and that God can work through men and women to fulfill this plan—are not inherently evil, or bad, or degrading to human value, even if I don’t like them.  Consequentialism is a perfectly reasonable ethical framework, held by many very careful philosophers.  But it cannot work in tandem with a non-consequentialist theory that bespeaks the inherent, ultimate, and inviolable value of human life.  This is where the contradiction arises.

In other words, one should not believe that God’s plan is so great and beautiful that inexplicable and countless tragedies can be part of the equation without ruining its splendor, while at the same time believing that human life has absolute value.  One cannot believe that a soldier is being guided by God’s will in killing his enemy, and at the same time believe that a mother has necessarily perverted God’s love in her decision to abort the fetus that will become a baby for which she is not ready.

If God’s plan allows for sacrifice to achieve its glory, and if God’s plan is mysterious, and if God can work through men and women, then any act, no matter how heinous or righteous, can be a part of God’s plan.  Once we recognize this, we realize that this plan alone does not give us the moral tools we need to govern our actions.  We cannot judge the events of nature or the acts of men—or indeed our reactions to either—on the basis of faith in a power that is beyond our comprehension.  Maybe God exists, maybe he does have a plan, but that shouldn’t enter into our moral equation.  

Only Us

Two bombings happened on April 15th, 2013.  One was on Boylston St. at the finish line of the Boston Marathon.  The other was in Baghdad, where twelve coordinated explosions killed many and injured more.

I describe these two attacks without political agenda.  Unlike many writers, I am not suggesting that we compare tragedies in some sort of morbid and perversely callous pissing contest, as if a body count were necessary for understanding pain.  Nor am I arguing that foreign catastrophes are covered at a disproportionately low rate because we just don’t care, as if America were a self-involved teenager.  Unlike the Guardian, I am not suggesting, “[W]hatever rage you’re feeling toward the perpetrator of this Boston attack, that’s the rage in sustained form that people across the world feel toward the US for killing innocent people in their countries.”  I do not think any of the acts of kindness and heroism and charity exhibited in the immediate aftermath could possibly be characterized as rage.

This is the wrong way to look at what has happened.  It is the same divisiveness that fed whatever hatred prompted such monstrous acts of lunacy on April 15th.

Rather, I describe these two attacks to highlight our similarity.  I live across the river from Boston and I’ll always consider Boston my home city.  I have never been to Baghdad, but I know how the people of Baghdad felt on Monday.  They felt scared, confused, worried about the safety of their loved ones.  They tried to make sure the people they cared about were okay.  They looked to the news to give them something to latch onto, some way to understand what was happening around them.  They felt angry when their warmest words of comfort were empty and useless in consoling the people whom they loved.

I know these things because they are how I felt and what I did in Boston on Monday.  I had friends running the marathon and friends cheering on the sidelines.  The people I care about and the thousands around them celebrating Marathon Monday were enjoying an activity of camaraderie and living in the most vibrant sense of the word.

Some might suggest that, in attempting to universalize these emotions, I am being insensitive to the tragedy at home.  That this is a time to focus on us; that this is an American heartbreak.  It is not.  It is a calamity of humanity, because any such acts are in direct opposition to what we as a species must stand for.  They are in irreconcilable conflict with the human purpose.

It does not dilute our pain to suggest that others share it.  It need not be ‘us’ and ‘them’.  Indeed, there is only ‘us’.  There are only the victims and the countless people who care about them.  The perpetrators of these abuses have forfeited their right to be counted among humanity, because their goal is not just immoral, it is inhuman in both method and methodology.

If it is ever possible for some good to come of senseless tragedies, perhaps this can be it.  Perhaps instead of blaming whatever incidental faction, ideology, religion, video game, or book it was that ‘caused’ this violence, we can recognize that, at the end of the day, it was just a handful of people, misguided by the only evil that can cause such hatred: an ignorance of what it means to be human; to love humanly and live humanely.

The Other Kind of Consent

Tomorrow morning, I’ll be having a routine medical procedure done.  This normally wouldn’t have much philosophical worth, but there will be a moment of dialogue that is crucial for our growing understanding of what the doctor/patient relationship is, what it could become, and in what new light we might learn to value it.

This moment also (and entirely coincidentally…) happens to be what I’ll be spending the next year of my life working on in the process of completing my senior honors thesis in the field of bioethics.  So there’s that.

See, right before the doctor puts me to sleep, she will read through a series of risks associated with the procedure, ask me if I understand them and their weighted value in comparison to the rewards of having the procedure done, and ask me to sign a form indicating this understanding and giving my consent.

This wasn’t always how things worked, though.  The notion of informed consent is a fairly novel idea in the history of legal requirements in medicine.  Prior to the 1957 decision in Salgo v. Leland Stanford Jr. University Board of Trustees and the more formalized opinion in Natanson v. Kline, there was no legal notion of informed consent.  Medicine was, for a long time and without much contention, a field in which doctors dispensed treatments without much explanation or justification.

Which isn’t to say that they were tyrants in any damaging sense of the word.  Rather, the body of opinion rested firmly on the idea that the doctor/patient relationship was one in which the doctor prescribed treatment based on her informed medical opinion to a lay patient who, not understanding the situation himself, trusted the opinion of his physician.

We can see this belief peek through in the Hippocratic Oath (which in my younger years I called the Hypocritical Oath, a far more confusing notion).  The relevant bit goes like this:

I swear by Apollo, the healer, Asclepius, Hygieia, and Panacea, [now that’s how you start an oath…]: 

…I will prescribe regimens for the good of my patients according to my ability and my judgment and never do harm to anyone.…

The key terms here are “for the good” and “never do harm”.  This is known as the beneficence clause of a doctor’s oath.  

That was the traditional occupation of doctors: to do good by their patients and never to harm them.  And the medical community has always implicitly construed these goods and harms as having to do with bodily health.  Who can blame them?  

But as we’ve progressed as a society, our views of harms and goods have become more complex.  We realize that sometimes the desires of a patient are not so simple as ‘to survive’; that they may wish to live their last days with dignity, or in blissful ignorance.  The individuals and the situations vary.

Doctors are medical practitioners, not moral arbiters.  Their position in guiding medical diagnoses and prognostic options should not be conflated with a special insight into the right choice.  Affirming this point in his book How We Die, surgeon Sherwin Nuland recounts his history in practicing medicine:

More than a few of my victories have been Pyrrhic.  The suffering was sometimes not worth the success…. [H]ad I been able to project myself into the place of the family and the patient, I would have been less often certain that the desperate struggle should be undertaken.

Which is not to say that the fight is itself undesirable, but rather that an understanding of what the desire to fight represents, and what it could potentially mean, is vital to a patient’s valuing his autonomy and making an informed decision.  This idea, this simultaneous weighting of autonomy and beneficence as cohabitants in a reasonable relationship, is informed consent.

The desire for informed consent arises from a disparity between the respective knowledge bases of the patient and the doctor.  As framed in the landmark decision Arato v. Avedon, this disparity evolves into a moral demand for informed consent in three steps:

1) Patients are generally not knowledgeable of medicine and the medical sciences, and therefore do not have comparable knowledge to that of their physician.

2) Yet, an adult of sound mind has both the right and the obligation to exercise control over his own body and to determine whether, and to which, medical treatment he should submit himself.

Combining these two premises, we arrive at an obvious conclusion:

3) The patient depends on his physician and trusts that he will honestly convey the information upon which he relies during the course of the decision-making process, as well as all of the relevant risks and rewards of the proposed treatment.  As a result, the physician has an obligation to provide this information.

Today, this may seem a fairly uncontroversial conclusion.  Yet, as we examine the question, it becomes less and less simple.

In The Cancer Ward, novelist Alexander Solzhenitsyn poignantly captured the concern that arises from informed consent.  When a patient challenges her doctor’s right to make unilateral decisions on the patient’s behalf, the doctor gives a troubled but certain answer, “But doctors are entitled to the right—doctors above all.  Without that right, there’d be no such thing as medicine.”

A more critical examination of this concern can be found in Thomas Duffy’s article “Agamemnon’s Fate and the Medical Profession” from the New England Law Review, where he argues, “Paternalism exists in medicine to fulfill a need created by illness.”  That is, it is not the doctor who limits the patient’s autonomy; paternalism is a necessary characteristic of a situation constructed by the illness, to which both doctor and patient must respond as best they can.

But this carries an implicit thesis: that the physician still knows best (at a moral level).  How can this be so when there is still so much doubt in medicine?  In the words of Dr. Brian Goldman during his TED Talk, “Doctors Make Mistakes: Can We Talk About That?”: “If you take the system… and weed out all the ‘error-prone’ health professionals, well… there won’t be anybody left.”

Or, as Dr. Alvan Feinstein said in his book Clinical Judgment:

Clinicians are still uncertain about the best means of treatment for even such routine problems as… a fractured hip, a peptic ulcer, a stroke, a myocardial infarction… At a time of potent drugs and formidable surgery, the exact effects of many therapeutic procedures are dubious or shrouded in dissension.

Or consider the desire to solve The Riddle, made infamous by Dr. Gregory House, as Dr. Sherwin Nuland elaborates:

[A surgeon] allows himself to push his kindness aside because the seduction of The Riddle is so strong and the failure to solve it is so weak.  [Thus, at times he convinces] patients to undergo diagnostic or therapeutic measures at a point in illness so far beyond reason that The Riddle might better have remained unsolved.

Given all of this, I cannot help but think it unwise and unfair to demand moral guidance from our physicians in addition to medical prognoses.

And indeed, sentiment has already shifted in many regards in this direction.  A 1961 survey published in the Journal of the American Medical Association (later cited by the Presidential Commission) found that 90% of doctors did not inform patients of cancer diagnoses.  Sixteen years later, in 1977, 97% of doctors surveyed routinely disclosed a cancer diagnosis.  The times, they are a-changin’.

But the situation is not so simply addressed.  The questions are incredibly complicated.  Let me offer you an example, crafted by Dr. John Arras in his essay “Antihypertensives and the Risk of Temporary Impotence: A Case Study in Informed Consent.”

In this thought experiment, a patient with hypertension, for whom diet and exercise have failed as a remedy, seeks medical assistance from his primary care physician, Dr. Kramer.  Dr. Kramer generally prescribes “a common diuretic, hydrochlorothiazide, as the second line of defense [after diet and exercise]” for hypertension, because it is cheap and effective.

The drug has a potential side effect of causing temporary impotence in 3-5% of the men who take it; the impotence resolves upon completion or discontinuation of the treatment.  Dr. Kramer wonders if she should tell her patient about this risk, considering that he is a newlywed and may find this a particularly problematic time to be experiencing such issues; she reasons that he may be willing to pay extra for a more expensive drug that would not cause this problem.

Dr. Kramer consults with another physician who suggests, “The risk is quite low, entirely reversible, and consider this: if you share this possible side effect with your patient, this little bit of truth is likely to make him extremely anxious about what could happen….  Telling him about the risk of impotence could actually make [him] so worried that he would become impotent at your suggestion.”

Here we have an instance of apparently direct conflict between beneficence and autonomy.  What should the doctor do?  

Consider a less trivial situation, where a patient has been diagnosed with hepatosplenic T-cell lymphoma, an almost-always fatal condition.  Is a doctor obligated to tell the patient, even if treatment is not an option?  What if the patient does not want to be informed?  Or has a heart condition that may be exacerbated by the knowledge?  How do we weigh these concerns?

Moreover, is truly informed consent even possible?  It is a commonly recorded psychological phenomenon that people underestimate the risks of their actions.  Take cigarette smoking.  The ‘it won’t happen to me’ belief is ubiquitous: we understand there is a statistical risk, but dissociate ourselves from the statistic.

How can we actually subvert this common psychological move?  And, if it turns out we cannot, does that force us to recalculate the balance between beneficence and autonomy?  If an individual cannot accurately assess his own risk, should we leave the choices to those who are dissociated enough that they can?

These are difficult, troubling questions.  They have yet to be satisfactorily answered, and they need to be.  As Dr. Pauline Chen argued in her New York Times essay, in its current form, informed consent is often an act of theater:

Pete looked away from me and stared at the consent form. Yet even as I watched his brows knit together, his eyes widen then wince, I kept on talking. I had gone into my informed consent mode — a tsunami of assorted descriptions and facts delivered within a few minutes. If Pete had wanted me to pause and linger over something, I never knew. He couldn’t get a word in edgewise….

Pete signed the consent. But as he took the pen to paper, I couldn’t help noticing the tremor in his hand and the pall that had suddenly descended upon the room and our interaction.

The common lingo among physicians is ‘to consent the patient’.  Linguistically, this is not an actively forged relationship between patient and physician; it is an action performed on the patient, a legal requirement that must be completed before getting down to business.  We need to do so much better.

These questions push us to the limit of what ethics can grapple with.  They cannot be answered in a brief article.  They demand of us careful consideration.  Or maybe I’m just bigging-up my honors thesis…

Anyway, I suppose I don’t really have a conclusion this week.  Sorry.  I don’t know what to tell you.  I’ll be trying to come up with satisfactory answers to these questions over the next year.  

I’ll let you know when I figure it all out.  BRB.

The Guilty Ones

Towards the end of this past semester, I was at dinner with one of my professors, and found myself debating at some length a question of morality.  I’m sure most of you are familiar with Sophie’s Choice—a book, a movie, and a dilemma: you’re a mother with two children and are told to pick which one will die and which will live, or else both will be killed.

Philosophers have a similar thought experiment that removes a bit of the complicated sentiment with which Sophie’s Choice is so rife, broadly called “Trolley Problems”—one of the most famous families of thought experiments.  The thought experiment my professor offered me in this particular discussion is slightly different, but the general premise holds.

You are alive in Manifest Destiny-era America, and you and twelve fellow settlers are traveling west in hopes of finding some nice land that doesn’t belong to you.  You do not know any of your companions, as you signed on to the trip at the last minute.  In the middle of the night, a band of Native Americans descends on your caravan and ties up all thirteen of you before any resistance can be offered.  The chief of the tribe rides up to the group and lectures you about being Western Imperialist Asses.

Then he has you untied and brought before the group.  His warriors stand behind each of your twelve companions.  He hands you a rifle and pulls up one of your companions, whom he tells you to kill.  If you do, the remaining twelve of you will be set free to go home and live out the rest of your Entitled-White-Man lives.  If you do not, all twelve of your companions will be killed.  It is important to note that, either way, you will survive.  This chief is very clever; he doesn’t want you to be motivated by a selfish desire to live.

Before we talk about what your options are here, we need to talk about a spectrum of moral culpability that moral philosophers use to explain the justification, or lack thereof, of an action.  In simplest terms, the spectrum goes like this (from most culpable to least): inexcusable, understandable, excusable, justifiable, and praiseworthy.  (These categories are not always mutually exclusive, because some of them operate slightly independently of the others, but this spectrum will do for our purposes.)

An inexcusable act is one that we believe to be absolutely and abhorrently wrong, like shooting up a crowd of innocent people for selfish reasons.  No real discussion here.  Guy’s just awful.

An understandable act is one that is still wrong (insofar as it must still be punished as morally wrong), but one about which we can nonetheless recognize a common ground and empathize with the motivations of the perpetrator.  Like hunting down the man who killed your wife.  We have to say the act is wrong, but we kind of get why you did it.

An excusable act is one that is both understandable and somehow warrants the disregard of normal moral and legal standards.  For example, if you were walking down the street and happened upon Osama bin Laden, totally helpless and at your mercy, it would be excusable for you to kill him if you knew he would otherwise escape prosecution or punishment and your motivation was to bring him to some form of justice.  It would normally be wrong to kill a defenseless person in retribution like this, but because of the chance of his escape and the gravity of his crimes, our justice system would not charge you with murder and you would be hard pressed to find someone who thought you did the wrong thing.

A justifiable act is a bit different, but the distinction is subtle.  A justifiable act is not just one in which we set aside general morality, but one for which the scale actually tips such that we believe you have indeed done nothing wrong.  If someone has a gun drawn on you and clearly intends to kill you, you are justified in shooting him first.  There is no immoral act to ‘excuse’ because we already believe killing in self-defense to be justified, as a rule.

A praiseworthy act is stronger still.  A praiseworthy act is one in which you have actually done something laudable; an act that might, in isolation, be wrong, but that, because of the circumstances, makes you a ‘better’ person.  Killing someone who is in the midst of a shooting spree, and thereby preventing many immediate deaths, is a praiseworthy act.

Now that we have painted these distinctions, we can come to the question at hand.  My professor argued that you would be excused in killing one of your companions in order to save the other eleven.  I agreed.  The problem we had was with the converse: she believed that you would be justified in not acting at all.  I disagreed.

Just to be clear: the chief tells you to kill one person so that the remaining twelve of you may go free, and we both agree that you are excused of wrongdoing in this act of murder.  But I believe that, furthermore, it would be inexcusable for you not to act.  I believe that not acting makes you complicit in the deaths of the twelve.

Why should this be?  I think the reason lies in your motivation for not acting, so let’s see if we can explain what that motivation might be.  More people clearly die if you do not act.  Eleven is greater than one.  The math checks out.  So your motivation cannot be to save life.  The motivation is that you do not want to be the person to pull the trigger.  I believe this to be an inexcusably selfish motivation.

Let me explain.  I believe that your motivation for not pulling the trigger is that you do not want to live with the guilt of what you perceive to be the killing of a defenseless and undeserving victim.  This guilt may come from a belief that what you are doing is wrong, and it may be a justified guilt if you believe that the act of killing is wrong.  But not acting in order to avoid this guilt is a selfish act.

What I am talking about, then, is mandated sacrifice (in situations where the stakes are high enough).  And no matter how you cut it, the stakes are always high enough in this example.  Even if you had to kill eleven to save one, the stakes are still high enough, because you are still saving a life.  Your guilt does not balance the matter.

My professor argued that the motivation for not doing so is that you do not want to make yourself complicit in an immoral act, and it is your belief that you are doing the right thing that guides your choice (not the guilt), so your inaction is justifiable.  

But you are complicit either way.  If you do not act, the others will die.  Death will result from either choice, so your complicity is unavoidable.  In one, you do not pull the trigger, yes; but why should this matter?  We have already established that there are justifications for killing, so it cannot be that we think killing under any circumstances is inexcusable.  The problem is you do not want to be the one to do it.

To highlight and defend my point, let’s turn briefly to an actual trolley problem.  Five people are tied to a train track with a trolley approaching.  On an alternate track, one person is shackled.  You have a switch at your fingertips that will divert the trolley from the track where the five are to the track with the one, thereby saving the five and killing the one.

In another example, five people are once again tied to a track.  Except this time you have no switch at your disposal.  Instead, you have a very fat man whom you can push off a bridge and onto the track.  This will kill the fat man, but save the five people.  (I didn’t actually come up with this, so if you think I’m being insensitive, direct your grief to Judith Jarvis Thomson.)

The psychologist and neuroscientist Joshua Greene conducted a study showing that different sections of the brain operate in these different scenarios, a phenomenon he attributed to “emotion” getting in the way of the more immediate and real act of pushing the fat man (as opposed to the somewhat sterile and distant act of flipping a switch).  Both the fMRI data and the numbers back this argument: more people were willing to flip the switch than push the fat man.

The number of victims in the respective scenarios doesn’t matter to us as much as the emotions of the act, so I do not think it is a strong deontology that is preventing you from firing the gun in the chief’s thought experiment.

We come back to guilt.  You cannot get over the fact that you killed someone.  But I believe this cannot possibly be weighed against the life of another person.  Not acting is immoral because it leads to more death; even if you will feel worse about acting, you must.  You must bear that burden.  This is mandated sacrifice.

In my first article, I cited these trolley problems as symptomatic of philosophers’ tendency to lose touch with the questions they need to be discussing with people.  Is it hypocritical, then, for me to bring them up now?  Am I retreating into the ivory tower?

I don’t think so.  I’m trying to illustrate a larger point here.  It may be that you feel bad about doing something, either because it will hurt someone you care about, or perhaps because you are just too close to the situation.  That doesn’t mean that you are excused from acting, that morality passes you by, or that the right thing has suddenly changed to accommodate sentiment.  Morality is not so lenient.

It is a point summarized by Isaac Asimov in a rather elegant quip: “Never let your sense of morals prevent you from doing what is right.”  

“What is love?” (Baby, don’t hurt me…)

It is a simultaneously heart-warming and curious fact that the #1 query on Google for 2012 was, “What is love?”  ‘Heart-warming’ because it reminds us of the extent to which this emotion demands prominence in each and every one of our hearts.  ‘Curious’ because we turn to a computer to help us find the answer.

I’m guessing the people asking this question were referring to romantic love, what the Greeks would have called eros.  So that’s what I’m going to be talking about today.  At some point, I’ll turn to storge—familial love—and philia—the love between friends.

If you do Google “What is love?”, the first hit you’ll receive is a pretty abysmal article in Psychology Today filled with vaguely inspirational epigrams like, “Love is bigger than you are,” and, “Love is inherently free,” and, “Love cares about you because love knows that we are all interconnected.”

Word-vomit aside, what bothers me most about these answers is the notion of love as an other, exterior to the lover(s) and the beloved.  A third party, like a little cupid taking pot-shots from behind some mistletoe.

But if we work through this problem phenomenologically (a philosophical approach where we start from phenomena—our own personal experiences and recollections—and work out conclusions from this beginning, as we would in any other logical argument), I believe we will find quite the opposite to be true.

Think about your last relationship.  First of all, he was scum, and you can do better.  

Good, now that that’s taken care of: yes, there was probably a stimulating spark at the outset.  You stayed up late at night waiting for him to call, or to play a boombox outside your window, or to do whatever it is the kids are doing these days.  Snapchat?  God, I hope not.  But the cliché wisdom is that this is not love, that this is infatuation, and that love takes time.

This is nothing new.  What I would challenge is the idea that the experience arrives from an external source.  That it happens to us.  This makes us victims, not active participants.  

Love is internal in origin.  It has, by its nature, exterior counterparts, but these are stimuli, catalysts for a personal experience.  To the extent that it is involuntary (to fall in love), it is only because of a conflict of wills, a separation of desires and the self.  It is not because someone slipped you a love potion.  Those don’t exist.  Except for Michael Bublé albums.  That stuff works every time.

Here is the crux of the matter, then.  If love were an external force, it would be just that: forced upon us.  Instead, we interact with it, we cultivate it, and we reaffirm it.  It is a reflection of our values, of our selves, in another.  The origin is internal; the action is external, and secondary to the sense of self that comes from within, guiding the act.

Another hit you’ll find in your Google wanderings amounts to a scientific understanding of love.  Neuroscientists tell us that love amounts to chemicals like oxytocin, vasopressin, and dopamine being released in the body.  This chemical release provides a reaction akin to a smoker lighting up after a long day without a cigarette.  And that, the scientific literature tells us, is love reduced to the biological level.

No, no, no.  This relationship is an inversion of the ordered causality that constitutes love.  

Consider a drug addict who desires morphine.  We would not say that the narcotic flowing in the bloodstream is the desire.  We would say the desire leads to the drug’s injection, and the high is the effect of satisfying that desire.  The desire prescribes the action.  One might say the absence of the drug precipitates the idea of the desire, and this may hold in the case of addicts; but if we were to extrapolate the argument to people at large, this would embrace too strong a materialism, and reduce too quickly to full-on determinism (a philosophy that suggests people have no free will).

I won’t accept that.  The idea of ‘love’ is not descriptive of the feeling of dopamine or oxytocin; it is prescriptive of the event.  To love someone is to hold the concept of love and apply it as a specific idea of loving this person.  And it is this synthesis, this volitional action, this recognition that the fabled “ohmygod I love him,” drawing-hearts-on-my-notebook moment applies to this person that is the trigger, the causal link to the chemical derivative.

In other words, you love someone because of the concept of love, and this person best helps you realize the idealization of that concept.  He helps you realize self-ascribed values in a counterpoint; he helps you ascribe value to yourself.  The chemical reaction is just that, a reaction to this awareness.

Nonetheless, it would clearly be misguided for me to try to convince you that love is an entirely personal endeavor.  That would be a wholly masturbatory practice (and I mean that figuratively…).  The counterpoint is a necessary condition; the beloved must exist.  But it would be equally mistaken to say that we are beholden to love, and that we have no choice in the matter.

We do have a choice, and this choice is a beautiful opportunity.  It allows us to demonstrate our values in the identity of another person.  To say, “I love you,” then, is to declare a realization of one’s own identity.

42 (Or: Attempts at Deriving Experiential Meaning)

It’s been a while.  Though I’m sure the absence has gone by largely unnoticed on your end, I’d like to offer my apologies to that single reader who surely must be out there, checking back each week, only to find that no essay has been posted.  She’s a real special woman and I’m sorry to have let her down.  I’ll make it up to you, mom.

Anyway, as it turns out, writing a two-thousand-word essay every week, on top of a senior-year course load, is occasionally difficult.  Who knew.  I wasn’t always happy with the arguments I offered you, or else thought that I just didn’t explain them well enough; but time would often prevent further revision.  All of which is to say: I’ve decided to forego weekly regularity in favor of quality.  I’m going to share an idea with you only when it’s ready.

Okay, let’s dive into something more interesting.

I’ve previously talked about being an agnostic atheist, and what I believe to be the opportunity for the ‘beautiful commonality of opposites’: a sort of spiritual understanding and respect that I would argue is a deeper form of multiculturalism than those assemblies you went to in high school.

If such a possibility has any chance of becoming a reality and not some Utopian ideal, we need to make further strides in understanding the notion of ‘meaning’—that is, the macro, big-picture, ‘what is life?’ notion of meaning.  Experiential meaning: the sense of purpose we get out there in the world, living our lives.  I’m not looking for a metaphysical meaning, floating out there in the ether, because (as you’ll see) I think that pursuit is meaningless.  Huh, defining the meaning of meaning…  Who’d have thought?

There’s an ill-informed assumption that I want to discourage.  Since such a vast majority of people derive meaning from God (in a way that may very well be entirely justifiable), a common claim is that a life without God is somehow meaningless.  I want to show you why I believe this idea to be wrong, or at least based on misunderstood premises.

To start, let’s look at what a believer of any denomination of any major religion with an anthropomorphized, omniscient, and omnipotent deity (so, most importantly, the three Abrahamic religions: Christianity, Judaism, and Islam) thinks about meaning, and how meaning is derived from God.  (I’m not a theologian, so this should not be seen as a critique of any particular religion—a task for which I would be woefully under-qualified.  However, I am confident in my understanding of what such a God, in the abstract, means for us, relationally speaking.)

In questioning how a life without God can have meaning, we are accepting the implicit premise that God gives life meaning.  But how?  God is all-knowing and all-powerful and, given his anthropomorphic nature, he loves us.  So, if he’s always right, and he can make everything as it should be, his love for us gives us value—that is, God’s loving us provides us with meaning, since it indicates that our state and nature deserve love.  And that makes us feel all warm and tingly.

In response to this justification of meaning, I’d like to offer you an argument from analogy (which is just a philosophy term for offering a simpler example that allows us to get to the meat of what we’re trying to discuss).

Put yourself back in high school for a minute.  You’ve been dating your high school sweetheart for two years—let’s call him Joe.  You guys are seriously in love.  Like, people-don’t-understand-me level in love.  You love Joe so much that the fact that he loves you makes you feel better about yourself.  

Then college comes around.  Life gets in the way.  You two break up.  Joe starts dating some floozy named Cinnabuns.  You hook up with Astronaut Mike Dexter (because you go, girl).  But maybe a few years down the road he comes back around and tells you he still loves you and wants to get back together.  Aside from some serious pride at ‘winning the breakup’, and maybe a side of Schadenfreude at discovering Cinnabuns gave him an STD, you feel nothing in recognition of this proclamation of love.  The meaning and self-worth you once found in receiving his affection no longer surfaces.

Despite the portrait I’ve painted of Joe, let’s assume that he probably hasn’t changed all that much; that he is still the same person, at the very least.  If anything, your perception of him has changed (perhaps as a result of maturity, or evolved expectations).  

This is the important part.  The sense of worth you received was a reciprocal (and self-fulfilling) effect of the worth you placed in him.  Once your esteem of him dwindled, his esteem of you ceased to provide you with self-esteem.  Say esteem again.

In other words, you provided your own sense of worth and meaning.  He was a real and actual conduit of that meaning, and you would likely cite him as the source (surely we feel down on ourselves when it seems there is no one who believes in or cares for us); but, as we have seen, it is a necessarily relational meaning that has been derived.  That’s why they call it a relationship.

I’m sure you can see where I’m going with this.  Without making any claim about the nature of the God someone believes in, or whether or not that God exists (and certainly without comparing God to Joe), we can better understand the nature of the meaning derived by a believer: it is an experiential meaning that we feel as a result of the worth that we have placed in our deity.  And this, again, shouldn’t be seen as a critique of the quality of the belief or of the deity in which one believes.  Rather, it is merely an explanation of the nature of meaning derived from an outside source.

Rousseau (the Enlightenment-era philosopher, not the character from Lost) once made this pithy remark: “God created man in his own image. And man, being a gentleman, returned the favor.”  We don’t need to be quite so snarky about it, but if I were to summarize my argument, I would say that God has only been able to give man experiential meaning insofar as man, being a gentleman, has returned the favor.

A religious person might indeed take offense to this depiction of meaning derived from faith.  Perhaps she would respond that the meaning is actually imbued in us through divine grace: we receive meaning as a gift from God.  

This is a tough one to critique.  To be honest, I don’t really know what it means.  It makes sense at a poetic level.  But how do we actually conceptualize the state of being imbued with meaning?  

I’ve got another argument from analogy that may resolve the tension.  As a child, your parents raised you to accept certain values and beliefs, to perform certain acts.  They gave these things to you, instilled them in you through repetition and a combination of gravitas and ethos, and then taught you how to live them out in the real world.  It’s not quite divine instillation, but it’ll work.

Now, I want you to think about those values.  Let’s say your parents taught you to love those around you, to be generous, to be compassionate to those worse off than you, and to practice the piano for an hour a day (they say ‘write about what you know’…).  Whatever comes to mind, meditate on the values you have, and why, exactly, they are valuable to you.  

I think you’ll find that the values you hold dear have worth because of your belief in them, because you enact them in your life, and because of the effects they create.  The source of the values and beliefs—the traditional process of parents educating children—has experiential worth, certainly.  But you will be hard-pressed to show that the values are good because your parents gave them to you.  They are good because of the worth we give them and because of the effects they engender, not because of where they came from.  Moreover, I believe an argument can be made that there is more value in self-volitional actions than in imposed beliefs (e.g. continuing to learn the piano because you love music, as opposed to mechanically hitting the keys to get your parents off your back).

Again, our religious interlocutor might suggest that this only holds true to the extent that the analogy holds up: since God is all-knowing and all-powerful, he creates the original value in a way that parents, who are merely messengers and teachers, do not.  

I do not find this criticism to bear much strength.  I am not talking about a metaphysical sense of meaning.  I am talking about experiential meaning.  There may indeed be a heaven and hell, or there may not.  So the values that exist out in the world may have independent meaning based on the fact that living by them will get you into heaven, while not doing so will damn you to hell.  But that doesn’t explain how a belief in God gives us meaning that we experience in this life.  And remember: we’re trying to explain how life without God can still have meaning, so experiential meaning is the only ground on which we can have any commonality.

This, of course, leads us to the main (and probably obvious) snag.  If meaning is self-motivated, why not just invest meaning in terrible but advantageous values?  Why not accept Objectivism and declare selfishness to be a virtue?  Why not take a reductionist approach to the problem and embrace full-blown relativism, where it’s okay if a society kills ugly children, because those are the values that that society has settled on (and hey, the kids are really ugly)?

A refutation of each of these philosophies will have to wait for another day.  What I’ve hoped to show is simply that there are several options available for arriving at meaning: religion, existentialism, humanism, relationships with Joe, etc.  We should not set a polarity between religion on the one hand as providing us with beautiful, pure meaning; and all of the other options presenting only a bleak landscape filled with nothing but relativistic, empty existences.

Rather, if meaning is self-referential (and therefore self-created), each of us has the freedom and responsibility to act in accordance with some perceived good.  This may be God.  This may be a belief in Kant’s Categorical Imperative (often conflated with the Golden Rule, though Kant insisted the two are distinct).  This may be another form of deontology or a refined rule consequentialism.  You know, if that’s your thing.

Positing self-imbued meaning does not mean that all bets are off and it’s every man for himself.  Rather, it places upon us a great freedom and responsibility to choose for ourselves the sources from which we receive meaning, the values we hold dear, and the nature by which we guide our relationships.  

To return to our analogies: recognizing that the feeling of esteem you get from your boyfriend (or God) is actually self-created doesn’t mean the quality of your partner doesn’t matter.  It doesn’t mean you should go date Chris Brown (or worship Cthulhu).  He still seems like a kind of crappy person, and the partner (or squid-like devourer of worlds) you choose to journey with is still immensely important for the type of esteem you’re likely to get.

Likewise, recognizing that the worth you get from values arises from living them yourself (and not from their source, full stop) doesn’t mean that your parents (or God) are suddenly worthless.  It just creates a greater responsibility for you, as a conscious, volitional being, to stop slamming the keyboard like an obnoxious toddler and seek value in practicing the piano on your own terms, even if your parents are making you.  In other words, don’t just go through the motions because you were told to.  And don’t be the kid who insists on playing drums.

If I’m right, we are the moral arbiters of our immediate vicinity (read: the observable universe).  Which is really scary and a huge responsibility, but also pretty flipping cool.  We can write our own melodies and learn at our own pace, but we’re working towards being able to play a song that is so beautiful our parents will feel like they got their money’s worth for all those years of lessons.  In other words, we can still find a heaven on earth, but we’re going to have to build it here, with our own values, our own meaning. 

So, to quote Ralph Fiennes in the latest (and awesome) installment of the James Bond franchise: 

Don’t cock it up.