Making an IMPACT against Civil Forfeiture Abuse

A couple nights ago, thirty people gathered in the sleek offices of a venture capital firm in Manhattan.  It wasn’t a board meeting, though.  It was an inaugural night two years in the making: the launch of IMPACT, a non-profit incubator and think-tank that has been the brainchild of Matthew and Michael Kopko and Nic Poulos.  The vision was straightforward enough: leverage our collective networks to bring together a group of people with the talents and resources to effect change in some area involving the public good.  A few months ago, Mike asked me to join in helping launch IMPACT.  I was glad to join the team.

As we planned the first event, it ended up taking on an unusual format for a non-profit.  Five people would pitch the group on an idea they were passionate about, and the group would vote on which pitch mattered most to them.  The winner would have a chance to convince each member of the IMPACT community to dedicate their time and resources to the cause—and, ideally, lay out a plan to accomplish the goal.  Think of it as a non-profit Shark Tank.

As the planning progressed, I decided I wanted to pitch an idea of my own.  So I submitted mine as one of the five pitches.  Before the event, Mike made a prediction.  “I think you are going to win,” he said to me.  “But I think you will struggle to turn your vision into an agenda of action items that people can act on.”

Mike was right on both counts.  The group voted for my pitch to combat civil forfeiture abuse in New York City, but I ultimately struggled to develop a concrete list of goals that would allow us to gain immediate traction.  While I am very good at communicating a vision and convincing people of it, working through the nitty-gritty of accomplishing that vision is a weakness of mine.

I am now in a position where I can leverage a highly competent and connected network, but I need to establish a plan to do so.  In keeping with the spirit of IMPACT’s crowdsourcing, democratic approach, I’d like to share my pitch with you.  If you have any interest in it, and think you can help, please contact me.  I think it has great potential and I would hate for it to founder because of my own shortcomings.  I could use your help.

I’d like to start with a story.  Jennifer Boatright and Ron Henderson were driving through Texas with their two children, on their way to buy a used car.  They were stopped by a police officer for “driving in the left lane for more than half a mile without passing.”  Subsequent to the traffic violation, their car was searched for drugs. 

No drugs were found, but the cash they had saved to buy the car was.  Because Boatright and Henderson were driving from Houston, “a known point of distribution of illegal narcotics,” to Linden, “a known place to receive illegal narcotics,” the officer believed the money was involved in the narcotics trade, and arrested the family.

The local DA told the couple that they could either face charges for money laundering and child endangerment – which would drive their children into foster care – or they could sign over the cash to the city, in which case they were free to go.  Which forces one to wonder: if the authorities truly believed the children were in danger, why would they release them to their parents for a few thousand dollars?

What happened here is known as ‘civil forfeiture’, originally a federal program to seize the assets of criminals, especially those involved in the drug trade.  This was useful because the owners of these valuables—and the perpetrators of the crimes—were often out of the country and therefore unavailable for prosecution. 

See, you do not need to be found guilty of a crime to be the victim of a legal civil forfeiture.  Something akin to ‘probable cause’ is a sufficient standard.  Which makes sense in its original intended use.  But as state and local authorities began adopting their own versions of the policy, its use has broadened and grown unregulated.

As a reporter for The New Yorker described, many of the judges, police officers, and attorneys she interviewed “expressed concern that state laws designed to go after high-flying crime lords are routinely targeting the workaday homes, cars, cash savings, and other belongings of innocent people who are never charged with a crime.”

The problem is ubiquitous.  In 2011, the state of Georgia acquired nearly $3 million in forfeitures, more than half of which came from items worth less than $650—in other words, not the estates of cartel lords.  An Oklahoma DA hired Desert Snow L.L.C., a private drug-interdiction company, to manage and acquire forfeitures.  Think of them as a domestic Blackwater.  Although they weren’t law-enforcement officers, they regularly received 25% of seizures.

A lawyer who specializes in these cases claims, “Forfeiture cases like these are almost impossible to fight. It’s the Guantánamo Bay of the legal system. One of the main problems…is that you’re not assigned a lawyer, it being a civil and not a criminal case.  Most people can’t afford lawyers, and that gives the government a tremendous advantage.”

At the local level, the NYPD also participates in civil forfeiture, arguably to an abusive extent.  Eighty-five percent of civil forfeiture cases in New York City never result in criminal charges.  Estimates by the Office of Management and Budget suggest that New York will raise $5.3 million from civil forfeiture in 2014.

So how do we fight the abuses of civil forfeiture?  I know we have several lawyers in the room, and I am sure that their legal expertise could be a useful guide.  But I would suggest that we not start our fight in the courtroom.  Civil forfeiture abuse has been a problem for a long time.  It has a slew of class-action lawsuits in its wake.  And it’s still around. 

Rather, I would recommend that we establish a fund and a committee to oversee that fund.  The purpose of the fund will be simple: those who believe they have been the victim of civil forfeiture abuse can plead their case.  If the committee decides an abuse has occurred, it can issue a grant at the value of the victim’s provable losses, or some percentage thereof.

The solution is effective on several levels.  It restores the lost assets to victims of abuse, but it also sends a clear message: the citizens of New York will not tolerate abuse by our public servants, and we will constructively take matters into our own hands to remedy those wrongs.  Imagine the light such a story could shine on the problem: a new generation of New York investment bankers, lawyers, entrepreneurs and venture capitalists combatting police corruption in our city by restoring the stolen property of our poorest and most vulnerable citizens.

That’s my pitch.  I would appreciate your feedback and support.

Know Thyself: The Importance of Self-Reflection

I have the ancient Greek translation of “know thyself” written in black ink over my heart.  I got the tattoo after surviving a yearlong bout with an autoimmune disease, because I had been, well, a cocky shit when I was younger, and I wanted the aphorism to be a reminder of how small and vulnerable I had felt—and could feel again with a moment of bad luck.  In other words, I hoped that a reminder of who I am and who I had been could protect who I might become.

According to myth, the original Greek adage had been inscribed upon the archway at the Temple of the Oracle at Delphi.  True to that religious overtone, the reminder took on a sacrosanct place in my subconscious whenever I found myself confronted by a dilemma.  Was I acting for the right reasons?  Had I made a similar mistake in the past?  Was I simply rationalizing my way to a suitable explanation or was my inner dialogue accurately representing my own motives?

Questions like these can spiral.  Socrates once said that “the unexamined life is not worth living”, but the over-examined life can be paralyzing.  Maybe it is fitting, then, that another maxim written on the archway at Delphi translates as “everything in moderation”.  Maybe I need another tattoo. 

Nevertheless, I am reminded about the value of self-examination nearly every time I tune in to the news.  When the Israeli government orders a military strike on Gaza, I wonder if anyone responsible asks themselves if they believe it is the right thing to do, or if, instead, they think it is necessary to preserve the safety and way of life of themselves and those they love.  This may seem a slight distinction, but it is an important one.  The latter is still a noble motive, but when we move away from moral rectitude and absolute certainties, we may realize that those actions that once seemed necessary are only the most obvious or appealing of a wider set of options. 

When Hamas or Al Qaeda demand the destruction of ways of life completely foreign to them because the other is the enemy, I am reminded that such xenophobia can only exist in the complete absence of empathy.  When ISIS destroys millennia-old artifacts amidst torture and attempted genocide, I am certain that the perpetrators lack anything approaching rational reflection.  This fanaticism may have been forged by intolerable conditions of societies laid waste in the wake of western hegemony, but the root cause is still an inability to put oneself in the shoes of the hated. 

I think America is a bit of a different beast.  Throughout much of its history, America has had a great national dialogue.  Nonetheless, we have napalmed villages in Vietnam and tortured enemy combatants despite pledges to the contrary.  I would suggest that the regrettable moments of our national history have occurred not because we have not reflected, but because our reflections have started with the premise that we are good and we rarely question this premise.  With such a presumption, the necessity of preserving the American status quo has often been conflated with a moral necessity to protect humanity—because America is the guiding light.  This invariably imbalances the moral arithmetic and can lead us astray.

A few Sundays ago, a friend texted me asking what I thought it meant to be a good person.  (I didn’t ask what had happened Saturday night to precipitate the question.)  I told him I thought there were three necessary and jointly sufficient conditions: a desire to be good, the ability to self-reflect, and the capacity to empathize with other people.

Crucially, I think the same criteria can guide a country.  If a people can care about their moral standing, can put themselves in the shoes of those who are outsiders, and can reflect without bias (or with as little bias as possible) on their own reasons for acting, they will be a good people.  While America’s assumption that it is good is a weakness, it also indicates this necessary desire to be good, which has marched us on a path of progress as we refine our methods and recognize our shortcomings.

Maintain a national identity, but not at the expense of marginalizing others.  Protect self-interest while being mindful of justice as fairness.  Strive to achieve greatness, but don’t subject the less fortunate to the shock wave of want left in your wake. 

All of these liberal ideals of national justice can be derived from the maxims of personal guidance found at Delphi.  It just took us a few thousand years to put them into our own words.

Empathizing with Evil: Learning from Elliot Rodger without Forgiving

Last week, I wrote an article on a conflict of my moral inclinations catalyzed by the UCSB shooting and Elliot Rodger.  I questioned how we balance various fundamental rights against the risks that allowing those rights might precipitate.

Elliot Rodger, his actions and the justification he believed he had for those actions, raise another question: how do we think about someone who does such terrible things?  This is a question both of morality and strategy.  Morality insofar as we should ask, “What is the right way to think of such a person?”  Strategy insofar as one believes that our conception of a person—the legacy that we allow him or her to have in the aftermath of such an act—is a crucial aspect of the ongoing trend of mass murders.  Fortunately, the morally right way to understand such a person is also the most strategically advantageous tack to take in diminishing the likelihood of future similar acts.

I have often championed empathy as the most admirable emotional practice.  However, empathizing with evil is too often confused with supporting it, and this conflation deserves clarification.

What Elliot Rodger did was a terrible thing.  He killed innocent people because he felt slighted by women—both specific women and the sex in the abstract.   Women, he believed, refused to have sex with him because they were attracted to thugs and idiots.  There are awful people who not only empathize with his pain, but also cheer his actions.  Men who say things like, “Media doesn’t acknowledge the majority of males’ contentment with current sexual dystopia… It’s all about HATING WOMEN.”  Elliot was not alone in his point of view.  He represents a microcosm of society that views life as fundamentally unfair because women are attracted to some men and not to others.

There is also a larger group of people, some of whom could reasonably be described as mainstream, that say they can understand his hurt and that, while it does not excuse his actions, it’s an unfortunate circumstance that he was in.  There is a very big problem with this—not with this attempt at understanding per se, but because the attempt at understanding is undertaken so lazily. 

It is unfortunate that someone should feel rejection, especially perpetually so.  But empathizing with this specific emotion should not be confused with feeling bad for Elliot, nor should it make us understand his actions.  Elliot’s actions did not come from rejection alone, but from a sense of rejection in combination with an extreme sense of entitlement: it was not just that Elliot felt women didn’t like him, it was that he felt he was being denied affection and intimacy that he was owed by women.

The unavoidable logical implication of this belief is that women wrong men when they are not attracted to them.  If every Elliot Rodger that exists feels wronged by women who deny them, then their collective argument is that women should be attracted to each and every man that desires them.  How can this be true unless one believes that every woman’s existence is justified because of her ability to please men?  This is objectification in the strongest and most despicable sense. 

As far as I can tell, Elliot’s objectification was exacerbated by a profound narcissism.  Not only are women hurting him by failing to please him (and therefore not fulfilling their role), but as a result of his pain, other people deserve to suffer.  The narcissism becomes all the more stark when we realize that Elliot also hated the men who succeeded sexually.  Suddenly his objectification of women is made clear for what it is: not an honest (if deranged) belief in their inferiority, but an act of mental contortion to dehumanize anyone who makes him feel bad about himself.  And they deserve that role because of how they hurt him.  And eventually they deserved to die.

Some might rightly call what I laid out above an act of (admittedly, amateur) psychoanalysis.  I would call it empathy.  I started from a point of commonality between Elliot and myself: I could understand his sense of hurt and rejection, because I have experienced it myself.  Then I tried to pick apart where his argument and actions deviated from any motivation I have experienced.  I wondered, “If we have both felt this hurt, why did he want to kill people and yet I have never felt that drive?”  I discovered he felt he was owed a blissful life.

We might start another thread of exploration to understand where his extreme narcissism came from.  I don’t know nearly enough about him to do that justice.  But anyone who tries to point out how Elliot’s actions were in some way understandable because they have felt similar rejection, or because they believe women are attracted to certain types of men, should be fully aware of exactly what they are agreeing to.  It is not just a sense of hurt that drives someone to do what Elliot did.  Stopping there is sloppy reasoning and downright dangerous.

Which brings me to the strategic advantage in this exercise of empathetic understanding.  Most people hear about an atrocity like what Elliot has done and call him a monster, and implicitly argue that anyone who tries to contend otherwise is fraternizing with the enemy.  This must be because a monster is an abomination, whereas a person exists among us and acts for reasons.  If Elliot was a monster, then his victims were killed by pure evil, like in the storybooks we read growing up.  If he is a person, then maybe his victims are in some way complicit.

I reject this dichotomy outright.  Striving to understand a person who commits terrible acts does not mitigate his responsibility for those actions.  Too often the media conflates these notions: “He played violent videogames, so those are the real culprit.”  We do not need to abandon personal responsibility in order to arrive at helpful lessons.  We are, all of us, little more than the combined influences of all our past experiences, but we are still the ones who are responsible for what we do.

I do not refuse to call someone like Elliot a monster because I want to protect his memory from harm.  I refuse to call him a monster because I believe that, if any good can come from such terrible incidents, it should be understanding what causes these things to happen, so that we can strive to prevent them in the future.  By empathizing with Elliot, I was able to dissect his ‘great manifesto’ into what it actually was: a deranged justification for extreme objectification rooted in narcissism.  It is the one weapon we have against those who would flock to Elliot’s banner, like the young man on a message board Elliot frequented, who said, “he would have had a boring […] life then died of cancer […] without ever leaving a mark […] he is famous 4 ever now.”

When condemned as a monster, Elliot becomes a martyr to those who would agree with him, to those who revel in feeling like the world just doesn’t understand, that some day we will see the truth.  When we try to understand him we can both strive to create a world that does not nurture beliefs like his and mitigate his martyrdom by revealing his grandiose arguments for what they really are.

And yet I must admit, researching Elliot and the community that supports him was not easy.  Both for my own sanity, and to impart hope in the face of this hatred, I share with you the words of soul and jazz poet Gil Scott-Heron—a shower for the soul after crawling through these moral sewers: “To give more than birth to me, but life to me […] God bless you mama, and thank you. […] My life has been guided by women, but because of them, I am a man.”

Mass Shootings and Terrorist Attacks

I was recently thinking about the latest tragedy to be added to the too-long list of mass killings in our nation’s history.  I had just finished reading this Washington Post article about the desperate plea of a grieving father, to a nation that seems to be indifferent to these repeated atrocities.  I shared in his outrage and, had I been present, would have joined in on the chant of “Not one more!”

I don’t much care for guns.  I think it represents a collective lunacy that a sizable portion of our nation’s populace thinks the right of citizens to buy assault weapons (without having to wait too long to take them home and shoot them) should outweigh even a single preventable human death.  I think the notion of protecting gun rights to safeguard our ability to overthrow a tyrannical government is little more than childish and embarrassing.  “Some of our nation’s people will live considerably shorter lives than they otherwise would (and many who care about them will have their lives ruined) because I think that there’s a chance we might fuck up this country enough that there won’t be any way to make it better besides killing a lot of people.”

I’m being glib to highlight just how little weight I gave the other side of this argument.  Then I began playing the philosopher and sought out any inconsistencies in my morality.

I have repeatedly argued against government programs that appear to ignore Constitutional restrictions and overstep sacrosanct boundaries of privacy and freedom in the name of guaranteeing our safety from a terrorist threat.  During these debates, I’ve often cited Benjamin Franklin, who said, “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”  It seems a truly paranoid fear that drives an increasingly securitized surveillance nation, an ongoing national hysteria that has allowed for the militarization of our police and intelligence forces.

I am forced to confront a conflict in my beliefs.  The very same argument that I find so repulsive in the instance of gun rights is the argument I would make in the case of protecting ourselves against terrorists: we should not give up necessary and inviolable freedoms because a psychotic few threaten the safety of a statistically minuscule portion of our citizenry.

Now, one might argue that, statistically speaking, one threat is more realistic than the other.   The terrorist threat is worldwide, and the United States government believes it has a mandate to combat terrorism across the globe.  According to the U.S. State Department, there were 6,771 terrorist attacks worldwide in 2012, resulting in 11,000 deaths and 21,600 injuries.  Given a global population of 7.1 billion in 2012, the likelihood of being killed or injured in a terrorist attack was .00046%.  In the same year, there were 16 mass shootings in the United States, with 151 deaths or injuries.  Given a 2012 US population of 313 million, the odds of a US citizen being killed or injured in a mass shooting were .000048%.  In other words, it was roughly 10 times more likely for someone (admittedly, worldwide) to be killed or injured in a terrorist attack than it was for a US citizen to be killed or injured in a mass shooting.
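For readers who want to check the arithmetic, the comparison above can be reproduced in a few lines of Python.  The figures are simply the ones quoted in this paragraph (State Department counts for 2012); note that both risk estimates lump deaths and injuries together, which is what makes the roughly 10x ratio apples-to-apples.

```python
# Sanity-checking the risk figures quoted above (2012 data from the text).
# Both comparisons count deaths AND injuries, per the original framing.

terror_casualties = 11_000 + 21_600       # worldwide terrorism deaths + injuries
world_pop = 7_100_000_000                 # global population, 2012
terror_risk = terror_casualties / world_pop

shooting_casualties = 151                 # US mass-shooting deaths + injuries
us_pop = 313_000_000                      # US population, 2012
shooting_risk = shooting_casualties / us_pop

print(f"Terrorism (worldwide): {terror_risk:.5%}")    # ~0.00046%
print(f"Mass shootings (US):   {shooting_risk:.6%}")  # ~0.000048%
print(f"Ratio: {terror_risk / shooting_risk:.1f}x")   # ~9.5x, i.e. roughly 10x
```

The same division applied to the ~30,000 annual US gun deaths cited later yields the .009% figure, which is where the real statistical weight of the gun-control argument lies.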

Maybe you think that it is not our responsibility to prevent terrorism worldwide, so we should only consider domestic terrorist attacks.  The numbers since 2000 don’t come out all that differently.  Maybe you think that crunching these numbers to distinguish the moral permissibility of these two cases is silly.  Maybe, but what we’re talking about in each instance isn’t all that different, so it can’t be some sort of consequentialist tradeoff between human rights and human lives that is doing the work.

Maybe it is the nature of the rights.  I do not see the value in having a gun, especially the sort of weapon that allows for indiscriminate killing and destruction.  I do see the value in ensuring that an overreaching government cannot know everything about its citizenry.  But isn’t that the same fear that drives Second Amendment enthusiasts?  That a government will become too powerful if certain safeguards are not maintained to prevent it?  In one, those safeguards are a Constitutional protection of privacy rights and freedoms of association, travel, and speech.  In the other, those safeguards are weapons designed to prevent a militarized tyranny.

I have to admit, I am at a loss to distinguish between the two in a way that doesn’t devolve into my own personal preference and cultural upbringing.  As bizarre as it sounds, I do not think an argument against gun rights can rest on the balancing of freedom against the safety of our citizens from mass killings (or else we are forced to accept the marginalization of liberties to ‘protect’ us from terrorists).  If we insist on that type of argument, it must rest on the ~30,000 gun deaths that happen in the United States every year (a .009% chance of dying from such an incident).  As shocking as a mass killing is, it cannot be the moral driver of our ban—it is only a psychological stimulus for debate.  But even arguing from gun deaths in the abstract devolves into a statistical balancing act.  Why is .0004% not a permissible threshold to marginalize certain liberties, but .009% is?  How likely does a threat have to be for it to warrant certain sacrifices?  How do we prioritize certain liberties over others?

I still have an instinctual moral compass that tells me a society that allows for guns (especially guns designed for highly efficient killing) is a misguided society.  And yet I am coming up short in finding a consistent argument for my beliefs.  Maybe consistency isn’t that important.  Maybe hypocrisy in morality is in some ways inescapable.  I’d rather not accept that, though.  As hard as it is to swallow, I think a ban on guns must come from a notion that guns are themselves a moral wrong, that the killing of another human being may never be justified.  This is a much harder argument to make.  And an argument for another day.

Free will is probably an illusion, but you should be really happy about that.

I’ve been thinking about free will and fate lately.  What can we control?  What can’t we control?  What mistakes are we destined to make?  Why do we repeat them?  It’s one of those maddening quagmires that people who find themselves habitually spinning their wheels should probably avoid. And yet here we are.

In most philosophical discussion on the subject, these questions fall under the category of ‘determinism’—in other words, what is determined?  Most introductions to determinism explain it in very mechanistic terms, using something like billiard balls: ball A is struck in such a way that it must hit ball B, which in turn must move in exactly the direction needed to hit ball C, and so forth with ball D, and so on.

The physical picture gets muddled when we introduce conscious beings: most notably, people.  It’s harder to see what determinism means for us.  If we grant that the mind is just the brain (the thoughts and moods we experience are neurons and chemical interactions), then there are, at some level, physical explanations for the entirety of phenomena commonly experienced as thought.  Subscribing to a theory of deterministic effects on conscious beings—let’s call it psychological determinism—means that every interaction, stimulus, and influence a person experiences serves to structure her brain in such a way that she will act in a specific way given a certain set of circumstances.  Drawn to its strongest conclusions, this sort of determinism suggests that we have no real control over our actions and the choices we make.  In other words, that we have no free will.  Our brain is simply the end result of (truly) countless individual events tracing all the way back to whatever origin led to our existence.  Fate.

This is probably concerning.  It forces us to question our understanding of what an individual is, whether our choices are illusions, and whether we should be responsible for the actions we make.  Neuroscientist and philosopher Sam Harris published a book on the topic, suitably titled Free Will, in which he combines physiological revelations (such as EEG studies showing that the brain initiates actions as much as 300 milliseconds before we become consciously aware of making the decision) with philosophical inference.  As Harris argues, “Free will is an illusion. Our wills are simply not of our own making. Thoughts and intentions emerge from background causes of which we are unaware and over which we exert no conscious control.”

In the New York Times review of the book, Daniel Menaker suggests, “However correct Harris’s position may be — and I believe that his basic thesis must indeed be correct — it seems to me a sadder truth than he wants to realize.”  Menaker is concerned with what these revelations mean for the notion of humanness: what is character or bravery if I am not the origin of my actions?  What does that ‘I’ even refer to in the context of such a revelation?  This is not an uncommon concern.

I’d like to look at things through a different lens, though.  If each of us is not responsible for decisions (in the way we would commonly conceive of responsibility) because they were predetermined based on our genetics and our previous interactions that served to forge our personality, then the totality of human choices and actions is based on causal chains, and these chains share a common causal ancestor.  This origin point makes for an incredibly strong catalyst towards a communitarian approach to society, to viewing ourselves as a part of ‘humanity’ in the strongest possible sense.

Granted, our understanding of what it means to be an ‘individual’ may have to change a bit, because each identity is entirely crafted by a combination of genetics and the experiences shaped by the influences of those around us.  But I see the revelation as uplifting, not as a sad truth.  We are still unique.  There is no one with your same genetic makeup that has experienced the same chain of influences.  The possibility of what you bring to the table is unknowable in such a complex system, so the mystery of life is as real as ever.

Yet in accepting a slight shift in our understanding of what we are, this very same understanding reinvigorates our sense of commonality in a beautiful way that should absolutely not be seen as harming the value of the ‘self’.  Just as individual cells have their own purpose, they become necessary parts of something far more complex when viewed in the context of a human being.  So too should we view persons in the context of humanity.

And this answers so many ethical questions.  It explains why we have obligations to other members of our society for reasons other than ‘immutable truths’ or, for example, a merely resigned preference over the state of nature.  It gives meaning to the notion of humanity, because it describes us as beholden to a true commonality in the same way most major religions do.  It should, I think, inspire universal gratitude.  We could not be who we are without the contributions of those before us and around us.  These revelations, viewed together, guide us to an incredibly comforting discovery: the noblest truths for how we should act come from what we are.

Good things don’t come to those who wait. You have to claim them.

My friend Anastasia recently shared an article with me on “Why you shouldn’t settle in your 20s”, with the addendum that “this is the reason everyone hates our generation.”  The central claim of the article, as far as I can gather, is that there is a significant pressure placed on people in their twenties (especially young women) to settle down and find a partner; but this paradigm is outdated due to societal changes and the advances of women. 

That may be a reasonable argument, but the path the author traverses from there meanders so far afield from this understandable starting point that it begins to feel like her initial claim is little more than a college-educated rationalization for self-absorption.  Which is, I think, what Anastasia was getting at.  Plus, as my friend Carolina pointed out, the author goes by the name of ‘itskalesbitches’, which isn’t really a good sign.

The author—Kaleigh—argues that we cannot find the one we love without knowing ourselves.  I agree one-hundred percent.  About a year and a half ago I wrote in this blog that “to say ‘I love you’ is to make a declaration of one’s own identity” because the nature of love is a recognition of another as possessing values that reflect our own sense of self.

Which is why I found the jump Kaleigh makes from there so confusing.  Get blitzed, she suggests, even if you have work; sleep with the hot guy or girl across the bar, even if you don’t know his or her name.  “Why date one person when you could date five?” she asks.  I assume Kaleigh intends these activities to provide us insight in our path to self-discovery: we need to make the most of life in our youth, have the most varied experiences, in order to figure out who we are.

I have several problems with this assertion and its tenuous relationship to Kaleigh’s main claim.  First, it assumes that self-exploration is best achieved in a very specific way: through a confined set of experience types that conveniently supervene upon what attractive people in their twenties have been doing for a long time.  She portrays these experiences as offering variety, in presumed opposition to the confining nature of a single partner.  Sure, literally speaking, in terms of numbers of partners there is more variety.  But Kaleigh misses two crucial points.

First, Kaleigh assumes that this breadth of experience offers more opportunity for self-exploration than traditional relationships.  I can’t really understand how there is much opportunity for variety in partying with the fairly homogenous types of people that any set of night clubs in Los Angeles, Vegas or New York might offer.  If it’s really about discovering new things about yourself by understanding a world you don’t know, how does spending weekend after weekend at the Gansevoort or SkyBar accomplish this?

For the sake of argument, let’s grant that there is variety here, even within a group of twenty-something socio-economic cousins.  I still cannot see how these superficial interactions provide insight into who we are or who we should become.  I have had a handful of formative experiences in my life and, while almost all of them were catalyzed by other people, I honestly cannot think of a single one that was brought about by a superficial relationship.  I’ve never had a one-night stand that made me think at all differently about the way I view the world, or made me understand some flaw in myself, or something I was proud of.  They have never made me grow as a person. 

In fact, I’d say the collection of hookups in my life has done little more than stunt my self-exploration by preoccupying me with meaningless ego-pats at the expense of true discovery.  In contrast, I’ve had relationships (both romantic and platonic) that have fundamentally altered the way in which I think about my life and the world around me; because they weren’t about ego, they were about mutual discovery and exploration through challenge.  But that only ever came after my guard was down, after I had opened myself to the possibility that this other person could teach me something valuable about myself.

This notion of hookups-and-ego brings us to the second problem with Kaleigh’s argument: the mindset it will invariably engender in those who follow it cannot help but reinforce a self-absorption that has become the scarlet letter of our generation.  Kale herself argues “our 20s are meant to be our selfish years.”  What?  Says who?  When you conceive of yourself as entitled to live selfishly, your ability to live empathetically (surely a key ingredient in having a successful relationship) may very well atrophy.  When your romantic explorations become more about you exploring you than about your partner, they cease to be romantic at all. 

This mindset relegates another person to an accomplishment, or even just ‘an experience worth having’, which eviscerates human capacity for meaningful interaction, because everything loops back to a focus on the self.  If this is the experiential playground meant to help us discover who we are and train us for adulthood, then we cannot help but be creating adults whose capacity to prioritize others (like, for instance, a family) has been amputated by being habitually relegated to the background.

Assuming this isn’t just coming from an entitled girl looking to feel okay about that entitlement, I’d like to explore what other catalysts might have driven this philosophy.  (And I don’t mean that flippantly.  I really hope there’s more to it.)

She asks us to “notice how the divorce rate in our parent’s generation is the highest it has ever been,” and I’d suggest that this is the real source of her apparent apathy to romance in the present tense: it is scary, doomed to failure, and the potential source of a great deal of heartache.  Much better to put it off until we are real adults.  In other words, it isn’t actually apathy—and it certainly isn’t a new form of romantic idealism: it’s fear.

But it’s a largely baseless fear.  While it’s true that, by and large, the divorce rate is higher for men and women who marry at a younger age than for those who marry later, I’m always floored that everyone assumes a necessary causal link: that marrying later somehow produces stronger marriages.  There is absolutely no reason this divergence in rates could not instead be caused by a greater willingness among twenty-somethings to break off a marriage that is broken, since they have so much more time to start over. 

Why did the divorce rate increase so much over the last fifty or one hundred years?  Because people were getting married younger?  No, they had always been getting married at a young age.  The divorce rate increased because it became more socially acceptable to get divorced, whereas in the past the status quo had been to endure a failed marriage for social reasons.  But here’s the crucial fact that Kaleigh neglects in her ‘statistical analysis’: the divorce rate for college-educated women who have their own source of income and marry at age twenty-five is less than twenty percent.  That’s the very group Kaleigh is talking to, and she’s totally misleading them about the reality of the statistics.

While her data about divorce rates and the causal assumptions she makes regarding that data might be off, Kaleigh is right about one thing:  we grew up in a generation whose parents didn’t endure broken marriages, and as a result we have seen that love is not a fairytale, and often ends in heartache.  Even those whose parents stayed together cannot help but be affected by osmotic pressures from our peers’ collective sense of hurt.  But if this knowledge has indeed bred a fear of commitment, then that fear has caused us to tack too far in the opposite direction.  This brings me to by far my biggest problem with Kale’s argument: it ignores the possibility that such a philosophy of life will have negative consequences on our ability to recognize the very opportunities it purports to prioritize.

If our mindset is that we’re too young to meet ‘the one’, then we risk ignoring an ideal opportunity when it comes around because ‘now just isn’t the right time’.  A fear of missing out on this ‘experiential’ path prescribed to us in our twenties could end up making us miss out on completely different, amazing opportunities.  Clinical Psychologist Meg Jay, who specializes in psychological trends of modern twenty-somethings, argues in her book The Defining Decade that our 20s are the most important time in our lives for planning careers and forging important relationships.  Jay claims that the conceit that ‘thirty is the new twenty’ has trivialized what is actually the most transformative period by “robbing us of our urgency”.  People who think they have a decade to do whatever they want will essentially procrastinate, and they won’t make crucial advances in establishing footholds in their chosen career paths, or in finding the right partner.

Of the hundreds of twenty-somethings Jay has worked with, she reports that, time after time, those patients who followed something akin to Kaleigh’s prescribed life path invariably felt they had wasted a great deal of their lives, that they were nowhere near where they wanted to be in their careers, and that they had simply chosen whomever they happened to be with when all of their peers started getting married.  This reality paints a far less rosy portrait than what Kaleigh seems to think will happen to her.

None of this is to suggest we need to settle down right away, at all.  I agree with Kaleigh: we should absolutely refrain from living our lives in accordance with societal pressures—either those pressures that tell us to marry at a young age, or those that tell us we cannot be ready for a healthy marriage in our youth.  What I am saying is we should not mark off a time of our life as somehow not counting.  Moreover, we should recognize that true exploration cannot come from setting out on a path we have defined for ourselves: new discoveries don’t come from following your five-year plan.  Be open to opportunities when they make themselves available.  Even if it isn’t what everyone tells you to do.

On Our Generation, Ke$ha, and the French Existentialists (I Can’t Believe I’m Typing This)

I was listening to Ke$ha the other day (yeah, I’m not embarrassed—want to fight about it?), and it got me thinking about something that I’ve been puzzling over for a while.  That’s right.  Philosophical treatise on quandaries posed by Ke$ha.  Blasting off.

While superficially preoccupied with good times at the club, Ke$ha’s lyrics all have a peculiarly common conceit, one that speaks to my generation at a surprisingly deep level.  They highlight the way we have chosen to address a recurring existential question.

Writing in the midst of World War II, French philosopher Albert Camus claimed that philosophy must concern itself with only one question.  He wrote, “There is but one truly serious philosophical problem, and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.”  Camus was an existentialist; that is, he was concerned with how the individual should view life and find meaning in a world that has no intrinsic purpose.  This was the ‘absurd condition’.

So, what the hell does Ke$ha have to do with this? (And, for the record, I get bitter and resentful every time I find myself begrudgingly typing that goddamn dollar sign into her name.)  Her lyrics are emblematic in voicing my generation’s answer to Camus’ existential worry.  Yes, I’m serious.

Let’s take a look at some of her lines.  In “We R Who We R” (Jesus Christ, Ke$ha, I’m trying to maintain some credibility here…can’t we use real words?), she sings, “Tonight we’re going hard, just like the world is ours.”  In “Die Young”, she autotunetastically suggests that we “make the most of the night, like we’re gonna die young.”  In “C’mon”, she spits mad fire with (sorry, I ran out of ways to say ‘sing’), “I wanna stay up all night.  I wanna just screw around.  I don’t wanna think about what’s gonna be after this.  I wanna just live right now.”

There’s an ambivalence here, a carefree “I don’t give a shit, I just want to party” overtone.  Of course, Ke$ha isn’t alone.  Miley is there with her (“It’s our party, we can do what we want”), alongside countless other pop stars.  A lot of people will tell you that pop and club singers write from this point of view because it speaks to their target audience: teens and young adults who want to feel rebellious against their parents and the establishment.  That is such a goddamn copout.

These lyrics don’t just contain apathy to responsibility, they question the prospect that there is a meaning in life greater than enjoying the moment; there is a desperate abandon to them that represents our own desperation.  The generations of the 50s and early 60s struggled through their absurd condition via suburbia.  The late 60s and 70s, in response to Vietnam and the Cold War, sought meaning in ‘free love’—but this freedom in love was purposeful, insofar as it represented a powerful declaration of identity, not apathy.  Post 9/11, post-2008 market crash, our generation has dealt with its existential funk by, well, not dealing with it. 

Our absurd condition is real, visceral, and yet largely ignored.  On the one hand, we equate living life to the fullest with ‘going hard’ and ‘screwing around’, while simultaneously admitting that we ‘don’t wanna think about what’s gonna be after this’ and, as Miley reminds us, that ‘we can’t stop’.  Of course we can’t, because what would be left?

Don’t believe me?  Let’s look at one final (admittedly long) lyric from Ke$ha’s song “Crazy Kids”: “This is all we’ve got and then it’s gone (you call us the crazy ones).  But we gonna keep on dancing ‘till the dawn.  ‘Cause you know the party never ends, and tomorrow we gonna do it again.  We the ones that play hard, that live hard, that love hard, we light up the dawn.” 

Ke$ha places herself in conflict with those who ‘call us the crazy ones’—presumably those who would suggest that staying out late, blacking out and hooking up with random strangers does not a meaningful life make.  But even in so doing, she acknowledges that it is only through necessity that we find ourselves habitually chasing these good times: it’s all we’ve got, and then it’s gone.

So what’s the problem?  In a meaningless world, why not cling to what good times we can find?  All memories are fleeting and end in death anyway (sorry guys, just channeling the French here), so why not make the most of it?

Well, that’s the problem, actually: the implicit claim that this is ‘making the most of it’.  Ke$ha calls it ‘living hard’, and she’s far from the only modern pop star to use the term to describe this sentiment.  Nor is this cultural conceit present only in music.  It pervades our culture, especially for those of us in our teens and twenties.  We brag about how drunk we got at last night’s party—I certainly texted my friend to tell him I had had over twenty drinks on New Year’s Eve (sorry, mom).  We post pictures of our late nights and tell our friends about our random hookups.  We feel like we are somehow missing out if we are not part of this culture.  We feel bad if we had a quiet night in and see all of our friends posting raging pictures on Instagram. 

But I don’t think this feeling is just about being cool.  I think, in some deep, subconscious recess, it’s about whether or not this is a good answer to Camus’ existential question.  Anyone who feels bad when they secretly wonder why they don’t love raging as much as their friends do is, I think, asking themselves what is wrong with them that they aren’t making the most of their youth.  But is ‘living hard’ making the most of life?  I think the answer is no, and that’s the problem.

In The Myth of Sisyphus, Camus retells an ancient Greek story about a man who is punished by the gods with eternal labor: he will forever be forced to push a giant boulder up a mountain, only to have it roll back down again.  For Camus, Sisyphus is the absurd hero: he is representative of all men in that he must labor in full knowledge that his work is meaningless.  Sisyphus’ tragic fate is that he knows he will have to start all over again; for the rest of us, it is the knowledge that daily toil will only end in death anyway.  So why struggle?  Why not just live hard?

Camus’ answer is, I think, the right one.  No matter what fate the gods may place on him, Sisyphus is still the master of the way in which he endures his struggle.  In the end, Camus concludes, “The struggle itself toward the heights is enough to fill a man’s heart. One must imagine Sisyphus happy.”  We find joy in life because the struggle of life is itself meaningful and, at times, joyous.  But this is not the same as living hard.  The struggle of life is crucial, and living hard is—as all of the lyrics admit—an escape from the struggle, a denial of the struggle’s meaning and beauty.

What is the struggle?  Putting oneself out there.  In friendships, in careers, in loves.  Being willing to try and fail, and to get up and try again.  There can be no risk (and hence no reward) with a morality that professes that “only God can judge ya, so forget the haters”.  The pleasures and relationships such a mentality breeds are necessarily fleeting.  But they also do not allow for disappointment, which is an attractive siren song.  There is no meaningful sense of rejection in a failed hookup attempt; no fear of loss from one denial when a sea of attractive, anonymous possibilities presents ample opportunity.  Besides, he was just a hater.  Besides, she was just a slut.  But even if there is less to lose, there is certainly less to gain as well.

Which is not to say we have to give up drinking, or smoking, or reckless abandon.  Occasional acts of self-destruction can provide a helpful sense of freedom.  But this is the freedom of the suicide, as I like to think Camus would call it.  It is an escape from the existential struggle of humanity, not a meaningful confrontation with it.

We should not strive for ‘living hard’, and nobody who fails to see its appeal should feel bad about that.  It is a distraction from a life that can, at times, be too much.  And that’s totally okay.  We need breaks.  Sisyphus’ walk back down the mountain, his brief respite, is necessary too.  But let’s stop thinking it’s what we should be doing to have a full, young life.

The Problem is Choice

A few exceedingly kind people have asked to read my undergraduate honors thesis that I recently completed.  While I’m not exactly sure why someone would put himself through reading some admittedly dry philosophy, I’d certainly be happy knowing someone read the damn thing without being on my committee.  So, here it is.

Fuck Boston?

I originally named my blog “Reason and the Beast” as indicative of my desire to bridge a gap I perceived between academic philosophy and… everything else.  But today I want to employ ‘the beast’ part of the title as an excuse to go on a bit of a rant and, well, unleash the beast.  I’m sitting on the train right now, and I’ve come across an article on Gawker. 

When I first opened Hamilton Nolan’s article ‘Fuck Boston’, I was anticipating an ironic mockery of a great city, that maybe lambasted us for yet another sports win, but ultimately arrived at a cute little cathartic admission of brotherhood between rival cities (as far as I can tell, Nolan lives in New York).  Even as I got further in—to the “Fuck your undeserved underdog attitude” and “Fuck your tendency to claim all of Irish immigrant culture as your own” bits—I was hoping, hoping, that the article was going to take a clever turn, a wink and a nod between a writer and his readership. 

If Nolan ever meant to get there, his ride must have gotten sideswiped on I-90 by a Masshole or two, because as far as I can tell, the article finished burning in a ditch, covered in petrol.  And to be clear, I mean that figuratively, in the sort of way that implies, “You’re a bad, uncreative writer, a sorry excuse for a journalist and the sort of comedian I expect to see performing at an open mic in the Southborough Denny’s on a Thursday afternoon.”  I hope that came across.  Writing can be so difficult.  Nolan certainly knows what I’m talking about.

I used to be understated about my love for Boston, just silently enjoyed the sparkling Charles on a sunny autumn day; walked Newbury Street and didn’t make a single smug comment about the quaint and eclectic collection of shops without the breakneck hustle of Fifth Avenue.  But you know what?  I’m pissed now.  So I’m going to channel that aggression into the really angry love this city is famous for.  And since you insisted on making a comment about our accents (and because I’ve always wanted to pull a Good Will Hunting) I’m going to write the next bit in a Boston accent.

Why do I fahkin’ love Bahston?  Fah ev’ry reasahn ya—yeah, okay, this was a bad idea.  Just do me a favor and read the next bit in a Boston accent.

I love Boston for every reason you hate it.  I love it because the weather makes no fucking sense; because we have blizzards in April and I occasionally have to wear a t-shirt in November.  I love it because we still think we’re the underdogs after winning three Super Bowl titles, three World Series Championships, and a Stanley Cup, all in the last ten years.  I’d mention the Celtics championship win, but that almost seems silly for a team that has seventeen under their belts.  I love Boston because of our confusing mixture of intellectualism and boisterousness. 

I love Boston because we are making unparalleled strides in scientific research, engineering and medicine; because our absurd number of incredible hospitals is a beacon of hope for so many sick people.  I love it because I’m proud of the fact that so many of the world’s best ideas have come and continue to come from this little city of 600,000 people.  From breakthroughs in embryonic stem cell research, to the social network that dominates way too much of our time.  From Robert Frost’s poetry to Matt Damon’s shaky-camera action movies to John Rawls’ Theory of Justice.  (Utah, you can keep Mitt.)

I love Boston because we understand that freedom-fighting and terrorism are not the same thing, and not just because throwing British people’s tea in the Boston Harbor is pretty damn ironic.  We were a city that started a revolution, that sparked a fire, which, for the first time, burned bright the truth that people are inviolable creatures whose innate characteristics demand certain rights and liberties.  And we did it without using a guillotine.  So, no, I’m not going to give it up.  Two-hundred years later, we are a city that came together in the face of inhuman anger and a mutated, xenophobic idealism.

Fuck Boston?  You, good sir, are an asshole.  (I’m italicizing that, dear reader, because I really want you to lean into it.  Really feel the force of it.  The guy who stole my slice of Nochs a few nights ago was an asshole; the ignorant prick who jumps on the bandwagon of irrational bitterness for a city I love is an asshole.  To give a bit more context, Bashar al-Assad—the Syrian dictator who allegedly used Sarin gas on his own rebelling populace—is an asshole.  Style is a crucial part of getting your point across.)

As for the people who tweeted about how the next Boston bombing should be at Fenway, or how we only won the World Series because of the Marathon Bombings (you know, those explosions of molten shrapnel and flesh-searing heat that indiscriminately injured 264 people, eviscerating limbs, devastating families, and ending lives…those things): I just can’t.  I can’t respond because I can’t fathom.  Nolan’s got an endearing stupidity going for him, so I can have fun with that.  But this…

Instead, I’m going to quote from the speech Harvard president Drew Faust gave at my graduation last spring.  Describing the incredible reactions of bystanders at the Boston Marathon finish line, Faust said:

Amid the calamity, there appeared streams of people running toward the chaos, toward the explosions. The first responders — police, firefighters, the National Guard; the raft of doctors, nurses, and EMTs; the trauma surgeon who had just completed the Marathon and “rushed in” by heading straight on to the operating room at MGH. The volunteers, the bystanders — women, men, young and old — running toward the unknown, risking their own safety to see if they could help. […]

Not everyone is prepared to run toward an explosion. But each of you is exquisitely suited, and urgently needed, for something. […]

Go where you are needed. Run toward life.

For all the things I’ve described about Boston, I think this is the part of this city that makes me proudest.  Thankfully, there isn’t always a senseless catastrophe that requires these beautiful acts of heroism and sacrifice, but Boston has always been a city running toward.  From our sports fans to our researchers to our drivers with an oddly urgent desire to get wherever the hell they’re going. 

Maybe it’s this determination, this enthusiasm for life, that pisses everyone off so much.  I’m okay with that.  Boston, keep running toward.

Oh, and if you see him, tell Hamilton Nolan I said, “Fuck you too.”

An Ignorant Troll?

Ann Coulter recently did an AMA on Reddit where she was generally pretty offensive, dodged most of the questions of any substance, or otherwise just touted her own greatness.  As a result, there wasn’t much to comment on from a philosophical/logical perspective, but there was one question and answer that really demonstrated (for me, anyway) the degree to which it’s problematic when people either (a) distort the truth to achieve a vision of reality in keeping with their political ideology, or (b) talk authoritatively when they have no fucking clue what they’re talking about.  I think that’s like the second time I’ve sworn on this blog, but what can I say, Ann, you bring out the worst in me.

Okay, so here’s the passage in question:

Do you believe in the separation of Church and State? If not, how can you determine which religion is the correct basis for laws?


Are you Ed McMahon trying to pitch me a softball? it’s not only not “explicitly” there, it’s not “implicitly” there either. Lots of states had established religions during after the passage of the 1st amt, which says “CONGRESS shall make no law respecting an establishment of religion.” I.e. congress could neither establish a religion, nor interfere with the states doing so. Read it again (or I should say, for the first time.)

So, Ann’s version of history is sort of right, in the way that you’re sort of telling the truth when you tell your teacher you had to hand in your paper late because your grandfather died… but neglect to mention he bled out storming the beaches of Normandy in 1944.  Truths: (1) There were indeed states that had religious establishments, official to varying degrees, when the Bill of Rights was passed. (2) The text she listed is an accurate representation of the relevant portion of the 1st Amendment.  (3) The 1st Amendment, upon ratification of the Bill of Rights, applied only to the federal government.

Okay, that’s all well and good, but logical arguments are not sound unless their premises are true and complete.  See, Ann sort of glossed over the rest of US history and Constitutional law between 1791 and 2013.  There was this thing called the Civil War and the passage of the 13th and 14th Amendments.  The 13th Amendment barred slavery, but when it became clear that most of the southern states were going to make life as hard as possible on the newly freed black population, the 14th Amendment became necessary.  Here’s what the relevant part of the 14th Amendment says:

No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.

There’s a lot of history to the 14th Amendment’s jurisprudence in US courts and, since I’m not yet a lawyer, I’m not going to speak authoritatively on subjects in which I don’t have expertise (perhaps Ann should follow my lead on this).  But the highlights are fairly straightforward: the phrase “nor shall any State deprive any person of life, liberty, or property without due process of law” has come to be known as the Due Process clause.  This applies to the states, not the federal government, and there is a strong history of Supreme Court cases that have interpreted the Due Process clause of the 14th Amendment to incorporate the fundamental rights of (most of) the Bill of Rights to bind state governments as well as the federal government. 

And this isn’t, like, novel or particularly academic; it’s kind of straightforward.  It’s why state and local police have to abide by the 4th Amendment’s protection against unreasonable search and seizure.  It’s why the Supreme Court is able to overturn what they deem to be unreasonable firearm regulations enacted by state legislatures.  It’s out there and part of society.

Ever wonder where the phrase ‘separation of church and state’ came from?  It’s not in the 1st Amendment.  It’s actually in the opinion of Everson v. Board of Education, the Supreme Court case decided in 1947 that incorporated the 1st Amendment’s ‘establishment clause’ (the bit about religion) to apply to the states as a result of the 14th Amendment.  Now, Ann can feel free to disagree with this.  I’m sure a large part of the country does, and that’s their prerogative.  But as it currently stands, Everson is still good law in the United States, and it most certainly happened (sixty. years. ago.).  So either learn about it if you’re going to be a condescending prat, or stop twisting history to fit your message. 

So, I don’t know.  Is Ann ignorant or just a troll?  If she really didn’t know this stuff, I hope she has a good researcher for her books.  Even if Ann has never taken an intro to constitutional law class (which would be odd for a political commentator and ‘political theory’ author), it would take about 10 minutes on Wikipedia to find it all. 

But then again, I probably just have a liberal bias.

Healthcare Hypocrisy

Throughout the lead-up to and duration of the government shutdown, I’ve been thinking about the motivations of those whom I believe to be responsible for the debacle (e.g. Freedomworks and similar groups, who published a Blueprint to Defund Obamacare, the talking points of which comprise many of the sound bites you’re likely to hear from the vocal Republicans and Tea Party Patriots who voted for the shutdown).  I thought I’d briefly describe what I view their stated philosophy to be, and why it (i) should be unappealing to their constituents and (ii) demonstrates internal conflict in their normative framework (if you’re feeling like this is just a particularly flowery way of calling them hypocrites, well, you’re not wrong).  I’m not going to get into the practical stuff—the more cynical side of why this is really happening—I just want to poke holes in the moral claims being made, because it’s fun…and simultaneously sad that nobody seems to want to talk about it on the national stage.

The simplest way to state the far right’s stated goal is this: we must shrink the size of government because large government interferes with liberty.  The standard response from the left has usually been that the right wants large government in certain areas (e.g. military spending, national surveillance, abortion bans, gay marriage and substance control) that map onto their own moral views about what is good, they just want government to stay out of the way in other areas (e.g. environmental protection, banking regulation, mandated healthcare coverage, and gun control).  It seems haphazard, one might suggest, to claim that moral legislation is justified on the grounds of ‘sanctity of life’ in the case of abortion, but preserving the inherent value of human life by ensuring all people have access to care when they get sick is an overreach of government authority.  It seems convenient, one might suggest, to claim that the regulations which would be most expensive to businesses should be the same instances in which the government has overstepped its bounds (e.g. environmental protection, banking regulation, and mandated healthcare coverage).

In order to quash the counterarguments from the left, the right has championed the moral value of personal responsibility: you want to be free not just because it feels good, but because it’s the righteous man’s burden to be responsible for his actions.  This is a great way to keep a hold of constituents who start to question the reality of the dream they’ve been fighting for—any feeling one might have that this random application of ‘liberty’ is not all it’s cracked up to be is chalked up to an embarrassing weakness on your part; if you feel like you’re being taken advantage of, it’s only because you want a handout that doesn’t belong to you; if you work hard, you can achieve the American Dream.  As Steinbeck said, “Socialism never took root in America because the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires.”

So what’s the problem here?  Isn’t there something to be said for personal responsibility?  Absolutely.  It’s not that personal responsibility doesn’t have value, it’s that the conservative right has falsely portrayed personal responsibility as existing in necessary tension with empathy.  One particularly hilarious example of this was Fox News anchor Megyn Kelly’s 2011 tirade about the value of maternity leave.  As Jon Stewart so eloquently put it, “I just had a baby and found out [government-mandated] maternity leave strengthens society.  But since I still have a job, unemployment benefits are clearly socialism.”  One begins to wonder if ‘personal responsibility’ isn’t just a more palatable way of saying, “I got mine.”

Here’s the thing, though: you can believe that liberty has value, that people should be responsible for their own destiny, and still believe that a state is a community that benefits from mutual participation and protection.  There’s a really great example of people who live out this belief every day, and it has almost universal American appeal: this is why we have a military.  The US armed forces are a group of people who put their lives at risk to ensure that other people for whom they would otherwise not be responsible can live safely.  A person can be responsible for the safety of his fellow countrymen and still believe in the value of personal responsibility.  Sometimes a plane crashes into a center of commerce, and it’s our responsibility as a nation to come together, help those who were immediately affected by the tragedy, and do everything we possibly can to make sure it never happens again.

But the vast majority of the time it’s not a plane piloted by suicide bombers, it’s an unexpected cancer or a diagnosis of heart disease, followed closely by unemployment.  If someone can give me a good reason why that’s any different, I’ll buy them a drink.

With God on Our Side

Two days after the tornado that ravaged Moore, Oklahoma, had dissipated, the traditional debates had already returned to the forefront.  The liberal message was one clamoring for further disaster relief and raising concerns about climate change, while conservative pundits focused on the tragedy, trying to avoid the pageantry of politics on this landmine of a topic.  Ignoring this wise advice, Senator Jim Inhofe, a Republican from Oklahoma, suggested that federal tornado relief was not the same as federal hurricane relief—which he had opposed in the aftermath of Hurricane Sandy, supposedly because of the pork included in the bill.

Another perennial thread unspooled in Facebook posts, blogs and op-eds (even though most of the major media outlets recognized it for the quagmire that it is): the Oklahoma tornado, which killed 24, including 9 children, was part of God’s plan.  Just as Hurricane Sandy had been before it.  Just as the 2011 earthquake that rocked Japan and destroyed the Fukushima nuclear plant, killing nearly 16,000, had been before that.  Just as the 2004 Indian Ocean tsunami, which killed over 230,000 people, had been before that.

In one particularly misguided debate, an anonymous prophet clarified his position—that it was okay to assist those who had survived the tornado, even if it was part of God’s plan—by suggesting that “the ones he wanted to die are dead”.

The comment made my stomach turn, and it reminded me of one of my favorite songs—“With God on Our Side” by Bob Dylan.  In this seven-minute, haunting folk song, Dylan describes growing up in 1950s America, where he was taught ‘that the land that he lived in had God on its side’.  Through several verses, he chronicles an array of regrettable events in American history: from the slaughter of Native Americans during colonization, to the American Civil War, to World War II, the invention of chemical and nuclear weapons, and Vietnam.  In each snapshot, Dylan explains a recurring sentiment, which he best summarized while describing the confusion he felt learning about World War I as a child: “The reason for fighting, I never did get—but I learned to accept it, accept it with pride; for you don’t count the dead when God’s on your side.”

After I listened to Dylan and cooled off a bit, convinced again that there are reasonable and beautiful ideas in the world, my inner philosopher took over.  I started thinking about the logic behind the idea that God has a master plan for all of humankind, and that this master plan can involve the suffering of those who might otherwise not deserve such treatment.

Giving up oneself or offering another for the purpose of a greater good, usually intending some sort of harm to occur, is generally termed a ‘sacrifice’.  Jesus is doubtless the most famous and universally admired sacrifice (except by Ayn Rand).

But we must keep in mind that, at some level, he sacrificed himself—he was in on it, as it were.  Offering up another as a sacrifice without consulting them is generally seen as uncool and/or barbaric.  So I find myself irked when someone tells me a loved one or an innocent child was called back to heaven as part of God’s plan, because he needed another angel.  (That, and the idea that God never gives us more than we can handle.  I am reminded of Tig Notaro’s now-famous standup comedy performance when, after describing how, in the span of six months, she had developed pneumonia and Clostridium difficile, been diagnosed with breast cancer, lost her mother, and been left by her girlfriend, she imagines God watching her from above saying, “You know, I really think she can handle a bit more.”)

Okay, so I don’t find this particular belief and consolation appealing.  It doesn’t work for me, but it does for some people.  And that’s fine.  It doesn’t, in-and-of-itself, have any major internal contradictions.  But the idea of forced sacrifice, where the sacrificed party isn’t part of the decision, belongs to a school of philosophy known as consequentialism.  Basically, the idea here is that the moral worth of an action is determined by the value of the consequences it brings about.  Killing someone might be permissible if it saves five lives.  Sending a tornado through a town in Oklahoma might be permissible if it is part of a grand plan we cannot comprehend.  This theory stands in stark contrast to non-consequentialist theories (if the name didn’t give that away), where other factors—like the inherent worth and inviolability of humans—are considered.  Non-consequentialists generally don’t believe you can kill someone to save five people.  The person has a right not to be harmed in such a way.

Now, one might believe that the scope of this plan is limited: that God works through nature and miracles, but not through men.  This would certainly help deal with free-will concerns, but I don’t think this particular strain of Christianity can make use of this expedient.  I return to Dylan’s “With God on Our Side”: that God made America the greatest country on earth is certainly a mainstream theme in American circles of discourse.  But America is a nation of people, founded by people, through a declaration of war against the British Empire and (later) a codified set of laws established among former colonies through fierce negotiations.  Perhaps God guided their actions, but the actions and laws of people created America.

These two views—that God has a plan that makes sacrificing some acceptable, and that God can work through men and women to fulfill this plan—are not inherently evil, or bad, or degrading to human value, even if I don’t like them.  Consequentialism is a perfectly reasonable ethical framework, believed by many very careful philosophers.  But it cannot work in tandem with a non-consequentialist theory that bespeaks the inherent, ultimate and inviolable nature of human life.  This is where the contradiction arises.

In other words, one should not believe that God’s plan is so great and beautiful that inexplicable and countless tragedies can be part of the equation without ruining its splendor, while at the same time believing that human life has absolute value.  One cannot believe that a soldier is being guided by God’s will in killing his enemy, and at the same time believe that a mother has necessarily perverted God’s love in her decision to abort the fetus that will become a baby for which she is not ready.

If God’s plan allows for sacrifice to achieve its glory, and if God’s plan is mysterious, and if God can work through men and women, then any act, no matter how heinous or righteous, can be a part of God’s plan.  Once we recognize this, we realize that this plan alone does not give us the moral tools we need to govern our actions.  We cannot judge the events of nature or the acts of men—or indeed our reactions to either—on the basis of faith in a power that is beyond our comprehension.  Maybe God exists, maybe he does have a plan, but that shouldn’t enter into our moral equation.  

Only Us

Two bombings happened on April 15th, 2013.  One was on Boylston St. at the finish line of the Boston Marathon.  The other was in Baghdad, where twelve coordinated explosions killed many and injured more.

I describe these two attacks without political agenda.  Unlike many writers, I am not suggesting that we compare tragedies in some sort of morbid and perversely callous pissing contest, as if a body count were necessary for understanding pain.  Nor am I arguing that foreign catastrophes are covered at a disproportionately low rate because we just don’t care, as if America were a self-involved teenager.  Unlike the Guardian, I am not suggesting, “[W]hatever rage you’re feeling toward the perpetrator of this Boston attack, that’s the rage in sustained form that people across the world feel toward the US for killing innocent people in their countries.”  I do not think any of the acts of kindness and heroism and charity exhibited in the immediate aftermath could possibly be characterized as rage.

This is the wrong way to look at what has happened.  It is the same divisiveness that misinformed whatever hatred prompted such monstrous acts of lunacy on April the 15th.

Rather, I describe these two attacks to highlight our similarity.  I live across the river from Boston and I’ll always consider Boston my home city.  I have never been to Baghdad, but I know how the people of Baghdad felt on Monday.  They felt scared, confused, worried about the safety of their loved ones.  They tried to make sure the people they cared about were okay.  They looked to the news to give them something to latch onto, some way to understand what was happening around them.  They felt angry when their warmest words of comfort were empty and useless in consoling the people whom they loved.

I know these things because they are how I felt and what I did in Boston on Monday.  I had friends running the marathon and friends cheering on the sidelines.  The people I care about and the thousands around them celebrating Marathon Monday were enjoying an activity of camaraderie and living in the most vibrant sense of the word.

Some might suggest that, in attempting to universalize these emotions, I am being insensitive to the tragedy at home.  That this is a time to focus on us; that this is an American heartbreak.  It is not.  It is a calamity of humanity, because any such acts are in direct opposition to what we as a species must stand for.  They are in irreconcilable conflict with the human purpose.

It does not dilute our pain to suggest that others share it.  It need not be ‘us’ and ‘them’.  Indeed, there is only ‘us’.  There are only the victims and the countless people who care about them.  The perpetrators of these abuses have forfeited their right to count amongst humanity, because their goal is not just immoral, it is inhuman in both method and methodology.

If it is ever possible for some good to come of senseless tragedies, perhaps this can be it.  Perhaps instead of blaming whatever incidental faction, ideology, religion, video game, or book it was that ‘caused’ this violence, we can recognize that, at the end of the day, it was just a handful of people, misguided by the only evil that can cause such hatred: an ignorance of what it means to be human; to love humanly and live humanely.

The Other Kind of Consent

Tomorrow morning, I’ll be having a routine medical procedure done.  This normally wouldn’t have much philosophical worth, but there will be a moment of dialogue that is crucial for our growing understanding of what the doctor/patient relationship is, could become, and in what new light we might learn to value this relationship.  

This moment also (and entirely coincidentally…) happens to be what I’ll be spending the next year of my life working on in the process of completing my senior honors thesis in the field of bioethics.  So there’s that.

See, right before the doctor puts me to sleep, she will read through a series of risks associated with the procedure, ask me if I understand them and their weighted value in comparison to the rewards of having the procedure done, and ask me to sign a form indicating this understanding and giving my consent.

This wasn’t always how things worked, though.  The notion of informed consent is a fairly novel idea in the history of legal requirements in medicine.  Prior to 1957 and the decision in Salgo v. Leland Stanford Jr. University Board of Trustees (later followed by the more formalized opinion in Natanson v. Kline), there was no legal notion of informed consent.  Medicine was, for a long time and without much contention, a field in which doctors passed out treatments without much explanation or justification.

Which isn’t to say that they were tyrants in any damaging sense of the word.  Rather, the body of opinion rested firmly on the idea that the doctor/patient relationship was one in which the doctor prescribed treatment based on her informed medical opinion to a lay patient who, not understanding the situation himself, trusted the opinion of his physician.

We can see this belief peek through in the Hippocratic Oath (which in my younger years I called the Hypocritical Oath, a far more confusing notion).  The relevant bit goes like this:

I swear by Apollo, the healer, Asclepius, Hygieia, and Panacea, [now that’s how you start an oath…]: 

…I will prescribe regimens for the good of my patients according to my ability and my judgment and never do harm to anyone.…

The key terms here are “for the good” and “never do harm”.  This is known as the beneficence clause of a doctor’s oath.  

That was the traditional occupation of doctors: to do good by their patients and never to harm them.  And the medical community has always implicitly construed these goods and harms as having to do with bodily health.  Who can blame them?  

But as we’ve progressed as a society, our views of harms and goods have become more complex.  We realize that sometimes the desires of a patient are not so simple as ‘to survive’; that they may wish to live their last days with dignity, or in blissful ignorance.  The individuals and the situations vary.

Doctors are medical practitioners, not moral arbiters.  Their position in guiding medical diagnoses and prognostic options should not be conflated with a special insight into the right choice.  Affirming this point in his book How We Die, surgeon Sherwin Nuland recounts his history in practicing medicine:

More than a few of my victories have been Pyrrhic.  The suffering was sometimes not worth the success…. [H]ad I been able to project myself into the place of the family and the patient, I would have been less often certain that the desperate struggle should be undertaken.

Which is not to say that the fight is itself undesirable, but rather that an understanding of what that desire represents and could potentially mean is vital to a patient’s valuing his autonomy and making an informed decision.  This idea, this simultaneous weighting of autonomy and beneficence as cohabitants in a reasonable relationship, is informed consent.

The desire for informed consent arises from a non-parity in the respective knowledge bases of the patient and the doctor.  As framed in the landmark decision Arato v. Avedon, this non-parity concern evolves into a moral demand for informed consent in three steps:

1) Patients are generally not knowledgeable of medicine and the medical sciences, and therefore do not have comparable knowledge to that of their physician.

2) Yet, an adult of sound mind both has the right and obligation to exercise control over his own body and to determine whether and which medical treatment he should submit himself to.  

In combining these two premises, we arrive at an obvious conclusion: 

3) The patient depends on his physician and trusts that he will honestly convey the information upon which he relies during the course of the decision-making process, as well as all of the relevant risks and rewards of the proposed treatment.  As a result, the physician has an obligation to provide this information.

Today, this may seem a fairly uncontroversial conclusion.  Yet, as we examine the question, it becomes less and less simple.

In The Cancer Ward, novelist Alexander Solzhenitsyn poignantly captured the concern that arises from informed consent.  When a patient challenges her doctor’s right to make unilateral decisions on the patient’s behalf, the doctor gives a troubled but certain answer, “But doctors are entitled to the right—doctors above all.  Without that right, there’d be no such thing as medicine.”

A more critical examination of this concern can be found in Thomas Duffy’s article “Agamemnon’s Fate and the Medical Profession” from the New England Law Review, where he argues, “Paternalism exists in medicine to fulfill a need created by illness.”  That is, it is not the doctor that is limiting the patient’s autonomy, but a necessary characteristic of a situation constructed by the illness, to which both doctor and patient must respond as best they can.

But this carries an implicit thesis: that the physician still knows best (at a moral level).  How can this be so when there is still so much doubt in medicine?  In the words of Dr. Brian Goldman during his TED Talk Doctors Make Mistakes: Can We Talk About That?, “If you take the system… and weed out all the ‘error-prone’ health professionals, well… there won’t be anybody left.”  

Or, as Dr. Alvan Feinstein said in his book Clinical Judgment:

Clinicians are still uncertain about the best means of treatment for even such routine problems as… a fractured hip, a peptic ulcer, a stroke, a myocardial infarction… At a time of potent drugs and formidable surgery, the exact effects of many therapeutic procedures are dubious or shrouded in dissension.

Or consider the desire to solve The Riddle, now infamous thanks to Dr. Gregory House, as Dr. Sherwin Nuland elaborates:

[A surgeon] allows himself to push his kindness aside because the seduction of The Riddle is so strong and the failure to solve it is so weak.  [Thus, at times he convinces] patients to undergo diagnostic or therapeutic measures at a point in illness so far beyond reason that The Riddle might better have remained unsolved.

Given all of this, I cannot help but think it unwise and unfair to demand moral guidance from our physicians in addition to medical prognoses.

And indeed, sentiment has already shifted in many regards in this direction.  The Presidential Commission cites a survey published in 1961 in the Journal of the American Medical Association, which found that 90% of doctors did not inform patients of cancer diagnoses.  Sixteen years later, in 1977, 97% of doctors surveyed routinely disclosed a cancer diagnosis.  The times, they are a-changin’.

But the situation is not so simply addressed.  The questions are incredibly complicated.  Let me offer you an example, crafted by Dr. John Arras in his essay “Antihypertensives and the Risk of Temporary Impotence: A Case Study in Informed Consent.”

In this thought experiment, a patient with hypertension, for whom diet and exercise have failed as a remedy, seeks medical assistance from his primary care physician, Dr. Kramer.  Dr. Kramer generally prescribes “a common diuretic, hydrochlorothiazide, as the second line of defense [after diet and exercise]” for hypertension, due to its cheapness and effectiveness.

The drug has a potential side effect of causing temporary impotence in 3-5% of the men who take it; the impotence would resolve upon completion or discontinuation of the treatment.  Dr. Kramer wonders if she should tell her patient about this risk, considering that he is a newlywed and may find this a particularly problematic time to be experiencing such issues; she reasons that he may be willing to pay extra for a more expensive drug that would not cause this problem.

Dr. Kramer consults with another physician who suggests, “The risk is quite low, entirely reversible, and consider this: if you share this possible side effect with your patient, this little bit of truth is likely to make him extremely anxious about what could happen….  Telling him about the risk of impotence could actually make [him] so worried that he would become impotent at your suggestion.”

Here we have an instance of apparently direct conflict between beneficence and autonomy.  What should the doctor do?  

Consider a less trivial situation, where a patient has been diagnosed with hepatosplenic T-cell lymphoma, an almost-always fatal condition.  Is a doctor obligated to tell the patient, even if treatment is not an option?  What if the patient does not want to be informed?  Or has a heart condition that may be exacerbated by the knowledge?  How do we weigh these concerns?

Moreover, is truly informed consent even possible?  It is a commonly recorded psychological phenomenon that people undervalue the risks of their own actions.  Take cigarette smoking.  The ‘it won’t happen to me’ belief is ubiquitous: we understand there is a statistical risk, but dissociate ourselves from the statistic.

How can we actually subvert this common psychological move?  And, if it turns out we cannot, does that force us to recalculate the balance between beneficence and autonomy?  If an individual cannot accurately assess his own risk, should we leave the choices to those who are dissociated enough that they can?

These are difficult, troubling questions.  But these questions have yet to be satisfactorily answered, and they need to be.  As Dr. Pauline Chen argued in her New York Times essay, in its current form, informed consent is often a theater act:

Pete looked away from me and stared at the consent form. Yet even as I watched his brows knit together, his eyes widen then wince, I kept on talking. I had gone into my informed consent mode — a tsunami of assorted descriptions and facts delivered within a few minutes. If Pete had wanted me to pause and linger over something, I never knew. He couldn’t get a word in edgewise….

Pete signed the consent. But as he took the pen to paper, I couldn’t help noticing the tremor in his hand and the pall that had suddenly descended upon the room and our interaction.

The common lingo among physicians is ‘to consent the patient’.  Linguistically, it is not an actively forged relationship between patient and physician; it is an action performed on the patient, a legal requirement that must be completed before getting down to business.  We need to do so much better.

These questions push us to the limit of what ethics can grapple with.  They cannot be answered in a brief article.  They demand of us careful consideration.  Or maybe I’m just bigging-up my honors thesis…

Anyway, I suppose I don’t really have a conclusion this week.  Sorry.  I don’t know what to tell you.  I’ll be trying to come up with satisfactory answers to these questions over the next year.  

I’ll let you know when I figure it all out.  BRB.

The Guilty Ones

Towards the end of this past semester, I was at dinner with one of my professors, and found myself debating at some length a question of morality.  I’m sure most of you are familiar with Sophie’s Choice—a book, a movie, and a dilemma: you’re a mother with two children and are told to pick which one will die and which will live, or else both will be killed.

Philosophers have a similar thought experiment that removes a bit of the complicated sentiment that Sophie’s Choice is so rife with, broadly called the “Trolley Problem” (one of the most famous families of thought experiments in philosophy).  The thought experiment my professor offered me in this particular discussion is slightly different, but the general premise holds.

You are alive in Manifest Destiny-era America, and you and twelve fellow settlers are traveling west in hopes of finding some nice land that doesn’t belong to you.  You do not know any of your companions, as you signed on to the trip at the last minute.  In the middle of the night, a band of Native Americans descends on your caravan and ties up all thirteen of you before any resistance can be offered.  The chief of the tribe rides up to the group and lectures you about being Western Imperialist Asses.

Then he has you untied and brought before the group.  His warriors stand behind each of your twelve companions.  He hands you a rifle and pulls up one of your companions, whom he tells you to kill.  If you do, the remaining twelve of you will be set free to go home and live out the rest of your Entitled-White-Man lives.  If you do not, all twelve of your companions will be killed.  It is important to note that, either way, you will survive.  This Chief is very clever; he doesn’t want you to be motivated by a selfish desire to live.  

Before we talk about what your options are here, we need to talk about a spectrum of moral culpability that moral philosophers use to explain the justification, or lack thereof, of an action.  In simplest terms, the spectrum of culpability goes like this (from most culpable to least): inexcusable, understandable, excusable, justifiable, and praiseworthy.  (These categories are not always mutually exclusive, because some of them operate slightly independent of the others, but this spectrum will do for our purposes.)

An inexcusable act is one that we believe to be absolutely and abhorrently wrong, like shooting up a crowd of innocent people for selfish reasons.  No real discussion here.  Guy’s just awful.

An understandable act is one that is still inexcusable (insofar as it must still be punished as morally wrong), but one about which we can nonetheless recognize a common ground and empathize with the motivations of the perpetrator of the act.  Like hunting down the man who killed your wife.  We have to say the act is wrong, but we kind of get why you did it.  

An excusable act is one that is both understandable and somehow warrants the disregard of normal moral and legal standards.  For example, if you were walking down the street and happened upon Osama bin Laden, totally helpless and at your mercy, it would be excusable for you to kill him if you knew he would otherwise escape prosecution or punishment and your motivation was to bring him to some form of justice.  It would normally be wrong to kill a defenseless person in retribution like this, but because of the chance of his escape and the gravity of his crimes, our justice system would not charge you with murder and you would be hard pressed to find someone who thought you did the wrong thing.

A justifiable act is a bit different, but the distinction is subtle.  A justifiable act is not just one in which we set aside general morality, but one for which the scale actually tips such that we believe you have indeed done nothing wrong.  If someone has a gun drawn on you and clearly intends to kill you, you are justified in shooting him first.  There is no immoral act to ‘excuse’ because we already believe killing in self-defense to be justified, as a rule.

A praiseworthy act is stronger still.  A praiseworthy act is one in which you have actually done something laudable; an act that might, in isolation, be wrong, but because of the circumstances makes you a ‘better’ person because of it.  Killing someone who is in the midst of a shooting spree, and thereby preventing many immediate deaths, is a praiseworthy act.

Now that we have painted these distinctions, we can come to the question at hand.  My professor argued that you would be excused in killing one of your companions in order to save the other eleven.  I agreed.  The problem we had was with the converse: she believed that you would be justified in not acting at all.  I disagreed.

Just to be clear: the Chief tells you to kill one companion so that the other eleven may live, and we both agree that you are excused of wrongdoing in this act of murder.  But I believe that, furthermore, it would be inexcusable for you not to act.  I believe that not acting makes you complicit in the deaths of the twelve.

Why should this be?  I think the reason lies in your motivation for not acting, so let’s try and see if we can explain what that motivation might be.  More people clearly die if you do not act.  Eleven is greater than one.  The math checks out.  So your motivation cannot be to save life.  The motivation is that you do not want to be the person to pull the trigger.  I believe this to be an inexcusably selfish motivation.

Let me explain.  I believe that your motivation for not pulling the trigger is that you do not want to live with the guilt of what you perceive to be the killing of a defenseless and undeserving victim.  This guilt may come from a belief that what you are doing is wrong, and it may be a justified guilt if you believe that the act of killing is wrong.  But not acting is a way to avoid this guilt, and that is a selfish act.

What I am talking about, then, is mandated sacrifice (in situations where the stakes are high enough).  And no matter how you cut it, the stakes are always high enough in this example.  Even if you had to kill eleven to save one, the stakes are still high enough, because you are still saving a life.  Your guilt does not balance the matter.

My professor argued that the motivation for not doing so is that you do not want to make yourself complicit in an immoral act, and it is your belief that you are doing the right thing that guides your choice (not the guilt), so your inaction is justifiable.  

But you are complicit either way.  If you do not act, the others will die.  Death will result from either choice, so your complicity is unavoidable.  In one, you do not pull the trigger, yes; but why should this matter?  We have already established that there are justifications for killing, so it cannot be that we think killing under any circumstances is inexcusable.  The problem is you do not want to be the one to do it.

To highlight and defend my point, let’s turn briefly to an actual trolley problem.  Five people are tied to a train track with a trolley approaching.  On an alternate track, one person is shackled.  You have a switch at your fingertips which will allow you to move the trolley from the track where the five are to the track with one, thereby saving the five and killing the one.

In another example, five people are once again tied to a track.  Except this time you have no switch at your disposal.  Instead, you have a very fat man that you can push off of a bridge and onto the track.  This will kill the fat man, but save the five people.  (I didn’t actually come up with this, so if you think I’m being insensitive, direct your grief to Judith Jarvis Thomson.)

Neuroscientist Joshua Greene conducted a study showing that different regions of the brain are engaged in these different scenarios, a phenomenon he attributed to “emotion” getting in the way in the more immediate and real pushing of the fat man (as opposed to the somewhat sterile and distant act of flipping a switch).  Both the fMRI data and the numbers back this argument: more people were willing to flip the switch than push the fat man.

The number of victims in the respective scenarios doesn’t matter to us as much as the emotions of the act, so I do not think it is a strong deontology that is preventing you from firing the gun in the Chief’s thought experiment.

We come back to guilt.  You cannot get over the fact that you killed someone.  But I believe this cannot possibly be weighed against the life of another person.  Not acting is immoral because it leads to more death; even if you will feel worse about acting, you must.  You must bear that burden.  This is mandated sacrifice.

In my first article, I cited these trolley problems as being symptomatic of the trend of philosophers to be out of touch with what they need to be discussing with people.  Is it hypocritical, then, for me to bring them up now?  Am I retreating into the ivory tower?

I don’t think so.  I’m trying to illustrate a larger point here.  It may be that you feel bad about doing something, either because it will hurt someone you care about, or perhaps because you are just too close to the situation.  That doesn’t mean that you are excused from acting, that morality passes you by, or that the right thing has suddenly changed to accommodate sentiment.  Morality is not so lenient.

It is a point summarized by Isaac Asimov in a rather elegant quip: “Never let your sense of morals prevent you from doing what is right.”