Friday, July 4, 2014

Programmers also need Moral Philosophy

I stand corrected; programmers need knowledge of moral philosophy too.  I realized this after hearing this BBC story about developers of self-driving cars explicitly asking philosophers for help in deciding which person the car should hit.

When I first started my project to teach other programmers the practical concepts I was learning from Philosophy, I focused on ontology, the study of describing the world. Philosophy has 2,500 years of study on this topic, which computer science naively leaves up to intuition. I thought that only the IS side of Hume's IS/OUGHT divide would be relevant to actual programming.  It turns out that real programmers doing real software development need the OUGHT side too.

Hume's IS/OUGHT Divide

The philosopher David Hume wrote that all statements fall into one of two categories: descriptive statements about "what IS" and prescriptive statements about "what OUGHT to be", and that one can't judge what ought to be without a clear, accurate understanding of what is.

One of the basic tasks of Philosophy is to try to explain and justify one's intuitions and gut reactions via a set of explicit logical rules.  The IS side of things worries about the best way to describe and categorize things, and how we can justify that we know what we think we know.  The OUGHT side of things worries about the rules guiding "moral" decisions, which rules apply in which situations, and what the overriding goals of each rule system are. In other words, what is the "right" thing to do.  In both of these categories, it turns out that our intuitions often produce conflicting answers, thus the need to analyze and sort them out (ahem, easier said than done).

The IS statements are the ones that first come to mind when developing a self-driving car. What IS the terrain, the car's speed, the distance to the curb, the position the car in the next lane will be in two seconds?  These are the kinds of questions covered in Artificial Intelligence classes, the ones that must be answered first to be able to drive at all, and the ones that let a system detect potential collisions and formulate the set of options available to avoid them.
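
To make that concrete, here is a minimal sketch of what such an IS layer might look like. The names, numbers, and simplifications are all mine, invented for illustration; they are not taken from any real self-driving system:

    # Purely descriptive: what IS out there, and where will it be?
    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        kind: str          # "bicycle", "car", "pedestrian", ...
        position_m: float  # distance ahead of us along our lane, in meters
        lane_offset: int   # 0 = our lane, -1 = left lane, +1 = right lane
        speed_mps: float   # current speed in meters per second

    def predicted_position(obj: TrackedObject, seconds: float) -> float:
        """Descriptive: where will this object be, assuming constant speed?"""
        return obj.position_m + obj.speed_mps * seconds

    def collision_expected(our_speed_mps: float, obj: TrackedObject,
                           horizon_s: float = 2.0) -> bool:
        """Still descriptive: will we close the gap to an object in our own
        lane within the prediction horizon?"""
        if obj.lane_offset != 0:
            return False
        gap = predicted_position(obj, horizon_s) - our_speed_mps * horizon_s
        return gap <= 0.0

    # The IS layer ends here: it can list what the car CAN do,
    # but it says nothing about what the car SHOULD do.
    AVAILABLE_MANEUVERS = ["brake_hard", "stay_course", "swerve_left", "swerve_right"]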

But IS statements don't describe which of those options is the "right" one, the choice the car OUGHT to make. It is only when you realize that sometimes there is no purely "good" option, no option that leaves everyone unscathed, that it sinks in: you will have to program the car to decide whom to hit! How does the poor programmer decide that?! Luckily, some programmers had the wisdom to call a philosopher for help encoding moral rules rather than blindly relying on their programmer's intuition.
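
What does "encoding moral rules" even look like in code? Here is one rough sketch, with every name and the rule itself invented by me purely for illustration. The point is only that the rule gets written down explicitly, where it can be read, debated, and revised, instead of being an accident of whatever comparison the programmer happened to type first:

    from dataclasses import dataclass

    @dataclass
    class PredictedOutcome:
        maneuver: str         # e.g. "brake_hard", "swerve_left", ...
        harmed_party: str     # "nobody", "cyclist", "adjacent_car", "oncoming_traffic"
        severity: float       # 0.0 (unharmed) .. 1.0 (likely fatal), supplied by the IS layer
        party_at_fault: bool  # did the harmed party cause the dangerous situation?

    def least_bad(outcomes: list[PredictedOutcome]) -> PredictedOutcome:
        """One candidate OUGHT rule, stated openly so it can be challenged:
        1. prefer any outcome that harms nobody;
        2. otherwise, minimize the predicted severity of harm;
        3. break ties by preferring to harm the party at fault."""
        return min(
            outcomes,
            key=lambda o: (o.harmed_party != "nobody", o.severity, not o.party_at_fault),
        )

Whether rule 3 is defensible at all is exactly the kind of question a philosopher is better equipped to ask than I am.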

So, if a self-driving car hits someone, who OUGHT to bear responsibility?  The auto maker? The car owner? The software developer who programmed its rules?   If a bicycle darts in front of the car, and swerving to avoid that otherwise-inevitable collision will itself cause a collision with someone else, who OUGHT to be hit?  The "at fault" bike? The more-likely-to-survive but "innocent" car in the next lane? Or should the car override the "never cross the double yellow line" rule and swerve into oncoming traffic (potentially causing a chain reaction)?

"Moral" Philosophy

When looking at the language describing the scenarios above, we see words like "action", "choice", "responsibility", "cause", "result", "fault", "innocent", "never", and "more likely to survive". These lead to classic concepts in moral philosophy like Action and Agency, Causation, Free Will vs Determinism, Moral Responsibility vs Moral Luck, Desert (i.e. who deserves what), and Legal Punishment, which are intertwined in the following way: we expect those making decisions to be morally and legally responsible for the consequences of their actions, assuming that they were able to make a free choice.

But there are debates about whether the ends justify the means (Consequentialism) or a bad deed is simply a bad deed (Deontological Ethics). There are also debates about what the overarching goals should be: the most good for the most people (Utilitarianism), the most for the most deserving (Prioritarianism), the most freedom (Libertarianism), the most equality (Egalitarianism), etc, etc.
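
For a programmer, the useful observation is that each of these theories can be treated as a different, swappable scoring function over the same predicted outcomes. The sketch below is mine, with made-up names and a crude penalty constant, and it badly oversimplifies both theories; it is only meant to show that picking a theory is an explicit, reviewable design decision rather than an accident:

    from typing import Callable

    # An outcome is just a bag of predicted facts from the IS layer,
    # e.g. {"total_harm": 0.4, "breaks_rule": True}
    Outcome = dict

    def utilitarian(o: Outcome) -> float:
        # Consequentialist flavor: only the total predicted harm matters.
        return o["total_harm"]

    def deontological(o: Outcome) -> float:
        # "A bad deed is a bad deed": breaking a rule (say, crossing the double
        # yellow line) is heavily penalized no matter how well it turns out.
        return o["total_harm"] + (1000.0 if o["breaks_rule"] else 0.0)

    def choose(outcomes: list[Outcome], theory: Callable[[Outcome], float]) -> Outcome:
        # Lowest score wins; swapping the theory changes which option is "right".
        return min(outcomes, key=theory)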

These are just the tip of the iceberg, but they are worth studying since they provide a language for documenting and explaining your ultimate set of rules, as well as making you aware of the many non-trivial scenarios. Lest you programmers think that Philosophy is overkill, take a look at books like "The Pig That Wants to Be Eaten", which catalogs the many well-known moral paradoxes that result from relying on intuition and gut reactions.

