The "Procrastination" episode of QI has nailed the culprit as perfectionism. And I agree that it is the reason for not having posted on all the topics that speak to me the most. I find it easier to finish posts on topics I care less about because I'm not worried that I got it "perfect"...
Series P: 12. Procrastination
Wednesday, February 6, 2019
Tuesday, July 4, 2017
Programmers also need Moral Philosophy
I stand corrected; programmers need knowledge of moral philosophy too. I realized this after hearing this BBC story about developers of self-driving cars explicitly asking philosophers for help in deciding which person the car should hit...
When first starting my project to teach other programmers all the practical concepts I was learning from Philosophy, I focused on ontology, the study of describing the world. Philosophy has 2500 years of study of this topic that computer science naively leaves up to intuition. I thought that only the IS side of Hume's IS/OUGHT divide would be relevant to actual programming. It turns out that real programmers doing real software development need the OUGHT side too.
Hume's IS/OUGHT Divide
The philosopher David Hume wrote that all statements fall into one of two categories: descriptive statements about "what IS" versus prescriptive statements about "what OUGHT to be", and that one can't judge what ought to be without a clear, accurate understanding of what is.

One of the basic tasks of Philosophy is to try to explain and justify one's intuitions and gut reactions via a set of explicit logical rules. The IS side of things worries about the best way to describe and categorize things, and how we can justify that we know what we think we know. The OUGHT side of things worries about rules guiding "moral" decisions, which rules apply in which situations, and what the overriding goals of each rule system are. In other words, what is the "right" thing to do. In both of these categories, it turns out that our intuitions often produce conflicting answers, hence the need to analyze and sort them out (ahem, easier said than done).
The IS statements are the ones that first come to mind when developing a self-driving car. What IS the terrain, the car's speed, the distance to the curb, the position the car in the next lane will be in two seconds from now. These are the kinds of questions covered in Artificial Intelligence classes, the ones that must be answered first to be able to drive at all, the ones that let a system detect potential collisions and formulate the set of options available to avoid them.
But IS statements don't say which of those options is the "right" one, the choice the car OUGHT to make. It is only after you realize that sometimes there is no purely "good" option, no option that leaves everyone unscathed, that you see you will have to program the car to decide whom to hit! How does the poor programmer decide that?! Luckily, some programmers had the wisdom to call a philosopher for help encoding moral rules rather than blindly trusting their programmer's intuition.
So, if a self-driving car hits someone, who OUGHT to bear responsibility? The auto maker? The car owner? The car's software developer who programmed its rules? If a bicycle darts in front of the car, but swerving to avoid that otherwise inevitable collision will itself cause a collision with someone else, who OUGHT to be hit? The "at fault" bike? The more-likely-to-survive but "innocent" car in the next lane? Or should the car override the "never cross the double yellow line" rule and swerve into oncoming traffic (potentially setting off a chain reaction)?
"Moral" Philosophy
When looking at the language describing the scenarios above, we see words like "action", "choice", "responsibility", "cause", "result", "fault", "innocent", "never", and "more likely to survive". These lead to classic concepts in moral philosophy like Action and Agency, Causation, Free Will vs Determinism, Moral Responsibility vs Moral Luck, Desert (i.e. who deserves what) and Legal Punishment, which are intertwined in the following way: we expect those making decisions to be morally/legally responsible for the consequences of their actions, assuming they were able to make a free choice.

But there are debates about whether the ends justify the means (Consequentialism) versus a bad deed being a bad deed (Deontological Ethics). There are also debates about what the overarching goals should be: the most good for the most people (Utilitarianism), versus the most deserving (Prioritarianism), or the most freedom (Libertarianism), or the most equality (Egalitarianism), etc., etc.
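To make the contrast concrete, here is a minimal sketch, assuming a hypothetical planner has already produced the IS-side facts as a list of candidate maneuvers. Every name, number, and weight below is made up for illustration; this is not how any real auto maker's software decides.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One maneuver the planner considers physically possible (the IS facts)."""
    name: str
    expected_harm: float          # estimated injury severity, 0 = no one hurt
    crosses_double_yellow: bool   # breaks a hard traffic rule?
    victim_at_fault: bool         # did the likely victim cause the situation?

def consequentialist_choice(options):
    """Ends justify the means: minimize expected harm, whatever rules get broken."""
    return min(options, key=lambda o: o.expected_harm)

def deontological_choice(options):
    """A bad deed is a bad deed: never break the hard rule, even to reduce harm."""
    permitted = [o for o in options if not o.crosses_double_yellow]
    return min(permitted or options, key=lambda o: o.expected_harm)

options = [
    Option("brake straight ahead", 0.8, False, True),
    Option("swerve right into the next lane", 0.4, False, False),
    Option("swerve left across the double yellow", 0.2, True, False),
]

print(consequentialist_choice(options).name)  # swerve left across the double yellow
print(deontological_choice(options).name)     # swerve right into the next lane
```

Even a toy like this shows where the philosophy has to come from outside the code: which quantity to minimize, which rules are inviolable, and whether a field like victim_at_fault should matter at all are exactly the Consequentialism vs Deontology and Utilitarianism vs Prioritarianism debates, not engineering questions.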
These are just the tip of the iceberg, but they are worth studying because they provide a language for documenting and explaining your ultimate set of rules, as well as making you aware of the many non-trivial scenarios. Lest you programmers think that Philosophy is overkill, take a look at books like "The Pig That Wants to Be Eaten", which catalogs the many well-known moral paradoxes that result from relying on intuition and gut reactions.
Labels: case study, epiphanies, moral philosophy, philosophy, POSTSCRIPT
Tuesday, February 9, 2016
Absolutely, Holes are Relative
This blog is normally about what computer programmers can/should learn from philosophers. Occasionally, though, there is a post like this one about what philosophers can learn from programmers...
In An Introduction to Ontology[1], there is a discussion about whether holes “exist” (where “exist” has a technical philosophical meaning). In laying out the pros and cons of defining a hole as a region of space, the (fatal?) flaw claimed for that approach is that, when the thing that has the hole in it (say, a pair of jeans) moves to the next room, the hole wouldn’t/couldn’t move, because it was defined as one particular region of space.
I, as a computer programmer, am left dumbfounded that anyone would define a hole as an absolute region of space, any more than they would define the geometry of each leg of the jeans as an absolute region of space.
As modeled in typical vector drawing logic, the legs and the hole would each simply be a “part” of the jeans entity. Each part typically has attributes defining its contents (often simplified to just its color), so the left leg is denim/blue and the hole is void/transparent. Alternatively, the hole can be thought of as a “subtractive” part which simplifies the description of the geometry of things that have holes. (Beware of saying “that just simplifies the math but it isn’t real” because that's what they said when Copernicus noted that the math was simpler if planets orbited the sun rather than everything orbiting the earth).
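As a concrete illustration of that modeling style, here is a minimal sketch (the class and attribute names are invented for this post, not any particular drawing library’s API) in which the hole is just another part, positioned relative to its parent like every other part:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Part:
    """A piece of an entity, located relative to its parent rather than to absolute space."""
    name: str
    offset: Tuple[float, float]              # position relative to the parent part
    content: str = "denim/blue"              # what fills this region
    subtractive: bool = False                # True for holes, dents, enclosed voids
    children: List["Part"] = field(default_factory=list)

def absolute_position(part: Part, parent_origin=(0.0, 0.0)):
    """Absolute space appears only here, computed on demand from the chain of offsets."""
    return (parent_origin[0] + part.offset[0], parent_origin[1] + part.offset[1])

hole = Part("hole", offset=(0.0, 5.0), content="void/transparent", subtractive=True)
left_leg = Part("left_leg", offset=(-10.0, -40.0), children=[hole])
jeans = Part("jeans", offset=(0.0, 0.0),
             children=[left_leg, Part("right_leg", offset=(10.0, -40.0))])

# Move the jeans to the next room: only the root offset changes, and the hole
# "moves" with it because its location was never an absolute region of space.
jeans.offset = (500.0, 0.0)
leg_origin = absolute_position(left_leg, absolute_position(jeans))
print(absolute_position(hole, leg_origin))   # (490.0, -35.0)
```

Note that moving the jeans moved the hole for free, which already answers the objection about the jeans going to the next room.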
CAD/CAM systems often model things as having parts known as features[2], which include not only subtractive “passages” (two openings, i.e. holes), but also subtractive “depressions” (one opening, i.e. dents) and subtractive “voids” (zero openings, i.e. completely enclosed). These are in addition to additive “protrusions”, “connectors” and “stand-alone volumes”. Don’t think a protrusion is an entity (or at least a part)? Don’t peninsulas exist?
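To sketch that feature vocabulary in code (a guess at the flavor of such a schema, not the book’s actual data model), the hole/dent/void distinction is just the number of openings on a subtractive feature:

```python
from enum import Enum

class FeatureKind(Enum):
    # additive features
    PROTRUSION = "protrusion"
    CONNECTOR = "connector"
    STANDALONE_VOLUME = "stand-alone volume"
    # subtractive features
    PASSAGE = "passage"        # two openings, i.e. a hole
    DEPRESSION = "depression"  # one opening, i.e. a dent
    VOID = "void"              # zero openings, i.e. completely enclosed

def classify_subtractive(openings: int) -> FeatureKind:
    """Classify a subtractive feature by how many openings it has."""
    if openings >= 2:
        return FeatureKind.PASSAGE
    if openings == 1:
        return FeatureKind.DEPRESSION
    return FeatureKind.VOID

print(classify_subtractive(2))  # FeatureKind.PASSAGE -- the hole in the jeans
```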
So, just as no one has a problem saying that the leg part of the jeans is a cylinder whose location is relative to the parent jeans, there should be no problem saying that the hole is a passage relative to the parent left leg (just over the knee). After all, isn’t a hole always a hole in something? (Yes, I know there is philosophical debate about this, like everything else.)
The real head-scratcher, for me, is why this is apparently so non-obvious to the philosopher that the textbook simply ends the a-hole-is-a-region-of-space discussion with “this theory gets it thoroughly wrong”, with no mention of “you know, there is a trivial rebuttal to that objection”. But that is why this blog occasionally has posts about what Philosophers can learn from Programmers.
[1] An Introduction to Ontology, Nikk Effingham, Polity Press, 2013
[2] Parametric and Feature-Based CAD/CAM, Shah/Mantyla, 1995, Section 7.3.1 Form Feature Schema
Labels: holes, ontologies, parts, programmers_teach_philosophers, space