Tuesday, September 15, 2009

Why did the Action Philosopher cross the road?

“Why” is such an ambiguous question; it asks about an event…
  1. what were the “causes” (i.e. the physical producers of effects)
  2. what were the “actions” (i.e. who did what)

  3. what were the “reasons” (i.e. thoughts, motivations by someone that led to their (in)actions).

And to make things worse, each of those categories of questions can be answered at many different levels of abstraction. “Why did Y die?” could be answered with:
  1. causes: “Y’s car crashed” or more specifically “Y’s brakes failed”
  2. actions: “X sabotaged Y’s car” or more specifically “X cut the brake line”
  3. reasons: “X wanted revenge” or more specifically “X wanted Y dead”.
And still worse, each of those categories exists in a chain such that each answer to “why” produces a new “why” question. Each cause has its own cause, actions occur in ordered sequences, and reasons are triggered by previous events. So, “why did X want revenge?” Because “Y intentionally ruined X’s wedding”. “Well, WHY did Y do that?” And so on. “What is happening” is an equally ambiguous question because it can also be answered in terms of causes or actions or reasons.

The chicken-crossing-the-road joke is so old, with so many new punch lines, that people sometimes don’t get the original joke anymore. The original joke, of course, is that the answer “to get to the other side” is too much of a HOW rather than a WHAT (i.e. confusing proximate and ultimate goals). In the Philosophy of Action, which seeks to differentiate things done on purpose from things that merely happen, many philosophers have difficulty separating what a person is doing from how they are going about doing it (not to mention where their concept of “intention” fits in). Distinguishing what from how is so central to writing specifications, programs & documentation that we Programmers should have something useful to pass along to Philosophers. I would teach them about our knowledge of levels of abstraction, the difference between top-down and bottom-up views of the world, and the difference between the “intention” of the logic versus an instruction trace. Not that programmers have no problems in this regard…

Top Down: Good Grief! Why don’t you move this code out into subroutines? There are fifty pages of code in one switch statement here! I can’t see the forest for the trees.

Bottom Up: Good Grief! How can I tell what the program is doing when it is ten levels deep in function calls!

Top Down: At the very least, put some comments in here to say what the program is doing.

Bottom Up: I don’t believe in comments because the source code tells you exactly what the program is doing.

Top Down: Ok, so what does this part of your program do?

Bottom Up: Well, “i” is set to zero, and then it gets incremented each time through this loop here, and…

Top Down: No, really. Put comments in English telling what the program was supposed to be doing.

Bottom Up: Ok…. ++i; /* Increment i */
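For the record, the Top Down programmer was asking for comments that record the WHAT that was intended, not the HOW that the code already states. A minimal made-up sketch (all names invented for illustration):

    // A comment that merely restates the HOW adds nothing:
    //     ++i;  /* Increment i */
    // A comment that records the WHAT survives translation back up the layers:
    public class Comments {
        static int totalScore(int[] scores) {
            // WHAT: the caller needs the combined score to decide whether the player advances.
            int sum = 0;
            for (int i = 0; i < scores.length; ++i) {
                sum += scores[i];
            }
            return sum;
        }

        public static void main(String[] args) {
            System.out.println(totalScore(new int[] {3, 5, 8}));  // prints 16
        }
    }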



Events & Causes vs Actions & Reasons

“What is left over if I subtract the fact that my arm goes up from the fact that I raise my arm?” - Ludwig Wittgenstein, Philosophical Investigations §621
Human Agency, Intention, Actions, and Events are topics in Philosophy of Action and Philosophy of Mind. Events are simply “things that happen”[2] in a chain of causes and effects. Agency is a philosophical term for what programmers call “a thread of control”. Human Agency is the capacity of human beings to make choices and act upon them.

In Action Theory, if an AGENT A has the DESIRE for X plus the BELIEF that doing Y will result in X, then A will have the INTENTION of doing the ACTION Y. If A actually does Y, then that desire and belief should be considered CAUSES of the EVENT of Y happening (…or not: “There has been a notable or notorious debate about whether the agent's reasons in acting are causes of the action”[3]). Unless specifically noted as unintentional, actions are only those that an agent does intentionally, so falling off a cliff would not be an “action” but jumping off would. Actions usually involve “bodily movements” that effectively translate mental intentions into physical events. (“Actions are bodily movements that are caused and rationalized by an agent’s desire for an end and a belief that moving her body in the relevant way will bring that end about.”[5])

Further, there is a certain moral component to human (in)actions that doesn’t apply otherwise. One can ask “should this have happened” about human actions (say, not pre-evacuating New Orleans), which would make no sense for events & actions that involve no human decisions (e.g. Katrina hits New Orleans, or the spider killed the fly). Finally, in the Identity Theory of Mind[4], it is held that mental states, processes, and events (like desiring to eat ice cream) are mirrored by brain states, processes, and physical or physiological events (like this and that neuron firing).
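That Action Theory rule (DESIRE plus BELIEF yields INTENTION) is the kind of thing a programmer itches to restate in code. Here is a toy sketch only; every name in it is my own invention, not anyone’s formal model:

    import java.util.Optional;

    // Toy version of: if agent A desires X and believes doing Y will result in X,
    // then A has the intention of doing the action Y.
    class Agent {
        String desiredOutcome;                // X: the state of affairs the agent wants
        String believedMeans;                 // Y: the action the agent believes will bring X about
        boolean believesMeansAchievesDesire;  // the BELIEF that doing Y results in X

        Optional<String> intention() {
            if (desiredOutcome != null && believesMeansAchievesDesire) {
                return Optional.of(believedMeans);   // the INTENTION to do Y
            }
            return Optional.empty();                 // no intention; whatever happens is a mere event
        }
    }

Whether actually performing Y then makes the desire and belief “causes” of the event is exactly the debate quoted above.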

Background: philosophers/metaphysicians are interested in figuring out what is “really real” versus just a story our mind made up in trying to interpret all those real-world inputs we get from our senses. They have decided that a way to prove that a thing is “real” is to show that it can affect whether an event happens or not: “a test of the reality of a property is that it can be causally efficacious”[6]. That is why there is so much interest in the relationship between actions/reasons/intentions and the scientifically measurable physical events/causes. If one can’t show how the intention to perform the action of eating an ice cream cone somehow translates into a synapse causing a tongue muscle contraction event, then maybe actions and intentions aren’t really real.

So! All settled then! Well no. Here is just a sampling of technical disagreements between philosophers:
  • Is the “action” of moving a body part the same as the actual movement of the body part? Or is the action just the “intention” bit?
  • Contrary to the Identity Theory of Mind, some say[5] that you can’t map intentions onto brain states because intentions aren’t events, so you won’t find any corresponding physical events to map to. That’s why explanations in everyday conversation, even by pro-ITOM philosophers and scientists, are in terms of intentions (“she crossed the road to catch the bus”) rather than mental and brain states. Still others say that “intention” isn’t real; it is only a story we tell to explain the world.
  • Some say that “that which causes an action constitutes the agent’s reason for it”; others say that you can’t explain the “reason” why somebody did something by just looking at what “caused” it.[5]
  • Causal Fundamentalism says that everything can be explained at the “physics” level of causes and effects; others say no.[5]
  • Some say actions are a subclass of events; others say actions are a relationship between an agent and an event, i.e. actions are instances of the relation (agent, “bringing about”, event). [Programmers think: difference between “class Action extends Event” versus “create table BringingAbout(ActionID,AgentFK,EventFK)”; see the sketch just after this list]
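Here is that last distinction spelled out as a sketch; the class and field names are purely illustrative, not any standard ontology:

    // View 1: an action IS a kind of event (a subclass), with the agent attached.
    class Event { long when; }
    class Agent { String name; }
    class Action extends Event { Agent doneBy; }

    // View 2: an action is a RELATION between an agent and an event,
    // i.e. a separate "bringing about" record that links the two.
    class BringingAbout {
        Agent agent;   // who brought it about
        Event event;   // what happened
    }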
So, does the action of “turning on a light” include “the light illuminating”, or does it stop at “flipping the switch”, or at “the hand movements that flip the switch”, or at “trying to move the hand”? Does it include the firing of the neurons? the muscle contractions? the moving of the bones? Are they all separate actions, or not actions at all? Are they the only actions with no overarching “flip the switch” action?

As I wrote about the mind/body problem, I think we have a levels of abstraction problem here.

Programmers understand that systems are built in layers where each layer exposes WHAT it can do on top, hiding HOW it does it underneath. The how of one layer uses the what of the layers below it. There are layers on top of layers, and there are layers within layers. Computer software forms a layer on top of computer hardware, but there are multiple layers within each. Within software there are layers for programs on top of High Order Languages on top of Assembly language. Within hardware there are layers for processors & memory on top of logic circuits & amplifiers on top of transistors & resistors on top of literally layers of silicon dioxide & gallium arsenide. Some of these layers are so complete and versatile that they form their own independent paradigm such that events can be completely described at their level with no reference to layers above or below them.
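A toy illustration of that layering (every name invented): each method’s signature is its WHAT; its body is its HOW, written in terms of the WHATs of the layer below.

    public class TeaMaker {
        // Top layer: WHAT = make tea.
        static void makeTea() {
            boilWater();      // the HOW of makeTea is the WHAT of the layer below
            steepLeaves();
        }

        // Middle layer: each WHAT here hides its own HOW underneath.
        static void boilWater()   { heatKettle(); waitForWhistle(); }
        static void steepLeaves() { System.out.println("steeping leaves"); }

        // Bottom layer (as far as this sketch goes; real stacks keep going down).
        static void heatKettle()     { System.out.println("heating kettle"); }
        static void waitForWhistle() { System.out.println("kettle whistles"); }

        public static void main(String[] args) { makeTea(); }
    }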

Another way to look at levels of abstraction is in the way that the same series of events can be interpreted in a hierarchy of meaning. For example, this text has meaning as a series of sentences, but it can also be interpreted as just a series of words, which are just a series of letters, which are just a series of ASCII codes, which are just a series of hex digits, which are just a series of binary digits, which are just a series of alterations between zero and five volts on a chip. Same universe, but multiple simultaneous levels of interpretation of the same events.
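The same point in runnable form: one piece of text read off at several of those levels at once (my choice of levels; the voltage level is below what a program can show us).

    public class Levels {
        public static void main(String[] args) {
            String word = "Hi";                            // level: words
            for (char c : word.toCharArray()) {            // level: letters
                int code = c;                              // level: ASCII/character codes
                System.out.printf("%c  code=%d  hex=%s  binary=%s%n",
                        c, code,
                        Integer.toHexString(code),         // level: hex digits
                        Integer.toBinaryString(code));     // level: binary digits
            }
        }
    }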

The WHAT defines the intention to do some action, and the HOW is the planned (in advance or on the fly) set of more detailed actions to accomplish it. One can describe and explain the action at any level, but confusion occurs when jumping around from one level of abstraction to another. This is a problem when programmers or philosophers do it. Intermixing instructions at one level with more detailed instructions from lower levels makes it hard to follow the logic at either level. Non-programmers can understand this by imagining a cake recipe that goes into the instructions for growing wheat in the section that “desires” a cup of flour.

SO, human agency is equivalent to the “top level of abstraction” that is deciding WHAT ultimate goal state is desired, and it entails all the lower levels. And like a software process (i.e. agent), there is a “thread of control” that threads through the call stack, traversing down through the levels of HOW, returning back up through layers of WHAT.

Intention

What is it for a person to “will” or “intend” an action? Does the intention to raise one’s arm manifest itself as a brain state that can be seen in some scanner, as distinct from the actual activity of raising that arm? Jennifer Hornsby rejects “the physicist’s Fundamentalism” because it lacks “intentionality”[5]. As described earlier, each level of abstraction can simultaneously tell the story in its own terms, BUT that doesn’t mean we can see “intention” at any level just by looking at “what happened”. In the first place, it is hard to translate upward in the abstraction layer cake. Secondly, looking only at the events that occur leaves out the intentional logic paths not taken.

Even though programmers know that there is “intention” in programs (because that’s how we write them!), in programs written by someone else it is often difficult to divine what it is. To translate back up to the “WHAT was intended” from the “HOW it was implemented” is so hard that we tell programmers to write it explicitly in comments in the code. When faced with fixing a flawed program whose intention isn’t clear, programmers will prefer to rewrite it from scratch rather than trying to figure out what it was doing.

Programmers know that an ICE (in-circuit emulator) instruction trace (think flight data recorder for programs) will not show all the “intention” in a program because it will not show the paths in the program logic that were not taken. You might infer that a decision was being made by the code when it executed certain test and branch instructions, but you won’t see the “road not travelled”. That’s why in debuggers it is crucial to have access to the source code to see the complete logic, whether it was executed or not; otherwise one could not diagnose and correct any incorrect translations of “intention” at one level of abstraction into a “plan of action” at the next lower level.
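A made-up illustration of the “road not travelled”: a trace of one run of this code records only the branch that executed, but the intention lives in both branches of the source.

    public class Thermostat {
        static String control(int temperatureCelsius) {
            if (temperatureCelsius > 30) {
                return "turn on the fan";     // taken only on hot days
            } else {
                return "leave the fan off";   // on a cool day, a trace never shows the hot-day line
            }
        }

        public static void main(String[] args) {
            // The trace of this run shows only the "leave the fan off" path.
            System.out.println(control(22));
        }
    }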

Also, with regard to looking for intention in brain (or computer memory) states: intention is in the program logic, not in the “state data” nor in the instruction trace. BUT intentions are neither mystical nor abstract; they are physically embodied somewhere. With computer programs, intention is in the encoded logic, which sits side by side with state data in memory hardware, because, at the right level of abstraction, the encoded intentional logic IS just data.
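One contrived way to see “encoded logic sitting side by side with state data” in a workaday language (all names invented): a bit of behavior stored in the same kind of container as the plain state it operates on.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.IntUnaryOperator;

    public class LogicAsData {
        public static void main(String[] args) {
            Map<String, Integer> state = new HashMap<>();           // plain state data
            state.put("lightLevel", 0);

            Map<String, IntUnaryOperator> logic = new HashMap<>();  // encoded logic, stored as data too
            logic.put("flipSwitch", level -> level == 0 ? 100 : 0);

            int after = logic.get("flipSwitch").applyAsInt(state.get("lightLevel"));
            System.out.println("light level after flipping the switch: " + after);  // 100
        }
    }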

Now, like the rules of action theory, workaday computer programs lay out their “intentions” as designed plans in explicit, rational language. In artificial intelligence programs, the intentions are often implied via weighted scoring of alternative actions. And the brain may act more like the ant colony example of emergence. The “purposeful” behavior of the colony as a whole “emerges” out of the actions of all the individual ants, who presumably have no clue about the “intentions” of the colony. So brain/colony intentions may not be “designed” as much as “evolved”. On the other hand, programmers are painfully aware of inexplicable behavior by programs that they designed. And, lest non-programmers assume that rationally designed program behavior can’t mindlessly evolve, there are many times when bugs get fixed by making a local logic change with no idea of the global consequences; and if that band-aid causes a new bug over there, then another band-aid is placed over there, and so on, until the behavior has evolved just enough to pass inspection. However, in all those cases, while the intentions at one level may be hard to map to those at another, they are, in fact, all there at once.
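For the weighted-scoring style, a toy sketch (the candidate actions and their scores are invented): the “intention” is whatever alternative scores highest, never written down as an explicit plan.

    import java.util.Map;

    public class ScoredChoice {
        public static void main(String[] args) {
            Map<String, Double> scores = Map.of(
                    "cross the road", 0.9,   // food spotted on the other side
                    "stay put",       0.4,
                    "peck at gravel", 0.6);

            String chosen = scores.entrySet().stream()
                    .max(Map.Entry.comparingByValue())
                    .get()
                    .getKey();

            System.out.println("highest-scoring action: " + chosen);  // cross the road
        }
    }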

SO, why DID the chicken cross the road? Well, because the muscles in the left leg contracted, causing it to move forward, then the muscles in the right leg…


[Ed. Note - 12/11/12: as per my disclaimers, once I start looking for my epiphanies on the net, I find them. E.g., in this case, see "Aristotle's Four Causes". Congrats Bruce, your musings on the various meanings of "why" were done better 2400 years ago...read more!]

[1] Philosophy Bites podcast on agency, http://philosophybites.com/2008/06/jennifer-hornsb.html
[2] Casati, Roberto and Varzi, Achille, "Events", The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), http://plato.stanford.edu/archives/fall2008/entries/events/
[3] Wilson, George, "Action", The Stanford Encyclopedia of Philosophy (Fall 2009 Edition), http://plato.stanford.edu/archives/fall2009/entries/action/
[4] Smart, J. J. C., "The Identity Theory of Mind", The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), http://plato.stanford.edu/archives/fall2008/entries/mind-identity/
[5] Hornsby, Jennifer, Agency and Actions, Cambridge University Press, 2004, http://eprints.bbk.ac.uk/95/
[6] Jacob, Pierre, "Intentionality", The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), http://plato.stanford.edu/archives/fall2008/entries/intentionality/



