Wednesday, September 16, 2009

Counting Intentional Non-events

When I wrote earlier about a programmer’s take on the debates in the Philosophy of Action, there was a particular philosophical argument that I wanted to delve into separately because I have a suggested solution. [Notice how my intention was to not write about this while writing earlier.]

Recall that in Action Theory, loosely speaking, Events are general cause & effect “things that happen”, and Actions are things that someone (aka the Agent) does on purpose, usually involving some “bodily movement” to get things rolling. One of the debates is whether Actions are, in fact, Events or not. [For object-oriented programmers, the question is whether “class Action extends Event” or not.] Jennifer Hornsby[1] argues that the “standard story of action”, with its focus on body movements, and its claims that all actions are events, is off-track because it leaves out NOT moving (e.g. not eating that ice cream because of a desire to be thinner, spoiling the party by not turning up, etc.).

There is a subtle philosophical point here: Events are considered particulars, i.e. individual things that “exist” in the sense that each is associated with a span of time and a region of space (when and where the event occurred), and one can (figuratively) “point” to it, and count it, and tell it apart from other events. Additionally, it is common in philosophy to maintain that a “hole” (the lack of something) doesn’t exist as a particular. After all, I can’t count how many green things are not in this room, nor point to each of them, nor tell them apart from each other. So, the argument goes that since you can intentionally “not wink at me” (by definition, an action), Action can’t be a subtype of Event.

However, in thinking about what it would mean for a computer program to “not do some action”, I realized there was a way to pinpoint those occasions. For example, in game-playing programs, moves are considered, compared with other moves, and ultimately one is chosen with the rest being rejected. It seemed to me that the rejected moves are “not done” in an “intentional” fashion, in contrast to moves that were never even considered. Game programs are a particular form of the general class of programs that, not unlike humans, generate “plans” to reach some goal. Actions are considered or rejected as the plan is being put together. Also, in everyday programs, the logic decides whether or not to call a subroutine, so routines are sometimes intentionally not called, in contrast to not doing things that aren’t even in the program. It makes sense to say that routine A was “not called” x number of times, but no answer other than zero seems to make sense when asking how many times the program did not go to Mars.
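As a minimal sketch (the routine names and the disk-space condition are made up for illustration), here is the kind of logic that makes an intentional non-call countable: the program explicitly considers an action and sometimes rejects it, while actions that appear nowhere in the program can only be “not done” zero times.

#include <iostream>

// Hypothetical action the program might take.
void takeSnapshot() { /* ... do the actual work ... */ }

int snapshotsTaken = 0;
int snapshotsConsideredButSkipped = 0;    // the intentional non-events

// The logic explicitly considers the action and sometimes rejects it.
void maybeTakeSnapshot(bool diskNearlyFull) {
    if (diskNearlyFull) {
        ++snapshotsConsideredButSkipped;  // considered and rejected: countable
    } else {
        takeSnapshot();
        ++snapshotsTaken;
    }
    // "Go to Mars" appears nowhere in this program, so the count of times it
    // was intentionally not done can only be zero.
}

int main() {
    maybeTakeSnapshot(false);
    maybeTakeSnapshot(true);
    maybeTakeSnapshot(true);
    std::cout << "done: " << snapshotsTaken
              << ", intentionally not done: " << snapshotsConsideredButSkipped << "\n";
}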

So, I came up with a theory of how one can count, for a given situation, actions that are not a positive performance (i.e. that are not “doing something”). By “given situation” I mean a particular period of time and an area of space (for example, the duration of time we were together at this table for lunch today). To count how many times an event A did not occur in a given situation (e.g. you not winking at me during lunch):
1) establish the minimum time span that actually doing A once would take, including any time needed to separate one occurrence from another. Dividing the situation duration by the minimum action duration would establish a maximum potential count. (e.g. lunch was an hour, a wink takes a second plus a second in between, therefore a potential maximum 1800 winks over lunch)
2) establish what conditions are required for A to occur once (e.g. we both must be at the table with eye contact), and eliminate all the time spans where the conditions did not hold (e.g. I was in the restroom for 10 mins, you were talking to the next table for 15 minutes, so a tighter maximum would be 1050 winks)

But, to count how many times the action A did not occur in a given situation, there must be intention. In order for “not doing something” to be an intentional action, the agent must consider doing A and decide not to. And, the required conditions have to be met otherwise the action couldn’t have taken place anyway (i.e. motive and opportunity).
3) So, the actual number of occurrences of “not doing something” is however many times the agent considered doing it, and didn’t, when they could have. E.g. you thought about winking 5 times while we had eye contact but only did so once; the 3 times you thought about it while I was in the restroom don’t count, so you didn’t wink at me 4 times.
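Here is a back-of-the-envelope sketch of those three steps, plugging in the lunch numbers used above (all values are the hypothetical ones from the example):

#include <iostream>

int main() {
    // Step 1: maximum potential count from durations (in seconds).
    int situationDuration = 60 * 60;            // lunch lasted an hour
    int minActionDuration = 1 + 1;              // a one-second wink plus a one-second gap
    int potentialMax = situationDuration / minActionDuration;                            // 1800

    // Step 2: remove the spans where the required conditions didn't hold.
    int timeWithoutConditions = (10 + 15) * 60; // restroom + talking to the next table
    int tighterMax = (situationDuration - timeWithoutConditions) / minActionDuration;    // 1050

    // Step 3: intentional non-winks = times considered with opportunity, minus times done.
    int consideredWithOpportunity = 5;          // thought about winking during eye contact
    int actuallyDone = 1;
    int intentionallyNotWinked = consideredWithOpportunity - actuallyDone;               // 4

    std::cout << potentialMax << " " << tighterMax << " " << intentionallyNotWinked << "\n";
}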

Interestingly, considering and rejecting an action may take much less time than actually doing it, so the maximum potential number of times something wasn’t done may be much greater than the maximum potential number of times it was. One might get-pregnant twice a year but not-get-pregnant 3 times a week.

Finally, because one must consider (not) doing something in order for it to count as an action, one can say confidently that some things did not occur zero times. There were zero times that I did not give you a unicorn (because there are no such things, hence no opportunity). There were zero times that I did not stand on your shoulders because it never even entered my mind.


[1] Jennifer Hornsby, Agency and Actions, Cambridge Univ Press, 2004
http://eprints.bbk.ac.uk/95/

[2] Jennifer Hornsby, Actions (International Library of Philosophy), Routledge, 1980


Tuesday, September 15, 2009

Why did the Action Philosopher cross the road?

“Why” is such an ambiguous question; it asks about an event…
  1. what were the “causes” (i.e. the physical producers of effects)
  2. what were the “actions” (i.e. who did what)
  3. what were the “reasons” (i.e. thoughts, motivations by someone that led to their (in)actions).

And to make things worse, each of those categories of questions can be answered at many different levels of abstraction. “Why did Y die?” could be answered with:
  1. causes: “Y’s car crashed” or more specifically “Y’s brakes failed”
  2. actions: “X sabotaged Y’s car” or more specifically “X cut the brake line”
  3. reasons: “X wanted revenge” or more specifically “X wanted Y dead”.
And still worse, each of those categories exists in a chain such that each answer to “why” produces a new “why” question. Each cause has its own cause, actions occur in ordered sequences, and reasons are triggered by previous events. So, “why did X want revenge?” because “Y intentionally ruined X’s wedding”, “well WHY did Y do that?”, and so on. “What is happening” is an equally ambiguous question because it can also be answered in terms of causes or actions or reasons.

The Chicken crossing the road joke is so old, with so many new punch lines, that people sometimes don’t get the original joke anymore. It, of course, is that the answer “to get to the other side” is too much of a HOW rather than a WHAT (i.e. confusing proximate and ultimate goals). In the Philosophy of Action, which seeks to differentiate things done on purpose from things that merely happen, many philosophers have difficulty in separating what a person is doing from how they are going about doing it (not to mention, where their concept of “intention” fits in). Distinguishing what from how is so central to writing specifications, programs & documentation, that we Programmers should have something useful to pass along to Philosophers. I would teach them about our knowledge of: levels of abstraction, the difference between top-down and bottom-up views of the world, and the difference between the “intention” of the logic versus an instruction trace. Not that programmers have no problems in this regard…

Top Down: Good Grief! Why don’t you move this code out into subroutines? There are fifty pages of code in one switch statement here! I can’t see the forest for the trees.

Bottom Up: Good Grief! How can I tell what the program is doing when it is ten levels deep in function calls!

Top Down: At the very least, put some comments in here to say what the program is doing.

Bottom Up: I don’t believe in comments because the source code tells you exactly what the program is doing.

Top Down: Ok, so what does this part of your program do?

Bottom Up: Well, “i” is set to zero, and then it gets incremented each time through this loop here, and…

Top Down: No, really. Put comments in English telling what the program was supposed to be doing.

Bottom Up: Ok…. ++i; /* Increment i */



Events & Causes vs Actions & Reasons

“What is left over if I subtract the fact that my arm goes up from the fact that I raise my arm?” - Ludwig Wittgenstein, Philosophical Investigations §621
Human Agency, Intention, Actions, and Events are topics in Philosophy of Action and Philosophy of Mind. Events are simply “things that happen”[2] in a chain of causes and effects. Agency is a philosophical term for what programmers call “a thread of control”. Human Agency is the capacity of human beings to make choices and act upon them. In Action Theory, if an AGENT A has the DESIRE for X plus the BELIEF that doing Y will result in X, then A will have the INTENTION of doing the ACTION Y. If A actually does Y, then that desire and belief should be considered CAUSES of the EVENT of Y happening (…or not: “There has been a notable or notorious debate about whether the agent's reasons in acting are causes of the action”[3]). Unless specifically noted as unintentional, actions are only those that an agent does intentionally, so falling off a cliff would not be an “action” but jumping off would. Actions usually involve “bodily movements” that effectively translate mental intentions into physical events. (“Actions are bodily movements that are caused and rationalized by an agent’s desire for an end and a belief that moving her body in the relevant way will bring that end about.”[5]) Further, there is a certain moral component to human (in)actions that doesn’t apply otherwise. One can ask “should this have happened” about human actions (say, not pre-evacuating New Orleans), which would make no sense for events & actions that involve no human decisions (e.g. Katrina hits New Orleans, or, the spider killed the fly). Finally, in the Identity Theory of Mind[4], it is held that mental states, processes, and events (like desiring to eat ice cream) are mirrored by brain states, processes, and physical or physiological events (like this and that neuron firing).
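For the programmers following along, here is a toy sketch of that desire-plus-belief-yields-intention chain. The structure and names are my own illustration, not a standard formalism from the philosophical literature.

#include <iostream>

// Toy model: a DESIRE for X plus a BELIEF that doing Y brings about X
// yields an INTENTION to do Y; acting on it is the EVENT of Y happening.
struct Agent {
    bool desiresToCoolDown = true;        // DESIRE for X
    bool believesIceCreamCools = true;    // BELIEF that doing Y results in X

    void act() {
        bool intendsToEatIceCream =       // INTENTION to do the ACTION Y
            desiresToCoolDown && believesIceCreamCools;
        if (intendsToEatIceCream) {
            std::cout << "the agent eats the ice cream\n";  // the bodily movement / EVENT
        }
    }
};

int main() { Agent a; a.act(); }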

Background: philosophers/metaphysicians are interested in figuring out what is “really real” versus just a story our mind made up in trying to interpret all those real-world inputs we get from our senses. They have decided that a way to prove that a thing is “real” is to show that it can affect an event happening or not: “a test of the reality of a property is that it can be causally efficacious”[6]. So, that is why there is so much interest in the relationship between actions/reasons/intentions and the scientifically measurable physical events/causes. If one can’t show how the intention to perform the action of eating an ice cream cone somehow translates into a synapse causing a tongue muscle contraction event, then maybe actions and intentions aren’t really real.

So! All settled then! Well no. Here is just a sampling of technical disagreements between philosophers:
  • Is the “action” of moving a body part the same as the actual movement of the body part? Or is the action just the “intention” bit?
  • Contrary to the Identity Theory of Mind, some say[5] that you can’t map intentions onto brain states because intentions aren’t events, so you won’t find any corresponding physical events to map to. That’s why explanations in everyday conversation, even by pro-ITOM philosophers and scientists, are in terms of intentions (“she crossed the road to catch the bus”) rather than mental and brain states. Still others say that “intention” isn’t real; it is only a story we tell to explain the world.
  • Some say that “that which causes an action constitutes the agent’s reason for it”, while others say that you can’t explain the “reason” why somebody did something just by looking at what “caused” it.[5]
  • Causal Fundamentalism says that everything can be explained at the “physics” level of causes and effects; others say no.[5]
  • Some say actions are a subclass of events; others say actions are a relationship between an agent and an event, i.e. actions are instances of the relation (agent, “bringing about”, event). [Programmers think: difference between “class Action extends Event” versus “create table BringingAbout(ActionID,AgentFK,EventFK)”]
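For the programmers, a rough sketch of those two competing modelings (the type names are just illustrations):

// Modeling 1: actions ARE events ("class Action extends Event").
struct Event  { /* the when and where of something happening */ };
struct Action : Event { /* plus the agent and their intention */ };

// Modeling 2: an action is a relation between an agent and an event,
// i.e. an instance of (agent, "bringing about", event).
struct Agent {};
struct BringingAbout {
    const Agent* agent;   // who did it
    const Event* event;   // what they brought about
};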
So, does the action of “turning on a light” include “the light illuminating”, or does it stop at “flipping the switch”, or at “the hand movements that flip the switch”, or at “trying to move the hand”? Does it include the firing of the neurons? the muscle contractions? the moving of the bones? Are they all separate actions, or not actions at all? Are they the only actions with no overarching “flip the switch” action?

As I wrote about the mind/body problem, I think we have a levels of abstraction problem here.

Programmers understand that systems are built in layers where each layer exposes WHAT it can do on top, hiding the HOW it does it underneath. The how of one layer uses the what of the layers below it. There are layers on top of layers, and there are layers within layers. Computer software forms a layer on top of computer hardware, but there are multiple layers within each. Within software there are layers for programs on top of High Order Languages on top of Assembly language. Within hardware there are layers for processors & memory on top of logic circuits & amplifiers on top of transistors & resistors on top of literally layers of silicon dioxide & gallium arsenide. Some of these layers are so complete and versatile that they form their own independent paradigm such that events can be completely described at their level with no reference to layers above or below it.
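A small sketch of that layering (the function names are invented for illustration): each layer’s public face is the WHAT, and its body is the HOW, written in terms of the WHAT of the layer below.

#include <iostream>
#include <string>

// Bottom layer: exposes WHAT it can do (emit one byte), hides HOW it does it.
void sendByte(char b) { std::cout << b; /* ...somewhere below, voltages wiggle... */ }

// Middle layer: its HOW is written using the lower layer's WHAT.
void sendText(const std::string& s) { for (char c : s) sendByte(c); }

// Top layer: again, its HOW uses only the WHAT of the layer below.
void sendGreeting() { sendText("hello\n"); }

int main() { sendGreeting(); }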

Another way to look at levels of abstraction is in the way that the same series of events can be interpreted in a hierarchy of meaning. For example, this text has meaning as a series of sentences, but it can also be interpreted as just a series of words, which are just a series of letters, which are just a series of ASCII codes, which are just a series of hex digits, which are just a series of binary digits, which are just a series of alternations between zero and five volts on a chip. Same universe but simultaneous multiple levels of interpretation into events.
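A quick sketch of that hierarchy of interpretation: one and the same couple of bytes read as text, as ASCII codes, and as hex digits (the binary digits and chip voltages are left to the imagination).

#include <cstdio>

int main() {
    const char text[] = "Hi";
    std::printf("as text : %s\n", text);                  // a (very short) word
    for (const char* p = text; *p != '\0'; ++p) {
        std::printf("as ASCII: %3d   as hex: 0x%02X\n",   // the same bytes, reinterpreted
                    *p, static_cast<unsigned char>(*p));
    }
}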

The WHAT defines the intention to do some action, and the HOW is the planned (in advance or on the fly) set of more detailed actions to accomplish it. One can describe and explain the action at any level, but confusion occurs when jumping around from one level of abstraction to another. This is a problem when programmers or philosophers do it. Intermixing instructions at one level with more detailed instructions from lower levels makes it hard to follow the logic at either level. Non-programmers can understand this by imagining a cake recipe that goes into the instructions for growing wheat in the section that “desires” a cup of flour.

SO, human agency is equivalent to the “top level of abstraction” that is deciding WHAT ultimate goal state is desired, and it entails all the lower levels. And like a software process (i.e. agent), there is a “thread of control” that threads through the call stack, traversing down through the levels of HOW, returning back up through layers of WHAT.

Intention

What is it for a person to “will” or “intend” an action? Does the intention to raise one’s arm manifest itself as a brain state that can be seen in some scanner, as distinct from the actual activity of raising that arm? Jennifer Hornsby rejects “the physicist’s Fundamentalism”, because it lacks “intentionality”[5]. As described earlier, each level of abstraction can simultaneously tell the story in its own terms, BUT, that doesn’t mean being able to see “intention” at any level just from looking at “what happened”. In the first place, it is hard to translate upward in the abstraction layer cake. Secondly, looking only at the events that occur leaves out the intentional logic paths not taken.

Even though programmers know that there is “intention” in programs (because that’s how we write them!), in programs written by someone else it is often difficult to divine what it is. To translate back up to the “WHAT was intended” from the “HOW it was implemented” is so hard that we tell programmers to write it explicitly in comments in the code. When faced with fixing a flawed program whose intention isn’t clear, programmers will prefer to rewrite it from scratch rather than trying to figure out what it was doing.

Programmers know that an ICE (in-circuit emulator) instruction trace (think flight data recorder for programs) will not show all the “intention” in a program because it will not show the paths in the program logic that were not taken. You might infer that a decision was being made by the code when it executed certain test and branch instructions, but you won’t see the “road not travelled”. That’s why in debuggers it is crucial to have access to the source code to see the complete logic, whether it was executed or not, otherwise one could not diagnose and correct any incorrect translations of “intention” at one level of abstraction into a “plan of action” at the next lower level.
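A tiny, hypothetical illustration of the point: a trace of one run of this code records only the branch that executed; the rejected branch, where part of the “intention” lives, shows up only in the source.

#include <iostream>

void handleRequest(bool authorized) {
    if (authorized) {
        std::cout << "serving the request\n";    // a trace of this run shows only this path
    } else {
        std::cout << "rejecting the request\n";  // the "road not travelled": absent from the
                                                 // trace, visible only in the source logic
    }
}

int main() { handleRequest(true); }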

Also, with regard to looking for intention in brain (or computer memory) states, the intention is in the program logic, not in the “state data” nor the instruction trace. BUT, intentions are not mystical nor abstract; they are physically embodied somewhere. With computer programs, the intention is in the encoded logic, which sits side by side with state data in memory hardware. Because, at the right level of abstraction, the encoded intentional logic IS just data.

Now, like action theory rules, workaday computer programs lay out the “intentions” in designed plans of explicit rational language. In artificial intelligence programs, the intentions are often implied via weighted scoring of alternative actions. And, the brain may act more like the ant colony example of emergence. The “purposeful” behavior of the colony as a whole “emerges” out of the actions of all the individual ants who presumably have no clue about the “intentions” of the colony. So brain/colony intentions may not be “designed” as much as “evolved”. On the other hand, programmers are painfully aware of inexplicable behavior by programs that they designed. And, lest non-programmers assume that rationally designed program behavior can’t mindlessly evolve, there are many times when bugs get fixed by making a local logic change with no idea of the global consequences; and if that band-aid causes a new bug over there, then another band-aid is placed over there, and so on, until the behavior has evolved just enough to pass inspection. However, in all those cases, while the intentions at one level may be hard to map to those at another, they are, in fact, all there at once.

SO, why DID the chicken cross the road? Well, because the muscles in the left leg contracted causing it to move forward, then the muscles in the right leg…


[Ed. Note - 12/11/12: as per my disclaimers, once I start looking for my epiphanies on the net, I find them. E.G. in this case, see "Aristotle's Four Causes". Congrats Bruce, your musings on the various meanings of "why" were done better 2400 years ago...read more!]

[1] Philosophy Bites podcast on agency
http://philosophybites.com/2008/06/jennifer-hornsb.html
[2] Casati, Roberto, Varzi, Achille, "Events", The Stanford Encyclopedia of Philosophy (Fall 2008 Edition)
http://plato.stanford.edu/archives/fall2008/entries/events/
[3] Wilson, George, "Action", The Stanford Encyclopedia of Philosophy (Fall 2009 Edition)
http://plato.stanford.edu/archives/fall2009/entries/action/
[4] Smart, J. J. C., "The Identity Theory of Mind", The Stanford Encyclopedia of Philosophy (Fall 2008 Edition)
http://plato.stanford.edu/archives/fall2008/entries/mind-identity/
[5] Jennifer Hornsby, Agency and Actions, Cambridge Univ Press, 2004
http://eprints.bbk.ac.uk/95/
[6] Jacob, Pierre, "Intentionality", The Stanford Encyclopedia of Philosophy (Fall 2008 Edition)
http://plato.stanford.edu/archives/fall2008/entries/intentionality/




Saturday, September 12, 2009

Prestigious Programmers Process Persons Philosophically

Like the “mind/body problem” discussed earlier, I think Programmers have interesting input for Philosophers on another perennial topic of theirs: “personal identity” and the side topics of teleportation and duplicating people (“persons” in philosophy speak). In addition to a perspective on whether the transported person is the same as the original (which is the normal focus of attention), programmers also have an insight into an oft overlooked aspect: Why would a duplicate ever act differently than the original?

Beam Me Up, Scotty

In [Western] Philosophy, “personal identity” is the thing that links a past, present, and future “you” such that it is right to hold “present you” accountable for the acts of “past you”, and “present you” cares about what happens to “future you”. In many philosophical discussions of “what is a person”, and “what is consciousness”, the topic of teleportation (aka teletransportation) comes up. To the non-philosopher, it is probably surprising that science fiction is discussed in scholarly debates about what is “really real”. The Star Trek transporter (and many variations of it) is discussed because it stops participants from getting away with shallow “common sense” answers. It presents problems with simplistically defining a person as the collection of atoms that embody that person, or as an unbroken series of memories, or as a soul (of the Judeo-Christian type).
For example:

  • When you re-materialize from transporting, if you are made up of different atoms than constituted you before you left, are you really you? Or, are you a clone with the original you having died?
  • When you re-materialize, are you still you because you are made up of the same unbroken series of memories? But, if you say yes, what happens if the transporter fails to destroy the original? If the one now on Mars is you then who is the person still standing on the transporter pad here on Earth?
  • If you consider teleportation as constituting a “break” in memory/consciousness, and therefore you would not be the same person, then why are you considered to be the same person after being unconscious with no memories for 8 hours each night?
  • If you have/are a soul then what happens to it? Does the re-materialized you get it? What if the original you wasn’t destroyed?
Artie Deco and 4-Q-2[1]

A variant on transporter discussions is simply around duplicating a person, and it is here that I would acquaint philosophers with the Unix system call: fork().

In the Unix operating system, there are “processes” that run (apparently) simultaneously, and are analogous to persons. Like a person, each process has its own memories, its own intentions, and its own “location in space-time”. And, via the fork system call, processes can be duplicated. In fact, the fork scenario is even stranger than a transporter…

Imagine a special small room, and inside is a black vending machine with a single button. If you enter the room and push the button, a cell phone is dispensed. Except that when you pushed that button, a copy of you was created in another dimension, and to both you and the copy, it seems like nothing has happened. It appears to each that they are the one who just pressed the button and was given a phone. When each copy leaves the room, they are back in the same original dimension, and each may encounter and interact with the other.
Now, how would you know if you are the original or the copy? Do “original” and “copy” even have meaning here? Most people would try to answer in terms of “who has the original atoms” and the like, and if one copy did have them then the situation is the same as the broken transporter scenario. So, imagine further that until either copy does something to cause its body to diverge from the other, the copies even share the same atoms. On an atom-by-atom basis, a copy only gets its own private atom once one of the copies does something that diverges on that particular atom. And, it is effectively random which copy keeps any particular atom as they diverge over time. SO, which one is the original, and which one is the duplicate? I.E. which one is the “same” person as the one that originally hit the button?

As contrived as that scenario sounds, that’s what happens when Unix forks a process (with the help of copy-on-write virtual memory). And programmers have an answer as to which one is the “same” as the original. …drum roll please… ahem…

“It is so arbitrary that God has to tell each whether they are the original or the copy!”

What do you think the cell phone is for? Actually, the cell phone serves two purposes (to keep the analogy really strict); each phone has a text message with either the phone number of the copy (meaning that YOU are the original), or a zero, meaning that you are the copy. The original is free to call the copy.
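That text message is exactly how the fork() system call behaves: the parent (the original) gets back the child’s process ID, i.e. the copy’s “phone number”, while the child (the copy) gets zero. A minimal sketch:

#include <unistd.h>    // fork, getpid
#include <cstdio>

int main() {
    pid_t result = fork();          // one process walks in, two walk out
    if (result == 0) {
        std::printf("I am the copy (pid %d)\n", (int)getpid());
    } else if (result > 0) {
        std::printf("I am the original; my copy's number is %d\n", (int)result);
    } else {
        std::perror("fork");        // the vending machine was out of order
    }
    return 0;
}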

But I Don’t Want To Be The Copy

Here is the part that Star Trek, et al, usually get wrong, and Philosophers don’t seem to ponder: Why do the copies act differently than each other? One mitigating factor is that each copy is usually materializing in a different location, or a different time, or with special defects (like milquetoast Capt. Kirk versus Mr. Hyde Capt. Kirk), so they can be forgiven for not having each copy act in lock step. But in the fork scenario, they exist in the same situation, and therefore would do exactly the same thing as each other. Whoa! What about Free Will and all that stuff? Well, given the same location, environment, memories, “programming” if you will, why shouldn’t they do exactly the same thing as each other?

With Unix processes, unless the programming takes note of whether it is the original or not, and unless it does different things based on that information, each copy will attempt to do the same things. They will only diverge as they both attempt to do something that only one will be allowed to do. For example, locking a file, or reading the very next bit of data from an already open file. The Unix operating system (aka God) will arbitrarily pick one of the copies to succeed when both try simultaneously. [Think multiple CPUs, all you geeks shouting that they don’t really do things simultaneously.] So, the copies begin to fitfully diverge as one succeeds and then does what it planned to do upon success, and the other fails and then does what it planned to do upon failure.
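A sketch of that divergence (the lock file name is made up): both copies run the very same code and both try to grab the same exclusive lock, the operating system lets exactly one succeed, and only then do the two start doing different things.

#include <unistd.h>    // fork
#include <fcntl.h>     // open, O_CREAT, O_EXCL
#include <cstdio>

int main() {
    fork();                                           // now there are two identical copies
    // Both copies attempt the same thing; O_EXCL guarantees only one open() can succeed.
    int fd = open("/tmp/the-prize.lock", O_CREAT | O_EXCL, 0644);
    if (fd >= 0) {
        std::printf("I got the lock; carrying out plan A\n");
    } else {
        std::printf("I didn't get the lock; falling back to plan B\n");
    }
    return 0;
}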

Now, programmers know that to be productive, they must decide what they want each copy to do, and write the program in such a way that each does what it is supposed to. With people, you must know what will happen when you press that vending button, and why you are pressing it, and what you want each copy to do, AND HAVE THE WILLPOWER TO DO IT!

It does no good to have a plan about what the “copy” is supposed to do if you have the personality that is going to decide after it’s too late that you don’t want to be the copy! If your "programming" is such that, after pressing the button, you decide to not look at the text message because now you don't really want to know...well then the OTHER copy is going to do the exact same thing! [If you do look but decide to ignore it then, unlike a program, it gets to be a psychological issue of whether you really can ignore it.]

There is a wonderful exploration of all these aspects in the movie “The Prestige”. I would point out each scenario, but I so want you to see it if you haven’t already, and I don’t want to spoil all the twists and turns at the end.

[1] http://www.amazon.com/Hardware-Wars-30th-Anniversary-Collectors/dp/B001OTSFE4/




Wednesday, September 2, 2009

Programmer: What Mind/Body Problem?

While my general project these days is trying to teach programmers the many things they can learn from Philosophy, I am once again struck by how many philosophers get confused by things that programmers understand quite well. In today’s particular case, it is while reading "Which Physical Thing Am I?"[1], and once again, it is related to the "mind/body problem". That essay ends “if this philosophic hypothesis seems implausible to you, you try to formulate one that is less implausible.” Well, my reply is “let me teach you about levels of abstraction and RAID drives”.

Mind/Body Problem

Thinkers over time have recognized that a person remains the “same person” even though their body undergoes changes over time, even to the extent of losing parts; say, an arm, or a leg. In 1954, it was discovered that "98% of the atoms in the human body are renewed each year"[2]. This has led many to conclude that what makes a person a person, must be different and separate from their body. For some, this different thing is a "soul". For others, it is a "mind" or “consciousness”. In general, this is often called mind-body dualism.

Others have concluded that, because nothing exists outside of physical reality, the mind can't be different from the body, but since the body does change over time, there must therefore be some subset of the body (say, the brain) that is identical to the mind. Jews and Muslims believe[4] that there is a bone in the spine, called the "Luz", that doesn't decay, and from which the entire body will be rebuilt during resurrection. Dr. Roderick Chisholm thinks that, while it probably isn't the Luz bone, it likely is something microscopic within the brain. In "Which Physical Thing Am I?"[1] he writes,

"I am literally identical with some proper part of this macroscopic body, some intact, nonsuccessive part that has been in this larger body all along."
[By "nonsuccessive", he means something that is NOT a series of different parts over time. By contrast, an army regiment is successive in that it can exist for hundreds of years even though no individual soldier does.]

Software/Hardware Non-problem

Now keep in mind that the Stanford Encyclopedia of Philosophy[3] describes Chisholm as "widely regarded as one of the most creative, productive, and influential American philosophers of the 20th Century", and that Chisholm published the above in 1989 (not 1889), and it is included in anthologies[1] as recently as 2008 (not 1908). It is not just ancient philosophers who get confused over this topic. What are they confusing that programmers know?

Programmers know that software (like the mind) runs on hardware (like the body), but it isn’t synonymous with the hardware. That doesn’t make software mystical or beyond physical reality; it is just at a higher level of abstraction. At that more abstract level, software exists as an information process whose data state at any given point in time is embodied in some set of physical components. But it would be the same software even if running on other equivalent hardware.

In fact, there are several layers of abstraction between your typical program and hardware, what with Java code compiled into virtual machine code, interpreted in a VM written in C code, that compiled to Intel assembly code, that assembled into hexadecimal machine code… And we haven’t even made the leap to electrons flowing through transistors yet! The point being that software doesn’t care which transistors, even though every i+=1; statement runs in lock step with some transistors somewhere doing something specific to implement it. If that flip-flop didn’t flop then that bit wouldn’t change, and that auto-increment instruction wouldn’t increment, and so on. But a programmer understands that it isn’t the electrons running through the NAND gate that “makes” a program DO what it does, any more than some x=0; statement makes a Square.draw() method DO what it does. They all happen simultaneously at different levels of abstraction.

RAID fights bugs

To Chisholm’s particular point that continuous personhood requires (at least some) continuous body bits, programmers know that the “life span” of any particular set of data is not tied to any particular physical component. Information is independent of the media used to record, or embody, or carry a copy of, that information. A perfect example is a RAID device containing multiple disk drives. Data in a RAID is kept in multiple physical locations such that any particular location can be destroyed without losing the data. So, an arbitrary set of data (say, a file) can exist unchanged over time, even though the hardware storing it has completely changed. With a “hot swap” RAID, a system using that file would have no idea that a drive died, was removed, replaced with a new drive, and that new drive re-populated with data.
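A toy sketch of the mirroring idea (nothing like a real RAID controller, just the existence proof in miniature): the same data lives on two “drives”, either drive can be swapped out and rebuilt from its twin, and the file’s contents never depended on any particular drive.

#include <iostream>
#include <string>

// Toy mirror: every write goes to both "drives"; a swapped drive is rebuilt from its twin.
struct Mirror {
    std::string driveA, driveB;

    void write(const std::string& data) { driveA = data; driveB = data; }

    void hotSwapDriveA() {        // drive A dies and is replaced by a blank drive...
        driveA.clear();
        driveA = driveB;          // ...which is re-populated from drive B
    }

    const std::string& read() const { return driveA; }
};

int main() {
    Mirror raid;
    raid.write("the same old file contents");
    raid.hotSwapDriveA();                // the hardware underneath completely changed
    std::cout << raid.read() << "\n";    // the data is unchanged, as far as any user can tell
}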

And, just like a mind’s constantly changing data set, that file’s contents on the RAID device can be constantly changing. And those constant data changes are totally independent of the constant drive changes that might be taking place inside the RAID.

The point here is not “the brain is a RAID drive”; the point is that a RAID drive is an existence proof that ongoing software processes do not require “intact, nonsuccessive” hardware. And the larger point is that Mind and Body can both be understood as simultaneous lock-step processes at different levels of abstraction in the same way that C++ method calls are in lock-step with CMOS chip voltage changes.

[1] Metaphysics, the big questions, 2nd Ed., 2008, chap 35
[2] Time Magazine, Oct 11, 1954 quoting Dr. Paul C. Aebersold of Oak Ridge
http://www.time.com/time/magazine/article/0,9171,936455,00.html
[3] http://plato.stanford.edu/entries/chisholm/
[4] http://en.wikipedia.org/wiki/Luz_%28bone%29