Wednesday, May 19, 2010

A hole for every component, and every component in its hole

Amongst the surprisingly simple ideas that aren't so simple when thought about philosophically are holes.  There is a small library of publications on what exactly holes are; a summary can be found online in the Stanford Encyclopedia of Philosophy article on the metaphysics of holes.  One of the viewpoints it cites is: ‘There is no such thing as a hole by itself’ (Tucholsky, 1930).  This reminded me of one of my very first blog posts, from 2000, which I reprint here...

There is no such thing as a Component

I maintain that there is no such thing as a Component in the same way that there is no such thing as a donut hole. Just as the donut hole doesn't exist without a donut to define it, a Component doesn't exist without a Framework to define it. Using a printed circuit board as a metaphor for a framework, it's the "sockets", into which IC chips are meant to be plugged, that define components. So-called universal or standalone components are meaningless (and certainly useless) without some framework that expects components of the same purpose and interface.

OK, so what's your point? The point is that too many developers (and books on the subject) think about components as standalone chunks of functionality that can be "glued together" after the fact. They don't realize that the framework has to come first and foremost in conception and design. Szyperski doesn't get around to talking about frameworks until chapter 21 of his Component Software book, for heaven's sake.

Even physical components are like this. The prototypical component, the IC chip, was always designed within a family of chips that were meant to work together. They all needed the same voltage levels for zeroes, ones, and tri-states, the same amperage levels, the same clock rates, and so on. Other families used other voltage levels. The first reusable, interchangeable parts in history were for rifles. They were meant to be easy and quick to replace (as opposed to the hand-crafted muskets they were replacing), but they were meant specifically to make rifles!

Rummaging around a garage, you could find all sorts of "widgets" and "gizmos" that you might guess are components of something, but unless you know what framework they were meant to be a part of, they are not good for anything but door stops or paperweights. In other words, random components don't tend to fit together or work together.

Too many people are trying to make "universal" components without realizing that those components still work within some framework that allows them to be put together and communicate with each other. The problem is that other people doing the same thing have defined other "generic" frameworks that are nonetheless incompatible.

For example, the toys that baby boomers played with when they were young abounded with generic frameworks of universal components: Tinker Toys, Lincoln Logs, Erector Sets, LEGOs. They all had universal components within a generic framework that let you build anything. BUT, you couldn't mix Tinker Toy parts with Erector Set parts (without glue or duct tape).

Ah, you say. That's why I like duct-tape, weakly typed languages like Perl that let me glue parts together. Also, what about Play-Doh?! You could stick anything together with that! Yes, but there was a reason you made bridges out of Erector Sets instead of Play-Doh, and the same reasons apply to software systems (but strong versus weak typing is another discussion).

Objects versus Components

Until I had this epiphany about components as donut holes, I didn't have a good answer to the question "what's the difference between an object and a component?" I now understand that all objects ARE components, but not all components are objects. The framework that defines a set of components does not have to be an object-oriented framework. But every object-oriented language defines an object framework, one generic enough that any objects programmed in that language may interoperate with each other. Unfortunately though, as with Tinker Toys and Lincoln Logs, Java objects typically can't interact with Smalltalk objects.

In the Java language there are at least two levels of object framework. There are plain old Java objects (POJOs) and there are so-called JavaBeans. Whereas any property of a POJO can be accessed (assuming it's not protected by the "private" keyword) via a fooObject.barProperty syntax, only special properties may be accessed via the JavaBeans framework. JavaBeans are those objects that define special property accessor and mutator methods of the form getBarProperty() and setBarProperty(). "JavaBean" is the name given to any component that works within that specialized framework. To make matters confusing, however, it turns out that JavaSoft called more than one framework "JavaBeans" (arrgh!). There are even more specialized versions of JavaBeans that are made to work with fancy GUI toolkits.  And of course, they caused even further confusion by calling yet another (different) "widget", from yet another (different) framework, a JavaBean: the Enterprise JavaBean! So, without clearly focusing on frameworks, even JavaSoft confuses different component types with each other!
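To make the convention concrete, here is a minimal sketch (the class and property names are hypothetical): the POJO exposes its property as a plain field, while the JavaBean hides the same property behind the getXxx/setXxx method pair, plus a no-argument constructor, that bean-aware frameworks discover by reflection.

    // PlainWidget.java -- a plain old Java object (POJO): the property is just a
    // public field, accessed directly as widget.barProperty.
    public class PlainWidget {
        public String barProperty;
    }

    // WidgetBean.java -- a JavaBean-style component: the same property is reachable
    // only through the naming convention that bean-aware frameworks look for.
    public class WidgetBean implements java.io.Serializable {
        private String barProperty;

        public WidgetBean() { }                      // beans need a no-argument constructor

        public String getBarProperty() {             // accessor ("getter")
            return barProperty;
        }

        public void setBarProperty(String barProperty) {   // mutator ("setter")
            this.barProperty = barProperty;
        }
    }

The boilerplate is not the point; the point is that WidgetBean only "is" a JavaBean relative to the framework code that reflects on those method names.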

The moral? Don't fret that there is no such thing as a truly "universal" component. Don't spend energy trying to build them, or trying to build "single universal" frameworks. Focus on what is needed for your situation and design a well-crafted framework first and foremost. If it needs to work with other frameworks (like whatever Microsoft builds that won't integrate with anybody else's), understand that framework bridges will be needed. It is the rare case that a mere "socket adapter" will suffice.
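As a rough illustration of why a mere adapter rarely suffices, here is a hedged sketch (every interface and class name below is invented): even when two frameworks' "sockets" are this trivially small, the bridge has to translate each call, and a real framework bridge would also have to reconcile lifecycles, threading models, and error handling.

    // Framework A's "socket": the hole its components are expected to fill.
    interface FrameworkAPlugin {
        void start();
        String report();
    }

    // A component built for Framework B's quite different socket.
    class FrameworkBWidget {
        void initialize(java.util.Map<String, String> settings) { /* setup */ }
        StringBuilder dumpStatus() { return new StringBuilder("ok"); }
    }

    // The bridge: lets the Framework B widget plug into a Framework A hole.
    class BWidgetAsAPlugin implements FrameworkAPlugin {
        private final FrameworkBWidget widget = new FrameworkBWidget();

        @Override
        public void start() {
            widget.initialize(new java.util.HashMap<>());   // translate the calling convention
        }

        @Override
        public String report() {
            return widget.dumpStatus().toString();          // translate the return type
        }
    }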



Sunday, May 16, 2010

Is Morality Eating Your Own Dogfood?

There are two schools of thought about whether programmers should have to write the tests that verify their own code (in addition to writing the code itself). Economics, psychology, and moral philosophy all overlap in studies showing that people will readily abandon moral responsibilities if they are given a way to avoid the stigma of doing so. This leads me to feel more justified in my belief that programmers do a poorer job of reading, understanding, and implementing a specification when someone else has the responsibility of verification.

Changing the rules changes people’s attitudes


There is a 1998 experiment[1] that keeps popping up in the new “freakonomics”-type literature[2][3] (e.g. Economics 2.0 [4]), in which a controlled subset of Israeli day-care centers started charging a fine to parents who came late to pick up their children. To everyone’s surprise, the number of people showing up late almost doubled. Moreover, when the fine was later dropped, the number of late parents stayed at the high level. It is theorized that the moral responsibility parents felt to be on time was much stronger than the economic cost of paying a fine, which the parents rationalized as a fee, thus removing the stigma of being late. The fee made it “just business”. As Professor Michael Sandel summarized[5]:
“So what happened? Introducing the fine changed the norms. Before, parents who came late felt guilty; they were imposing an inconvenience on the teachers. Now parents considered a late arrival a service for which they were willing to pay. Rather than imposing on the teacher, they were simply paying her to stay longer. Part of the problem here is that the parents treated the fine as a fee. It’s worth pondering the distinction. Fines register moral disapproval, whereas fees are simply prices that imply no moral judgement.”
Don’t make “not my job” “just business”

The blind experiment is a well-established doctrine in science, requiring that people not know too much about the thing they are testing; otherwise, the results are often biased. Scientists gathering their own raw data (not to mention interpreting their own data) often get the results they expected to get, where objective outsiders don’t. By that reasoning, it is argued that software development projects should engage external, objective “QA testers” to develop and administer test suites against the code produced by the “programmers”. Since many programmers don’t like to eat their spinach, ahem, write their own tests (or documentation, for that matter), there are not usually arguments against the idea.

From my experience though…
  • Programmers will suffer peer pressure and social costs if they fail their own tests (which is a good thing).
  • Programmers who understand that they are obligated to deliver testable components will do so more often if they must actually produce the tests themselves, compared to those for whom testing is “not my job”.
  • The act of writing a test forces a clearer understanding of both the interface and the implementation of the tested component compared to just programming it.
  • Programmers who fail their own tests will be much more likely to change that component’s implementation if needed, rather than obstinately maintaining that the externally-produced test is wrong.
  • Writing your own tests is the most systematic method of “eating your own dogfood” (a sketch of such a programmer-written test follows below).
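For instance, here is a minimal sketch of a programmer-written test (JUnit 4, against an entirely hypothetical late-fee component, in the spirit of the day-care story): to write the assertions at all, the programmer has to restate the specification (grace period, rounding rule, and all) in a second, checkable form.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical component under test. The spec, as its programmer understands it:
    // no fee for the first 10 minutes, then 3 units per started quarter hour.
    class LateFeeCalculator {
        int feeFor(int minutesLate) {
            if (minutesLate <= 10) return 0;
            return ((minutesLate + 14) / 15) * 3;
        }
    }

    // The same programmer's own tests: failing them carries the social cost
    // described above, and writing them forces the spec into a second form.
    public class LateFeeCalculatorTest {
        private final LateFeeCalculator calc = new LateFeeCalculator();

        @Test
        public void noFeeWithinGracePeriod() {
            assertEquals(0, calc.feeFor(10));
        }

        @Test
        public void feeRoundsUpToNextStartedQuarterHour() {
            assertEquals(3, calc.feeFor(11));   // one started quarter hour
            assertEquals(6, calc.feeFor(20));   // two started quarter hours
        }
    }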
Belt and Suspenders

I say both independent testers AND the original programmers should write their own test suites. That way you get the power of both perspectives. This is an old lesson from the days of computer punch cards: two different people would key-punch the same data so that the decks could later be compared, thus eliminating most typos.

[1] Uri Gneezy and Aldo Rustichini, "A Fine is a Price", Journal of Legal Studies, vol. XXIX, January 2000
http://rady.ucsd.edu/faculty/directory/gneezy/docs/fine.pdf

[2] Aditya Chakrabortty, "Brain food: when does a fine become a fee?", The Guardian, 23 February 2010
http://www.guardian.co.uk/science/2010/feb/23/brain-food-fines-and-fees

[3] Clay Shirky, "Why an L.A. Times wikitorial effort went wrong", O'Reilly Media Gov 2.0 Summit, 9 September 2009
http://itc.conversationsnetwork.org/shows/detail4411.html

[4] Norbert Haring and Olaf Storbeck, Economics 2.0, Palgrave Macmillan, 2009, p. 7

[5] Michael Sandel, The Reith Lectures 2009, BBC, 9 June 2009
http://www.bbc.co.uk/programmes/b00kt7rg

Tuesday, May 11, 2010

The purpose of a thing is in US as well as in IT

In an earlier post, I advocated adopting the philosophers' practice of considering the purpose of a thing when creating a definition for that thing.  Plato and Aristotle would have said that one of the things that made an acorn an acorn was that it had the "goal" or "purpose" of becoming an oak tree.  In defining a domain model (aka a business object model), document the "purpose" of a class in order to get at its true attributes and behavior. But, as I was recently reminded, not only can the purpose of a thing be "in the thing itself", it can also be solely in our minds.  That is, it raises the question: if we are defining a class of things, what is our purpose in caring whether something is one of those things?

I had this AHA moment after reading the article "Unclassified" in the June 2010 issue of Discover magazine, where I was surprised to learn that there is no accepted universal definition of a biological species; there are at least 20 competing definitions.  I had thought that "being able to breed fertile offspring" was the definition, but that is only one (and of course it leaves out the vast majority of living things on earth, which reproduce asexually).

After having read about all the conflicting ways to organize and cluster individuals into species, each one with its own way of looking at things, I was left with the question: why do you want to know? That is, what is the purpose of knowing which species something is?  Depending on why you want to know, you choose one definition over all the others.

But of course, as Darwin thought, this would mean that species are not "real". Instead of discovering pre-existing forms, we would merely be inventing arbitrary sets of attributes-in-common. Therefore, unlike the "teleology" of Plato and Aristotle, where the "purpose" or "goal" of a species is internal to itself, it would seem that a possibly more important purpose is the one that WE have in wanting to place a particular into that species.

For example, ponder all of the various shapes, sizes, and forms of things that you would want to call a "chair" [and do a Google image search of "chair"]. Now, ponder coming up with a universal definition of chair (such that all chairs would be recognized as such, and nothing else would be), and you will see that it is much easier if you can refer to the purpose we have for them, i.e. being able to sit (comfortably?) on them.  Without that, it is hard to distinguish between a storage box (not a chair) and a storage bench (a chair).  [Try it. Do a Google image search for storage box and then storage bench.]

In this sense, a species would be more like a Java Interface than a Class.  Classes usually embody the pre-existing forms viewpoint, i.e. the notion that attributes and behavior are really "in the thing" rather than merely "how we want to look at it".  And while in practice Interfaces are often just wrappers around class definitions, ideally, each Interface defines a standard socket into which an object of any "form" may fit, as long as it can perform a certain "role" and participate in a certain "protocol" (see my definition of component).
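Here is a hedged sketch of that distinction in Java (all names are invented for illustration): the interface captures our purpose in grouping the things, the role of "something that can be sat on", while the implementing classes keep whatever "forms" they happen to have.

    // The "socket", i.e. the role: our purpose in grouping these things at all.
    interface Sittable {
        int seatHeightCm();
        boolean supportsWeightKg(int kg);
    }

    // Very different forms, each of which can nonetheless fill the role.
    class StorageBench implements Sittable {
        public int seatHeightCm() { return 45; }
        public boolean supportsWeightKg(int kg) { return kg <= 150; }
    }

    class TreeStump implements Sittable {
        public int seatHeightCm() { return 40; }
        public boolean supportsWeightKg(int kg) { return true; }
    }

    // The surrounding "framework" cares only about the role, never the class.
    class RestArea {
        void offerSeat(Sittable seat) {
            if (seat.supportsWeightKg(100)) {
                System.out.println("Take a seat; it is " + seat.seatHeightCm() + " cm high.");
            }
        }
    }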

So, the lesson to learn is this: when considering the purpose of a thing as a part of its definition, "purpose" is both its purpose and our purpose for wanting to recognize one in the first place.