Wednesday, April 21, 2010

Class Constructors considered harmful

PREFACE: There is a school of thought in computer programming that Classes are harmful in certain ways, but the basis for those arguments is very specific to particular programming techniques.  This essay is about a much more general, mindset-oriented, objection.

My basic project these days is to learn Philosophy 101 and contemplate what impact that knowledge should have on software development practices.  After reading topics like the Philosophy of "becoming", and Aristotle's "four causes", it is clear that Philosophers spend much more time trying to define the circumstances of an entity's creation than do system analysts and programmers. While Philosophers have long been concerned with WHY did WHO do WHAT to cause an object to come into being, programmers are mostly concerned with the mechanics of HOW an object should be constructed.  It strikes me that this is due to the tunnel vision encouraged by the class constructor method.  [And, due to destructors and automated garbage collection, an even worse situation applies to object “death”.] So, to paraphrase a famous title, I (albeit tongue-in-cheek) consider Class Constructors harmful.
CASE STUDY: At a recently defunct Top-5 bank, I uncovered major incompatibilities in the data being used to produce credit scores for borrowers and their loans. The scores depended on a grade generated for a "facility". The problem was that the very concept and definition of "facility" was not the same in various parts of the bank. If they had answered the following simple questions, they would have realized that they were not talking about the same thing: When and why does a new facility come into existence, and when and why does it cease to exist?
In the course of analysis and requirements gathering, a major goal is to identify and define the various business domain entities. But, in a vicious cycle, the definition of an entity is often too shallow with regard to its birth and death, because that "causal" information often stagnates as mere background text in some requirements document.  This, in turn, is because programmers have no standard place in the code to put that logic.

Object-oriented practice has one put "all" the properties and behavior associated with an entity into its Class definition, where "all" for class Dog means "the dog, the whole dog, and nothing but the dog"[1].  However, the "nothing but the dog" constraint means that the logic for deciding whether an instance of class X should be created is not normally a method of class X.  Since the "cause" of X's instantiation usually involves other classes, that logic lies outside of X proper; thus, standard development methodologies leave the analysis of causation and purpose out of the design of class X.  Even Factory classes focus on how an object gets constructed, rather than why it should be constructed, why now, and by whom.
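
To make that split concrete, here is a minimal sketch (all names, such as processShelterIntake and the report fields, are hypothetical): the Dog class owns only the mechanics of construction, while the decision of whether and why a Dog should exist ends up in unrelated code.

    class Dog {
      constructor(name, breed) {      // HOW a Dog gets built lives in the class...
        this.name = name;
        this.breed = breed;
      }
    }

    // ...but WHY and WHEN a Dog instance should come into existence is decided
    // off in other code, and is never captured as part of Dog's own definition.
    function processShelterIntake(report) {
      if (report.animalType === 'dog' && !report.existingTagId) {
        return new Dog(report.name, report.breed);   // the "causal" logic lives here
      }
      return null;
    }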

The Mediator pattern comes to mind as a place to put this sort of logic because it involves multiple classes.  The logic that decides it is time for a Sculptor to use a Chisel to carve a Statue out of a block of Marble doesn’t belong solely in any of those classes.  A programmer would be tempted to put a “carve” method in Sculptor, since that is a “behavior” of the sculptor, but Philosophy considers it an essential part of the definition of the statue itself.  And that points to a problem with Mediator classes in the first place; the desire to have everything relevant to X be “nearby in the source code” (a raison d'être for classes) is defeated when some of it is off in various Mediators.  Having the teleology of Statue off in some CarveMediator isn’t much better than having it reside in Sculptor.carve().
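
For illustration only, here is what that mediator arrangement might look like (CarveMediator is from the discussion above; the property names and the "commission"/"flaw" rules are hypothetical).  The point is that the Statue's reason for coming into being lives nowhere near the Statue class:

    class Statue   { constructor(sculptor, marble) { this.sculptor = sculptor; this.marble = marble; } }
    class Sculptor { constructor(commissioned)     { this.commissioned = commissioned; } }
    class Chisel   { }
    class Marble   { constructor(flawed)           { this.flawed = flawed; } }

    class CarveMediator {
      // The decision that it is time for a Sculptor to use a Chisel on a block
      // of Marble (i.e. WHY and WHEN a Statue comes into being) lives here,
      // far away from the Statue class definition.
      static carve(sculptor, chisel, marble) {
        if (sculptor.commissioned && !marble.flawed) {
          return new Statue(sculptor, marble);
        }
        return null;
      }
    }

    const statue = CarveMediator.carve(new Sculptor(true), new Chisel(), new Marble(false));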

With event-driven systems (e.g. MOM, SOA), the series of events that triggers the creation of an entity instance may be complex. And whether event-driven or "batch processing", the sub-systems are often distributed, which increases the value of encapsulating this logic in a single place. With Java EE and service-oriented designs, there would be value in having the entity services include this logic.
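
As a hedged sketch of that suggestion, tying back to the bank case study, an entity service could state the birth and death rules of "facility" exactly once for every caller.  The service name, method names, and event shapes below are all hypothetical.

    class FacilityService {
      constructor() { this.facilities = new Map(); }

      // WHY and WHEN a Facility comes into existence, stated once, shared by all sub-systems.
      openFacility(event) {
        if (event.type === 'CREDIT_APPROVED' && !this.facilities.has(event.facilityId)) {
          this.facilities.set(event.facilityId, { id: event.facilityId, openedBy: event.officer });
        }
        return this.facilities.get(event.facilityId);
      }

      // ...and WHY and WHEN it ceases to exist.
      retireFacility(event) {
        if (event.type === 'LOAN_PAID_OFF' || event.type === 'FACILITY_CANCELLED') {
          this.facilities.delete(event.facilityId);
        }
      }
    }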

In any event, I believe there is a need to learn from Philosophy, whose concept of "form" (the equivalent of OOP's class) has always included the purpose of a thing as well as its blueprint.

[1] Object Oriented Analysis and Design, Grady Booch,  1991

Monday, April 19, 2010

Silver Bullet: Model the world, not the app

DISCLAIMER: Ok, I admit it...this is cut/pasted directly from my brain fart notebook, i.e. not ready for prime time...but dammit Jim, it's just a blog!

In the arsenal needed to fight off unsuccessful software development projects, it will take a whole clip full of silver bullets.  One of those silver bullets, I believe, is modeling the world more accurately by using knowledge of Philosophy.

There is a great struggle between getting everything "right" up front and doing "just enough" specification and design.  When trying to balance "make it flexible" (in order to support future re-use) against XP mandates like "don't design what isn't needed today", it is hard to know (or justify) where to draw the line.  And due to "changing requirements", those "flexible reuse" features (which were merely contingent at design time) are often mandatory before the original development cycle is even complete.

WELL, lots of requirements don't change THAT much if you are modeling correctly in the first place.

Humans haven't changed appreciably in millennia, even if the roles they play do.  So, if "humans" are modeled separately from "employees", it is that much less work when you later need to integrate them with "customers". [Theme here is "promote roles programming", the justification of which is made more obvious when taking essentialism to heart.]

In general, the foundation of one's data/domain/business/object/entity-relationship model is solid and unchanging if all "domain objects", "business objects", etc. are modeled based on a clear understanding of the essential versus accidental aspects of the "real world", and NOT based on the requirements description of a particular computer system or application.  Modeling based on "just what is needed now according to this requirements document today" is too brittle, both for future changes and especially for integrating with other systems and data models.

After all, adding properties and relationships to entities is fairly easy if the entities themselves are correctly identified.  It is much harder to change the basic palette of entities once a system design is built upon them.  Also, all the more reason to be sure not to confuse entities with the roles they can take on.

Example: I don't have to wonder who I might have to share employee data with if I realize that an "employee" is actually just a role that a person takes on.  If I model the essentials of the person separately from the attributes of the employee role, it will be much easier to integrate that data with, say, a "customer" database later.  If the customer data model recognizes that "customer" is just a role that a person takes on, its Person table is much more likely to be compatible with my Person table than their naive Customer table would be with my naive Employee table (and still other Patient tables, etc., etc.)
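
A minimal sketch of that role-based split (the class and property names are just illustrations, not a prescribed schema): the essentials of the person live in Person, while Employee and Customer are roles that merely reference a Person rather than duplicating it.

    class Person {
      constructor(name, birthDate) {              // essential properties of the person
        this.name = name;
        this.birthDate = birthDate;
      }
    }
    class EmployeeRole {
      constructor(person, employeeId, hireDate) { // accidental, role-specific data
        this.person = person;
        this.employeeId = employeeId;
        this.hireDate = hireDate;
      }
    }
    class CustomerRole {
      constructor(person, accountNumber) {
        this.person = person;
        this.accountNumber = accountNumber;
      }
    }

    // The same underlying Person can take on both roles, so "employee data" and
    // "customer data" integrate through the shared Person rather than colliding.
    const pat = new Person('Pat', '1970-01-01');
    const patAtWork  = new EmployeeRole(pat, 'E-123', '2005-06-01');
    const patAsBuyer = new CustomerRole(pat, 'ACCT-987');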

Tuesday, April 13, 2010

Do Objects Have Souls?

While writing the article Implementing "Real" Classes in JavaScript for Developer.com, I was tempted to add a sidebar with the provocative title "Do Objects Have Souls?".  The article itself demonstrated a technique for simulating Java-like classes in JavaScript, and, as introductory material, it explained the difference between Java's "class"-based semantics and JavaScript's "object prototype"-based semantics.
For Java programmers: In a nutshell, Java creates object instances that have all, and only, the properties of their Class, and the property list is fixed over the life of the object. In JavaScript, object instances have no "real" class; instead, each object is free to add and delete properties at any time.  Because of this, JavaScript Object objects are pretty empty at birth compared to Java Object objects, precisely because they can become anything after they have already been instantiated [hence Existential Programming!].
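
Here is a small illustration of the JavaScript side of that contrast (the property names are just examples): the object starts as an empty shell and gains and loses properties at runtime.

    const obj = new Object();          // no "real" class, no fixed property list
    console.log(Object.keys(obj));     // []

    obj.name = 'Rex';                  // properties added after instantiation
    obj.bark = function () { return this.name + ' says woof'; };
    console.log(obj.bark());           // "Rex says woof"

    delete obj.name;                   // ...and removed again at any time
    console.log(Object.keys(obj));     // [ 'bark' ]
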
In trying to understand the language differences myself, I began musing on the parallels between philosophical notions of "the soul" and JavaScript's empty shell of an "object" that is generated by obj = new Object;

Souls as property containers

In western philosophy there has been a 2,500-year-old school of thought that "things" (aka objects) have properties, some of which can never change (i.e. essential properties) versus those which may change over time (i.e. accidental properties).  One concept of "soul" is that it is the bundle of essential properties that constitute a thing.  This idea has also been equated with "identity".  Attached (non-permanently) to that bundle of essentials are the various accidental properties.  This sounds a lot like the "empty" JavaScript object, which is ready to add and update and delete [accidental] properties, all the while keeping constant the essential, unchanging object identity (as referenced by obj!).
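
To make the analogy concrete (with made-up property names), here is an object whose identity stays fixed while its accidental properties come and go:

    const soulLike = {};               // a bare identity with no properties yet
    const sameOne = soulLike;          // a second reference to the same identity

    soulLike.mood = 'curious';         // accidental properties come...
    delete soulLike.mood;              // ...and go
    soulLike.age = 42;

    console.log(sameOne === soulLike); // true: the identity itself never changed
    console.log(sameOne.age);          // 42: both references see the same bundle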

When medieval alchemists distilled liquids into their essences, they called them spirits because they were the "soul" of the grape, herb, flower, etc. To ensure that all of the accidental properties were removed, they distilled things 5 times to produce the quintessential spirit.  Distilled spirits in the alcohol sense often have names that reflect the notion that they have a life essence captured in them; Whiskey and Aquavit are both names that translate into "water of life" in their original languages.  In the movie Perfume, a villain repeatedly attempts to distill the essence of pretty women using the same techniques as distilling flowers into perfume. [Spoiler Alert: flowers don't survive the process...]

When the Well of Souls runs out of RAM

Another aspect of souls that rhymes with JavaScript is the ancient lore that newborns are given a soul at birth which is plucked from a "well of souls" (aka the chamber of Guf).  In JavaScript, as empty objects are created and given an identity, they are plucked from a heap of available memory (i.e. dynamic memory allocation). In both cases, bad things happen when there are none left.

When the well of souls runs dry, the Messiah will come and reboot the world; when your browser runs out of heap space, your JavaScript will gag and someone will have to come and reboot the browser (or at least the web page).  The plot of the 1988 Demi Moore film The Seventh Sign is based on the Guf mythology: Demi's baby is due to be born on February 29, the date on which the last soul will leave the Guf, leaving it empty.