Friday, August 31, 2007

Fuzzy Unit Testing, Performance Unit Testing

In reading Philosophy 101 about Truth with a capital "T", and about the non-traditional logics that use new notions of truth, we of course arrive at Fuzzy Logic, with its departure from simple binary true/false values and its embrace of an arbitrarily wide range of values in between.

Contemplating this gave me a small AHA moment: Unit Testing is an area with an implicit assumption that "Test Passes" is either true or false.  How about Fuzzy Unit Testing, where each test reports some numeric value in the 0...1 range indicating its degree of pass/fail-ness, i.e. a percentage pass/fail for each test?  For example, tests of algorithms that predict something could be given a percentage pass/fail based on how well the prediction matched the actual value.  Stock market predictions, bank customer credit default predictions, etc. come to mind.  This sort of testing of predictions about future defaults (i.e. credit grades) is just the sort of thing that the Basel II accords are forcing banks to start doing.
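To make the idea concrete, here is a minimal sketch in Javascript of what a fuzzy test might look like; all the names and numbers are hypothetical, and this scoring function is just one plausible choice:

    // Score a prediction by how close it came to the actual value,
    // relative to a tolerance; 1.0 is a perfect pass, 0.0 a total fail.
    function fuzzyScore(predicted, actual, tolerance) {
        var error = Math.abs(predicted - actual);
        return Math.max(0, 1 - (error / tolerance));
    }

    // Stand-in for the algorithm under test, e.g. a credit-default predictor.
    function predictDefaultRate(creditGrade) {
        return (creditGrade == 'B') ? 0.05 : 0.01;
    }

    // The fuzzy "unit test": instead of asserting true/false,
    // it reports a degree of pass/fail-ness in the 0...1 range.
    function testDefaultRatePrediction() {
        var predicted = predictDefaultRate('B');
        var actual = 0.042;  // the default rate actually observed later
        return fuzzyScore(predicted, actual, 0.02);
    }

    console.log('pass degree: ' + testDefaultRatePrediction());  // ~0.6, i.e. a 60% pass

A fuzzy test runner could then aggregate these scores across the suite, flagging tests whose degree of pass-ness drifts downward over time.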

Another great idea (if I do say so myself) that I had a few years ago was the notion that extra meta-data could/should be gathered as a part of running unit test suites; specifically, the performance characteristics of each test run.  The fact that a test still passes, but runs 10 times slower than the previous run, is a very important piece of information that we don't usually get.  Archiving and reporting on this meta-data for each test run can yield very interesting metrics on how code changes are improving or degrading the performance of various application features/behaviors over time.  I can now see that this comparative performance data would be a form of fuzzy testing.
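As a sketch (again in Javascript, with invented helper names), a test runner could wrap each test with a timer and compare against the duration archived from the previous run:

    // Run a test, record how long it took, and compare against the
    // duration archived from the previous run of the same test.
    function runTimedTest(name, testFn, archive) {
        var start = new Date().getTime();
        var passed = testFn();
        var elapsed = new Date().getTime() - start;

        if (archive[name] && elapsed > archive[name] * 10) {
            // Still passes, but 10x slower than last time: worth reporting!
            console.log(name + ': passed, but slowed from ' +
                        archive[name] + 'ms to ' + elapsed + 'ms');
        }
        archive[name] = elapsed;  // save for the next run's comparison
        return passed;
    }

    // Usage: the archive would really be loaded from/saved to disk
    // so that durations survive across test-suite runs.
    var archive = {};
    runTimedTest('parseBigFile', function() { return true; }, archive);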

Sunday, August 26, 2007

Birth of a Blog

On this date I had the blinding realization that after a year and a half of thinking and note scribbling about this thing I was calling Existential Programming, I should grab the domain name and start a blog to plant my flag on the name and topic.  I had the thought while recording notes on the original ideas that led to Existential Programming...

Recalling the inspiration for the Existential Programming concept

While working on a Javascript project in early 2006, I became aware of the differences between Java's class-based objects and Javascript's prototype-based objects. I wanted to work with Java-like classes, so I researched existing attempts to implement them in Javascript and what it would take to "do it right".  In the course of comparing those attempts, I came across differing definitions of "class".
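For those unfamiliar with the distinction, here is the crux of it in a few lines of Javascript:

    // A constructor function plays the role of a Java class...
    function Account(owner) {
        this.owner = owner;
    }
    Account.prototype.describe = function() {
        return this.owner + "'s account";
    };

    var a = new Account('Alice');
    console.log(a.describe());  // "Alice's account"

    // ...but unlike a Java instance, the object can be reshaped at runtime:
    a.balance = 100;   // add an attribute the "class" never declared
    delete a.owner;    // delete an attribute the "class" did declare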

Once I realized that Javascript lets one dynamically add and delete attributes of an object, I saw that such class-less objects could be mapped nicely onto semantic-network relations/tuples. I already knew about EAV (entity-attribute-value) database schemas.  I had the epiphany that object-oriented (O/O), entity-relationship (E/R), and semantic-network (S/N) modeling could all be made isomorphic once one had class-less objects like Javascript's.
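Here is a sketch of that isomorphism: a class-less Javascript object is just a bag of attributes, which flattens directly into EAV rows / semantic-network triples (the helper names are mine, made up for illustration):

    var obj = { id: 'person42', name: 'Alice', employer: 'Acme' };

    // Flatten an object into (entity, attribute, value) tuples...
    function toTriples(entity) {
        var triples = [];
        for (var attr in entity) {
            if (attr != 'id') {
                triples.push([entity.id, attr, entity[attr]]);
            }
        }
        return triples;
    }

    // ...and rebuild the object from the tuples, going the other way.
    function fromTriples(id, triples) {
        var entity = { id: id };
        for (var i = 0; i < triples.length; i++) {
            if (triples[i][0] == id) {
                entity[triples[i][1]] = triples[i][2];
            }
        }
        return entity;
    }

    // [['person42','name','Alice'], ['person42','employer','Acme']]
    var triples = toTriples(obj);
    console.log(triples);
    console.log(fromTriples('person42', triples));  // round-trips to the original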

Once I realized that classes were isomorphic to semantic networks, and knowing already that computer ontologies were anything but universally agreed upon, I had the epiphany that OO class structures are too restrictive because they assume a single ontology.  On the other hand, I did not want to give up the benefits of strong typing.  So I had the idea that objects should be able to simultaneously house the attributes of multiple ontologies, and with class-less objects I could see how to implement the whole system.
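A sketch of what "one object, multiple ontologies" might look like (the prefixes and attribute names here are invented): each ontology's attributes live on the same class-less object under a namespace prefix, and a "typed view" exposes just one ontology's slice of it.

    var person = {};
    person['hr:employeeId'] = 'E-1001';    // attributes per an HR ontology
    person['hr:salaryGrade'] = 7;
    person['crm:customerId'] = 'C-5555';   // attributes per a CRM ontology
    person['crm:creditLimit'] = 2000;

    // Expose only one ontology's slice of the object, approximating
    // a strongly-typed view over a type-less object.
    function viewAs(obj, prefix) {
        var view = {};
        for (var key in obj) {
            if (key.indexOf(prefix + ':') == 0) {
                view[key.substring(prefix.length + 1)] = obj[key];
            }
        }
        return view;
    }

    console.log(viewAs(person, 'hr'));   // { employeeId: 'E-1001', salaryGrade: 7 }
    console.log(viewAs(person, 'crm'));  // { customerId: 'C-5555', creditLimit: 2000 }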

Together, these trains of thought gave rise to the intuition "I'll bet people have already thought about this...like maybe in Philosophy?". I quickly discovered that Philosophy had everything to do with these questions. And once I found out what Existentialism was, with its slogan that "existence precedes essence", it became clear that class-less objects, which exist before any class (essence) is attached to them, embody the same deep idea.

So, I coined the term "Existential Programming" to refer to the whole "project" of exploiting type-less objects to implement multiple strong types simultaneously.

Which came first - the Chip or the Socket?

Q: Which came first, the integrated circuit chip, or the socket into which it fits?

A: The socket came first (at least conceptually). Before a chip is designed, there is a framework defined into which the chip will integrate: Digital vs analog signals, voltage levels for digital one and digital zero, power requirements, clock rates, transistor type, etc. This usually results in the design of whole families of chips that are intended to work together.

SO, no chip is created in isolation, just as no software component should be. Otherwise there is no common framework by which other components can be attached (Super-glue, Play-Doh, and duct tape notwithstanding).

Of course, as I wrote back in 2000, there is no such thing as a component (in the same way that there is no such thing as a donut hole).

Sunday, August 12, 2007

Killer App of Existential Programming?

Ok, with all this theorizing about "existential programming", what the heck is it good for?
I.e., what is the killer app of existential programming?

The answer could be "integration of uncooperating systems"...
  • Data Integration
  • Data Model (i.e. schema) Integration
  • Application Integration
  • Frameworks API Integration
By developing an information (code/data) environment where different paradigms can coexist without even being aware of each other, we can mediate between uncooperating systems that otherwise could not interoperate, as the sketch below illustrates.
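For instance (a sketch with invented schemas), two systems that never agreed on a customer schema can have their records merged onto one class-less object, each under its own namespace, so that each application keeps reading "its" attributes, unaware of the other:

    var legacyRecord = { CUST_NO: 'C-5555', CR_LIM: 2000 };         // old system's schema
    var modernRecord = { customerId: 'C-5555', creditLimit: 2000 }; // new system's schema

    // Merge each source's attributes onto one object under its own prefix.
    function integrate(records) {
        var merged = {};
        for (var source in records) {
            for (var attr in records[source]) {
                merged[source + ':' + attr] = records[source][attr];
            }
        }
        return merged;
    }

    var customer = integrate({ legacy: legacyRecord, modern: modernRecord });
    console.log(customer['legacy:CR_LIM']);       // 2000, as the old app expects
    console.log(customer['modern:creditLimit']);  // 2000, as the new app expects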