Many, if not all, of the things we learn are represented in our brains as complex networks of (mathematical) functions, which we simulate in computers with the technique of neural networks. And if you have ever looked at the data of a neural net that has been trained to, say, recognize a handwritten letter "A", you've seen that there is no way to put that knowledge into words. In fact, looking at a net's matrix of fuzzy weights, it is not clear that it knows anything at all, much less what it knows!
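To make that concrete, here is a minimal sketch: a tiny network trained on XOR (a toy stand-in for recognizing a handwritten "A" — the task, sizes, and training details are all illustrative assumptions, not anything specific). After training, all the net "knows" sits in a few small matrices of floats.

```python
import numpy as np

# Toy stand-in for "a net trained to recognize a handwritten A":
# a tiny 2-4-1 network trained on XOR (illustrative only).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
initial_loss = float(np.mean((out - y) ** 2))

for _ in range(5000):               # plain batch gradient descent
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))

# The "knowledge" the net acquired is just this matrix of fuzzy weights --
# nothing in it reads as a fact you could put into words.
print(W1)
```

Training lowers the error, yet printing `W1` shows only unlabeled real numbers; there is no line you could point to and call a "fact".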
In a similar vein, DNA obviously encodes tremendous amounts of knowledge, like how to make a beating heart, that is also very hard to translate into explicit "facts". In fact, DNA, like self-modifying source code in Lisp, is hard to verbalize because what it "does" can be very far removed from what it "says".
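A toy sketch of that does/says gap, using Python rather than Lisp (a hypothetical example, not anything from the post): the source text below "says" the function returns 0, but the program rewrites its own text before running it.

```python
# A toy self-modifying program: the source text "says" f returns 0,
# but the program edits that text before executing it.
src = "def f():\n    return 0\n"
src = src.replace("return 0", "return 42")  # the program rewrites itself

namespace = {}
exec(src, namespace)  # run the modified source

print(namespace["f"]())  # what it *does* differs from what the text *said*
```

Reading the original source tells you almost nothing about the behavior; you have to run it — the same reason DNA's "meaning" resists being read off as facts.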
Now, because the whole theory and strategy of representing knowledge via "facts", whether in a traditional database or a semantic network, assumes that everything can be put into a fact format, there are huge amounts of knowledge in the world that can't be used. Yes, one can simplistically stuff anything reducible to a string of numbers into a database tuple as a BLOB, but that doesn't really put the knowledge into a form that can be "reasoned" about via inference rules. And it is exactly that sort of inferencing that is the rationale behind the Semantic Web, and the reason we should bother to create it.
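The contrast can be sketched in a few lines (hypothetical predicates; a real Semantic Web store would use RDF triples and an OWL reasoner, not this hand-rolled rule): facts in subject-predicate-object form let an inference rule fire, while a BLOB offers nothing for a rule to grab onto.

```python
# Knowledge in "fact format": subject-predicate-object triples
# (hypothetical predicates, standing in for RDF).
facts = {
    ("Socrates", "is_a", "Human"),
    ("Human", "subclass_of", "Mortal"),
}

def infer(facts):
    """One inference rule, run to a fixed point:
    x is_a C  and  C subclass_of D  =>  x is_a D."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, p1, c) in list(derived):
            for (c2, p2, d) in list(derived):
                if p1 == "is_a" and p2 == "subclass_of" and c == c2:
                    new = (x, "is_a", d)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

print(("Socrates", "is_a", "Mortal") in infer(facts))  # True -- derived, not stored

# By contrast, a BLOB is opaque: no rule can fire on raw bytes.
blob = bytes(range(16))  # e.g. serialized neural-net weights
```

The triple store yields a new fact that was never stored; the BLOB, even if it encodes far more knowledge, yields nothing to any inference rule.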
Is this a (fatal) flaw in the foundation of the Semantic Web? Many philosophers have claimed it is a fatal flaw in Rationalism, which is the philosophical analogue of semantic networks. Phenomenologists insisted that many other flavors of knowledge had to be handled beyond those covered by rationalism and logic. How will the Semantic Web handle these? Is it even possible, philosophically?