SemWeb
Revision as of 15:39, 24 September 2009
In which I try to explain technical and practical aspects of the Semantic Web to a lay audience, of which I am part. Contributions welcome, this is a wiki.
Ramblings
The Semantic Web is a concept that allows massive, reliable reuse of data. One of the most remarkable things about the Web is that it is based on HTML, a text format accessible to both humans and computers. Every Web page uses the same syntax to indicate what should be displayed and how, and all pages use the same retrieval mechanisms. This was a remarkable and unexpected (disruptive) breakthrough in communications, but the way companies jumped in to make the Web more attractive did little to make the exchange of data easier. Efforts over the years have struggled with complexity and standardization, with major initiatives interfering with each other for technical reasons (e.g. Microformats vs. RDFa) or while trying to dominate the market.
One concern has been the model for how information will be shared. Today it is common for nonprofit organizations to hoard their information, creating "proprietary databases" they can use in pitches to granting agencies. Another factor is that ignoring standards allows efforts to move ahead on their own terms, without fitting their systems into larger ones that could slow them down. A further factor is insecurity: an organization may have a perfectly useful database whose implementation does not compare well to the best technical efforts. There are two main requirements: a well-known mechanism usable by any organization, and schemas/ontologies, that is, descriptions of how the data will be structured so it can be reliably reused.
The Internet has been mainstream for 15 years, and we are starting to see real breakthroughs in Semantic Web-type applications. With unlimited room for improvement by building on data rather than hoarding it, and growing recognition of the value of a truly participatory society, many refusals to share data begin to look ignoble. A new, as yet unnamed sector of public participation is developing, based on how easily and cheaply data and interested parties can be gathered and organized on the Internet. The cost is simply that of making data reusable, yet many agencies fear this approach because it would affect their societal standing (and most do not trust 'the masses').
Approaches to Semantic Web applications
Mining
There are essentially two types of SemWeb applications: mining and intentional semantic development. One mining technique is "scraping": parsing presumably reliable HTML pages. Many citizen projects use this technique to extract public data from recalcitrant government sources, for example They Work for You. Mashups are related: sites like Housing Maps combine data from disparate sources into one useful interface. However, scraping is easily foiled when a page's low-level structure is obfuscated, intentionally or not.
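A minimal scraping sketch, using only the Python standard library. The HTML snippet and the "member" class name are invented for illustration; real projects like They Work for You parse far messier government pages.

```python
from html.parser import HTMLParser

# Invented sample of the kind of HTML a civic scraper might target.
SAMPLE = """
<table>
  <tr class="member"><td>Alice Smith</td><td>North Riding</td></tr>
  <tr class="member"><td>Bob Jones</td><td>South Riding</td></tr>
</table>
"""

class MemberScraper(HTMLParser):
    """Collects (name, constituency) pairs from rows marked class="member"."""
    def __init__(self):
        super().__init__()
        self.in_member_row = False
        self.in_cell = False
        self.current = []   # cells collected for the current row
        self.members = []   # list of (name, constituency) tuples

    def handle_starttag(self, tag, attrs):
        if tag == "tr" and ("class", "member") in attrs:
            self.in_member_row = True
            self.current = []
        elif tag == "td" and self.in_member_row:
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False
        elif tag == "tr" and self.in_member_row:
            self.in_member_row = False
            self.members.append(tuple(self.current))

    def handle_data(self, data):
        if self.in_cell:
            self.current.append(data.strip())

scraper = MemberScraper()
scraper.feed(SAMPLE)
print(scraper.members)
# [('Alice Smith', 'North Riding'), ('Bob Jones', 'South Riding')]
```

Note the fragility: if the site renames the "member" class or reorders the cells, the scraper silently returns nothing or garbage, which is exactly the obfuscation problem described above.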
Another mining approach involves scraping human-oriented text. Open Calais is an infrastructure example of this; Health Base is an end-user application. These sites use patterns in human text to try to derive statements. This technique is easily fooled, leading to incorrect observations.
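A toy illustration of the general pattern-matching idea behind such tools (the pattern and sentences here are invented, not how Open Calais actually works). It derives "X is located in Y" statements from free text, and shows how easily the pattern is fooled:

```python
import re

# Naive pattern: "<subject> is located in <object>" ending at punctuation.
PATTERN = re.compile(r"(\w[\w ]*?) is located in (\w[\w ]*?)[.,]")

text = ("The Louvre is located in Paris. "
        "Nothing is located in two places at once, skeptics say.")

statements = PATTERN.findall(text)
print(statements)
# [('The Louvre', 'Paris'), ('Nothing', 'two places at once')]
```

The first derived statement is useful; the second is nonsense extracted from an idiomatic sentence, the kind of incorrect observation the paragraph above warns about.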
Intentional markup
Intentional semantic development involves explicit markup of text items. Most HTML documents today contain only text and links. Semantically marked-up documents carry explicit annotations about data objects, identifying them as entities such as people, places, dates, and so on. Relations (links) have explicit meanings.
With FOAF, we can add "me" links on our home pages that point to other representations of ourselves, along with links to friends, business associates, and organizations. It quickly becomes apparent that this could enable a decentralized Facebook, where individuals publish their information wherever they like, under whatever licenses they like, and sites like Facebook provide their own views of these webs of data.
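A sketch of reading a FOAF profile with the standard library. The profile below is invented; foaf:name and foaf:knows are real FOAF terms, but a proper RDF parser would handle many serializations that this simple XML walk does not.

```python
import xml.etree.ElementTree as ET

# Invented FOAF profile: Alice declares her name and one friend.
FOAF_DOC = """<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person>
    <foaf:name>Alice Example</foaf:name>
    <foaf:knows>
      <foaf:Person><foaf:name>Bob Example</foaf:name></foaf:Person>
    </foaf:knows>
  </foaf:Person>
</rdf:RDF>"""

FOAF = "{http://xmlns.com/foaf/0.1/}"

root = ET.fromstring(FOAF_DOC)
me = root.find(f"{FOAF}Person")
name = me.find(f"{FOAF}name").text
# Follow foaf:knows links to the people Alice says she knows.
friends = [p.find(f"{FOAF}name").text
           for p in me.findall(f"{FOAF}knows/{FOAF}Person")]
print(name, friends)  # Alice Example ['Bob Example']
```

Because such profiles can live on anyone's own site, a "decentralized Facebook" view is just software that crawls these documents and follows the foaf:knows links between them.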
Using RDFa and Microformats, annotations are added to regular HTML that give it semantic meaning. A person's information can be marked up with hCard, allowing you to right-click and add that person to your address book. Similar formats exist for locations and events.
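A sketch of extracting an hCard from annotated HTML, again with only the standard library. The class names "vcard", "fn" (formatted name), and "tel" are real hCard terms; the snippet and contact details are invented.

```python
from html.parser import HTMLParser

# Invented hCard: the same HTML a browser displays, plus semantic classes.
SNIPPET = """
<div class="vcard">
  <span class="fn">Carol Contact</span>
  <span class="tel">+1-555-0100</span>
</div>
"""

class HCardParser(HTMLParser):
    """Pulls hCard fields (fn, tel) out of marked-up HTML."""
    def __init__(self):
        super().__init__()
        self.field = None   # hCard field currently open, if any
        self.card = {}

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if "fn" in classes:
            self.field = "fn"
        elif "tel" in classes:
            self.field = "tel"

    def handle_endtag(self, tag):
        self.field = None

    def handle_data(self, data):
        if self.field:
            self.card[self.field] = data.strip()

parser = HCardParser()
parser.feed(SNIPPET)
print(parser.card)  # {'fn': 'Carol Contact', 'tel': '+1-555-0100'}
```

Unlike the scraping example earlier, this relies on classes the author added deliberately, so any consumer (a browser extension, a search engine) can extract the contact reliably.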
Google, Yahoo and others use these formats to make their results more reliable. Search engines used to guess which text on a page was actual content, so if you searched for "frames," looking for picture frames, you were likely to find a page that merely used "frames" in its navigation. RDFa and Microformats allow more reliable markup of subjects, letting meta-directories embed reviews from any cooperating site rather than trying to do everything themselves. Because these reviews link back to the originating site, it is a win-win-win for the meta-directory, the originating site, and the end user, with richer, less biased results once a critical mass is reached.
The heavyweight options are systems such as RDF and Topic Maps, which provide complex, interlinked ways to describe arbitrary data. Today they are used mainly in specific projects, but as their use grows we can expect the Web to become more interlinked, allowing an endless assemblage of information built from the best references.
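RDF's data model reduces to subject-predicate-object triples. This in-memory toy (identifiers invented; dc:creator and foaf:name are real vocabulary terms) shows the mechanics, not the scale, of how triples from different sources interlink once they share identifiers:

```python
# Triples from two hypothetical datasets that happen to share an identifier.
triples = {
    # from a hypothetical library dataset
    ("book:moby-dick", "dc:creator", "person:melville"),
    ("book:moby-dick", "dc:title", "Moby-Dick"),
    # from a hypothetical biography dataset, reusing the same identifier
    ("person:melville", "foaf:name", "Herman Melville"),
}

def query(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Follow the link across datasets: who wrote Moby-Dick, and what is
# that person's name?
author = query("book:moby-dick", "dc:creator")[0][2]
print(query(author, "foaf:name")[0][2])  # Herman Melville
```

The interlinking the paragraph above anticipates is exactly this join: neither dataset knows the other exists, yet a shared identifier lets a query assemble an answer from both.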