SMW Introduction
Introduction
The Semantic Web is a concept that allows massive, reliable reuse of data. One of the most remarkable things about the Web today is that it is based on HTML, a text format that is highly accessible to both humans and computers. Every Web page uses the same syntax to indicate what should be displayed, and every page is fetched through the same retrieval mechanisms. This was a remarkable and unexpected (disruptive) breakthrough in communications, but the way companies jumped in to make the Web more attractive did little to make the exchange of data easier. Efforts over the years have struggled with complexity and standardization, with major initiatives interfering with each other for technical reasons (e.g. Microformats vs. RDFa) or while trying to dominate the market.
One of the concerns has been the model for how information will be shared. Today it is common for non-profit organizations to hoard their information, creating "proprietary databases" they can use to pitch to granting agencies. Another factor is that ignoring standards allows efforts to move ahead on their own terms, without making their systems fit into larger systems that could slow them down. Yet another factor is insecurity: an organization may have a perfectly useful database, but its implementation may not compare well with the best technical efforts.
There are two main requirements for making data reusable and available: well-known access mechanisms usable by any organization, and schemas/ontologies, that is, descriptions of how the data will be organized and detailed (categories and fields) for reliable reuse. Settling these details can also be a tremendous effort, particularly the latter.
Yet the Internet has been mainstream for fifteen years, nearly a generation of new and experienced users, programmers, and researchers working with the most advanced systems available around the world, and we are starting to see real breakthroughs in Semantic Web style applications. With unlimited room for improvement by building on data rather than hoarding it, and with growing recognition of the value of a truly participatory society, many efforts not to share data start to appear ignoble.
A new, largely unrecognized sector of public participation (examples below) is developing, based on how easily and cheaply the Internet lets people gather and organize both data and interested parties. This sector includes individuals, physical communities, and communities of interest; it includes real experts, dedicated hobbyists, and the casually interested. They try to solve problems and better understand their world, but they need real data. These groups can work reciprocally with our existing institutions to efficiently fill gaps and build our systems. The cost is making public data reusable at the institutional level. Unfortunately, many agencies fear this approach because it will affect their societal placement (and most do not trust 'the masses').
Another factor holding things back is how we use computers today: for the most part, like a typewriter. Few people embed data from spreadsheets into their email, use automatic facilities for events and contacts, or share to-do tasks. Documents and communications are one-offs, out of date the moment they are sent, with nothing explicit in them. A semantic approach to computer data will change all this. Data will be more consistent, and when it comes to important statements we should be able to expect more.
Computer front ends, and people's habits, will need to change to accommodate this. Sadly, however, the culture of many organizations and individuals will hold things back. Too many web design firms create sites like it's 1995 (or emphasize Flash-heavy presentations that many people cannot use at all), too many executives can't be bothered to remember their passwords, and too many people make excuses for not pursuing an approach that constructively builds on our fascination with information.
Approaches to Semantic Web applications
Mining
There are essentially two types of SemWeb applications: mining and intentional semantic development. One mining technique is "scraping": parsing presumably reliable HTML pages. Many citizen projects use this technique to extract public data from recalcitrant government sources, for example TheyWorkForYou. Mash-ups are related: sites like Housing Maps combine data from disparate sources into one useful interface. However, scraping can easily be foiled, intentionally or not, by obfuscating the low-level structure of a page.
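A minimal sketch of what such a scraper looks like, using the Python requests and BeautifulSoup libraries. The URL and CSS selectors are hypothetical placeholders; a real scraper has to be tailored to the target page's HTML, which is exactly why it breaks when that structure changes.

```python
# A minimal scraping sketch using requests and BeautifulSoup.
# The URL and the "table.votes" selector are assumed placeholders,
# not a real government data source.
import requests
from bs4 import BeautifulSoup

def scrape_vote_records(url: str) -> list[dict]:
    """Pull rows out of an assumed HTML table of parliamentary votes."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    records = []
    for row in soup.select("table.votes tr"):        # assumed markup
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        if len(cells) >= 3:
            records.append({"member": cells[0], "bill": cells[1], "vote": cells[2]})
    return records

if __name__ == "__main__":
    for record in scrape_vote_records("https://example.org/votes"):  # placeholder URL
        print(record)
```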
Another mining approach involves scraping human-oriented text. Open Calais is an infrastructure example of this; Health Base is an end-user application. These sites use patterns in human text to try to derive statements. This technique is easily foiled, leading to incorrect statements.
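To see why, consider a toy pattern-based extractor. This is not how Open Calais actually works; it is only an illustration of deriving statements from prose, and of how easily sentence structure fools such patterns.

```python
# A toy illustration of pattern-based statement extraction from free text.
# The "<Name> is the <role> of <Organization>." pattern is invented for
# this example and is deliberately naive.
import re

PATTERN = re.compile(r"([A-Z][\w .]+?) is the ([\w ]+?) of ([A-Z][\w .]+?)\.")

def extract_statements(text: str) -> list[tuple[str, str, str]]:
    """Return (subject, relation, object) triples matched by the pattern."""
    return [(m.group(1), m.group(2), m.group(3)) for m in PATTERN.finditer(text)]

text = ("Jane Smith is the director of Acme Corp. "
        "Nobody thinks Jane Smith is the owner of Acme Corp.")
for triple in extract_statements(text):
    print(triple)
# ('Jane Smith', 'director', 'Acme Corp')
# ('Nobody thinks Jane Smith', 'owner', 'Acme Corp')  <- the negated sentence
#                                                        yields a bogus statement
```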
Intentional markup
Intentional semantic development involves explicit markup of text items. Most HTML documents today contain only text and links. Semantically marked-up documents carry explicit annotations about data objects, identifying them as entities such as people, places, and dates, and their relations (links) have explicit meanings.
In FOAF, we can publish "me" links on our home pages that point to other representations of ourselves. We can indicate links to friends, business associates, and organizations. It quickly becomes apparent that decentralized Facebook-style sites become possible, where individuals publish their information wherever they like, under whatever licenses they like, and sites like Facebook can provide their own views of these webs of data, respecting embedded licenses such as ccREL.
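A minimal sketch of publishing such a FOAF description, assuming the Python rdflib library; the identity URIs are hypothetical placeholders for personal home pages.

```python
# A minimal sketch of generating FOAF data with rdflib.
# The URIs below are placeholder identities, not real pages.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
me = URIRef("https://example.org/alice#me")      # placeholder "me" URI
friend = URIRef("https://example.net/bob#me")    # placeholder friend URI

g.add((me, RDF.type, FOAF.Person))
g.add((me, FOAF.name, Literal("Alice Example")))
g.add((me, FOAF.homepage, URIRef("https://example.org/alice")))
g.add((me, FOAF.knows, friend))                  # an explicit, typed "friend" link

# Serialize as Turtle; a home page could link to this file so other
# sites can discover and aggregate it.
print(g.serialize(format="turtle"))
```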
Standard RSS and Atom syndication feeds are also gaining rich data, including geolocation, that allows third-party sites to create views based on distributed data.
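A sketch of reading such a location-enriched feed with the Python feedparser library. The feed URL is a placeholder, and how geodata is exposed depends on which extension the feed uses (W3C geo, GeoRSS, and so on), so the attribute names below are assumptions to be adapted to the actual feed.

```python
# A sketch of reading location-enriched feed entries with feedparser.
# The URL is a placeholder; "geo_lat"/"geo_long" are assumed keys for
# feeds using W3C geo markup and may differ per feed.
import feedparser

feed = feedparser.parse("https://example.org/events.atom")  # placeholder URL
for entry in feed.entries:
    lat = entry.get("geo_lat")    # assumption: depends on the feed's geo extension
    lon = entry.get("geo_long")
    print(entry.get("title"), entry.get("link"), lat, lon)
```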
With RDFa and Microformats, annotations are added to regular HTML to give it semantic meaning. A person's information can be marked up with hCard, allowing you to "right-click" on a web page and add that person to your address book. Similar formats exist for locations and events.
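A sketch of pulling hCard contact data out of a page with BeautifulSoup. The sample markup is invented, but the class names (vcard, fn, org, tel) are the ones the hCard microformat defines.

```python
# Extracting hCard fields from HTML by their microformat class names.
# The sample markup is made up for illustration.
from bs4 import BeautifulSoup

html = """
<div class="vcard">
  <span class="fn">Alice Example</span>,
  <span class="org">Example Co-op</span>,
  <span class="tel">+1-555-0100</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
for card in soup.select(".vcard"):
    contact = {}
    for field in ("fn", "org", "tel"):
        element = card.find(class_=field)
        if element is not None:
            contact[field] = element.get_text(strip=True)
    print(contact)
    # {'fn': 'Alice Example', 'org': 'Example Co-op', 'tel': '+1-555-0100'}
```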
Google, Yahoo, and others use these formats to make their results more reliable. Without them, information is guessed from the overall content of a page, so if you searched for "frames" looking for picture frames, you would be likely to find a page that merely referred to "frames" in its navigation. RDFa and Microformats allow more reliable markup of subjects, and they allow meta-directories to embed reviews from any cooperating site rather than trying to do everything themselves. Because these reviews link back to the originating site, it is a "win-win-win" for the meta-directory, the originating site, and the end user, with richer, less biased results once a critical mass is reached.
The heavyweight options are systems such as RDF and Topic Maps. They provide a complex, interlinked way to describe arbitrary data. Today they are used only for specific projects, but as their use grows we can expect the web to become more interlinked, allowing an endless assemblage of information using the best references.
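A small sketch of that interlinking on the RDF side, again assuming rdflib: statements that could have come from different sources are merged into one graph and queried together with SPARQL. The data itself is invented; the vocabulary reuses real FOAF terms.

```python
# Merging RDF statements from different sources and querying them with SPARQL.
# The example.org URIs and the data are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

EX = Namespace("https://example.org/id/")   # placeholder namespace

g = Graph()
# Statements that could have come from one site...
g.add((EX.alice, RDF.type, FOAF.Person))
g.add((EX.alice, FOAF.name, Literal("Alice Example")))
# ...and from another, referring to the same URI.
g.add((EX.alice, FOAF.knows, EX.bob))
g.add((EX.bob, FOAF.name, Literal("Bob Example")))

# A SPARQL query that walks across the merged statements.
results = g.query("""
    SELECT ?name ?friendName WHERE {
        ?person a <http://xmlns.com/foaf/0.1/Person> ;
                <http://xmlns.com/foaf/0.1/name> ?name ;
                <http://xmlns.com/foaf/0.1/knows> ?friend .
        ?friend <http://xmlns.com/foaf/0.1/name> ?friendName .
    }
""")
for name, friend_name in results:
    print(f"{name} knows {friend_name}")   # Alice Example knows Bob Example
```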
One way to 'intentionally' create semantic data is Semantic MediaWiki.
Next: Semantic Mediawiki and the Semantic Web