22.11.2010 19:48

Back in April Facebook announced support for the Open Graph protocol, a simple RDFa vocabulary for encoding metadata into web pages. Some thought that given Facebook's immense popularity this would finally tip the scales toward a true semantic web and would entice authors to start including structured information in their web sites. Some even thought this would make social activities on the web more open, rather than just opening a new conduit through which Facebook can suck in more data.

If you still think this attempt will bring a significant change to how people write web pages, have a look at the Open Graph data included on a random page from one of the top 100 sites on the web:

<meta property="og:site_name" content="Answers.com"/>
<meta property="og:url" content="http://www.answers.com/topic/albert-einstein" />
<meta property="og:type" content="website" />
<meta property="og:title" content="Albert Einstein"/>

This RDFa block tells me that I'm looking at a website and gives me its URL, domain and title. In other words, it takes 292 bytes to tell me nothing I didn't already know. The information that this page contains an encyclopedic entry about a physicist is as hidden as ever.
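To make the point concrete, here is a minimal sketch (my own, not from the post) of what a consumer of this markup actually gets back. It uses the standard library's HTMLParser to collect the og: properties from the snippet above:

```python
# Sketch: extract Open Graph properties from a page with Python's
# built-in HTMLParser. Illustrative only; a real consumer would
# fetch and parse the full document.
from html.parser import HTMLParser

class OpenGraphParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        # Open Graph data lives in <meta property="og:..." content="..."> tags.
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            self.properties[prop] = attrs.get("content", "")

html = """
<meta property="og:site_name" content="Answers.com"/>
<meta property="og:url" content="http://www.answers.com/topic/albert-einstein" />
<meta property="og:type" content="website" />
<meta property="og:title" content="Albert Einstein"/>
"""

parser = OpenGraphParser()
parser.feed(html)
print(parser.properties["og:type"])   # website
print(parser.properties["og:title"])  # Albert Einstein
```

All the machine-readable payload amounts to is a generic "website" type and a title string that already appears in the page's own title element.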

Yes, this is only one example, but I think it shows exactly how the web reacts to novelties. Make browsers display pages that don't validate, and nobody will bother to make their pages valid. It's the same thing here.

Posted by Tomaž | Categories: Code
