This blog article on “controlled serendipity” spurred me to do a little content curation of my own, turning up this gem of a research paper, which documents how the BBC uses Linked Data technologies to make it easier for its users to navigate the BBC’s vast programming database.
The first article discusses how the Web collective (the user commons, if you will) is benefiting from individual efforts at curating content, done largely as a free service driven by a spirit of sharing.
Sharing has become a reflex action when people find an interesting video, link or story. Great content going viral isn’t new. But the sharing mentality is no longer confined to the occasional gems. It’s for everything we consume online, large or small.
I think anyone engaged in the social Web would readily agree with this sentiment; it’s what makes participating in this distributed forum so much fun. The article also points out, however, that the vast content mines now available can be difficult to navigate in search of true gems. The implication is that content providers need to step up to the plate and deliver content systems that make life easier for Web “content curators”.
The research paper referenced above describes how the BBC used a technique called Named Entity Recognition (NER) to extract concepts from textual input. This made human editorial review more efficient: editors only had to confirm that the extracted concepts were accurate. Once approved, these “concepts” were transformed into links appearing on a Web page, which the BBC could then use to create user journeys through its site. All of this is grounded in Semantic Web principles. The future looks bright, indeed, for those of us who constantly scour the Web for salient content.
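To make the extract-then-approve-then-link flow concrete, here is a deliberately minimal sketch of the idea in Python. It is not the BBC’s pipeline: real NER systems use statistical or machine-learning models, whereas this toy version just looks entities up in a hand-made list. The entity list, sample text, and link paths are all my own illustrative assumptions.

```python
# Toy sketch of the NER -> concept links flow described above.
# The entity dictionary, sample text, and "/concepts/..." paths are
# illustrative assumptions, not the BBC's actual data or URL scheme.

KNOWN_ENTITIES = {
    "BBC": "Organisation",
    "London": "Place",
    "David Attenborough": "Person",
}

def extract_concepts(text):
    """Return (entity, type) pairs found in the text, longest name first."""
    found = []
    for name in sorted(KNOWN_ENTITIES, key=len, reverse=True):
        if name in text:
            found.append((name, KNOWN_ENTITIES[name]))
    return found

def to_links(concepts):
    """Turn (editorially approved) concepts into concept-page links."""
    return {name: "/concepts/" + name.lower().replace(" ", "-")
            for name, _ in concepts}

text = "David Attenborough narrates a BBC series filmed near London."
concepts = extract_concepts(text)   # step 1: automatic extraction
links = to_links(concepts)          # step 2: approved concepts become links
```

In the real system, the human editorial step sits between the two calls: editors accept or reject each extracted concept before any links are published, which is what keeps the resulting “user journeys” trustworthy.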