At 12:00 GMT on 23rd May 2006, GoogleNews carried 58 media stories about Estonia, from which my Latent Semantic Analysis engine automatically identified 35 distinct perspectives on the news. The engine also created abstracts of these perspectives using relevant content from one or more of the media stories, and it identifies the single news story most relevant to each perspective (where one exists).
What you are seeing here is an alpha version of a Web 2.0 application for automatically extracting intelligence from news coverage. It makes sorting, reading, and evaluating news much faster. It creates visual maps of the news and backs these up with files of the relevant content and context; a full index is here.
The significance of this kind of research is that it shows how we can extract 'intelligence' from web pages such as news stories and blogs, identify relevant and related concepts, and then re-assemble the news around these concepts into short stories for fast news briefing.
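The post does not show how the engine works internally, but the general technique it names is straightforward to sketch. The toy example below (hypothetical story texts, hypothetical parameter choices, using scikit-learn rather than whatever software the engine actually runs on) shows the core pipeline: weight a term-document matrix with TF-IDF, project it into a low-rank latent semantic space with truncated SVD (this is LSA), cluster the stories in that space into "perspectives", and pick the story nearest each cluster centroid as the single most relevant one.

```python
# Hypothetical sketch of LSA-based news clustering; not the author's actual engine.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# Invented stand-ins for fetched news stories.
stories = [
    "Estonia's government approved a new budget for defence spending.",
    "The Estonian parliament debated the defence budget increase.",
    "A tech startup in Tallinn raised funding for its software platform.",
    "Estonian software companies attract new venture funding.",
    "Heavy rain caused flooding in several Estonian coastal towns.",
    "Coastal flooding disrupted traffic after days of heavy rain.",
]

# 1. Term-document matrix weighted by TF-IDF.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(stories)

# 2. LSA: truncated SVD projects stories into a low-rank latent semantic space,
#    so stories sharing related (not just identical) terms land near each other.
lsa = TruncatedSVD(n_components=3, random_state=0)
X_lsa = lsa.fit_transform(X)

# 3. Group stories into "perspectives" by clustering in the latent space.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_lsa)

# 4. For each perspective, take the story closest to the cluster centroid
#    as the single most relevant story.
most_relevant = {}
for c in range(3):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X_lsa[members] - km.cluster_centers_[c], axis=1)
    most_relevant[c] = stories[members[np.argmin(dists)]]

for c, story in most_relevant.items():
    print(f"Perspective {c}: {story}")
```

At realistic scale (58 stories, 35 perspectives) the cluster count would be chosen from the data rather than fixed, and the abstracts would be stitched from the sentences that load most heavily on each latent dimension; the sketch above only shows the skeleton.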
It makes reading 'the news' faster and identifies the critical issues of the day from different angles.
If you want to try this out for your own subject (industry sector, company, brand, etc.), let me know. There is development work yet to be done (and some further investment in software required), but you will get an impression of the power of this kind of capability.
Picture: The literary cat.... can obviously see the news in the dark