Tuesday, April 12, 2011

In defence of AVEs

Having been involved with PR evaluation for over 20 years, I have been watching a number of recent debates about the use of statistical analytics with a lot of interest. 

Last week PR Week, the public relations industry trade publication, put together a number of perspectives on the use of Advertising Value Equivalents (AVEs), http://goo.gl/oBEk. There is heated debate on the subject.

Last January, Tom Eldridge put together an argument entitled “Why Klout and Peerindex fail to measure your online reputation”, http://goo.gl/oKrOo.

Newly financed http://www.ubervu.com has, like many others, automated sentiment analysis as part of its service.

The evidence of these debates goes on and on.

What they all have in common is that they use algorithms in an attempt to bring insights into an ocean of data.
In PR, marketing and advertising, the use of algorithms is commonplace and always has been.
In psephology, the study of election results, as well as in sample surveys and focus groups, the face-value figures are not commonly helpful on their own and need interpretation. In their development, a system for managing these extrapolations quickly turns into an algorithm used for calculation, data processing, and automated reasoning.

There are some key elements to be considered when using algorithms for gaining insights.
The first is the quality and range of data used.

In almost all research there are a lot of variables to be considered.

For example, in many evaluation methodologies used in PR and in advertising media selection, the likely readership of a specific article is expressed in a range of ways, including newspaper readership, circulation, page number, position on the page, and a whole range of other data points.

The extent to which any of these measures can be attributed to the actual readership of any specific news story is often not clear.
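
To make the arithmetic concrete, here is a minimal sketch of how such data points might be combined into a single estimated-readership figure. The function name, weights and multipliers are entirely hypothetical assumptions for illustration, not drawn from any particular methodology.

```python
# Purely illustrative sketch: the weights and multipliers below are
# hypothetical assumptions, not an industry-standard formula.

def estimated_readership(circulation, readers_per_copy=2.5,
                         page_factor=1.0, position_factor=1.0):
    """Estimate how many people might have read a specific article.

    circulation      -- audited circulation of the title
    readers_per_copy -- assumed pass-on readership multiplier
    page_factor      -- analyst-chosen weighting for which page the story sits on
    position_factor  -- analyst-chosen weighting for where it sits on that page
    """
    return circulation * readers_per_copy * page_factor * position_factor


# Example: a 200,000-circulation title, story on an inside page, top half of the page.
print(estimated_readership(200000, readers_per_copy=2.4,
                           page_factor=0.8, position_factor=0.9))
```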

The value of an advertisement can be attributed to the cost the market will bear, and thus an advertisement of a specific size, page and position provides evidence of the value of that real estate in a publication. Were the same space editorial, and as appealing to the reader, it could be considered to have a comparable value: an Advertising Value Equivalent is born. Because editorial carries the imprimatur of being editorial, it is regarded with more authority by the reader and therefore, some say, has an even greater value; for some it is worth twice as much, for others five times as much or more.
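
A rough sketch of that arithmetic might look like the following. The rate-card figures and the editorial multiplier are invented purely for illustration and do not reflect any specific agency's model.

```python
# Hypothetical sketch of the AVE arithmetic described above; the rate card
# figures and the editorial multiplier are invented for illustration.

def advertising_value_equivalent(column_cm, rate_per_column_cm,
                                 position_premium=1.0,
                                 editorial_multiplier=1.0):
    """Return an AVE for a piece of editorial coverage.

    column_cm            -- size of the coverage in column centimetres
    rate_per_column_cm   -- what the market will bear for that space as advertising
    position_premium     -- uplift for a prominent page or position
    editorial_multiplier -- the contested factor (2x, 5x, ...) some practitioners
                            apply because editorial carries more authority
    """
    ad_value = column_cm * rate_per_column_cm * position_premium
    return ad_value * editorial_multiplier


# A 40 column-cm article at a 25-per-column-cm rate card, front-section premium
# of 1.2, valued without and then with a 2.5x editorial multiplier.
print(advertising_value_equivalent(40, 25, position_premium=1.2))
print(advertising_value_equivalent(40, 25, position_premium=1.2,
                                   editorial_multiplier=2.5))
```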

Here we see evidence of the second key element in using algorithms.

The data used and the methodology adopted need to be common, commonly understood, and transparent for anyone to judge the veracity of the results provided.

In an article, ‘The problem with automated sentiment analysis’, http://goo.gl/tjCyI, Freshnetworks shows how deeply one needs to look into such algorithms and demonstrates clearly that the devil is in the detail. It notes that humans can be about 80% accurate in sentiment analysis of media corpora and that machines can compete at that level, but not in the fine detail. Thus computers already provide an excellent overview.
That there are criticisms and issues is beyond doubt, but progressively computers are able to take the strain and cut out no small proportion of the cost.
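
To illustrate why the fine detail is where machines stumble, here is a deliberately crude, purely hypothetical lexicon-based scorer of the kind many automated services elaborate on. The word lists are invented and real systems are considerably more sophisticated.

```python
# Minimal lexicon-based sentiment sketch (hypothetical word lists) showing both
# the strength of automation at the overview level and its weakness on detail.

POSITIVE = {"great", "excellent", "love", "impressive"}
NEGATIVE = {"poor", "terrible", "hate", "disappointing"}

def crude_sentiment(text):
    """Return a score in [-1, 1]: share of positive minus negative words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(crude_sentiment("An excellent launch, great coverage"))  # 1.0 -- easy case
print(crude_sentiment("Not exactly an excellent launch"))      # 1.0 -- negation missed:
                                                               # the fine-detail problem
```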

I suggest that, before dismissing automation as useless, there is a case for looking at its current benefits, in the knowledge that developers will very soon have the computing power to resolve these issues.

AVEs may be dismissed in 2011, but will they, or an alternative, come back to bite the critics in a year or two?

I believe they will.  

2 comments:

  1. Hi, the main problem with AVE is that the measure is not scientific. There is no research which says how people read papers (or Internet articles) and how people read advertisements, or what the differences and similarities are. So comparing articles written by journalists with ads has no strong theoretical basis.

    It has not even been researched. Yes, some American companies (VMS, for example) have created their own indexes based on AVE and they say it works. But this is still their own research. To say that it is correct, the research should be reproduced by independent researchers.

    The second problem is that publishing an ad is quite a different type of work from media relations work. So how can they be compared in this way? It makes no sense at all.

  2. Hi Anna,
    Thank you for your contribution.
    I think you may find some of the work on the subject of media influence instructive.
    Without revisiting research going back a few years, I thought you might be interested in the breadth and depth of studies into how people read news and understand advertisements.

    I have no disagreement with your principal point that the relationship between an article and an advertisement will be associated with different values by different actors from time to time. Equally, the notion of value, and of values, differs between actors.

    My point is that this is nothing more than a challenge for the technologist.

    I challenge your view that there has not been much research. How, when, where and to what effect newspaper content is received is the subject of a huge body of research, as this taste of some work in the area will attest:

    Curtis, for example, has done considerable work, notably in the political field: http://goo.gl/sKEjL.
    Pew has research into how reading habits are changing http://goo.gl/wtl1l.
    AC Nielsen has a study on what parts of papers people read: http://goo.gl/Ydelq
    Guy Consterdine has done a lot of work on advertising for the PPA and other organisations http://www.consterdine.com/reports.asp
    JISC has a number of research programmes http://www.jisc.ac.uk/
    There are eye-tracking studies for everything from the iPad (http://goo.gl/WBbqG) to newspaper advertisements (http://goo.gl/imZ9a),
    and Alan Kennedy's work is interesting too: http://goo.gl/tSyIn.

    I could go further but I think you will get the point.
