In a previous post I highlighted a small example of how the Onalytica Recession-Index gave a good indication of an impending recession in the UK.
However, I haven’t had the opportunity to conduct a more thorough analysis so I recently asked my colleague, Dr. Andreea Moldovan, to have a look at the Onalytica Recession-Index in relation to GDP. Her findings impressed me.
Figure 1 (below) shows UK GDP against the Recession-Index for the UK Economy. The GDP values are given as quarterly percentage (or relative) change on the previous quarter.
The Recession-Index is a 1 month leading indicator and the values on the chart already contain the lead, i.e. the Recession-Index at Q1 2010 actually contains data from Dec '09 to Feb '10, etc.
The series have different scales and are represented on different vertical axes, for ease of chart interpretation. The left vertical axis corresponds to the Recession-Index values, while the right axis is for the GDP.
Since Q1 2010, and except for Q4 2010, the Recession-Index has correctly predicted the direction of UK GDP growth (increase or decrease) 1 month ahead. On the chart this is reflected by the series moving in opposite directions: a decrease in recession perception among the population corresponds to growth in GDP and vice versa. So, in 8 out of 9 quarters analysed the prediction is correct.
The lead of the Recession-Index is in practice more than 1 month as the GDP values are announced some time after the end of the quarter.
We ran the same analysis for the US Economy and the conclusion is the same: the expectation (or fear) of a recession among the population can be used to predict, 1 month ahead, the direction of the growth rate (increase or decrease) of US GDP.
Figure 2 (below) shows the quarterly values of US GDP (2011 revision) against the Recession-Index in the context of US Economy. As before, the Recession-Index already contains the 1 month lead on the chart, i.e. the Q3 2008 value actually refers to Recession-Index data for Jun '08 - Aug '08, etc.
This time the period analysed is longer, from Q3 2008 to Q2 2012. Except for Q4 2010 and Q3 2011, the Recession-Index is a 1 month leading indicator for the direction of US GDP growth rate (increase or decrease).
In summary, the Onalytica Recession-Index for the UK and US respectively predicts the direction of the country’s GDP one month out in 87% (US) to 89% (UK) of the analysed quarters. I would not be surprised if a model that takes in a few more signals (maybe another Onalytica Index) could make the correct prediction every time. Stay tuned!
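For readers who like to see the mechanics, the direction-of-growth comparison can be sketched in a few lines of code. The figures below are invented for illustration and are not the actual Recession-Index or GDP data:

```python
# Hypothetical sketch: count how often a fall in a leading index coincides
# with a pick-up in GDP growth (and a rise with a slowdown), i.e. the two
# series moving in opposite directions quarter on quarter.

def direction_hit_rate(index_values, gdp_changes):
    """Fraction of quarter-on-quarter moves where the index and GDP growth
    move in opposite directions (the 'correct prediction' in the post)."""
    hits = 0
    comparisons = 0
    for i in range(1, len(index_values)):
        index_move = index_values[i] - index_values[i - 1]
        gdp_move = gdp_changes[i] - gdp_changes[i - 1]
        if index_move == 0 or gdp_move == 0:
            continue  # skip flat quarters
        comparisons += 1
        if index_move * gdp_move < 0:  # opposite signs = correct call
            hits += 1
    return 0.0 if comparisons == 0 else hits / comparisons

# Toy series: the index falls while GDP growth picks up, and vice versa.
index = [3.1, 2.8, 2.5, 2.9, 2.6]
gdp   = [0.2, 0.4, 0.6, 0.3, 0.5]
print(round(direction_hit_rate(index, gdp), 2))  # → 1.0
```

With real data the hit rate would come out at the 87–89% reported above rather than a perfect score.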
OECD is now forecasting that the UK economy will enter a recession (story from Telegraph).
From a macro perspective I guess it is not counter-intuitive that growth comes to a halt when the world is engaged in a massive deleveraging operation and, at the same time, is impacted by the increasing uncertainty around public finances in the Euro Zone.
But reading the forecast from the OECD I couldn’t help thinking back to the previous posts on this blog about how the change in online sentiment has, for some time, indicated that a recession was becoming more likely.
On the 2nd of August I wrote this post, which showed that those with more influence in the debate on the UK economy were becoming more concerned about a possible recession than the public in general.
Since our influence-weighted analyses usually serve as leading indicators, this was a clear warning sign.
On the 1st of November I wrote this post in which I point out that the Onalytica Recession-Index for the UK economy had reached an all-time high (since April 2010).
In October 2011 the equal-weighted Recession-Index, which represents the sentiment of the broad population, actually overtook the influence-weighted index, which represents the sentiment of those with more influence in the debate on the UK Economy. The gap widened further in November.
This effectively means that the broad population as a whole is now more convinced that we are heading for a recession and is likely to rein in its spending further.
The referendum on the Alternative Voting system in the UK took place last week and the ‘No’ campaign won the vote. This concurs with the finding in our white paper, Using the Internet as a Market Research Database: Revelations of the UK Elections 2010, that relative share of the online debate reflects voting behaviour. In the white paper we found that changes in daily election poll results could be estimated by measuring the changes in the relative amount of online discussion. In our analysis of the global English debate on the Alternative Vote we found that the ‘No’ campaign generated a larger share of the online debate, and this indeed reflected voters’ preference in the end.
There is an interesting article in this week’s The Economist on how internet firms are becoming a valuable source of economic insights.
The article mentions a number of cases where online data can be used as an indicator of market activity.
While the methods described are not precisely the approach used by Onalytica, the general idea is the same: Certain changes in online activity (or debate) are rooted in economic or market activities and can, when processed correctly, provide a valuable and real-time source of insight.
One element of the article differs from our experience. We find that in many cases changes in search activity are a lagging, not a leading, indicator of real-life market events, for example car purchases.
Reading the World Wide Web
With the internet now a mainstream medium and the majority of households in the UK having broadband accounts, the internet has grown to such a size that it can be overwhelming, and sometimes confusing, when searching for specific information. According to Google, the number of unique URLs online has surpassed 1 trillion and continues to grow rapidly. If this content could be sorted, categorised and filtered into relevant intelligence, it could be hugely valuable for organisations and governments alike.
In our new White Paper released today, Using the Internet as a Market Research Database, we have taken the UK Election as a case study and used InfluenceMonitor™ to do the legwork for us in trawling the internet for relevant content, enabling us to draw some very interesting and insightful conclusions.
Download our White Paper here: Using the Internet as a Market Research Database, to find out more about some of these findings, such as how changes in the daily election poll results could be estimated by measuring the changes in the relative amount of online discussion.
Click on the icon below to download a copy of the complete White Paper as a PDF:
Alternatively, view a slideshow that gives an overview of the White Paper:
The chart below shows the daily sentiment score associated with Gordon Brown between 6th April and 6th May. The daily sentiment score gives immediate insight into the overall positive or negative opinions about a product or brand, or in this case, Gordon Brown. A shift in sentiment can indicate a positive or negative shift in a brand’s value or perception. We can see from the graph below that:
- Over the study period Gordon Brown was associated with negative sentiment scores.
- From 6th April to 22nd April the daily sentiment score for Gordon Brown was decreasing. (This suggests that the relative incidence of negative terms, on pages mentioning Gordon Brown, was increasing.)
- On the 28th April there was a massive drop in Sentiment score. This date coincides with the ‘Bigot-gate’ event.
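As an illustration of how such a daily score might be computed, here is a minimal lexicon-based sketch. The word lists and the scoring formula are assumptions for demonstration only, not our actual methodology:

```python
# A minimal sketch of a daily sentiment score: positive-minus-negative term
# counts across a day's pages, normalised to the range -1..+1.
# POSITIVE and NEGATIVE are toy lexicons invented for this example.

POSITIVE = {"good", "strong", "win", "popular", "praise"}
NEGATIVE = {"bad", "weak", "bigot", "gaffe", "criticism"}

def daily_sentiment(pages):
    """Score one day's pages mentioning a subject: +1 entirely positive,
    -1 entirely negative, 0 neutral or no sentiment-bearing terms."""
    pos = neg = 0
    for text in pages:
        words = text.lower().split()
        pos += sum(w in POSITIVE for w in words)
        neg += sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

pages = ["a weak campaign and a bigot gaffe", "some praise for a strong debate"]
print(daily_sentiment(pages))  # → -0.2
```

A sudden drop in the score, as on 28th April, simply reflects a day on which the negative terms overwhelmed the positive ones on pages mentioning the subject.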
Last night Onalytica sponsored the drinks reception for the PdF (Personal Democracy Forum) post election review “Action Replay” at the RSA in London. We were able to showcase to a very interested audience some of the results of our analysis of the debate - analysis that we have been tracking in the run-up to the election.
The chart below shows a sample of ‘UK election’ daily buzz and influence, calculated using InfluenceMonitor between 6th April and 6th May. As the discussion was monitored on a daily basis, we can instantly see when the topic was most and least discussed. When the amount of talk changes rapidly, we can drill into the debate to learn why.
- 6th May had the greatest amount of discussion – the actual day of the election.
- There is a clear pattern of discussion throughout the days of the week – for example, the UK election was not discussed as much at the weekends.
- Weekly peaks coincide with Thursdays – the 15th, 22nd and 29th April – these were the days of the TV debates.
- The peak in discussion in the run-up to the election was Thursday 29th April, the day of the third TV debate, which gained the most attention; this also came the day after ‘bigot-gate’.
- 6th April – the day the election was announced – was also the day that saw the second most discussion, after the actual day of the election.
- It is interesting to note that at the beginning of this analysis, on 6th April when the election was announced, the share-of-influence was significantly higher than the share-of-buzz; however, share-of-buzz caught up fairly rapidly and followed the share-of-influence throughout the remainder of the debate.
For several years now, many organisations have been actively monitoring and analysing the online debate in order to gain further insight into consumers’ wants, needs and experiences.
Increasingly, organisations are now taking online analysis a step further by using online buzz to help predict sales, market share and other outcomes, and to detect changes in competitors’ MarCom activities.
The idea is compelling in its simplicity: by listening to what stakeholders are saying about different brands, a reliable forecast can be made as to whether or not customers may prefer one brand over another; and why.
Unfortunately, however, it is not as straightforward as it may sound. Simply counting up the change in brand mentions is not good enough and may often lead to disastrous results.
A crucial element in transforming the online buzz into reliable predictions is the ability to attribute to each online ‘voice’ the correct weight; often referred to as ‘influence’. This of course is very intuitive: It counts more when somebody who is a recognised authority or has a large following on a particular topic talks about a particular brand than if somebody with no following voices his or her opinion. If a particular car brand is mentioned in the driving section of The Times it counts for more than if a competing brand is mentioned on my blog or indeed in The Sun.
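The weighting idea can be sketched as follows. The sources and influence weights below are invented for illustration; in practice a source’s weight would come from its measured authority and following on the topic:

```python
# A hedged sketch of influence-weighted brand mentions: each source's
# mention counts are scaled by that source's influence weight, and each
# brand's share of the weighted total is returned.

def weighted_share(mentions, weights):
    """mentions: {source: {brand: count}}; weights: {source: influence}.
    Returns each brand's share of total influence-weighted mentions."""
    totals = {}
    for source, brand_counts in mentions.items():
        w = weights.get(source, 1.0)  # unknown sources default to weight 1
        for brand, count in brand_counts.items():
            totals[brand] = totals.get(brand, 0.0) + w * count
    grand_total = sum(totals.values())
    return {brand: value / grand_total for brand, value in totals.items()}

# A high-authority outlet mentioning BrandA twice outweighs a personal
# blog mentioning BrandB five times.
mentions = {
    "times_driving": {"BrandA": 2},
    "personal_blog": {"BrandA": 1, "BrandB": 5},
}
weights = {"times_driving": 10.0, "personal_blog": 0.5}
shares = weighted_share(mentions, weights)
print({b: round(s, 2) for b, s in shares.items()})  # → {'BrandA': 0.89, 'BrandB': 0.11}
```

The point of the example: raw counts would put BrandB ahead, but once the voices are weighted by influence the picture reverses.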
Some may now be thinking that surely more sales will only be the result if a brand is mentioned in a positive context or is unreservedly recommended. However, that is not necessarily the case.
All things being equal, it is normally better for a brand to be mentioned in a positive context than in a negative one. But we have to remember that every time a brand is mentioned in a negative context there are two opposing forces at work. The first force is negative. The reader may be slightly less likely to favour the brand because of the negative context. However, because the mention of the brand increases the reader’s familiarity with the brand and brings the brand to the forefront of the reader’s mind, a positive force is at work too.
Whilst the old saying that “any PR is good PR” is not entirely true, research shows that it is, in fact, almost true. Unless the talk about a brand is either very, very negative or unanimously negative, any brand mention is likely to have an overall positive impact. The occasional negative mention actually often contributes positively to increasing outcome.
In fact, when we at Onalytica test prediction models we can see that in most cases we get a better (and very good) prediction of outcome (e.g. sales) if we leave sentiment out of the model. (However, I must stress that in certain situations the model actually improves slightly by including sentiment.)
When we started out predicting sales and other business outcomes from the analysis of online buzz, we were concerned that the data we collected online were not representative of the total debate. However, our ability to satisfactorily predict sales of goods that cannot be purchased or delivered online (e.g. cars, movie visits and prescription drugs), based solely on analysing the online debate, has largely allayed these concerns.
In fact, when we collect the buzz online what we get is a very large and very representative sample of the overall buzz (offline and online); when it comes to the debate about the vast majority of interesting issues and brands there is no separate online or offline debate. When there is an increase in the online debate about a brand there is also an increase in the same debate at the pub and in the workplace.
The ability to predict business outcomes from online buzz has sparked new ways of working among several of our clients.
Some now set out targets known as “influence budgets” that are similar to traditional budgets where revenue is the target except that here the target is related to how much online ‘influence’ a brand earns; on its main brand, its individual products and services, and on key marketing messages.
Using “influence budgets”, organisations can now predict more precisely whether or not they are on track to meet their actual revenue or market share targets; and if they are not on target they are in a position to take action earlier.
The actual actions initiated are often more traditional. They most often involve adjusting their MarCom activities, including (but not limited to) the total spend.
An interesting side-effect is that the process of benchmarking brands against an influence budget also gives organisations early insights into changes in competitors’ MarCom spends and their effectiveness.
An overview of predicting business outcome from online buzz would not be complete without a few comments on how some of the key elements differ across markets.
When it comes to predicting outcome from online buzz there are two main factors that differ from market to market.
The first market-dependent factor is the lag from an observed change in the online buzz to the change in business outcome. This lag may range from about a week (e.g. prescription drugs), through 30–60 days (e.g. cars), to over 6 months (e.g. white goods).
The second market-dependent factor is how accurately changes in business outcome can actually be predicted. In certain segments where the goods/services are difficult or expensive to sample before purchase (e.g. cars, travel, mobile phone services, gadgets, financial services, etc.) the models work extremely well. In areas such as cheap FMCGs where the goods can easily and inexpensively be sampled, the ability to predict changes in sales may work less well, but can still compete very favourably with traditional models.
In a marketplace where products and services are increasingly on par, earlier access to better and more precise information is likely to become even more important.
By transforming online buzz into actionable intelligence, managers can now act earlier and on safer grounds if they are not on track to meet their business targets.
(To see an example of the relationship between sales and online buzz, see this previous post.)
The relationship between online buzz and business outcome (sales, market share, subscriptions, etc.) is a topic of increasing interest to many businesses.
The thought is compelling: What if you could listen to the online debate and precisely predict how much you (or your competitors) are going to sell next month or quarter?
Actually, in many cases you may just be able to do that.
Before I move on to a real life example of how online buzz and sales are related I would like to take a step back and explain the rationale for why that relation exists.
The typical way an organisation drives sales is via some sort of market communication; be it advertising, PR or some other form of activity.
In its simplest form the chain works like this: A company runs an advertising campaign. Those who are exposed to the campaign may be impacted in several ways. Some may accept the positive message conveyed while others may just become more aware of the brand and the offering presented. In all circumstances the brand moves up in our attention. We become a little bit more aware of the brand than we were before.
This increased awareness (as well as the message presented) leads to increased conversations about the brand and/or the message. The increased awareness (and the offering) usually also leads to more sales. This, of course, is nothing new. If it weren’t true, the £100 billion advertising, marketing and PR business would be resting on a sham. But advertising actually works – and often better than traditional models capture.
A practical example may look something like this: A mobile phone operator (let’s call it ‘Mobuzz’) runs a fairly traditional advertising campaign on TV, billboards and online and in doing so exposes hundreds of thousands (maybe millions) to their message.
Even if the campaign is pretty average the very least it achieves is that it gives Mobuzz a little higher mindshare among those who are exposed to it. However it does more than that. It actually impacts those who are not exposed to the campaign as well. It does that via word-of-mouth.
Imagine two friends having a chat. One person, who hasn’t been exposed to Mobuzz’ campaign, tells the other that he is considering changing his phone provider as his contract is coming to an end. The other person may offer some advice or relay some personal experience, but because he or she has been exposed to the Mobuzz campaign, he or she is slightly more likely to mention Mobuzz than if he or she had not been exposed to it. This raises the awareness of Mobuzz with the person who hasn’t been directly exposed to the campaign, and even though the increase in awareness is minuscule, this person is now slightly more likely to choose Mobuzz than before the conversation occurred.
If you could listen in on this kind of conversation between friends, you would be able to detect which brands get mentioned more (and less) in which contexts, and thereby start to form an opinion on which brands are gaining more (and less) mindshare. This in turn would be a strong predictor of future sales.
Aside from the fact that the thought of such monitoring is scary (and probably illegal) it is also not necessary if somewhere else we can capture a representative sample of the conversation. Enter the Internet.
Online, individuals, media and other organisations ask questions about brands, discuss them, rave about them and air their grievances. By capturing these conversations and processing them correctly we have, for many – but not all topics, a large representative sample of the conversations.
Now, turning the observed online debate into actual sales predictions is not as straightforward as you may think.
You may think that if there is a sudden increase in the debate about brand A, compared with the competing brand B, this will lead to more sales for A (compared with B’s). However, that would be too simplistic.
Very importantly, you have to adjust for the ‘weight’ or ‘influence’ of the voices that speak. If I write a story about Hyundai cars on my blog it doesn’t carry as much weight as if The Times driving section or a blogger with authority on this brand does it.
Therefore, raw buzz monitoring, where all voices are treated with equal importance, is mostly useless to predict anything.
There is a lot of good research to show that the more influence a brand accumulates, the larger share of the market it will gain.
This is intuitive as well: If a greater proportion of the talk is about a particular brand, this brand is going to be at the front of mind of more people. And if those who talk about the brand are voices of influence it is intuitive that it will have greater impact than if said voices have low influence.
In fact, most research I am aware of more than suggests that, over time, relative market share among brands will equate to relative share-of-influence.
Figure 1 (below) shows the percentage change in monthly influence and percentage change in monthly (UK) sales for the Nissan Pathfinder for the period of February to July 2007. We can see that the two graphs follow a similar path and appear to be strongly correlated. In fact, the Pearson product-moment correlation coefficient (a metric often used in PR and Advertising research to measure the relationship between observations and outcome) shows a correlation of 0.99 (on a scale of -1 to +1); a very, very strong correlation.
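For the curious, the coefficient itself is straightforward to compute. The monthly figures below are invented to mimic two series moving together; the 0.99 figure in the post comes from the actual Pathfinder data:

```python
# Pearson product-moment correlation of two series, computed from first
# principles: covariance divided by the product of standard deviations.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy monthly % changes in influence and in sales moving in step.
influence = [5.0, -2.0, 8.0, 1.0, -4.0, 6.0]
sales     = [4.0, -1.5, 7.0, 0.5, -3.0, 5.5]
print(round(pearson(influence, sales), 2))
```

A value near +1 means the two series rise and fall together; near -1, that they move in opposite directions; near 0, that they are unrelated.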
For Pathfinder we can see that the typical lag between earned influence and sales seems to be shorter than one month. This is not always the case but in my experience it is typical for cars that have been on the market for some time and thereby well known to potential consumers.
Figure 2 (below) shows the similar data for Nissan Qashqai. The period here is from March to July 2007 (introduced in February).
At first glance the two variables seem to be less correlated, but in fact the correlation (0.98) is almost as strong as for Pathfinder. There is however, a lag between the earned influence and the sales.
Notice how the change in influence increases sharply from April to May while the increase in sales slows. The change in influence then falls from May to June, only for the change in sales to follow with a one-month lag.
The lag may be caused by the fact that this is a new model. If this is the case then it is likely that the lag will become shorter as more become familiar with the model.
The lag may also be explained by delivery shortages, but regardless of cause, the point is to illustrate that measuring influence is a highly effective way of understanding where the sales are heading.
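One simple way to estimate such a lag is to shift one series against the other and pick the shift that gives the strongest correlation. The monthly data below are invented to exhibit a one-month lag, echoing the Qashqai example:

```python
# Estimate the buzz-to-sales lag by shifting the sales series back by
# 0..max_lag months against influence and keeping the shift with the
# highest Pearson correlation.
import math

def pearson(x, y):
    """Standard Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_lag(influence, sales, max_lag=3):
    """Return the lag (in months) that maximises the correlation between
    influence and sales shifted back by that many months."""
    best = (0, -1.0)
    for lag in range(max_lag + 1):
        x = influence[: len(influence) - lag]
        y = sales[lag:]
        r = pearson(x, y)
        if r > best[1]:
            best = (lag, r)
    return best[0]

# Toy data: sales repeat the influence pattern one month later.
influence = [2.0, 6.0, 3.0, 7.0, 1.0, 5.0, 4.0]
sales     = [1.0, 2.1, 5.9, 3.2, 6.8, 1.1, 5.2]
print(best_lag(influence, sales))  # → 1
```

For an established model like the Pathfinder the best lag would come out near zero months; for a newly introduced model like the Qashqai it would come out around one month, consistent with the charts.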
There is much more to be said about how analysis of online buzz can be used to predict future sales and market share, but I hope the above has raised the awareness of some of the possibilities.