The Concise Oxford English Dictionary (11th ed.) gives as its first definition of influence: "the capacity to have an effect on the character or behaviour of someone or something, or the effect itself".
This definition is very close to the one we subscribe to at Onalytica. In fact, we have both a general definition and a more technical one. Let's take the general one first:
Influence is the capacity of a publication, an organisation or an individual to impact the viewpoints, actions or opinions of others over whom they do not hold power.
A couple of things can be deduced from this definition. Firstly, influence changes when the context changes: the same person is unlikely to have the same impact in relation to very different topics, such as global warming, mobile phones or health services. Secondly, influence is different from power. When you have power over someone, you create an impact by instructing them; when you have influence, you leverage reputation, argument, communication skills and the like rather than direct authority. Influence is an objective measure, and we normally talk about relevance as a measure of perceived influence.
Influence can therefore be seen as a stakeholder's real "punching weight" in the debate—meaning how much they move the markets, through what they say or publish.
Our technical definition of influence is (as it says on the tin) a bit technical; here it is:
Influence is the topical weight, different for each voice, that, when applied to the voices that "speak" about a set of competing brands, transforms each brand's share of the debate into its share of the market.
In practice, this means that if media X posts an article about a car model and 20 new cars are sold as a result, while a similar article from media Y results in 10 cars being sold, then media X has twice the influence of media Y in the debate about cars.
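That arithmetic can be sketched in a few lines of code. Everything below is hypothetical illustration, not Onalytica's data or model: each voice's brand mentions are scaled by its influence weight, and the normalised weighted totals give an estimated share of market.

```python
# Hypothetical illustration of the technical definition above:
# per-voice mention counts for two brands, plus an influence weight
# per voice (media X is assumed twice as influential as media Y,
# mirroring the 20-cars vs 10-cars example).
mentions = {
    "media_x": {"brand_a": 1, "brand_b": 0},
    "media_y": {"brand_a": 0, "brand_b": 1},
}
influence = {"media_x": 2.0, "media_y": 1.0}

def weighted_share(mentions, influence):
    """Apply each voice's influence weight to its brand mentions and
    normalise the weighted totals into an estimated market share."""
    totals = {}
    for voice, brands in mentions.items():
        for brand, count in brands.items():
            totals[brand] = totals.get(brand, 0.0) + influence[voice] * count
    grand_total = sum(totals.values())
    return {brand: t / grand_total for brand, t in totals.items()}

print(weighted_share(mentions, influence))
# brand_a ends up with 2/3 of the estimated market, brand_b with 1/3
```

With equal weights the two brands would split the debate 50/50; the influence weights are what turn share of debate into share of market.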
Can it be measured? At Onalytica we subscribe to the view that "if it matters it can be measured"—we even run a course where we teach techniques to measure things that are difficult to measure. We didn't develop the science of measuring influence at Onalytica (we learned it from this gentleman) but we have done some pretty good implementations of it. How do we know we have good influence weights? Because our predictions about market outcome are fairly good, and we have evidence that if you treat all online voices equally, these same predictions become pretty bad.
One important question remains: how do you test an influence score to find out whether it is any good?
The scientific method is the well-established way of arriving at a conclusion: based on observations and/or initial research you formulate, sometimes via a conjecture, a hypothesis, which you then use to make predictions about an outcome. You then design and conduct experiments to support or refute your hypothesis by testing your ability to predict their outcomes.
So, by making predictions based on our definition of influence and how it interacts with outcomes, we have been able to verify that predictions about outcomes (e.g. market-share changes, awareness, recall, sales, sentiment change) made using influence are significantly better than those made without it.
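That test can be sketched as a simple back-test. All the mention counts, weights and "observed" market shares below are invented for illustration: the point is only the comparison between an influence-weighted prediction and one that treats all voices equally.

```python
# Hypothetical back-test: does weighting voices by influence predict
# observed market share better than treating all voices equally?
mentions = {
    "voice_1": {"brand_a": 3, "brand_b": 1},
    "voice_2": {"brand_a": 1, "brand_b": 3},
    "voice_3": {"brand_a": 1, "brand_b": 1},
}
influence = {"voice_1": 3.0, "voice_2": 0.5, "voice_3": 1.0}
observed = {"brand_a": 0.7, "brand_b": 0.3}  # invented "actual" shares

def predicted_share(weights):
    """Weight each voice's mentions and normalise into brand shares."""
    totals = {}
    for voice, brands in mentions.items():
        for brand, count in brands.items():
            totals[brand] = totals.get(brand, 0.0) + weights[voice] * count
    grand_total = sum(totals.values())
    return {b: t / grand_total for b, t in totals.items()}

def mean_abs_error(pred):
    """Average absolute gap between predicted and observed shares."""
    return sum(abs(pred[b] - observed[b]) for b in observed) / len(observed)

uniform = {voice: 1.0 for voice in mentions}  # all voices treated equally
err_weighted = mean_abs_error(predicted_share(influence))
err_uniform = mean_abs_error(predicted_share(uniform))
print(err_weighted, err_uniform)
# On this toy data the influence-weighted prediction has the smaller error
```

In a real evaluation the influence weights would of course be fitted on one period of data and tested on another; this toy version only shows the shape of the comparison.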
If you would like to know more about influence, we recommend the following links:
Do you have your own definition of influence that works better? Please point us to your research that demonstrates how well it works. We would love to learn about it.