In recent weeks, a number of blog posts dealing with measuring influence have stirred up quite a debate. Jeff Jarvis and Steve Rubel aired their thoughts on the issue.
David Brain and Jonny Bentwood from Edelman’s London office published something termed the “Social Media Index”, in which they propose a brand-new method for measuring online influence.
Most of my feelings after reading the article were summed up (far better) by Jennifer Mattern in her comment: the article offers no research, no references to prior research, no logical reasoning for its claims, and it proposes no way of testing the model.
Can you imagine if the CEO and the Head of Research of a large investment bank published a new options pricing model without actual research and without mentioning the Black–Scholes
model? Of course not. So why in PR?
Actually, the very notion that influence is unrelated to topic is comical. As I understand it, the article proposes that someone has the same influence regardless of the topic discussed. In other words, TechCrunch (for example) is as influential on fly fishing or jogging as it is on Silicon Valley gossip. I don’t think that’s the case. We know both from intuition and from research that influence is topical.
At best, the Social Media Index is an indication of popularity. But popularity is not influence. Those who are very popular often have considerable influence, but you don’t need to be very popular to have a lot of influence.
When you measure popularity, all “votes” count the same. However, when measuring influence each “vote” counts with the weight of all the votes leading to the voter.
Take a look at the figure below.
Person A is clearly more popular than person B. However, those who listen to person A do not, to any large extent, go on to influence others.
Not so for person B: those who listen to B go on to influence others, who in turn influence others, and so on. The aggregated impact (or influence) of B is, in this case, greater than that of A.
The figure (above) indicates why models that only take the first layer into account will come up short when trying to explain the overall outcome.
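The difference between counting the first layer and counting the whole cascade can be made concrete. Below is a minimal sketch on a made-up network (all names are hypothetical): A has more direct listeners than B, but B’s listeners go on to influence others, so B’s aggregated reach is larger.

```python
# Toy network: edges[x] = the people who listen to x directly (made-up data).
edges = {
    "A": ["a1", "a2", "a3", "a4", "a5"],       # A is popular: five direct listeners
    "B": ["b1", "b2"],                         # B has only two direct listeners...
    "b1": ["c1", "c2"], "b2": ["c3"],          # ...but they influence others,
    "c1": ["d1"], "c2": ["d2"], "c3": ["d3"],  # who influence others in turn
}

def reach(person, seen=None):
    """Total number of people influenced, directly or indirectly."""
    seen = set() if seen is None else seen
    total = 0
    for listener in edges.get(person, []):
        if listener not in seen:
            seen.add(listener)
            total += 1 + reach(listener, seen)
    return total

popularity = {p: len(edges.get(p, [])) for p in ("A", "B")}
influence = {p: reach(p) for p in ("A", "B")}
print(popularity)  # {'A': 5, 'B': 2} -- A wins on direct listeners
print(influence)   # {'A': 5, 'B': 8} -- B wins once the cascade is counted
```

A first-layer model sees only `popularity` and ranks A above B; counting the whole cascade reverses the order.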
In pre-Internet times, measuring popularity (or “reach” or “circulation”) was a relatively good proxy for influence, because those who read a particular newspaper or watched a TV programme mostly did not go on to influence anyone outside their close social circle.
But because of the Internet this has all changed and the social event horizon
is now defined by language rather than physical distance. Both direct and indirect influence must therefore be taken into account when trying to measure influence correctly.
Relative influence correlates better to outcome than relative popularity simply because influence takes the indirect effect into account.
So how to measure influence?
Well, it’s been done for a long time. In the academic community “citation analysis” has been used for decades to measure the influence of academic journals, articles, scholars and universities.
The principle is quite simple: You collect all references made between articles about a particular topic. The references are transformed into a large set of simultaneous equations that, when solved, provides the relative influence of each journal. Thomson Scientific
is probably the leading provider for the academic community.
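The “set of simultaneous equations” can be sketched in a few lines. Below is a toy version with made-up citation counts for three hypothetical journals: a journal’s influence is the citation-share-weighted sum of the influence of the journals citing it, which amounts to finding an eigenvector of the citation matrix (here via simple iteration).

```python
# Toy citation analysis: influence defined recursively, so a citation from an
# influential journal counts for more. All numbers are invented.
journals = ["J1", "J2", "J3"]
cites = [            # cites[i][j] = times journal i cites journal j
    [0, 3, 1],
    [2, 0, 1],
    [1, 1, 0],
]

n = len(journals)
# Normalise each row so a journal's outgoing citations are shares, not counts.
shares = []
for row in cites:
    total = sum(row)
    shares.append([c / total for c in row])

# Iterate the simultaneous equations: score[j] = sum_i score[i] * shares[i][j].
score = [1.0 / n] * n
for _ in range(100):
    new = [sum(score[i] * shares[i][j] for i in range(n)) for j in range(n)]
    norm = sum(new)
    score = [s / norm for s in new]

print(dict(zip(journals, (round(s, 3) for s in score))))
```

On this toy data J2 comes out on top even though the raw citation counts are close, because it is cited heavily by the other influential journal.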
However, the original science for measuring influence in a linked network was developed by Wassily Leontief, who devised something called input-output analysis. It was originally (and still is) used to measure how sectors of the economy directly and indirectly influence each other.
Leontief won the 1973 Nobel Prize
in Economics for this specific work.
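The mechanics of Leontief’s model can be shown with a toy two-sector economy (coefficients and demands invented for illustration). If A[i][j] is how much of sector i’s output is needed per unit of sector j’s output, then total output x must satisfy x = Ax + d, so x = (I − A)⁻¹d: the Leontief inverse captures direct and indirect requirements in one step.

```python
# Toy two-sector input-output table (made-up coefficients).
A = [[0.2, 0.3],    # output of sector 0 consumed per unit of each sector
     [0.4, 0.1]]    # output of sector 1 consumed per unit of each sector
d = [100.0, 200.0]  # final demand for each sector

# Invert the 2x2 matrix (I - A) by hand.
a, b = 1 - A[0][0], -A[0][1]
c, e = -A[1][0], 1 - A[1][1]
det = a * e - b * c
inv = [[e / det, -b / det], [-c / det, a / det]]

# Total output needed to satisfy final demand, including indirect effects.
x = [inv[0][0] * d[0] + inv[0][1] * d[1],
     inv[1][0] * d[0] + inv[1][1] * d[1]]

print([round(v, 1) for v in x])  # [250.0, 333.3] -- both exceed final demand
```

Total output exceeds final demand in both sectors because producing for one sector indirectly creates demand in the other, which is exactly the direct-plus-indirect accounting that influence measurement borrows.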
Now, you may doubt that knowledge gained in pre-Internet times more than 60 years ago can provide any useful input to explaining how to measure online influence.
But in much the same way as engineers at NASA draw on Isaac Newton’s work
, those who shape the on-line world draw on Leontief’s.
In 1965 Random House published a book by W. H. Miernyk titled “The elements of input-output analysis”. The book deals, as the title says, extensively with Leontief’s work.
This book is cited as a source in the article “Measuring the relative standing of disciplinary journals” by P. Doreian (1988).
Doreian’s article then goes on to be cited as a source by Jon M. Kleinberg in his article “Authoritative Sources in a Hyperlinked Environment” (1998).
Then, that same year, an article you may have heard of cites Kleinberg’s:
Lawrence Page, Sergey Brin, Rajeev Motwani, Terry Winograd. The PageRank Citation Ranking: Bringing Order to the Web.
There you have it: from Leontief to Google in four easy steps. (There may be shorter or additional paths.)
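The core idea of that last paper fits in a few lines. Below is a compressed sketch of PageRank’s “random surfer” on a tiny hypothetical link graph (not Google’s actual implementation): with probability d the surfer follows a link, otherwise they jump to a random page, and a page’s rank is the long-run chance of finding the surfer there.

```python
# Minimal PageRank sketch on a made-up three-page site.
links = {  # page -> pages it links to (hypothetical)
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
pages = list(links)
n, d = len(pages), 0.85  # d is the usual damping factor

rank = {p: 1.0 / n for p in pages}
for _ in range(50):
    # Base chance of a random jump, plus rank flowing along links.
    new = {p: (1 - d) / n for p in pages}
    for p, outgoing in links.items():
        for q in outgoing:
            new[q] += d * rank[p] / len(outgoing)
    rank = new

print({p: round(r, 3) for p, r in rank.items()})
```

Note that this is the same recursion as the citation example above: a link from a highly ranked page is worth more, so rank propagates through layers rather than stopping at the first one.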
Ok, end of part 1.
In part 2: The list of the 80 or so most influential PR blogs measured using citation analysis including their relative influence. Stay tuned.
Ten years ago the search engine AltaVista was the biggest and most awesome ruler of the Internet.
Then something happened. Two research students at Stanford University made what today seems like a pretty small paradigm shift in search: Instead of mainly relying on page-content analysis to rank search results they created a search engine that also takes network structure (e.g. linking) into account when prioritising search results.
On the 15th of September 1997 the domain name Google.com was registered and the rest, as the saying goes, is history.
But it’s not.
Google isn’t going to be on top forever. They are going to succumb to the same mighty force that put them in their present position: evolution.
As unimaginable as it may seem today, it will happen.
And here is why:
In the last 10 years Google have not substantially improved their search algorithm. They may have made a few refinements, but they haven’t been pushing to make yet another paradigm shift in search. A shift that will be necessary to stay ahead of the game. A shift that hundreds of other companies are currently working on and getting closer to delivering.
At first it seems strange: a company (Google) that spits out so many innovations on a monthly basis, yet they can’t seem to move their search technology further...
The problem for Google is the following: they have one cash cow, AdWords. But AdWords relies on searchers NOT finding what they are looking for in Google’s generic results.
Imagine that Google provided perfect results: They always provided you with a list of the best options for you. Then there would be little or no incentive to click on the advertisements on the search page.
So, the better the Google search results, the less clicking on the ads on the right side. Less clicking because there is no need to: the generic results already list the best options for you. But also less ad-clicking because the better Google’s search results are, the less credible the propositions in the ads become.
In that lies Google’s main problem: if they innovate to stay ahead of the search game, they lower their revenue.
It may be inconceivable, but someday in the not too distant future a site is going to come along that delivers search results better by one or more orders of magnitude. The results will be so good that each of us will instantly lose our competitive edge if we don’t use it. It will be a repetition of the AltaVista-to-Google transition. The only difference is that, due to the extreme connectedness of people today, the switch will be completed in a much shorter period of time.
So what will the next generation of search look like? Good question. I have some ideas, but honestly, I’m not sure. But I’m sure it will happen. Evolution always catches up.