The AdWords keyword tool is the starting point for many search engine marketers, both organic and paid. Even so, it is not the only source of insight into how users search and what words they use. In my latest guest post on Search Engine People, I outline a few alternative sources for keyword information:
Google’s products dominate the search industry in many ways, but there are few tools more ubiquitous than the AdWords Keyword tool. Its data is incorporated in any number of SEO tools, including those offered by SEOmoz, Majestic SEO and Raventools.
With Bing’s updated webmaster tools, including better keyword tools, there is no reason not to look further afield. There is more to search than just Google, and I don’t just mean Bing or Yahoo! either. See what I mean in Keywords without the Adwords Keyword Tool.
How much can you expect a search rank in Google to change? Is dropping by five places worth panicking about? What if it happened on the fifth page, or should you only worry if it were on the first? Being able to make decisions based on this kind of information is important in managing workflow within most Search Engine Optimisation (SEO) projects.
It is easy to assume that the further you move from the first position in Google, the greater the range of changes you will observe, especially since there is more room to climb the further down the rankings you start.
Ranking data is easy to obtain through a number of services and tools; SEOmoz, Raven Tools, Advanced Web Ranking (AWR) and Google Webmaster Tools can all manage and automate much of the process.
Plotting Search Engine Result Page Rank Changes
Looking beyond just the week-to-week changes requires a little more work. And data. Most good tools provide historic data for keywords that are currently being tracked, and this is the data used for this post. The ranking data was collected over a period of time from multiple sites and across multiple keywords in Google using SEOmoz.
This data was used to create the sample seen in Figure 1, plotting the observed position on Google’s SERPs and how much it changes from the next week’s rank. In the scatter plot, Change is on the y axis, while Current Rank is on x.
| Google.AU Current (x, Week 1) | Week 2 | Change (y) |
The points in Figure 1 represent Google.AU Current, and Change, a number derived from the difference between the current rank and the next observed rank for that keyword. Consequently, when Change is a positive number, it represents a movement away from the top position, while a negative number represents movement towards it.
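Deriving the Change metric is a one-line calculation once the weekly ranks sit side by side. The sketch below uses pandas with made-up keywords and assumed column names, not the actual SEOmoz export schema:

```python
import pandas as pd

# Hypothetical weekly rank data; the keywords and column names are
# assumptions for illustration, not the actual SEOmoz export format.
ranks = pd.DataFrame({
    "keyword": ["kw1", "kw2", "kw3"],
    "week1_rank": [3, 12, 27],
    "week2_rank": [5, 9, 27],
})

# Change = next week's rank minus the current rank, so a positive value
# is movement away from position one (down the page) and a negative
# value is movement towards it.
ranks["change"] = ranks["week2_rank"] - ranks["week1_rank"]
print(ranks)
```

With these sample rows, kw1 drops two places (Change = 2), kw2 climbs three (Change = -3), and kw3 holds steady (Change = 0), matching the sign convention described above.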
As SEOmoz collects ranking data only for tracked keywords appearing in the first 50 positions in Google, Bing and Yahoo!, any change in rank it records must fall within this range (other tools do provide data beyond the first 50). This limit skews the observed range of change lower than what probably occurs in the population, particularly as ranks approach the boundaries at 1 and 50. Because of these constraints, the data used in this post is a truncated distribution, restricted to changes observed between positions 1 and 50.
Range of Change per Position
The standard deviation of change for each rank position, seen in Figure 2, is fairly inconsistent, which is not entirely unexpected given the limited nature of the data. The sample does not appear to be large enough to provide sufficient observations at each individual rank.
| Q | Range | Cases | Cumulative | Change Mean | Change Standard Deviation |
The sample is heavily weighted towards the first ten positions, with very little data available for any rank beyond the second page of Google (Figure 3): across positions 1 to 50, the inter-quartile range is just 2 to 13. The first two quartiles cover the first to fifth positions, and the third only just reaches the second page of Google’s search results.
Even within these limitations, it can easily be shown that there is a difference in expected movement as a site’s position falls further down the rankings. Figure 4 displays the distribution of individual observations, standard deviation of change per quartile with the black error bars, and the mean of change with a 95% confidence interval in red.
Grouping by quartile produces a series of standard deviations of change closer to what you would expect: a greater range of observed changes in rank the further you move from position one. While the data supports the hypothesis, the range of positions covered by the last 25% of observations is too large relative to the sample size to be meaningful.
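The per-quartile summary behind a chart like Figure 4 can be sketched as follows. The data here is synthetic, generated so that the spread of change grows with distance from position one; it stands in for the real tracked rankings, which are not reproduced in this post:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic stand-in for the tracked rankings: current rank 1-50, with
# the spread of weekly change growing further from position one.
current = rng.integers(1, 51, size=400)
change = np.round(rng.normal(0, 1 + current / 10)).astype(int)

df = pd.DataFrame({"current": current, "change": change})

# Split the current ranks into quartiles, then summarise change per group,
# analogous to the error bars plotted per quartile.
df["quartile"] = pd.qcut(df["current"], 4, labels=[1, 2, 3, 4])
summary = df.groupby("quartile", observed=True)["change"].agg(
    ["count", "mean", "std"]
)
print(summary)
```

On data generated this way, the standard deviation of change rises from the first quartile to the fourth, which is the pattern the quartile analysis above is looking for.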
Another approach taken with the data was k-means clustering, with k as four clusters. More than four clusters failed to break up the one to six range, accounting for about 63% of observations, and reduced the number of observations in the other groupings below a useful level. Even at four clusters, the groups outside the one to six range never accounted for more than 17% of the sample.
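One-dimensional k-means over the rank positions can be reproduced with scikit-learn. The positions below are synthetic, weighted towards the head of the distribution in roughly the proportions described above; the resulting cluster boundaries are illustrative only:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic current-rank observations, heavily weighted towards the
# first few positions as in the sample described in this post.
positions = np.concatenate([
    rng.integers(1, 7, 250),    # head of the distribution
    rng.integers(7, 21, 100),
    rng.integers(21, 51, 50),
])

# One-dimensional k-means with k = 4 clusters over the rank positions.
km = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = km.fit_predict(positions.reshape(-1, 1))

# Report the rank range and size of each cluster.
for c in range(4):
    members = positions[labels == c]
    print(f"cluster {c}: ranks {members.min()}-{members.max()}, "
          f"{len(members)} observations")
```

Because the sample is so front-loaded, most observations land in the cluster covering the first few positions, echoing the ~63% share reported for the one-to-six range.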
| k | Range | Skewness | Cases | Change Mean | Change Standard Deviation |
Looking at the skewness across each of the clusters suggests that the further a position sits below one, the more its changes skew towards moving up. Unfortunately, for clusters 1 and 2 this is a deceptive number: as there are no observations outside the top 50, the closer a position gets to 50, the fewer drops in rank can be observed, biasing the data towards increases in rank.
Unfortunately this was inevitable. As seen in the quartile ranges, the positions between one and five accounted for at least 57% of all observations. This distribution of data is an artefact of how the sample was created, with the keywords selected by non-random means.
It is certain to be a product of the limitations of the data collected, where the only observations included must be changes in position occurring between 1 and 50. Unsurprisingly, the same tendency towards a greater range of change the further away from the top of Google can be seen within the k-means clusters.
Much like Figure 4, Figure 6 includes black error bars representing one standard deviation from the mean, and red error bars for a 95% confidence interval of the mean for each cluster. The clusters are not in order of the positions in Google they represent:
The data in Chart 4 revealed that the range of change increased from cluster 1 to cluster 2. These two clusters both sit within the last 25% of all observations, the final group in Chart 2. k-means clustering can also highlight outlier populations within a data set.
Partitioning the sample data into six clusters highlighted one group of observations within the first ten positions. This group showed a significantly higher than average change in rank compared to other values in this range. This group is also reflected in the skewness of cluster 4 in Chart 3.
Making Sense of the Data
There are a number of issues with the sample used for this blog post. These limitations mean that the data presented here is not a good selection of the query spaces in which the sites used exist. A few of the problems include:
- Only 1400 records were used
- Massive convenience sampling issues, such as:
  - Keywords selected by inconsistent, non-random criteria
  - SEOmoz data has no visibility past position 50, which limits the ability to observe changes involving any rank beyond that point
  - No differentiation between keywords, such as taxonomy or competitiveness
- No allowance for known algorithm changes
Convenience sampling is a significant issue with the data selected. Tracking terms selected for campaign and client management is certainly best practice from an SEO perspective. However, data collected this way will create a false impression of how search engines behave in a broader sense, providing insight into only one search environment, as defined by the objectives of those involved. It is almost certain that this focuses the sample on vanity and short/head terms, with little tracking of long-tail queries.
The data SEOmoz collects is a truncated distribution, with no visibility on behaviour past position 50. In practical terms, this means the largest change that can possibly be observed in this set is 49 or -49. Terms dropping below position 50 are not included in the data set, nor are terms rising up from below that rank.
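The bias this cut-off introduces is easy to demonstrate with a small simulation. Under the assumption of a symmetric true change distribution for keywords sitting near the boundary, censoring everything outside the top 50 makes the observed changes skew towards rank improvements:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate true weekly rank moves for positions near the visibility
# boundary, with a change distribution that is symmetric in expectation.
current = rng.integers(40, 51, size=10_000)
true_change = rng.integers(-15, 16, size=10_000)
next_rank = current + true_change

# Only pairs where both observations fall within 1-50 are recorded,
# mimicking the top-50 tracking cut-off.
visible = (next_rank >= 1) & (next_rank <= 50)
observed_change = true_change[visible]

print("true mean change:    ", true_change.mean())
print("observed mean change:", observed_change.mean())
```

The observed mean comes out clearly negative even though the true mean is near zero: drops that would carry a keyword past position 50 vanish from the data, which is exactly the bias towards increases in rank noted for the clusters near the bottom of the range.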
Even within these limitations, the data did demonstrate an increase in the average magnitude of change, up or down, the further a position sits from the top. Unfortunately the sample was neither large enough nor broad enough in coverage to provide heuristics for most of the positions observed.
Google announced a new report for AdWords on the Inside AdWords blog, and I wrote a quick post covering my first impressions. The post went live on the blog of Aspedia, the company where I work, and covers a few interesting points.
Yesterday the Inside AdWords blog posted “Make smarter decisions with the new Auction insights report”, announcing the release of the new Auction insights report. This new report supplies information on who is competing for certain keywords and how aggressively…
One of the more interesting things about this new report is that it provides Impression Share at the keyword level, something that has not been seen before. The new Auction Insights Report also provides some interesting competitor data. Another statistic provided that is worth returning to in-depth is the Top of page rate.
Average Position as reported in AdWords (and Webmaster Tools) has been a rather unclear metric. An average by itself, without any other summary statistic such as the standard deviation, is not very useful. However, reporting Average Position alongside the percentage of impressions for which the ad appeared in the top three spots at least provides some clue as to the distribution of observations the average is calculated from. It is certainly a topic worth looking at further.
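A toy example makes the point. The two hypothetical ads below (invented for illustration, not real AdWords data) share the same Average Position, yet a top-of-page style rate immediately separates them:

```python
import numpy as np

# Two hypothetical ads with identical average position but very
# different impression distributions.
ad_a = np.array([3, 3, 3, 3, 3, 3])  # consistently third
ad_b = np.array([1, 1, 1, 5, 5, 5])  # split between top spot and fifth

for name, positions in (("A", ad_a), ("B", ad_b)):
    avg = positions.mean()
    top3_share = (positions <= 3).mean()  # share of impressions in top 3
    print(f"ad {name}: average position {avg}, top-3 share {top3_share:.0%}")
```

Both ads average position 3.0, but ad A appears in the top three spots for 100% of impressions and ad B for only 50%, which is precisely the distributional information the bare average hides.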
You can read the post, New Auction Insights into AdWords Competitors, here.