Archive for the ‘referral data’ tag
After months of hype, it’s been pretty quiet around Pinterest for the last few weeks. According to the latest data we just received from Shareaholic, however, the social photo sharing site is still growing quickly, though judging from the company’s referral data, Pinterest’s growth has slowed down a bit since June. From May to June Pinterest’s referral traffic to Shareaholic’s network of about 200,000 sites grew by 43.7%. From June to July, it “only” increased by 15.97%. Even at this slower growth rate, Pinterest is still on track to surpass referral traffic from Yahoo in August. This, says Shareaholic, means Pinterest will soon be the fourth largest traffic source in the world.
Today, Pinterest already drives more traffic to sites that use Shareaholic than Bing and Twitter and it’s only 0.10% away from overtaking Yahoo. Google, of course, remains the largest driver of traffic to the company’s member sites, followed by Facebook and – for the time being – Yahoo.
As our own Sarah Perez reported a few weeks ago, it was only in June that Pinterest passed Bing, Twitter and StumbleUpon in the company’s rankings.
Google+, by the way, doesn’t even appear in Shareaholic’s graphs, but as a company spokesperson told me earlier today, it accounted for a minuscule 0.06% of its referral traffic last month.
Shareaholic says its data is based on referral traffic to more than 200,000 publishers who reach more than 270 million people each month. It’s important to keep in mind that this data doesn’t necessarily reflect the Internet as a whole, but the company’s data has generally been quite similar to what we’ve seen from other third-party analytics services.
This is a guest post from Brad Shorr.
A lot of entrepreneurs and midsize firms say they “get a lot of web leads.” However, when asked if they know the source of the leads, they have only a vague idea – and sometimes, no idea at all.
How can you tell where to direct your marketing efforts if you don’t know where your leads are coming from? You can’t.
This post will talk about how to set up lead tracking on your site, take the guesswork out of your marketing, and help you get more high-quality leads.
A lot of phone inquiries are never even identified as web leads! This can lead firms to seriously undervalue their Internet marketing efforts.
There are basically two ways to track phone leads: “simple” and “session.”
Simple Phone Tracking
With this method, a service provider such as Marchex sets up a unique phone number that is tied to a particular lead source. Depending on how broadly or precisely you want to track, a handful or a much larger number of phone numbers can be used.
For example, some firms tie one phone number to all of their Google AdWords ads. Other firms tie a phone number to each keyword within the AdWords campaign. The first way allows you to measure the effectiveness of your PPC program as a whole; the second, more granular approach allows you to evaluate the effectiveness of each keyword phrase you are bidding on.
In the same way, a unique phone number can be tied to Microsoft adCenter ads, an email campaign, a specific email blast, a social media landing page, or even a print ad. The service provider collects the data and supplies analytics.
Beyond providing aggregated referral data, tracking services can supply you with a recording of every phone call. By listening to the call, a business can identify the source of each individual lead – and evaluate and improve its inbound sales team’s performance.
Session Phone Tracking
Firms that are highly dependent on web leads often want more detailed lead tracking data. Service providers such as Mongoose Metrics are able to assign a unique phone number to each site visitor, which dynamically displays when they enter your site.
Phone numbers stay attached to the visitor for a fixed period of time — say, 12 hours. During that time, the tracking service records all phone leads, along with data from every visitor session, including where visitors came from. Data on referring sites can be as granular as specific keywords from a PPC ad.
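To make the mechanics concrete, here is a minimal client-side sketch of how dynamic number insertion can work. The number pool, cookie name, session window, and CSS class below are hypothetical placeholders; real providers such as Mongoose Metrics manage the number pool and match calls to visitor sessions on their own servers.

```typescript
// Hypothetical sketch of client-side dynamic number insertion.
// The pool, cookie name, and selector are made up for illustration;
// a real tracking service supplies and reconciles the numbers.
const NUMBER_POOL = ["866-555-0101", "866-555-0102", "866-555-0103"];
const COOKIE_NAME = "tracking_number";
const SESSION_HOURS = 12; // how long the number stays attached to this visitor

function getCookie(name: string): string | null {
  const match = document.cookie.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
  return match ? decodeURIComponent(match[1]) : null;
}

function assignTrackingNumber(): string {
  // Reuse the number already tied to this visitor, if any.
  const existing = getCookie(COOKIE_NAME);
  if (existing) return existing;

  // Otherwise pick a number and remember it for the session window.
  const number = NUMBER_POOL[Math.floor(Math.random() * NUMBER_POOL.length)];
  const expires = new Date(Date.now() + SESSION_HOURS * 3600 * 1000).toUTCString();
  document.cookie = `${COOKIE_NAME}=${encodeURIComponent(number)}; expires=${expires}; path=/`;
  return number;
}

// Swap every displayed phone number on the page for the visitor's assigned number.
document.querySelectorAll(".phone-number").forEach((el) => {
  el.textContent = assignTrackingNumber();
});
```

The interesting work happens on the provider’s side, where the dialed number is matched back to the stored visitor session and its referral data.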
These services are expensive, but offer insights that can help refine not only lead generation, but conversion rate optimization (CRO) as well. We normally start recommending this approach when a firm’s media spend exceeds $5000/month.
Insider Phone Tracking Tips
- Always select an 866 number over an 800 number. Since 800 numbers have been around much longer, they tend to be recycled, and as a result, generate more junk calls from people who associate the number with its previous owner(s).
- Be sure to ask your tracking service to turn on call recording because, surprisingly, some don’t record automatically.
- When reviewing recorded calls, accept the fact you will have to listen to misdials, hang-ups, and auto-dialers (phone spammers). These often represent a fair amount of call volume, but need to be sifted through in order to identify and evaluate the true leads.
Many firms lump all of their form fills into a “web lead” pile that doesn’t tell them anything about where the leads came from. This can be fixed by setting up form tracking, which can be done internally or through a service.
DIY Form Tracking
You or your developer may have the ability to set a cookie that identifies the referring site of every visitor, including the PPC keyword. When a visitor fills out a form, the referring site information is attached to the form. Now you know where your form leads are coming from.
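Here is one way a developer might sketch this, assuming a purely client-side approach. The cookie name, hidden field name, and “keyword” query parameter are illustrative, not a standard, and a production setup would also need to handle the submitted value on the server.

```typescript
// Minimal DIY form-tracking sketch: remember where the visitor came from,
// then attach that information to every form on the page.
const LEAD_COOKIE = "lead_source";

function rememberLeadSource(): void {
  // Only set the cookie once, on the visitor's first landing.
  if (document.cookie.includes(LEAD_COOKIE + "=")) return;

  const params = new URLSearchParams(window.location.search);
  const source = {
    referrer: document.referrer || "direct",
    keyword: params.get("keyword") || "", // e.g. appended to PPC destination URLs
    landingPage: window.location.pathname,
  };
  const expires = new Date(Date.now() + 30 * 24 * 3600 * 1000).toUTCString();
  document.cookie =
    `${LEAD_COOKIE}=${encodeURIComponent(JSON.stringify(source))}; expires=${expires}; path=/`;
}

function attachLeadSourceToForms(): void {
  // Copy the stored source into a hidden input so it arrives with each form fill.
  const match = document.cookie.match(new RegExp(LEAD_COOKIE + "=([^;]*)"));
  if (!match) return;
  document.querySelectorAll("form").forEach((form) => {
    const hidden = document.createElement("input");
    hidden.type = "hidden";
    hidden.name = "lead_source";
    hidden.value = decodeURIComponent(match[1]);
    form.appendChild(hidden);
  });
}

rememberLeadSource();
attachLeadSourceToForms();
```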
Form Tracking Services
Two problems with DIY tracking: first, some firms don’t have the expertise to set it up; second, even though the referral information is connected, everything else in the conversion process is still manual.
To overcome these problems, services such as Marketo can be hired to basically take over your form management. These platforms not only manage data collection and analytics, they take care of your lead response process – sending acknowledgment emails, offer fulfillment, etc.
As these services offer a high level of automation, they tend to be pricey; they make sense for firms that have a high volume of leads, sell high-ticket items, and/or are highly dependent on web leads for their success.
Insider Form Tracking Tips
- Avoid using CAPTCHA on forms. Although it cuts down on spam, it causes conversion rates to plunge.
- CMS platforms often offer alternatives to CAPTCHA that have little or no impact on conversion. We use Drupal’s Mollom module, which runs a 50-point inspection to identify spammers and is activated when the user submits the form. If the user is deemed a spammer, Mollom will block the submission or display a CAPTCHA.
Get On Track
As I hope you can see from this post, lead tracking can be fairly easy to set up either internally or with the right partner. One reason businesses fail to do this is a lack of awareness – they just don’t know tracking options exist. A second issue is strategic: businesses don’t always organize a marketing program around their websites, even when their site generates a substantial number of leads. But that’s another post!
Over to You
Are you tracking leads now? What tips do you have for businesses that are ready to start?
Brad Shorr is Director of Content & Social Media for Straight North, a leading Chicago SEO firm. They work with small and midsize firms in a wide array of B2B niches, including truck tracking and machine knitted fingerless gloves.
[ Web Lead Tracking Explained: How to Get High Quality Leads with the Internet is a post written by Tamar Weinberg. ]
The dust has settled a bit on Google’s decision to stop passing keyword referral data from searchers that are logged in to their Google accounts and using encrypted search by default. That began in mid-October and then ramped up a couple weeks later.
At first, [not provided] represented a small percentage of overall traffic to most of the sites for which I have analytics access. And then it grew. And grew. And grew.
Today, [not provided] represents about 25% of the daily referrers to this blog. But it also accounts for a double-digit share of daily referrers on non-marketing/tech sites that I follow.
And maybe the most head-shaking thing of all is that, in just 2.5 months, [not provided] managed to become a Top 10 referrer for most of the sites that I monitor. Here’s a look at three.
Small Business Search Marketing
My impression is that this blog attracts a mix of marketing consultants/agencies and small business owners. In both cases, it’s probably safe to assume that many readers are Google users and also frequently logged-in to their Google accounts. The [not provided] keyword referrals bear witness to that.
Not surprisingly at all, [not provided] occupies the No. 1 spot among all keyword referrals for 2011 to this blog. That’s among almost 69,000 different keywords that sent 168,000 total natural search visits.
But lest you think that [not provided] only affected marketing/tech blogs and websites, consider this next case.
@U2 is my long-running hobby site about the rock band U2. It’s been in existence since 1995 and for at least the past 12 years has never ranked lower than third on a search for the band’s name. (It was #1 until the official site launched in 2000, and stayed #2 until the Wikipedia page overtook it a couple of years ago.)
@U2 had more than 1.4 million unique visitors in 2011 from all corners of the globe. It’s a very diverse site and certainly not limited to the tech crowd like my own blog is. Visitors range from teens to senior citizens (yes, we have readers in their 70s).
In 2011, natural search sent 1,763,917 visits to @U2 on 237,929 total keywords. But in just 2.5 months, [not provided] cracked the top 10 overall keywords sending traffic to the site.
Related: On the @U2 forum, which has its own subdomain (forum.atu2.com), [not provided] was No. 7 on the 2011 keyword referrer list. We also have a blog on its own domain (atu2blog.com), and [not provided] was the No. 4 “most popular keyword” during 2011.
In other words, this is not just a problem for tech/marketing sites.
Dr. Cynthia Bailey, California Dermatologist
As many of you know, one of my clients is Dr. Cynthia Bailey, a dermatologist in California. Dr. Bailey’s target audience is also not the tech/marketing crowd. Her site offers high-quality skincare products and solutions to a very mainstream audience; it skews female, but us guys have skincare needs, too!
And, much like my mainstream U2 site, [not provided] cracked Dr. Bailey’s list of the 10 most popular keywords; in her case, it was No. 2 on the list in only 2.5 months.
The Problem With [Not Provided]
In each case above, with three very different target audiences, [not provided] made up a substantial percentage of the overall search traffic to these sites — and the numbers would be higher if I only compared it to overall Google traffic.
There are ways to use analytics data to help get a general idea of who these [not provided] visitors are. Google’s own Avinash Kaushik has some ideas and examples in this excellent article.
(I should mention that next month’s SMX West conference has a panel dedicated to discussing ways to cope with this issue: Life In A [Not Provided] World.)
But you know what? I don’t use analytics for general ideas and guesswork; I use analytics for specific answers. And Google has taken away a lot of those answers.
For a number of reasons (many of which Danny Sullivan explained yesterday on Search Engine Land), the [not provided] keyword referrer is nothing short of a pain in the arse for anyone who does serious work on the web.
For me, a blogger who relies on analytics to understand what content visitors are looking for, [not provided] is more than 7,500 visits that are a mystery to me — and remember, that’s in only 2.5 months! I shudder to think what the full 2012 statistics will look like.
For someone like Dr. Bailey, [not provided] directly impacts the bottom line. She not only relies on keyword referrals to help decide what to write about on her blog, but also to understand which keywords drive online sales of skincare products. And in just 2.5 months, [not provided] was No. 2 among all keywords that directly led to online sales.
This isn’t the end of the world for Dr. Bailey, nor for me, but it’s a serious hurdle to accomplishing our website’s goals. (And it’s a slap in the face to think that Google’s paid advertisers aren’t facing the same hurdle, but that’s another post for another day.)
I’m curious: In your analytics, where did [not provided] wind up ranking among all 2011 keyword referrals? How has it impacted your website(s)?
This is a post from Matt McGee’s blog, Small Business Search Marketing.
Google’s [Not Provided]: Assessing 2.5 Months of Analytics Damage
Not Sure Where to Put This One
- Internet Marketing for Brick and Mortar Businesses – Level343
SEO
- Should I Work In-House or at an Agency? [10 For/Against] – Search Engine People
- SEO for Google+ Profiles and Pages – Search Engine Watch
- Quantifying the Impact of Google’s Keyword Referral Data Shutdown – SEOmoz
New Resource [...]
Posted by randfish
On Tuesday, October 18th, Google announced they'd be hiding search referral data for logged-in Google searchers. When questioned by Danny Sullivan of SearchEngineLand, Google provided the following estimate on the impact to search referral data:
"Google software engineer Matt Cutts, who’s been involved with the privacy changes, wouldn’t give an exact figure but told me he estimated even at full roll-out, this would still be in the single-digit percentages of all Google searchers on Google.com"
Tragically, it appears that Cutts was either misinformed or gave misleading information, as "(not provided)" became a major referrer for many websites, climbing into double digits in early November. Now, that percentage has risen even higher, into the 20%+ range on many sites. Hubspot's Brian Whalley reported that the average website using their analytics lost 11.36% of keyword referral data and 423 sites lost more than 20% (15 unlucky souls lost 50%+, which seems almost crazy).
In an attempt to better quantify the impact, we ran a small survey last week, asking fellow marketers to supply information about the impact to their sites.
Here's a visualization of 60 sites' analytics data, showing the self-reported percent of their Google search traffic that used keyword "(not provided)":
Our average in the 6 days from Nov. 4-10 almost exactly matches the average of the several thousand Hubspot customers (11.36% vs. 12.02%), and thus makes me feel pretty good about that data from the survey-takers.
A little more about these 60 respondents:
- We collected 66 finished surveys, but scrubbed 6 that had missing, suspicious, or improperly filled-out information
The types of sites reporting data included a wide variety, as illustrated below:
The sites included in the survey also included a solid variety of traffic numbers. The distribution below reports visits from Google organic search recorded in October, 2011:
We asked the respondents what level of impact this change had on their content and marketing efforts, and received the following distribution of replies:
Approximately 1/5th of those surveyed reported no impact on their content/marketing efforts, which likely suggests those folks don't typically use keyword-level data to help them improve OR the change hasn't cost them enough data to have a negative impact. Another 1/5th claimed a strong impact, which is likely how I'd describe this change for our internal efforts. Granted, we don't actively use this data every week, but we've relied on it heavily for reporting and in the past for audits around content optimization and the generation of new content (or updating/refreshing of old material).
Here are the numbers and a visualization of the referrer encryption data specifically for SEOmoz.org:
From Oct. 19th – 30th, Google sent 163,909 visits from organic search to our website. 3,762 of those visits, or 2.3%, were via keyword "(not provided)". We didn't sweat this too much. As per Matt Cutts' promise, it was in single digits and, while frustrating, had a very tiny impact on our analytics, marketing and content optimization efforts.
But from Oct. 31st to November 13th, Google sent 191,726 visits, and 35,168 of these, or 18.34%, came via keyword "(not provided)." This has a serious impact on our ability to make our website better for visitors (in particular, identifying keywords that send traffic from visitors who may not be having a great experience, so we can create new blog posts, videos, updates, etc. to help them).
To me, that's the most tragic part of this change. The underselling of the change as being "single digits" was lame. The hypocrisy around keyword privacy sucks. And their motivations are questionable at best. But the crummiest part is the impact the change will have. It won't put any black hats out of business, won't stop any malware or hacking, and won't add a shred of value to the Internet. But it will make it harder for marketers and site builders to measure, understand and improve for their audience. The net impact will be a slightly worse web, and Google's claim of privacy will only protect them from criticism because it's a far easier explanation than the truth.
Sometimes, it sucks living in an ecosystem with an 800-pound gorilla.
In the last week of October, Google announced and rolled out a major change to the way its search engine interacts with the rest of the web. Google now uses an SSL-encrypted experience for anyone who uses the Google.com homepage while logged in as a Google user. As a result of this change and how the web works, Google no longer passes search referral data to the destination website when a logged-in user clicks on a search result.
This can sound really technical, but what it means is that, as a website owner, you lose visibility into which keywords a person logged into Google.com searched for before they arrived at your website. Even if you use Google-owned website analytics like Google Analytics, you no longer have access to specific keyword information about how those searchers discovered your website.
Google’s Anticipated Effect vs. Reality
According to Google, this change was supposed to represent a single-digit percentage of traffic to most website owners. (Google estimated that the change would impact less than 10% of searches conducted on a daily basis.) Google’s rationale was that most people using Google just don’t use the www.google.com homepage, and if they do, they may not be logged into Google’s services at the time of their search.
However, watching our analytics for HubSpot, we noticed that the amount of traffic to www.hubspot.com arriving without keyword information was immediately about 13% of our overall search engine traffic, quite a bit more than Google’s prediction in the single digits. As a result, we wanted to examine what this looked like across our 5,600+ customers to determine whether we were just one unlucky outlier, and what the average business owner or marketer on the web is experiencing.
To address the specifics of our analysis, the following data is based on 5,644 HubSpot customers who use HubSpot’s analytics package on their website. The data sample includes all visits from November 1st through November 9th for these sites. All statistics were calculated with a 95% confidence interval.
Google’s Change Actually Affects 11.36% of HubSpot Customers’ Organic Search Traffic
Across all sites, 11.36% of organic search traffic has arrived without a keyword set during this time period for the average HubSpot customer. However, some of our users have seen a dramatically greater loss of intelligence than that. 423 HubSpot customers have experienced more than 20% of their organic search traffic getting stuck in this #SSLpocolypse black hole, and 15 others have lost more than 50% of their traffic’s keyword data. For those 438 websites, this change has had a dramatic impact on how they plan their websites’ future content, and how they understand the leads that convert on their website. In order to better understand how different types of businesses were affected by this change, we broke down this data across some of the available metrics we had on these websites.
Are Different Types of Websites Affected Differently?
For one segment, we cut the data set down to only include websites that received more than 500 organic search visits in that week, meaning they are better-optimized sites with a greater number of pages. However, the data in this set was very similar. The average customer in this set had lost 10.69% of the tracking on their organic search traffic, with a very small standard deviation of 4.91% and just a 0.09% magnitude of error. If you’re not very familiar with statistics, this means that just 16% (or about 200) of the websites in this data set had lost more than 15.6% of their search engine intelligence. While no individual customers in this set had lost more than 50% of their search referrers, 54 of these 1,255 companies still lost more than 20% of their search intelligence. It is clear that this change has had a dramatic impact on the search marketing and SEO efforts of many businesses around the world.
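As a rough check on that 16% figure, assuming the per-site loss is approximately normally distributed: about 84% of values fall below one standard deviation above the mean, which puts the threshold at 10.69% + 4.91% ≈ 15.6% and the share above it at roughly 16% of the segment.

```typescript
// Back-of-the-envelope check of the "16% lost more than 15.6%" figure,
// assuming the per-site keyword loss is roughly normally distributed.
const mean = 10.69;   // average % of organic visits with no keyword (reported)
const stdDev = 4.91;  // reported standard deviation

const oneSigmaThreshold = mean + stdDev;   // ≈ 15.6%
const shareAboveOneSigma = 1 - 0.8413;     // normal CDF at +1 sigma ≈ 0.8413
const sitesInSegment = 1255;

console.log(`${oneSigmaThreshold.toFixed(1)}% threshold`);
console.log(
  `≈ ${(shareAboveOneSigma * sitesInSegment).toFixed(0)} of ${sitesInSegment} sites above it`
);
```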
We also segmented this across companies that were actively blogging vs. companies that do not blog actively. Because companies that are blogging regularly are more likely to be chasing a healthy long-tail keyword strategy, we thought we might see a difference here and that companies blogging more would be losing more of their traffic intelligence. Surprisingly, there was no significant difference here. The average website with a regularly updated blog lost 10.41% of the intelligence on their search engine traffic, with a small standard deviation of 6.34% and just a 0.22% magnitude of error. We also looked at cutting this data across other segments, such as number of employees, B2B vs. B2C, and industry, but we did not uncover any significant patterns or differences. It appears that for once, a change by Google may not have impacted different types of businesses in a dramatic fashion.
What have you noticed about the search referral data for your website? Have you seen different levels of success or intelligence since the Google SSL change? Consider joining the discussion and sharing your specific results on Twitter using the hashtag #SSLpocolypse.
The news that Internet marketers will no longer be able to receive certain keyword-specific search referral data from logged-in Google.com users should be a concern to B2B marketers. With such an emphasis on lead evaluation and the importance of ROI in their marketing programs, every bit of data…
Please visit Search Engine Land for the full article.
It’s a matter of framing the assumptions. If you assume he wanted to come in, then, yes, he’s an idiot. But if you understand that he just wanted to sit on the deck and stare at his reflection in the window, then he’s just a chill dog. (And he doesn’t appreciate his owner filming him and trying to make him look like an idiot, thankyouverymuch.)
Wow Melanie! Bring out the wisdom!
All of this reminded me of the recent debate over Google’s decision to block keywords in referral data for users who are logged in. The short and sweet version of this story: now, when you are logged in to Google, run a search, and click a result, the keyword you used will not be passed on to the site. Some SEOs are upset about this because they use this data as a vital metric to gauge everything they do. Other SEOs use different metrics and thus don’t see the problem.
Despite the fact that we all do things differently, it seemed like many still had (and still have) very passionate feelings about this issue. Some thought SEOs were freaking out over nothing. Which is a fair point, given that we sometimes do that! Others thought that perhaps people weren’t getting mad enough. However, the one thing that has been missing from this conversation is that nobody is wrong.
Each one of us practices SEO differently. For some, this change will impact their entire business. Others will never notice any change. It is this diversity of perspectives and viewpoints that makes working in SEO so damn interesting! And, at the end of the day, despite all our bickering and shouting, we are all just a “chill dog” with our own perspectives, trying to enjoy what’s right in front of us!
Posted by Dr. Pete
Last week, when the SEO world was distracted by revelations that Google was blocking keyword referral data and nostalgic mania over MC Hammer’s search engine, Search Engine Land released a leaked Google document outlining Google’s official guidelines for quality raters. I read the 125-page document out of curiosity, and I decided to share some valuable insights it contains into the mind of Google.
Sorry, No Secrets Here
If you’re looking for SEO “secrets,” you’ll be disappointed by this post. Although this is an internal document, and Google may not be happy about it being leaked, you won’t find a smoking gun here. What you will find is a training manual on Google’s philosophy of quality. The key to proactive SEO is to understand how Google thinks. If you only chase the algorithm, you’ll always be reacting to changes after they happen. Since the document in question is proprietary, I’m not going to link directly to copies of the document or quote large chunks of it. I’m writing this post because I sincerely believe that understanding Google’s philosophy of quality is a fundamentally “white hat” proposition.
What Is A Quality Rater?
Quality raters are Google’s fact checkers – the people who work to make sure the algorithm is doing what it’s supposed to do. Data from quality raters not only serves as quality control on existing SERPs, but it helps validate potential algorithm changes. When you consider that Google tested over 13,000 algorithm changes last year, it’s a pretty important job.
This particular document focuses on rating SERP quality based on specific queries. Essentially, a rater reviews the sites returned by a given query and evaluates each result based on relevance. Raters also flag sites that they consider to be spam. One last note: Google’s philosophy is not always reflected in the algorithm. The algorithm is an attempt to code quality into rules, and that attempt will always be imperfect. The document, for example, says almost nothing about back-link count, unique linking domains, linking C-blocks, etc. Those are all metrics that attempt to quantify relevance.
Here are 16 insights into the human side of Google’s quality equation, in no particular order…
(1) Relevance Is A Continuum
I think the biggest revelation of the document, in a broad sense, is that Google’s view of relevance is fairly sophisticated and nuanced. Raters are instructed to rate relevance along a continuum with 5 options: “Vital”, “Useful”, “Relevant”, “Slightly Relevant”, and “Off-topic”. Of course, there is always a certain amount of subjectivity to ratings, but Google provides many examples and detailed guidelines.
(2) Relevance & Spam Are Independent
Relevance is a rating, but spam is a flag. So, in Google’s view, a site can be useful but spammy, or it can be irrelevant but still spam-free. I think we see some of that philosophy in the algorithm. Content is relevant or irrelevant, but spam is about tactics and intent.
(3) The Most Likely Intent Rules
Some queries are ambiguous – “apple”, for example, can mean a lot of things without any context. Google instructs raters to, in most cases, use the dominant interpretation. What’s interesting is that their dominant interpretations often seem to favor big brands. In specific examples, the dominant interpretation of “apple” is Apple Computers and the dominant interpretation of “kayak” is the travel site Kayak.com.
Other interpretations (like “apple” the fruit or “kayak” the mode of transportation) automatically get lower relevance ratings if there’s a dominant interpretation. I think the notion of a dominant interpretation makes some sense, and it may be necessary for a rater to do their job, but it’s also highly subjective. In some cases, I just didn’t agree with Google’s examples, and I felt that the dominant interpretation unfairly penalized legitimate sites. Most people may want to buy an iPad when they type “apple”, but a site that specializes in online organic apple sales is still highly relevant to the ambiguous query, in my opinion.
(4) Some Results Are “Vital”
The “Vital” relevance rating is a special case. Any official entity – a company, an actor/actress, a politician, etc., can have a vital result. In most cases, this is their official home-page. Only a dominant interpretation can be vital – Apple Vacations will never be the vital result for “apple” (sorry, Apple Vacations; I don’t make the rules). I suspect this is a safety valve for checking the algorithm – if “vital” results don’t appear for entity searches, many people would question Google’s results, even if the SEO efforts of those entities don’t measure up.
Social profiles can also be vital, if those profiles are for individuals or small groups. So, a politician, actress or rock band could have multiple “vital” pages (their home-page, their Facebook page, and their Twitter profile, for example). Interestingly, Google specifically instructs that social media profiles for companies cannot be considered vital.
(5) Generic Queries Are Never Vital
Obviously, Walmart.com is a vital result for the query “walmart”, but Couches.com is not a vital result for the query “couches”. An exact-match domain doesn’t automatically make something vital, and some queries are inherently generic.
(6) Queries Come in 3 Flavors
Query intent can be classified, according to Google, as Action (“Do”), Information (“Know”) or Navigation (“Go”). Like ice cream, queries can come in more than one flavor (although Neapolitan ice cream should never substitute banana for vanilla). This Do/Know/Go model comes up a lot in the document and is a pretty useful structure for understanding search in general. Relevance is determined by intent – if a query is clearly action-oriented (e.g. “buy computer”), then only an Action (”Do”) result can be highly relevant.
(7) Useful Goes Beyond Relevance
This is wildly open to interpretation, but Google says that “useful” pages (the top rating below “vital”) should be more than just relevant – they should also be highly satisfying, authoritative, entertaining, and/or recent. This is left to the rater’s discretion, and no site has to meet all of these criteria, but it’s worth noting that relevance alone isn’t always enough to get the top ratings.
(8) Relevance Implies Language Match
If a search result clearly doesn’t match the target language of the query, then in most cases that result is low-relevance. Likewise, if a query includes or implies a specific country, and the result doesn’t match that country, the result isn’t relevant.
(9) Local Intent Can Be Automatic
Even if a query is generic, it can imply local intent. Google gives the example of “ice rink” – a query for “ice rink” should return local results, and clearly non-local results should be rated as off-topic or useless. This applies whether or not the location is in the query. Again, expect Google to infer intent more and more, and local intent is becoming increasingly important to them.
(10) Landing Page Specificity Matters
A good landing page will fit the specificity of the query. A detailed product page, for example, is a better match to a long-tail query for a specific item. On the other hand, if the query is broad, then a broader resource may be more relevant. For example, if the query is “chicken recipes”, then a page with only one recipe isn’t as relevant as a list of recipes.
(11) Misspellings Are Rated By Intent
If a query is clearly misspelled, the relevance of the results should be based on the user’s most likely intent. In the old days, targeting misspellings was a common SEO practice, but I think we’re seeing more and more that Google will automatically push searchers toward the proper spelling. It’s likely Google is only going to get more aggressive about trying to determine intent and even pushing users toward the dominant intent.
(12) Copied Content Can Be Relevant
This may come as a surprise in a Post-Panda world, but Google officially recognizes that copied content isn’t automatically low quality, as long as it’s well-organized, useful, and isn’t just designed to drive ad views. Again, this is a bit subjective, and it’s clear that you have to add value somehow. A site with nothing but copied content (whether legitimately syndicated or scraped) isn’t going to gain high marks, and a site that’s only using copied content to wrap ads around it is going to be flagged as spam.
(13) Some Queries Don’t Need Defining
Dictionary or encyclopedia pages are only useful if a query generally merits definition or more information. If most users understand the meaning of the query word(s) – Google gives the example of “bank” – then a dictionary or encyclopedia page is not considered useful. Of course, tell that to Wikipedia.
(14) Ads Without Value Are Spam
One quote stood out in the document – “If a page exists only to make money, the page is spam.” Now, some business owners will object, saying that most sites exist to make money, in some form. When Google says “only to make money”, they seem to be saying money-making without content value. It’s ok to make money and have ads on your page, as long as you have content value to back it up. If you’ve just built a portal to collect cash, then you’re a spammer.
(15) Google.com Is Low Relevance
By Google’s standards, an empty search box with no results displayed is off-topic or useless. Ironic, isn’t it? Joking aside, the document does suggest that internal search results pages can be relevant and useful in some cases.
(16) Google Raters Use Firefox
I said no secrets, but I guess this is a little bit of inside information. Google raters are instructed to use Firefox, along with the web developer add-on. Do with that as you will.
Knowing Is 53.9% of The Battle*
So, there you go – 16 insights into the mind of Google. Advanced SEO, in my opinion, really comes down to understanding how Google thinks, and how they translate their values and objectives into code. You can lose a lot of time and money only making changes when you’ve lost ranking – really understanding the mind of Google is the best way to future-proof your SEO efforts.
*I always wondered what the other half was – blowing stuff up, apparently.
Posted by Aaron Wheeler
On Tuesday, Google announced that signed-in users will, by default, be routed to the SSL version of Google (https://www.google.com). Before Tuesday, most users used non-SSL Google for their searches. Now, according to Google, "…a web site accessed through organic search results on http://www.google.com (non-SSL) can see both that the user came from google.com and their search query… However, for organic search results on SSL search, a web site will only know that the user came from google.com." The effects were obvious immediately. Here’s a screenshot of our GA account showing the quantity of "(not provided)" keywords going up from Sunday to today:
Clearly, the inbound marketing community isn’t thrilled. Take Ian Lurie of Portent, for example: he declared war on Google outright. Having a bunch of "(not provided)" referral keywords in Google Analytics is definitely not pretty. Fortunately, as Avinash Kaushik explains in this Google+ post, there’s something you can do to at least gauge the effects on your analytics, and as Rand will explain, the effects aren’t as devastating for most users as they could be. Yet.
In this emergency Whiteboard Friday, Rand will go over the changes Google has made, why it happened (and why it really might have happened), and what you can do to stay calm and fight back. Let us know how this change has affected your sites in the comments below!
Howdy SEOmoz fans. Welcome to a special emergency edition of Whiteboard Friday: Whiteboard Day Agnostic, We’ll Interrupt Any Day to Do This. Unfortunately, Google has made a big change to the way that they are serving keyword referral data from their search results, and this is going to have an unfortunate impact on all of us who do white hat SEO, who do web analytics, and who try to learn from this practice.
I want to try in this Whiteboard video to explain why this has happened, what Google is doing, why they claim they’re doing it, and then also explore some of the reasons that they might actually be doing it, and try to provide some actual information about what folks in the web analytics and SEO spheres can do since this data may become less available.
So let’s start by explaining what happens when you do a Google search today. For example, I have done a Google search here for "learn SEO." I click the Search button and some results pop up, and here’s this nice learn SEO, SEOmoz, www.seomoz.org, learn SEO, and then there’s an ad over here, "Learn SEO from PayMeBucks.com." Click on my ad. Dude, I need your visits bad. That probably would not get approved by the AdWords people, but you can get the idea.
Now previously, if I were to click this result or this result, the web analytics tool at the other end, whatever it is – your Webtrends, your Omniture, your Google Analytics – or your log files would get some referral data about what sent that visit, which keyword sent that visit. So in this case, it would be "learn SEO" sent a visit from Google.com search over to my website. It would track whether it was a paid or an organic click.
This is changing, but only for folks who are logged in. If you are searching from Google and you are logged in, the keyword that referred the visit will be shown as (Not provided). This will show in your web analytics. That’s what Google will say, and they will use these parentheses; that’s how it appears in the Google Analytics dashboard currently. However, if you click this paid search ad, they will still be providing the keyword "learn SEO." So, logged out behavior in purple here: logged out behavior always gets keyword "learn SEO" as the referrer. Logged in behavior gets keyword (Not Provided) if you click on organic results. But if you’re paying Google, you will still be able to see the referral information.
Now Google claims they’re doing this to protect user privacy so that users who are logged in will by default not be showing their searches to the websites that they visit. Unfortunately, I think that there are a lot of people in the search world and folks who observe this who have rightfully stated, well, if Google were trying to protect privacy, they’ve already to some extent done that by providing a secure search – https search, which is what’s doing this as well, the SSL search – for those people who would not like to provide that information. Some very small portion of people do use that form of Google search, the sort of protected search.
So it’s already available. The reason they’re doing this by default I think that many people suspect . . . I’ll link to a great article by Ian Lurie of Portent Interactive, who I think prognosticates or posits the actual reason for this is that ad networks today are being very successful using search referral data from visitors, and they’re able to leverage that data across multiple websites. So Google is hoping to remove that ability and be the only ad network that can be aware of your search behavior, thus sort of blocking out other providers using their near monopoly in search to exclude other people from being able to use this data,
That’s frustrating. It’s sad. It’s upsetting. It certainly doesn’t fit with what we know about Google. But I think the unfortunate thing here is that those of us in the web analytics/SEO sphere are going to have a tough battle to fight from a PR angle because Google can play the "no this is to protect your privacy" card and use that as their excuse. Of course, if that were the case, it seems very odd that you can pay them and still get the data. But I’m going to reserve judgment on that, and I’ll let folks make their own decisions. I do think it’s very important that we not just get upset about this, but we also think about what we can do actionably. Anytime a major player in the search world or social world or inbound world makes a big change, we need to figure out what is it, how is that we can best respond, how can we use data, how can we continue to be great marketers.
There are a couple of things that I would recommend. First off, you should be measuring the quantity and percent of the lost keyword data. That is a very important metric that you’re going to want to track over time. To do this, you simply go to your web analytics tool, grab the number of visits that came in with (Not Provided) as the keyword referral, divide that by your total visits from Google organic, and you will get the percent of search referrals affected by this. You want to track this over time because you want to know if it’s going up, if more people who are logged into Google are searching and finding your site, what percent of data you’re losing, and whether this is going to be a big problem as Google rolls it out more broadly. And you can see some data from SEOmoz below.
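Here is a small sketch of that calculation. The figures and field names are placeholders, not real analytics data; pull the actual numbers for each period from your own web analytics tool.

```typescript
// Share of Google organic visits whose keyword was hidden as (Not Provided).
interface DailyOrganicTraffic {
  date: string;
  googleOrganicVisits: number; // total visits from Google organic search
  notProvidedVisits: number;   // visits with keyword "(not provided)"
}

function percentNotProvided(days: DailyOrganicTraffic[]): number {
  const organic = days.reduce((sum, d) => sum + d.googleOrganicVisits, 0);
  const hidden = days.reduce((sum, d) => sum + d.notProvidedVisits, 0);
  return organic === 0 ? 0 : (hidden / organic) * 100;
}

// Example with made-up numbers: track the trend period over period.
const week: DailyOrganicTraffic[] = [
  { date: "2011-10-31", googleOrganicVisits: 12000, notProvidedVisits: 1900 },
  { date: "2011-11-01", googleOrganicVisits: 13500, notProvidedVisits: 2300 },
];
console.log(`${percentNotProvided(week).toFixed(1)}% of Google organic visits lost keyword data`);
```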
So let’s take a look at our own data. This is from Sunday to Thursday of this week, so ending yesterday. We’re filming this on Friday for release tomorrow, Saturday. You can see (Not Provided) was 1,062 or 1.2% of the visits over these 5 days. However, the number is going up. So as of Sunday, we had zero visits that did not contain any keyword data. Monday had 90. Tuesday had 111. Wednesday had 381. Thursday had 421. That is 2.2%. So you can see that we’ve lost keyword information on a little over 2% of our visits and climbing. So this is frustrating. Google has said that they expect this will be less than 10% for most websites. So we hope to continue to get 90% of the data.
That leads me to number two. You can continue to leverage data from sources like the existing Google data, which should hopefully remain around 90% of what you have today; Bing and Yahoo data, which are responsible for anywhere between 10% and 20% of your search referrals depending on your industry and niche; and, of course, your internal search query data. This data is invaluable not only for doing keyword research and targeting, but also for figuring out conversion rates, optimizing for those visitors, and making their user experience better. It’s really only useful for white hat types of activities, so it’s frustrating that Google pulled this rather than tackling something more black hat focused. But we have what we have.
Number three, if you do feel strongly about this issue, there are lots of opportunities – I don’t want to say to complain – but lots of opportunities to let Google know how you feel. This is a change that they are currently making and have been rolling out. But that doesn’t mean that they might not backtrack if user feedback is overwhelmingly negative, and certainly that would be nice for those of us in the analytics sphere who like to use this data.
So you can obviously blog about it, write about it. You could even write to your congressional rep. There are several forums: the Google blog post announcing this accepts comments, and the Google Webmaster Tools forum certainly accepts comments. You can also contact your AdWords representatives and let them know that you’re not totally thrilled by this move either. Remember, AdWords data is still passing the referrer; it’s organic search that is affected.
So hopefully this won’t affect too big a percentage of search queries and thus will still continue to have some good data, but given Google’s efforts to try and make more people be logged into Google Plus, to Gmail, to Google hosted apps, I don’t know. There is a lot of, I think, fear and uncertainty right now in the analytics world.
But with that said, you have some actionable things you can do. You should definitely start tracking this data, and hopefully we will see you again next week for another edition of Whiteboard Friday, rather than an emergency, interrupting version. We hope we don’t have too many of these. Take care everyone.