What Link Building Success Really Looks Like

Posted by mark-johnstone

A few weeks ago, a post was published entitled The SEO Myth of Going Viral. It referenced 8 pieces of content across 4 different sites that went viral and, most importantly for SEO, gained hundreds of linking root domains. I was the creative director on a lot of those campaigns while working as the VP of Creative at Distilled. Today, I’d like to add some important context and detail to the original post.

I actually agree with much of what it said. However, it’s based on the assumption that one big viral piece of content would result in a visible jump in rankings across the domain within about 3 months of the content being released. There are a few challenges with this as a basis for measuring success.

I wouldn’t advise setting your hopes on one big viral hit boosting your rankings across the domain. Not by itself. However, if that viral hit is part of ongoing link building efforts in which you build lots of links to lots of pieces of content, you can begin to see an upwards trend.

“Trend” is the important word here. If you’re looking for a dramatic step or jump as a direct result of one piece of viral content, this could cause you to overlook a positive trend in the right direction, and even tempt you to conclude that this form of content-based link building doesn’t work.

With regard to this type of link building and its impact on domain-wide rankings, I’d like to focus on the following 4 points:

  1. How success really looks
  2. Why success looks like it does
  3. Other factors you need to consider
  4. How we can improve our approach

What successful link building really looks like

Simply Business was held up in the SEO myth post as an example of this kind of link building not working. I would argue the opposite, holding it up as an example of it working. So how can this be?

I believe it stems from a misunderstanding of what success looks like.

The post highlighted three of the most successful pieces of content Distilled created for Simply Business. However, focusing on those three pieces of content doesn’t provide the full picture. We didn’t make just three pieces of content; we made twenty-one. Here are the results of those pieces:

Note: Data missing for the first two pieces of content

That’s links from 1466 domains built to 19 pieces of content over a period of 3 years.

The myth in question is as follows:

Building lots of links to one piece of content will result in a jump in domain-wide rankings within a reasonable timeframe, e.g. 3 months.

Though this wasn’t the hypothesis explicitly stated at the start of the post, it was later clarified in a comment. However, that’s not necessarily how this works.

An accurate description of what works would be:

Building lots of links to lots of pieces of content sustainably, while taking other important factors into consideration, can result in an increase in domain-wide rankings over time.

For the myth to hold up, there would need to be a directly attributable jump in rankings and organic traffic within approximately 3 months of the release of each piece of content. So where was the bump? The anticipated reward for all those links?

There wasn’t one, at least not in the form you might expect. The movement we’re looking for is here:

Not a jump, but a general trend. Up and to the right.

Below is a SEMRush graph from the original post, showing estimated organic traffic to the Simply Business site:

At first glance, the graph between 2012 and 2014 might look unremarkable, but that’s because the four large spikes on the right-hand side push the rest of the chart down, creating a flattening effect. There’s actually a 170% rise in traffic from June 2012 to June 2014. To see that more clearly, here’s the same data (up to June 2014) on a different scale:

Paints quite a different picture, don’t you think?

Okay, but what did this do for the company? Did they see an increase in rankings for valuable terms, or just terms related to the content itself?

Over the duration of these link building campaigns, Simply Business saw their most important keywords (“professional indemnity insurance” and “public liability insurance”) move from positions 3 to 1 and 3 to 2, respectively. While writing this post, I contacted Jasper Martens, former Head of Marketing and Communications at Simply Business, now VP of Marketing at PensionBee. Jasper told me:

“A position change from 3 to 1 on our top keyword meant a 15% increase in sales.”

That translates to money. A serious amount of it!

Simply Business saw ranking improvements for other commercial terms, too. Here’s a small sample:

Note: This data was taken from a third-party tool, Sistrix. Data from third-party tools, as used both in this post and the original post, should be taken with a grain of salt. They don’t provide a totally accurate picture, but they can give you some indication of the direction of movement.

I notice Simply Business still ranks #1 today for some of their top commercial keywords, such as “professional indemnity insurance.” That’s pretty incredible in a market filled with some seriously big players, household UK names with familiar TV ads and much bigger budgets.

Why success looks like it does

I remember the first time I was responsible for a piece of content going viral. The social shares, traffic, and links were rolling in. This was it! Link building nirvana! I was sitting back waiting for the rankings, organic traffic, and revenue to follow.

That day didn’t come.

I was gutted. I felt robbed!

I’ve come to terms with it now. But at the time, it was a blow.

I assume most SEOs know it doesn’t work that way. But maybe they don’t. Maybe there’s an assumption that one big burst in links will result in a jump in rankings, as discussed in the original post. That’s the myth it was seeking to dispel. I get it. I’ve been there, too.

It doesn’t necessarily work that way. And, actually, it makes sense that it doesn’t.

  • In two of the examples, the sites in question had one big viral hit, gaining hundreds of linking root domains, but this on its own didn’t result in a boost in domain-wide rankings. That’s true.
  • Google would have pretty volatile search results if every time someone had a viral hit, they jumped up in the rankings for all their head terms.
  • But if a site continues to build lots of links regularly over time, like Simply Business did, Google might weight that site more favorably and consider it worthy of ranking higher.

The Google algorithm is an incredibly complex equation. It’s tempting to think that you put links in and you get rankings out, and a big jump in one will correspond to a big jump in the other. But the math involved is far more complicated than that. It’s not that linear.

Other factors to consider

Link building alone won’t improve your rankings.

There are a number of other influential factors at play. At a high level, these include:

  1. A variety of onsite (and technical) SEO factors
  2. Algorithmic updates and penalties
  3. Changes to the SERPs, like the knowledge box and position of paid results
  4. Competitor activity

I’m not going to go into great detail here, but I wanted to mention that you need to consider these factors and more when reviewing the impact of link building on a site’s rankings.

Below is the graph from SearchMetrics for Concert Hotels, also via the original post. This is another site to which Distilled built a high volume of links.

As you can possibly tell from the large drop before Distilled started working with Concert Hotels, the site was suffering from an algorithmic penalty. We proceeded under the hypothesis that building high-quality links, alongside other on-site activity, would be important in the site’s recovery.

However, after three or four large link building successes without any corresponding uplift, we recommended to the client that we stop building links and shift all resources to focus on other activities.

As you’ll see at the end of the chart, there appears to be some positive movement happening. If and when the site fully recovers, we’ll never be able to tell exactly what contribution, if any, link building made to the site’s eventual rankings.

You can’t take the above as proof that link building doesn’t work. You have to consider the other factors that might be affecting a site’s performance.

How can we improve our approach?

As I mentioned at the start of this post, I actually agree with a lot of the points raised in the original post. In particular, there were some strong points made about the topical relevance of the content you create and the way in which the content sits within the site architecture.

Ideally, the content you create to gain links would be:

  • Topically relevant to what you do
  • Integrated into the site architecture to distribute link equity
  • Valuable in its own right (even if it weren’t for links and SEO)

This can be a challenge, though, especially in certain industries, and you might not hit the sweet spot every time.

But let’s look at them in turn.

Topical relevance

If you can create a piece of content that gains links and is closely relevant to your product and what you do for customers, that’s great. That’s the ideal.

To give you an example of this, Distilled created a career aptitude test for Rasmussen, a career-focused college in America. This page earned links from 156 linking root domains (according to the Majestic Historic Index), and it continues to rank well and drive relevant search traffic to the site.

Another example would be Moz’s own Search Engine Ranking Factors. Building lots of links to that page will certainly drive relevant and valuable traffic to the Moz site, as well as contributing to the overall strength of the domain.

However, your content doesn’t have to be about your product, as long as it’s relevant to your audience. In the case of Simply Business, the core audience (small business owners) cares less about insurance than about growing their businesses. That’s why we created several guides to small business marketing, which also gained lots of links.

As Jasper Martens explains:

“Before I left Simply Business, the guides we created attracted 15,000 unique visits a month with a healthy CTR to sign-up and sales. It was very effective to move prospects down the funnel and make them sales-ready. It also attracted a lot of small business owners not looking for insurance right now.”

Integrating the content into the site architecture

Distilled often places content outside the main architecture of the site. I’ll accept this isn’t optimal, but just for context, let me explain the reasons behind it:

  1. It creates a more immersive and compelling experience. Consider how much less impactful the New York Times’ Snowfall would have been if it had to sit inside the normal page layout.
  2. It prevents conflicts between the site’s code and the interactive content’s code. This can be particularly useful for organizations that have restrictive development cycles, making live edits on the site difficult to negotiate. It also helps reduce the time, cost, and frustration on both the client-side and agency-side.
  3. It looks less branded. If a page looks too commercial, it can deter publishers from linking.

While it worked for Simply Business, it would make sense, where you can, to pull these things into the normal site architecture to help distribute link equity further.

Content that’s valuable in its own right (even if it weren’t for links and SEO)

Google is always changing. What’s working now and what’s worked in the past won’t necessarily continue to be the case. The most future-proof way you can build links to your site is via activity that’s valuable in its own right — activities like PR, branding, and growing your audience online.

So where do we go from here?

Link building via content marketing campaigns can still make a positive impact on domain-wide rankings. However, it’s important to enter any link building campaign with realistic expectations. The results might not be as direct and immediate as you’d hope.

You need to be in it for the long haul, and build links to a number of pieces of content over time before you’ll really see results. When looking for results, focus on overall trends, not month-to-month movements.
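To make “focus on overall trends” concrete, here’s a minimal sketch of how you might fit a trendline to monthly organic traffic estimates exported from a third-party tool. The numbers below are hypothetical and purely illustrative:

```python
# A minimal "trend vs. jump" check over monthly organic traffic estimates.
# The figures below are made up for illustration; swap in your own export.
import numpy as np

monthly_traffic = [4200, 4300, 4150, 4600, 4800, 5100,
                   4950, 5300, 5600, 5500, 5900, 6200]  # 12 months of estimates
months = np.arange(len(monthly_traffic))

# Fit a straight line: the slope is the underlying trend, and it's far less
# sensitive to any single spiky month than a month-over-month comparison.
slope, intercept = np.polyfit(months, monthly_traffic, 1)
overall_growth = (monthly_traffic[-1] / monthly_traffic[0] - 1) * 100

print(f"Trend: roughly {slope:.0f} extra visits per month")
print(f"Growth over the period: {overall_growth:.0f}%")
```

A steady positive slope over a year or more is the signal worth reporting; any single month, spike or dip, tells you very little on its own.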

Remember that link building alone won’t solve your SEO. You need to make sure you take other on-site, technical, and algorithmic factors into consideration.

It’s always worth refining the way you’re building links. The closer the topics are aligned with your product or core audience’s interests, the more the content is integrated into your site’s architecture, and the more the content you’re creating is valuable for reasons beyond SEO, the better.

It’s not easy to manage that every time, but if you can, you’ll be in a good position to sustainably build links and improve your site’s rankings over time.


Tactical Keyword Research in a RankBrain World

Posted by Dr-Pete

Summary: RankBrain represents a more advanced way of measuring relevance, built on teaching machines to discover the relationships between words. How should RankBrain change our approach to SEO and specifically to keyword research?

This story starts long before RankBrain, but the action really kicked in around May of 2013, when Google announced conversational search for desktop. At the time, voice search on desktop may have seemed like a gimmick, but in hindsight it was a signal that Google was taking natural language search seriously. Just a few months later the Hummingbird update rewrote Google’s core engine, and much of that rewrite was dedicated to dealing with natural language searches.

Why should you care about voice? For most sites, voice is still a relatively small percentage of searches, and you’ve got other priorities. Here’s the problem, illustrated by the most simplistic Google algorithm diagram I’ve ever created…

If there were two algorithms – one for text search and one for voice search – then, yes, maybe you could drag your feet. The reality, though, is that both text and voice search are powered by the same core algorithm. Every single change Google has made to adapt to natural language searches impacts every search, regardless of the source. Voice has already changed the search landscape irreversibly.


Natural language in action

You may be skeptical, and that’s understandable. So, let’s take a look at what Google is capable of, right now, in 2016. Let’s say you wanted to find the height of Seattle’s iconic Space Needle. As a seasoned searcher, you might try something short and sweet, like this…

“Space Needle height”

Google understands this question well enough to attach it to the corresponding Knowledge Graph entity and return the following:

The corresponding organic results appropriately match the informational query and are about what we’ve come to expect. Google serves this search reasonably well.

“What is the height of the Space Needle?”

Let’s try to shake off our short-form addiction and try a natural language version of the same search. I won’t repeat the screenshot, because it’s very similar, as are the organic results. In 2016, Google understands that these two searches are essentially the same.

“How tall is the Seattle Space Needle in meters?”

Let’s try another variant, switching the “What” question for a “How” question, adding a location, and giving it a metric twist. Here’s what we get back:

Google understands the question and returns the proper units. While the organic results vary a bit on this one, reflecting the form of the question, the matches remain solid. Natural language search has come a long way.


Build great concepts!

This all may be a bit alarming, from a keyword research perspective. Natural language searches represent potentially thousands of variants of even the simplest queries. How can we possibly operate on this scale as search marketers?

The popular notion is that we should stop targeting keywords and start targeting concepts. This approach has a certain logic. The searches above share a general notion of “tallness,” which might look something like this:

“Tall” and “height” are fairly synonymous, words like “size” and “big” are highly related, and units like “feet” and “meters” round out this concept. In theory, this makes perfect sense.

In practice, the advice to target concepts is a bit too much like saying “build great content.” It’s a good goal, in theory, but it’s simply not actionable. How do we build great concepts? We all intuitively understand what a concept is, but how does this translate into specific search marketing tactics?

There’s an even bigger problem, and I can illustrate it with one box:

Ok, one box, a logo, and two buttons. At the end of the day, you can’t type a concept. Search users, whether they’re typing or speaking, have to put words into that box. So, how do concepts, which we all agree exist and are useful, translate into keywords, which I hope we can all agree are still unavoidably necessary?


Language in action, part 2

We need to take a side path on this journey for a moment. Part of rethinking keyword research is understanding that we’re no longer bound by an exact-match world. This isn’t a bad situation to be in, just a complex one. I’d like to tell a story with examples, showing just how far Google has come in understanding the ways that different keywords relate to each other…

Plurals (“scarf” & “proxies”)

While we all know the dangers of keyword stuffing, it originated out of a certain necessity. Search engines simply weren’t capable of equating even simple terms, like plurals. Those days are long behind us. Google understands, for example, that a search for “scarf” should also return results for “scarves”:

In these examples, I’ll be using Google’s own highlighting (the bold text; I’ve added the green boxes) to show where Google seems to understand equivalence or related concepts. Of course, Google’s core relevance engine and highlighting engine are not exactly the same, but I think it’s safe to say that the latter is a useful window into the former.

Google is also fully capable of understanding the reverse. Let’s say, for example, that a “friend” of mine wants to buy proxy IPs. He might search for “proxies”:

Google can easily understand even irregular plurals in both directions.

Stemming (“ballroom dancer”)

Plurals are relatively easy. Let’s step it up a little. Another frequent problem in search is dealing with stemming, which relates to root words and the forms they can take, such as “run” vs. “running.” Here’s a sample search for “ballroom dancer”:

Google is perfectly capable of equating “dancer” to other forms of the word, including “dances,” “dance,” and “dancing.” Once again, keyword stuffing is at best outdated thinking.

Abbreviations (“Dr. Who”)

Can Google recognize common abbreviations? Let’s try a search for our second-favorite doctor (hint, hint, wink), “Dr. Who”:

Google easily makes the connection between “Dr.” and “Doctor.” Interestingly, none of the organic titles or snippets I see on page one contain the word “Dr.”

Acronyms (“SNL skits” & “TARDIS”)

How about acronyms? Here’s a search for “SNL skits”:

Google has no problem interpreting “SNL” as equivalent to “Saturday Night Live.” Interestingly, they also understand that “skits” is synonymous with “sketches.” What if we spell out an acronym that isn’t usually spelled out, such as “Time And Relative Dimension In Space”?

Here, Google is happy to tell us “Hey, nerd, just say ‘TARDIS’ like everyone else.” The six-letter acronym is interchangeable with even the much longer search string.

Acronyms+ (“NJ DMV”)

This is where things get interesting. Here’s a search for “NJ DMV.” Look closely:

Not surprisingly, Google understands that “NJ” equals “New Jersey.” There’s a problem with this search, though – New Jersey doesn’t call their motor vehicle office the DMV, they call it the MVC (Motor Vehicle Commission). Google understands not only how to expand an acronym, but that the acronyms DMV and MVC are conceptually equivalent.

Synonyms (“discount airfare”)

The flip-side of no longer being confined to exact-match keywords is that you might just be finding yourself faced with a lot more competition for any given keyword. Let’s look at a competitive, commercial query, such as “discount airfare”:

Here, “discount airfare” gets matched to “airfare deals,” “discount tickets,” and “cheapest flights,” with even more variations on the rest of page one.

Synonyms+ (“upscale department stores”)

Wait, it gets worse. Google can go beyond traditional synonyms. Consider this search for “upscale department stores” (run from my home-base in the Chicago suburbs):

Not only does Google recognize that “upscale” is synonymous with “luxury,” but they’ve matched on actual examples of luxury department stores, including Bergdorf Goodman, Saks Fifth Avenue, and more.

Answers (“Doctor Who villains”)

We’ve moved from simply synonyms to a world of answers. Here’s another example, a search for “Doctor Who villains”:

It’s a parlor trick to tell you that "villains" is synonymous with "monsters" and "enemies." What you really want to know is that Doctor Who’s rogues’ gallery includes Daleks, Cybermen, and Weeping Angels. Google can make this connection.

These aren’t just exceptions

It’s easy to cherry-pick examples, but are these edge cases or the new normal? I ran an analysis on 10,000 keywords (page one only) and found that only 57% of results had the search phrase in both the title and snippet. I used a pretty forgiving match (allowing for plurals, for example) and the keyword set in question is mostly shorter terms, not long-tail queries. I also allowed the terms to occur in any order. Keep in mind, too, that display snippets aren’t always META descriptions – they’re chosen by Google to be good matches.

All of this is to say that, even with a fairly forgiving methodology and a loose definition of a “match,” just over half of page-one results in my data set matched the search query. The examples above are not outliers – they are our immediate, unavoidable SEO future.
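For what it’s worth, the matching logic doesn’t need to be exotic. Below is a rough, hypothetical approximation of a "forgiving match" like the one described above (allowing simple plural variants, any word order); it is not the actual script behind the 10,000-keyword analysis:

```python
# A rough, illustrative "forgiving match": does every word of the query
# (or a naive plural/singular variant of it) appear somewhere in the
# title + snippet, in any order?
import re

def variants(word):
    """Very naive plural/singular forms for a single word."""
    forms = {word}
    if word.endswith("s"):
        forms.add(word[:-1])
    else:
        forms.add(word + "s")
        forms.add(word + "es")
    return forms

def forgiving_match(query, title, snippet):
    text_words = set(re.findall(r"[a-z0-9]+", (title + " " + snippet).lower()))
    return all(variants(q) & text_words
               for q in re.findall(r"[a-z0-9]+", query.lower()))

print(forgiving_match("discount airfare",
                      "Cheap Flights & Airfare Deals",
                      "Find discount airfares and cheap tickets."))   # True
print(forgiving_match("discount airfare",
                      "Cheapest Flights",
                      "Find the best deals on flights."))             # False
```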


The Algorithm is learning

This deep into the article, you may be wondering what any of this has to do with RankBrain. There’s been a lot of speculation around RankBrain, and so I’m going to do my best to work from the facts as we understand them. You’re going to need some essential background information…

What, exactly, is deep learning?

First, the one thing we all seem to be able to agree on is that RankBrain uses machine learning, thus the “brain” part. Specifically, RankBrain uses “deep learning.” So, what is deep learning? According to Wikipedia:

Deep learning is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and non-linear transformations.

Crystal clear, right? To understand deep learning and the state of modern machine learning, you have to understand neural networks. Let’s start with a simple neural network, the kind that were popular in the early 1990s:

Neural networks were built on a basic understanding of the human brain as a system of “nodes” (neurons) and connections between those nodes. At scale, the human brain is capable of learning incredibly complex ideas using this system of nodes and connections.

So, how do we put this model to work? Let’s start with what’s known as “supervised learning.” In a neural network like this, we have a known set of inputs and a desired set of outputs. Given a certain X, we want to teach the system to return Y. We use these inputs and outputs to train the system, gradually weighting the connections. The hidden layer adds computational complexity, giving the machine enough connections to encode interesting data.

Training itself uses methods that are cousins of linear regression (at the risk of oversimplification). Over a large set of inputs and outputs, we want to minimize the error of our model. In some cases, we work backward from the output(s) to the input(s), in much the same way you might work a difficult paper maze from the finish back to the start.

Why go to all this trouble? If we know the inputs and outputs (sticking just to supervised learning, to keep this simple), why don’t we just have a lookup table? If X, then Y – simple. What happens when we get an input that isn’t in the table? The system fails. The magic of neural networks is that, if the system is properly trained, it can return outputs for completely new inputs.
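If it helps to see the moving parts, here’s a toy supervised network in plain numpy: a handful of weighted connections, a hidden layer, and a training loop that works backward from the error to adjust the weights. It’s a teaching sketch of the mechanics described above (the classic XOR problem), nothing like the scale Google operates at:

```python
# A toy neural network with one hidden layer, trained by gradient descent.
# Inputs X and desired outputs y are the XOR problem: small enough to
# follow by hand, but not solvable without the hidden layer.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

hidden = 8
W1, b1 = rng.normal(size=(2, hidden)), np.zeros(hidden)    # input -> hidden
W2, b2 = rng.normal(size=(hidden, 1)), np.zeros(1)         # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # forward pass: inputs flow through the connections to an output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: push the error back through the network and
    # nudge every connection weight to reduce it
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

# Typically converges close to [[0], [1], [1], [0]]; exact values depend
# on the random initialization.
print(np.round(out, 2))
```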

To make a very long story only medium-long, these simple neural networks were interesting playthings, but weren’t capable of solving many complex problems. So, we put them aside. Then, the inevitable happened – computing power increased exponentially and got cheaper (thanks, Gordon Moore!). Specifically, we invented the GPU. You might think of the GPU as something built for gamers, but it is, in essence, a very powerful math machine.

At some point, simple neural networks scaled up massively, and I mean massively – on the order of 1,000,000X larger. These new machines were able to perform much more interesting tasks, and a new age of neural networks was born. These new machines required more complex methods, and thus, at the risk of oversimplifying a very complex topic, deep learning was born.

How does Google use deep learning?

Fortunately, we know a bit more about RankBrain. In Steven Levy’s excellent article about Google’s machine-learning ambitions, he quotes the following from Jeff Dean, head of the broader Google Brain group…

By early 2014, Google’s machine learning masters believed [Amit’s approach] should change. “We had a series of discussions with the ranking team,” says Dean. “We said we should at least try this and see, is there any gain to be had.” The experiment his team had in mind turned out to be central to search: how well a document in the ranking matches a query (as measured by whether the user clicks on it). “We sort of just said, let’s try to compute this extra score from the neural net and see if that’s a useful score.”

Amit Singhal, the head of Google’s Search team until early 2016, pioneered the heuristic approach – what we might call the “ranking factors.” Machine learning (ML) advocates at Google eventually were able to convince the team to test ML in a ranking context. By all accounts, that experiment went very well and the score was indeed useful.

It’s also worth noting that Amit, who was reported to be skeptical of using ML in organic search, left Google and was replaced by John Giannandrea, who was instrumental in many ML projects at Google. I won’t speculate on Amit’s motivations, but the shift in leadership to a strong ML advocate clearly implies that Google considered the RankBrain experiment a success.

Of course, this raises the question: how exactly are ML and deep learning in play in organic search? Google teaches a deep learning course on Udacity, and I was intrigued to find this screenshot in a quiz. The quiz asked how Google might use deep learning in rankings, and this was the answer:

When we train an ML model, the “classifier” is essentially the resulting decision machine. In this case, that classifier takes in a search term and web page as inputs and decides how relevant they are to each other.

Two things are worth noting in this deceptively simple screenshot. First, ML is being used as a relevance engine. I think it’s safe to say that the quiz is not entirely hypothetical. Second, notice the query and the matching page. The query is "Udacity deep learning", but the matching result title contains the related phrases "machine learning" and "supervised learning." This is starting to look like some of the examples we saw earlier.

Another resource we have is the original Bloomberg article about RankBrain, which is still one of the more comprehensive pieces on the subject. The article quotes senior Google research scientist Greg Corrado and makes the following very specific claim:

RankBrain uses artificial intelligence to embed vast amounts of written language into mathematical entities – called vectors – that the computer can understand. If RankBrain sees a word or phrase it isn’t familiar with, the machine can make a guess as to what words or phrases might have a similar meaning and filter the result accordingly, making it more effective at handling never-before-seen search queries.

Again, RankBrain is being called out as essentially a relevance engine, a machine for better understanding the similarities and relationships between words. What are these vectors the article mentions, though? In the general sense, vectors are a mathematical concept – a point in space with both direction and magnitude. Vectors are a way of encoding complex information.

Thankfully, we have another clue, from Google’s public ML project, TensorFlow. One of Google’s side projects is a library called Word2Vec that, as the name implies, uses ML to convert words into vectors. Traditional methods of encoding words for information retrieval can deal with simple problems like pluralization and stemming, but have little or no sense of relationships. Word2Vec and similar models are capable of learning relationships like the examples below (Source: Tensorflow.org, ©2016 Google):

Here, Word2Vec has learned that the relationship between man and woman is the same as the relationship between king and queen (encoded in the direction of the vector). Similarly, the relationship between the verb tense walking to walked is the same as the relationship between swimming and swam. More importantly, these rules didn’t need to be specified. The machine learned them by studying large collections of real words in context.
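If you want to play with this yourself, the analogy examples are easy to reproduce with the open-source gensim library and a set of pre-trained vectors. Note my substitution: the snippet below loads small pre-trained GloVe vectors rather than training Word2Vec from scratch, but the vector arithmetic is the same idea:

```python
# Reproducing the king/queen and walking/swam analogies with pre-trained
# word vectors via gensim. "glove-wiki-gigaword-50" is a small model from
# gensim's downloader; any decent pre-trained vector set behaves similarly.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # downloads the vectors on first run

# king - man + woman = ?
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# walking - walked + swam = ?  (the verb-tense relationship described above)
print(vectors.most_similar(positive=["walking", "swam"], negative=["walked"], topn=3))
```

The top results should include "queen" and "swimming," respectively, though the exact neighbors vary a little from one vector set to another.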

Google’s actual algorithms are almost certainly more complex than the publicly available Word2Vec library, and researchers have combined vector-based approaches with other approaches, such as the more familiar LDA (Latent Dirichlet Allocation), but it seems very likely that an approach like this is in play in RankBrain.

RankBrain is NOT query translation

It’s easy to mistakenly jump to the conclusion that RankBrain simply translates unfamiliar queries into more familiar ones, or long queries into short queries. This is not the case. RankBrain seems to operate in real-time and can compare multiple versions of a search phrase at once.

If I mistakenly type a search like “Benedict Crumblebatch,” Google will tell me this:

In this case, Google has tried to interpret my intent and has replaced my query with what it thinks is a better version. This is query translation. In this case, all of the results match the translated query and it overrules my original search.

Revisiting an example from above, if I search for “scarf,” I can get back matches on both “scarf” and “scarves” (even in the same result):

Google is not translating “scarf” –> “scarves” and then returning matches on the new term. Google is applying a powerful relevance engine that recognizes these matches in real-time.

Are we sure it’s RankBrain?

Let me be clear on one thing – relevance is a very complex process, and it’s hard to know for sure where traditional information retrieval methods end and RankBrain begins. I can’t say with certainty that all of the examples I showed previously represent RankBrain in action.

However, there is one more piece of evidence. Remember the “NJ DMV” example? Google was able to understand that “DMV” (Department of Motor Vehicles) and “MVC” (Motor Vehicle Commission) are equivalent concepts in New Jersey.

Our data science team, led by Matt Peters, put together an ML prototype that uses a method similar to Word2Vec. If you input search terms into this tool, it looks at the corresponding Google results and calculates the similarity between those results and the original query:

This screenshot has been edited, but the data is real. What the tool is saying is that a page with the title “State of New Jersey – Motor Vehicle Commission” is a good match (93%, although the system is a little forgiving) for “NJ DMV.” The fact that we can train an ML system to perform this task doesn’t prove RankBrain does it, but it does at least show that it is well within Google’s ML capabilities.
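To be clear, the snippet below is not Moz’s prototype; it’s just a bare-bones sketch of the same idea, under the same pre-trained GloVe-vector assumption as the earlier example: represent the query and a result title as averaged word vectors, then score how similar they are.

```python
# A crude query-vs-title similarity score: average the word vectors on each
# side and take the cosine similarity. Words missing from the vocabulary are
# simply skipped. Illustrative only, not Moz's actual ML prototype.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

def text_vector(text):
    words = [w for w in text.lower().split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)

def similarity(query, title):
    q, t = text_vector(query), text_vector(title)
    return float(np.dot(q, t) / (np.linalg.norm(q) * np.linalg.norm(t)))

print(similarity("nj dmv", "state of new jersey motor vehicle commission"))
print(similarity("nj dmv", "best deep dish pizza in chicago"))
```

The first pair should score noticeably higher than the second, which is the whole point: the words barely overlap, but the vectors still recognize the match.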

When did RankBrain roll out?

Please note that RankBrain is often tied to the announcement date in October of 2015, but that article also says that RankBrain was in play “for the past few months.” Steven Levy’s article on ML in Google gives a date of April 2015 for the rollout, and we believe that timeline is accurate. RankBrain has probably been in play for at least 1 1/2 years at the time of this writing.


How do we adapt to RankBrain?

In a world where Google can understand stemming, synonyms, and even answers, how do we approach keyword research? Let’s go back to our Space Needle example. I’m going to use Moz’s Keyword Explorer as a backdrop for the rest of this discussion. Let’s say I fire up my trusty keyword research tool and enter the phrase “space needle height”:

Even out of the gate, we’ve got 1,000 keywords to deal with, many of which are fairly similar. How do we go about targeting these 1,000 variations?

Option 1 is to write 1,000 pages, each laser-targeted at a single phrase. We know, practically, that either this is going to be a huge amount of work or is going to lead to thin content. Sites filled with templated pages that only vary by a few keywords are a lousy user experience and prime bait for Google’s Panda algorithm.

Option 2 is to take as many of these phrases as possible and just stuff them into a single paragraph. I’ve done this for you, and here’s the kind of result you can expect:

SPACE NEEDLE HEIGHT
The Space Needle height (Seattle) is 605 feet. The Space Needle height in stories is just over 60. It’s interesting to note that the Space Needle height comparison to the Empire State Building is about half as high. In contrast, the Seattle Space Needle height comparison to Chicago’s Willis Tower is only about one-third the height.

The bolded phrases are my target phrases. I hope we can all agree that this isn’t optimal content crafting if our goal is to convince our audience that we’re a credible source of information.

I propose a third option. You may have noticed a pulldown in Keyword Explorer for [Group Keywords]. This does exactly what it sounds like it does. Let’s take all of these very similar keywords (and you could do this by hand as well, if you’re willing to put in the time) and try to group them. We end up with something like this:

The system has tried to bucket the keywords into broader, more useful groups, allowing us to ignore some of the minor variants. So, let’s pick three groups from this list:

  1. “space needle height”
  2. “space needle height in stories”
  3. “space needle how tall”

What if we chose representative, natural language phrases within each of these groups? Think of them as exemplars of the group. We might pick something like this:

  1. “height of the Space Needle”
  2. “Space Needle is ___ stories”
  3. “How tall is the Space Needle?”

Now, let’s craft a paragraph around these more natural, diverse phrases:

HOW TALL IS THE SPACE NEEDLE?
The height of the Space Needle in Seattle, Washington is 605 ft. (184 m), including the antenna. Interestingly, while the Space Needle is approximately 60 stories tall, it only occupies 6 floors, with most of the tower being structural. While it was once the tallest building in Seattle, the Space Needle now ranks only 7th.

Not only have we written a paragraph that might actually be valuable to humans, but we’ve covered our three target phrases and even had room for a fourth ("tallest building in Seattle"). What’s more, each of these phrases represents a group of dozens or hundreds of similar keywords. By writing to the groups or broader concepts instead of narrowly targeted phrases, we’re able to cover many keyword variants efficiently.


3 Gs: Gather, Group, Generate

I’ve taken to calling this approach to keyword research the 3 Gs, and it goes like this:

  1. Gather keywords
  2. Group keywords into clusters
  3. Generate exemplars

Another way to think of this process is that we’re grouping keywords into concepts, and then converting each concept back into a representative keyword/phrase: Keyword –> Concept –> Keyword*. The result is a specific search phrase to target, but that phrase represents potentially dozens or hundreds of similar keywords.
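Here’s what that Keyword –> Concept –> Keyword* loop might look like in a hand-rolled form, with a hypothetical keyword export of (phrase, monthly volume) pairs. Keyword Explorer does the grouping for you (with smarter ML under the hood); this only shows the shape of the process:

```python
# Gather / Group / Generate on a tiny hypothetical keyword export.
from collections import defaultdict

# 1. Gather: (keyword, estimated monthly volume)
gathered = [
    ("space needle height", 1300),
    ("height of the space needle", 900),
    ("how tall is the space needle", 2400),
    ("space needle how tall", 350),
    ("space needle height in stories", 150),
]

# 2. Group: a crude "concept" key, the sorted content words of each phrase.
STOPWORDS = {"the", "of", "is", "in", "a", "how"}

def signature(keyword):
    return tuple(sorted(w for w in keyword.split() if w not in STOPWORDS))

groups = defaultdict(list)
for keyword, volume in gathered:
    groups[signature(keyword)].append((keyword, volume))

# 3. Generate: pick the highest-volume phrase in each group as its exemplar.
for members in groups.values():
    exemplar, volume = max(members, key=lambda kv: kv[1])
    print(f"'{exemplar}' ({volume}/mo) represents {len(members)} phrase(s)")
```

In practice you’d want a smarter grouping step (shared stems, or the vector similarity discussed earlier) and you’d sanity-check the exemplars by hand, but the flow is the same: many phrases in, a few natural-language targets out.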

Let’s work through another example, but one with commercial intent. Pretend you’re working in the Seattle apartment space and are looking to write an article about rental costs. Just to pick a starting point, you enter “Seattle rental prices” into your keyword research tool of choice and gather your keyword list:

Naturally, we get back a list of related but sometimes very similar keywords. Even in this list, we can start to see some interesting variations (“average rent”, prices by year, mapped prices, etc.), but let’s take it to step two and group these keywords:

In a real-world keyword research scenario, we’d want to thoroughly explore all of the groups, but I’ve picked three for now that caught my eye (underlined in green). They are:

  1. “Seattle average rent by neighborhood”
  2. “Seattle housing prices skyrocket”
  3. “cheapest Seattle apartments”

How do we go about generating an exemplar from each group? Sometimes, intuition is fine. For example, the keywords our system has grouped under #2 turn out to be a bit of an odd mix, but I really like how “skyrocket” resonates and “housing prices” is a good keyword variant, so I’ll pick a phrase. For something like #3, we may choose to just see what variation has the highest potential for traffic. In Keyword Explorer, we can simply expand that group, select the keywords, and add all of them to a list, like this:

Once the stats for the list are collected, we can take a look and see that “cheapest apartments in Seattle” has both the highest traffic volume and Keyword Potential, according to our metrics:

For the final group (“Seattle average rent by neighborhood”), I browsed the grouped keywords, and one caught my eye: “average rent downtown seattle.” I like this one because it’s specific to an actual neighborhood, although we might choose to craft content around some kind of neighborhood-by-neighborhood theme as well. What I like about trying to understand our keywords as groups/clusters is that it’s also a great process for generating content ideas.

So, let’s put some exemplars against our three groups. We might end up with something like this:

  1. “average rent in downtown Seattle”
  2. “Seattle housing prices are skyrocketing”
  3. “cheapest apartments in Seattle”

These are all rich phrases that we can use to craft content, and they’re built on a logical framework of keyword research. Even using just this single list, our system claims these three groups represent at least 64 keyword phrases. Factoring in the long-tail, they potentially represent hundreds more.

Eventually, we may have ML tools that can take large groups of related phrases and help find the perfect exemplar. Even now, Keyword Explorer’s grouping engine is built on ML. There will come a time very soon when ML is part of our everyday work as SEOs.

There’s a fourth, unofficial G: Gap. As our British friends might say, mind the gap. The exemplars you build in this process are meant to be natural-language phrases that represent dozens of keywords, but our understanding of a concept and Google’s won’t always match, and some searches you hoped you’d rank for will fall through the cracks. It’s important to continue to monitor and track a large set of keywords. If you see that some aren’t improving, consider generating new exemplars or targeting them separately. This is an iterative process, and we still have to get our hands dirty with real searches every day.
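A gap check can be as simple as comparing tracked rankings before and after the content goes live and flagging the keywords that haven’t moved. The data below is hypothetical:

```python
# Flag "gap" keywords: tracked phrases that show little or no improvement.
# Rankings here are hypothetical; 101 stands in for "not ranking at all".
before = {
    "cheapest apartments in seattle": 18,
    "cheap seattle apartments": 22,
    "seattle apartments under 1000": 35,
}
after = {
    "cheapest apartments in seattle": 6,
    "cheap seattle apartments": 9,
    "seattle apartments under 1000": 34,
}

gaps = [kw for kw, old_rank in before.items()
        if after.get(kw, 101) >= old_rank - 2]   # improved by 2 spots or fewer

print("Falling through the cracks:", gaps)
# -> ['seattle apartments under 1000']: a candidate for a new exemplar
#    or a separate piece of content.
```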


Bonus: Keyword brainstorming

Here’s something fun to try. In Keyword Explorer, you can specifically request keyword phrases that contain none of the words in your original phrase. Why would you want to do this? It can help you find related concepts that you might not have considered.

From the [Display keyword suggestions that] pulldown, select “exclude your query terms to get broader ideas.” Here are some of the results I get on a search for “Seattle rental prices” with grouping on (I’ve edited this list a bit just to show some of the more interesting results in the space allowed):

Some of these are obvious (although still interesting), like searches that use specific neighborhood names (e.g. "best Capitol Hill apartments"). Some are less obvious and open up some new avenues. "Kirkland apartments under $1000" reminds us that both neighborhood and price sensitivity matter in similar searches. These are aspects we can’t ignore in our broader keyword research on this topic.

The second-to-last one is really interesting, IMO: "apartments near Amazon headquarters." Amazon is such a big employer (we know all too well, given the competition for talent in Seattle) that a content focus on just apartments near its headquarters could get a lot of traction. Finally, while it’s not the most useful topic or keyword to target, "too damn expensive" is certainly a good headline phrase to tuck away.


Why not just write for people?

If Google is really understanding natural language searches and becoming more intelligent, why don’t we just write content for people and forget about this whole process? It’s a fair question. If your choices are 2005-era keyword stuffing and thin content or writing for people, then please, for the love of all that is holy, write for your human site users (and, by extension, search users).

There’s a problem, though, and it’s probably easier to show than tell…

Google has come a long way in their journey from a heuristic-based approach to a machine learning approach, but where we’re at in 2016 is still a long way from human language comprehension. To really be effective as SEOs, we still need to understand how this machine thinks, and where it falls short of human behavior. If you want to do truly next-level keyword research, your approach can be more human, but your process should replicate the machine’s understanding as much as possible.

I hope you’ll give the 3 Gs a try and let me know what you think. I’ll freely admit I’m biased and hope you’ll also give Keyword Explorer a try, if you haven’t yet (and if you have, test out some of the new tricks I’ve talked about).


Mastering the Owner Response to the Quintet of Google My Business Reviews

Posted by MiriamEllis

Two dates to know: August 4, 2010 – the day Google enabled owner responses to Google My Business reviews; November 17, 2016 – the day Moz enabled incredibly easy GMB owner response functionality in the Moz Local dashboard. Why are these noteworthy events in Local Search history?

Because reviews and owner responses are direct reputation management, free marketing, free advertising, damage control, and quality control all wrapped up in one multi-voice song about your brand.


What’s missing from the picture of this free-for-all of voices caroling sentiment about your brand? You are the conductor! If you’re not leading the tune — from setting customer service policies, to training staff, to managing complaints, to engaging directly with consumers online — you’re giving up available reputation management controls.

Make no mistake: No brand can prevent every sour note, but with owner response functionality, you can not only retune relationships with valuable customers, but can also protect revenue by keeping those customers instead of having to invest 25x as much in obtaining new ones. Owner response mastery is, indeed, smart business.

For the past six years, since Google launched owner responses as part of its local product, I’ve been studying them and acting as a consultant to a variety of local business owners and agencies regarding effective usage of this remarkable capability. Today, in celebration of Moz Local’s support of this function, I’m going to break down the types of reviews into 5 categories and offer you my tips for skilled management. With reputation and revenue on the line, every local brand needs an intelligent strategy!

Getting up-to-speed on owner responses

During our recent launch, a Moz community member let us know he’d never heard of owner responses before, so real quick: Many review platforms give you the option, as the business owner, to respond to reviews your customers have left you. This is normally done from within your dashboard on that platform, or, in some cases, via mobile apps.

In the Moz Local dashboard, the Google My Business owner response function is a real time-saver. We alert you when new reviews come in, and you simply click the ‘reply’ link to write your response. A little form pops up in which you can type away handily:

[Screenshot: the owner response form in the Moz Local dashboard]

Now let’s delve into responding to the five basic types of reviews most local brands can expect to receive.

Type 1: “I love you!”

Real-world example:

[Screenshot of the review]

Diagnosis: This is the customer every brand wants to have: the delighted evangelist who goes to the lengths of saying that nothing else on the local scene can compare to what the business offers. Honestly, reviews like this are like beautiful greeting cards validating that your business is getting it right on all points. Pure music to your ears!

Owner response strategy:

Many business owners ask if it’s necessary to respond to positive reviews. My short answer is yes, if you wish your business to come across as courteous and engaged. Part of conducting the flow of your reputation is acknowledging satisfied customers and thanking them for the time they invest in writing such nice things about your company. It’s just good manners.

Having said this, I’ll qualify it by mentioning scale. If your enterprise has 100+ locations which each have 100+ positive Google My Business reviews, responding to every single one may not be the best use of your resources. Prevent the appearance of ungrateful neglect by aiming for a percentage — maybe 10% — of ‘thank yous’ in response to your best reviews.

Pro tips:

  • Your thanks can be brief, but avoid repetitiousness. Write a unique response each time. There are owner response profiles out there that have made me strongly suspect robots manage them, as in ‘thank you for your review’ written on 30 different responses. Avoid that.
  • Remember that owner responses are content consumers read. They are, in essence, free advertising space. Don’t go over the top with this, but if a customer mentions something they love, latch onto that. In our sample review above, the owner could mention that comments like this one inspired them to bottle their hot sauce for retail sales, or they could mention that they actually just won a best-in-Bay-Area award from X publication. Think products, services, and hyperlocal/local terminology. No, don’t put the hard sell on the customer in the owner response, but use this real estate with savvy. If there’s something you think a happy customer would be excited to know, promote it in a nice, friendly way!
  • Positive reviews indicate that a customer is already in a good, receptive mood. The more personable your owner response, the more of an impression your business can make, encouraging the customer to come back for more. Here, company culture, personality, and fun can shine. Your customer thinks you are special — act like it in the response.

Suggested owner response:

Hi Charley!

We were just thrilled by your review — in fact, we showed it to Chef Rosa, because the pique sauce you love is based on her grandmother’s traditional recipe brought from Puerto Rico in the 1930s. It’s the real deal, and we’re actually offering it bottled for retail now right next to the hostess stand at both our San Rafael and San Francisco locations, based on diner requests. Hint: one secret ingredient is apple cider vinegar, but that’s all we can say! We’d love to see you back soon, and Chef Rosa says, “Thank you for the lovely compliment.” ‘Best in the Bay Area’ makes us all proud!

Good Eating!

Marta Sanchez, Owner

Type 2: “My mind isn’t made up yet.”

Real-world example:

[Screenshot of the review]

Diagnosis: A 3-star rating is the hallmark of the consumer who likes some things about your business, but isn’t totally loyal yet. They may/may not return and may/may not recommend you to others. Undecided patrons represent an exciting challenge: the chance to transform both the disappointing aspects of your business and this specific consumer’s sentiment at the same time.

Owner response strategy:

The honesty of a less-than-5-star review, when written in detail, delivers two valuable assets to your brand: it tells you where you’re hitting and where you’re missing, giving you the opportunity to improve and turn a lukewarm consumer into a loyal one.

Strategy for the owner response involves thanking for praise, accepting responsibility for faults, apologizing for disappointments, and making some kind of an offer. This offer, meant to sweeten the pitch that you hope the consumer will give your company a second chance, could be a comp or a coupon for future use, or it could simply be an explanation of how you have heard their feedback and made changes.

Pro tips:

  • Express gratitude for consumer complaints — they are valuable. Do not attempt to shift blame onto anyone else, including the customer or staff members.
  • Document both the positive and negative sentiment of so-so reviews and use it as your playbook for keeping what’s good and improving what isn’t excellent.
  • Be sure the customer feels heard. Cite their complaints back to them. By doing so, you are demonstrating to all future potential customers that your brand is responsive to feedback.

Suggested owner response:

Dear Yesenia,

We’re so grateful to you for letting us know that our prices, staff, and in-hotel restaurants pleased you, and, I also want to express my thanks to you for mentioning that the housekeeping wasn’t exceptional. I need to hear that, and take full responsibility for the dusty room. I have been trying a variety of cleaning services this past year, with the goal of finding the best.

While I want to be sure that every guest knows we honor any requests during their stay (just dial 9 on your in-room phone), I also want to let you know that, based on your comments, I held an all-staff meeting with our current cleaning service and have issued a new 10-point cleaning checklist (including dusting all surfaces) for each housekeeper. Should you honor us with a second stay, I personally guarantee you will find your room immaculate, and I would also like to offer your party a free breakfast in the Palm Room, as you enjoyed our restaurants. Just tell them Rob sent you, and it will be our pleasure to serve you! Thank you for your valuable and honest review.

Cordially,

Rob Brown, Owner

Type 3: “There was hair in my taco…”

Real-world example:

[Screenshot of the review]

Diagnosis: The dreaded 1-star review! The customer has a specific, legitimate complaint, and your job as the owner is to address their dissatisfaction, take responsibility, and, whenever possible, make an offer to make things right. A negative review is likely the last life preserver an unhappy customer will throw you — a last chance to earn them back with superior responsiveness. Given the cost of replacing them, rewards for the effort can be great. When a customer ‘saves you’ by making their complaints known, an adept response from you may ‘save them’ in return, earning their repeat business.

Pro tips:

  • Apologize!!! Say the words, “I’m sorry, I apologize.”
  • No blame shifting, no lectures — just total accountability, humility, and a willingness to learn.
  • Be as honest as possible about whatever circumstance led to the customer’s bad experience, and state what you’re doing to improve that circumstance. Sometimes, the circumstances may include faults on the customer’s part. If you have to mention these in order to be honest, do so with great care and no blame, as in the sample response below.
  • Negative reviews often run on for agonizing paragraphs and chapters, but your response should not. Be thorough, but concise.
  • Offer something, even if it’s just a few minutes of your time on the phone, to try to make it right.
  • Aim for a ‘wow’ factor — as in you want future potential customers to say, “Wow, this business really cares!” when they read the response.
  • For more tips on managing negative reviews, please read Diagramming The Story of a 1 Star Review.
  • Document all complaints; they are incredibly valuable both in terms of damage control and quality control. Consider doing a full review audit on a set schedule to catch emerging problems and resolve them.

Suggested owner response:

Dear Vivi,

This is Dr. Tom, and I want to begin by apologizing for the inconvenience you experienced. I hate to think of you having wasted both time and gas on this. I’m so sorry.

I regret that you missed the message about hours for the shot clinic on our homepage, and your review has made me concerned that other patients may be missing it, too. Thanks for alerting me to this. Here’s what we’ve done:

  • Enlarged the homepage hours message + included those hours in the header of every website page
  • Put this at the top of our Facebook page
  • Updated our off-hours phone message to include the info that folks need to come in by 3:00 to ensure walk-in service.

Will you give me a second chance to make this right for you? It’s so important that your pet gets proper shots. Please phone and let my receptionist know Dr. Tom is offering you a priority appointment, any day of the week, and I’d like to make friends with your pup by treating him to one of our wonderful new chew toys. Hoping to have the great pleasure of caring for you and your awesome companion animal!

Kindly,

Dr. Tom

Type 4: “I’m actually your competitor.”

Real-world example:

[Screenshot of the review]

Diagnosis: Unfortunately, fake reviews happen. They may stem from unscrupulous competitors, disgruntled past employees, or individuals with personal grudges against someone at the company. The distinction to draw here is whether the review is simply false (warranting a response + Google action) or cites such defamatory or illegal practices that you should consult with a lawyer before taking any further action. Our real-world example is of the former kind — it illustrates what a fake review might look like when the sentiment is negative but doesn’t accuse the business of criminal activity.

Pro tips:

  • If research has made you aware that a review has been left by a competitor or by someone who is not a customer, that’s a violation of Google’s Review Policy.
  • First, leave a brief owner response to the review (as shown in my sample response below) to alert consumers to the falsity of the review. Note: I don’t advise ‘outing’ the bad actor — it’s not professional.
  • Second, follow Google’s steps for flagging the review. I suggest waiting 24 hours after doing this before moving on.
  • Next, on that same page, you will see options for speaking directly with Google via phone, chat, or email. Contact Google to let them know about the fake review you have flagged. Hopefully, they will be able to rectify this for you and remove the review.
  • However, if you get a rep who doesn’t seem to understand your issue, turn to the Google My Business Community, post the complete details of your scenario, and beg for a Top Contributor to help escalate your issue.
  • Don’t expect a quick fix. You may have to be persistent to obtain resolution.
  • But, again, please don’t take these steps if a review accuses your business of something illegal. We’ll cover that, below, in Type 5.

Suggested owner response:

To Our Valued Customers,

Sadly, after researching this, our company discovered that this review was left by a competitor. We are taking the appropriate steps to report this to Google, and we hope having this fake review removed will encourage this unfortunate competitor to seek other, more honest forms of promoting his business. If he persists, we will engage appropriate legal counsel.

SMH,

Jim Davis, Owner

Type 5: “I’m citing illegal stuff.”

Real-world example:

reviewquintet7.jpg

Diagnosis: Whether a negative review is true or false, any time illegal or dangerous behavior is cited, it’s a cue to you that you need to speak with an attorney before taking any further steps. Don’t respond and don’t attempt to have the review removed, as both could be used as evidence in a court of law. Seek an attorney well-versed in cyber law and act on his or her advice, rather than on any advice you may read on the Internet or receive from marketers, friends, etc. And if you run an SEO agency, I urge you not to advise clients on Type 5 reviews — we’re SEOs, not attorneys, and shouldn’t be consulting on legal matters.

Orchestrating the ideal owner response environment

If you already have an excellent customer service training program in place at your business, chances are good that you will mostly be managing Type 1 and Type 2 reviews with only the occasional Type 3. Types 4 and 5 will hopefully be the exception rather than the rule. Given that one 2016 survey found that 57% of consumer complaints relate to employee behavior, we can estimate that at least half of your reputation is anchored to the quality of your staff hiring and training practices. So, definitely place first and fundamental focus there, and then manage the ensuing consumer sentiment as it flows in with these tips:

  • Observe the typical rate at which you normally receive reviews. It could be a few per week, or if you’re managing multiple locations, numerous reviews per day.
  • What you observe dictates how frequently you need to monitor your reviews. If you’re a Moz Local customer, we’ll conveniently alert you as each new review comes in, and you can check that as often as makes sense.
  • Avoid unnecessary customer frustration and bad reviews stemming from bad data. There must be literally millions of negative reviews on the web citing wrong phone numbers, wrong hours of operation, and wrong addresses. Do a quick citation health check to see if your major local business listings are fomenting negative sentiment. Correct problems.
  • I’ve seen various theories about how quickly an owner should respond to reviews; my own opinion is ASAP, particularly when it comes to Type 2 and Type 3 reviews. If you are trying to catch complaints for the purpose of resolving them and winning back unhappy customers, there may be circumstances (like our example with the puppy shots) that make it vital to respond quickly to avoid customer loss.
  • While it may be ideal to have owners be the authors of all owner responses, scale may make that an untenable situation. If you are designating a staff member or marketer to represent the owner, prevent mistakes by clearly outlining company policies, voice, permissions, and objectives with that person.
  • Responsiveness can be a competitive difference maker. Observe your direct competitors; if they are careless about active management of reviews, you can take advantage by making your brand the one that always responds, demonstrating care and accessibility.
  • Know that expert owner responders experience thrilling victories, like having an unhappy customer update their review and raise their star ratings after receiving a great owner reply. These are rewards that make the input of effort well worth it!

Six years into Google’s rollout of the owner response function, I still encounter many business owners expressing fear of reviews. At the root of this, I often find that they feel powerless and overwhelmed by the prospect of managing their brand’s reputation.

It’s my hope that this post signals to every local business owner that you do, indeed, have significant power in this regard. Via the right combination of skilled customer service and active review management, you can orchestrate an exceptional online reputation for your brand in concert with your customers, in harmony with your professional goals and dreams.


Which Page Markup + Tags Still Matter for SEO? – Whiteboard Friday

Posted by randfish

Should you focus on perfecting your H1s and H2s, or should structured data demand all your on-page attention? While Google hasn’t completely pulled the rug out from under us, don’t let the lack of drastic change in page markup fool you. In today’s Whiteboard Friday, Rand outlines where to focus your efforts when it comes to on-page SEO and offers some tools to help with the process.

http://fast.wistia.net/embed/iframe/cx5rryebcs?seo=false&videoFoam=true


Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we are going to chat about page markup and tags and which ones still matter for SEO.

Now, weirdly enough, you would think that over the last, say, seven or eight years we would’ve had an enormous growth in the number of tags and the optimization options and what you have to do on a page, but that’s not actually the case. Google kind of gave us a few that were important — things like rel=author — and then took some away. So it’s changed a little bit, but it hasn’t been overhauled as massively as you might think, and that’s a good thing.

Old-school SEO markup

Old-school SEO best practices were sort of like, okay, I had to worry about my title, my meta description and keywords tag — keywords a little less, though; the keywords tag hasn’t really been worth worrying about for maybe 15 years now — my robots tag certainly, especially if I was controlling bot behavior, rel=canonical and the rel=alternate tag for things like hreflang, which came about six or seven years ago, and my headline tags. There were also some basic markup or text tags that could change the format of text, like strong, bold, and em; those have gotten less important. I’ll talk about that in a sec. Then, on links, worrying about rel=nofollow and other forms of the rel tag, and on images, having the alt attribute.

This was kind of the basic, bare-bones fundamental minimums. There were other tags that some people employed and obviously other tags that Google added and took away over time or that they paid attention to a little bit and then didn’t. But generally speaking, this was the case.
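
To make that concrete, here’s a rough sketch of what this old-school markup looks like on a page; every URL, name, and value below is a placeholder rather than anything from a real site:

```
<head>
  <title>Blue Widgets | Example Co.</title>
  <meta name="description" content="Hand-made blue widgets, shipped worldwide.">
  <!-- The keywords tag: rarely worth bothering with anymore -->
  <meta name="keywords" content="blue widgets, widgets">
  <!-- Control crawler behavior for this page -->
  <meta name="robots" content="index, follow">
  <!-- Point duplicate URLs at the preferred version -->
  <link rel="canonical" href="https://www.example.com/blue-widgets/">
  <!-- rel=alternate + hreflang for a Spanish-language version -->
  <link rel="alternate" hreflang="es" href="https://www.example.com/es/blue-widgets/">
</head>
<body>
  <h1>Blue Widgets</h1>
  <p>Our <strong>hand-made</strong> widgets ship <em>worldwide</em>.
     <a href="https://example.org/somewhere" rel="nofollow">A link we don't vouch for</a></p>
  <img src="/img/blue-widget.jpg" alt="A hand-made blue widget">
</body>
```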

Modern SEO markup

Nowadays there are a few more, but they’re really centered around just a few small items. We do have metadata now. I’m going to call this SEO even though technically it is not just for the search engines. Those are Open Graph, Twitter Cards, and the favicon. I’ll talk in a sec about why the favicon actually changed even though it’s been around for a long time. Then there’s the markup for Google itself, the structured data markup that’s part of schema.org that Google is employing.

I want to be clear. Google is not using every form of schema. If you go to schema.org, you can find schema markup for virtually anything. Google only uses a small portion of that. While certain websites have seen an uptick in traffic or in prominence or in their visibility and display in the search engine results, it is not a guaranteed rank booster. Google says they don’t typically use it to boost rankings, but they can use it to better understand content, and in my opinion, better understanding of content often leads to better rankings and visibility, so you should be doing it. As a result, many of these old-school tags still apply of course — alt attributes and, in the head of the page, the title, the meta description, meta robots, and canonical.

What’s changed?

Really, the big things that have changed are a few additions to the header of pages that, generally speaking, I would tell you to think and worry about:

  • Twitter Cards
  • Open Graph markup
  • The favicon

Twitter Cards is pretty obvious. Basically, because Twitter is, or can be, such a big distribution network for content, it pays to have your cards optimized rather than just having the URL exist on its own. You can stand out better on Twitter that way.

Open Graph markup, this is basically used by Facebook, an even bigger distribution platform than Twitter, and so of course you want to be able to optimize how you appear in those. Because social media in general is so well correlated with all sorts of positive SEO things, you want to put your best foot forward there. Therefore, I’m going to say this is an SEO best practice as well as a social media marketing one.

Favicon is a little weirder. Favicon’s been around for forever. It’s the little graphic that appears in your browser window or at the top of the browser tab. The reason that it matters is because so many sites — social media platforms and many distribution sites, places like Pocket, places that scrape, places that will show your stuff including sometimes, at least in the past, Google’s knowledge cards — will sometimes use that favicon in their display of your site. For that reason, it certainly can pay to have a good favicon that stands out, that’s obvious and clear, much more so than it was, say, a decade ago.
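
Pulling those three together, the additions to the head might look roughly like this (placeholder values; Twitter and Facebook each document further optional tags beyond this minimal set):

```
<head>
  <!-- Open Graph: how the page appears when shared on Facebook -->
  <meta property="og:title" content="Blue Widgets | Example Co.">
  <meta property="og:description" content="Hand-made blue widgets, shipped worldwide.">
  <meta property="og:image" content="https://www.example.com/img/blue-widget-social.jpg">
  <meta property="og:url" content="https://www.example.com/blue-widgets/">

  <!-- Twitter Card: how the page appears when shared on Twitter -->
  <meta name="twitter:card" content="summary_large_image">
  <meta name="twitter:title" content="Blue Widgets | Example Co.">
  <meta name="twitter:image" content="https://www.example.com/img/blue-widget-social.jpg">

  <!-- Favicon: the small graphic other sites may reuse when displaying yours -->
  <link rel="icon" href="/favicon.ico">
</head>
```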

Not as important…

The H1, H2, and H3

I know what you’re going to say. You’re looking around like, “Wait a minute. I still see a lot of recommendations from tools, even like Moz Pro, that say I should use H1, H2, H3.” It is a best practice. I’d say H1 and H2 are best practices, but they are not going to transform or massively help your rankings. They’re not very well correlated with better rankings. In lots of testing, folks could barely ever observe a true, reconcilable difference between using the headline tag and just having those headlines be big and bold at the top of the page. However, that’s the headline tags on their own. If you are using itemprop to describe a headline, or an alternate headline, in your schema.org markup, that actually can be more useful. We do think that Google is at least using that, as they say, to better understand your content. I think that’s a positive thing. Then, there are lots of other sites that can use schema as well. Google is not the only place. That can certainly help your visibility too.
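
For instance, the itemprop approach Rand mentions could look something like this in microdata; treat it as a sketch of schema.org’s Article headline and alternativeHeadline properties, not a guaranteed ranking lever:

```
<article itemscope itemtype="https://schema.org/Article">
  <!-- The visible headline can stay big and bold at the top of the page... -->
  <h1 itemprop="headline">Which Page Markup + Tags Still Matter for SEO?</h1>
  <!-- ...while an alternate phrasing is exposed to machines via schema.org -->
  <h2 itemprop="alternativeHeadline">On-page tags that still move the needle</h2>
</article>
```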

Strong, bold, and EM

It just kind of doesn’t matter as much. With CSS taking things over, you don’t need to worry about visual display of text in your HTML code nearly as much and certainly not from the search engine perspective.

Added to body

I’m adding to the body tag of course all of the schema.org options. I’m just showing the article ones here, but you should consider any of the ones you’ve got — recipes or news or videos or all sorts of stuff.
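
If you’d rather not sprinkle itemprop attributes through your templates, the same Article markup is often expressed as a JSON-LD block instead. Here’s an illustrative sketch with placeholder values:

```
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Which Page Markup + Tags Still Matter for SEO?",
  "image": "https://www.example.com/img/whiteboard.jpg",
  "datePublished": "2017-01-01",
  "author": { "@type": "Person", "name": "Jane Example" },
  "publisher": {
    "@type": "Organization",
    "name": "Example Co.",
    "logo": { "@type": "ImageObject", "url": "https://www.example.com/img/logo.png" }
  }
}
</script>
```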

What about…?

Questions that folks might have around page markup:

  • What about other metadata? There’s the Dublin Core Metadata Initiative and other forms of open metadata and other forms of markup that you could put in there. I’m going to say no, don’t bother. Until and unless something gets truly popular and used by a lot of these different services, Google included, it just doesn’t pay, in my opinion, and it adds a little bit of extra weight to a page that just doesn’t matter.
  • W3C validation, does it matter if I have valid HTML code that’s sort of very, very perfect? Nope, it doesn’t seem to matter much at all. It didn’t matter back in the day. It doesn’t matter now. I would not worry about it. Most of the most popular and most visible sites in Google do not actually validate at all.
  • Schema that Google hasn’t adopted yet? I’m going to be a little controversial and say it’s probably worthwhile. If Schema has already stated this is how this format works, but you don’t yet see Google using it, it could still pay to be an early adopter, because if and when Google does do that, it could bring benefit. Now, if you’re worried about heavy page load or if this is very time-consuming for you or your dev team, don’t worry about it too much. You can certainly wait until Google actually implements something before you go and add that relevant schema to your site.
  • Other forms of semantic markup? I know there are lots of people who believe semantic markup is the future and those kinds of things, but I don’t. Until and unless the engines adopt it, I don’t think it pays. Certainly we have not seen browsers, search engines, or the big organizations in the social media world start to adopt this semantic markup stuff, so I would worry less about that. I think, to be honest, the engines of the future are worried about parsing the content themselves, not about how you mark it up on your pages.
  • Header, footer, sidebar labels in CSS? For a long time this was a spam, manipulation, or link-counting concern: SEOs worried that markup calling out “this is in the header, this is in the footer, this is in the sidebar” of the visual page was effectively announcing where their links lived. I am less worried about it nowadays. If you are very paranoid or concerned, you certainly could use alternate labels. I just wouldn’t worry about it very much.

Want to check your pages?

If you want to check these pages, you want to go through a process of actually reviewing all this stuff, there are a few tools that will do all of this stuff for you. They’ll look at all of these different tags and markup options.

The free one I love the most happens to be a Moz tool. I just really like it.

  • MozBar. You can download it for free. There are almost 400,000 people who use it regularly for free, and that’s awesome. It does have a little on-page checking option. It’ll run through all this different stuff for you.
  • View source and do it manually in your browser.
  • Google Structured Data Checker tool, which is linked to from the MozBar’s on-page checker, but also you can Google it yourself and then plug stuff into it. You don’t need to be logged in to your Webmaster Tools or Search Console account. It will validate at least the schema.org options that Google considers, which is great, and some ones that they don’t use, but that’s cool too.
  • Facebook has the same thing with Open Graph checking.
  • Twitter with their Card Validator.

If you want to use a paid service to go crawl your site automatically and surface all these issues for you:

With these options, I would love to actually hear from you in the comments if you have seen markup or tag options that are not covered here that you think are influencing SEO for a wide range of folks. Please bring them up. Let’s talk about them. Let’s talk about any of these you disagree with.

We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


5 Takeaways from Earning Links in 130 Countries

Posted by kerryjones

I was in Peru earlier this year for a digital marketing conference, and I overwhelmingly heard the same frustration: “It’s really hard to use outreach to earn links or PR coverage in our country.”

This wasn’t for lack of trying. As I continued to hear this sentiment during my visit, I learned there simply weren’t a lot of opportunities. For one thing, in Peru, there aren’t nearly as many publishers as in more populous countries. Most publishers expected payment for mentioning a brand. Furthermore, journalists did a lot of job-hopping, so maintaining relationships was difficult.

This is a conundrum not limited to Peru. I know many people outside of the US can relate. When you see the Fractl team and others sharing stories about how we earn hundreds of links for a single content piece, you might think it must be nice to do outreach somewhere like the US where online publishers are plentiful and they’ll feature great content with no strings attached. While the work my team does isn’t easy by any means, I do recognize that there are ample opportunities for earning links and press coverage from American publishers.

What can you do if opportunities are scarce in your country?

One solution is focusing your outreach efforts on publishers in neighboring countries or countries with the same language and a similar culture. During conversations with the Attachmedia team (the company hosting the conference I was at), I learned they had much greater success earning media stories and building links outside of Peru because publishers in surrounding South American countries were more receptive to their email pitches and publishing third-party content.

But you may not need to do any international outreach if you know how to create the type of content that will organically attract attention beyond your borders.

At Fractl, many of our top-performing client campaigns have secured a lot of international links even without us doing much, or any, international outreach. To dig deeper, we recently conducted an analysis of 290 top-performing client content campaigns to determine which content naturally attracted coverage from international publishers (and thus, international links). Altogether, these campaigns were featured by publishers in 130 countries, earning more than 4,000 international media stories.

In this post, I’ll share what we found about what causes content to spread around the world.

1. Domestic success was a key factor in driving international placements for Fractl’s campaigns.

For years, we’ve noticed that if content gets enough attention in the US, it will organically begin to receive international press and links. Watch how this happens in the GIF below, which visualizes how one of our campaigns spread globally after reaching critical mass in the US:

Mapping-Viral-Content.gif

Our study confirmed that there’s a correlation between earning a high number of links domestically and earning international links.

When we looked at our 50 most successful client campaigns that have earned the highest number of media stories, we discovered that these campaigns also received the most international coverage. Out of the 4,000 international placements we analyzed, 70 percent of them came from these 50 top-performing campaigns.

We also found that content which earned at least 25 international media pickups also earned at least 25 domestic pickups, so there’s a minimum one-to-one ratio of international to domestic pickups.

2. Overcome language barriers with visual formats that don’t rely on text.

Maps showing a contrast between countries were the visualizations of choice for international publishers.

top-50-by-format.jpg

World maps can be easily understood by global audiences, and make it easy for publishers to find an angle to cover. A client campaign, which looked at how much people eat and drink around the world, included maps highlighting differences between the countries. This was our fourth-highest-performing campaign in terms of international coverage.

calories-map.png

It’s easy for a writer whose primary language isn’t English to look at a shaded map like the one above and pick out the story about his or her country. For example, a Belgian publisher who covered the consumption campaign used a headline that roughly translated to “Belgians eat more calories than Americans”:

belgian-publisher.png

Images were the second most popular visual format, which tells us that a picture may be worth a thousand words in any language. One great example of this is our “Evolution of Miss Universe” campaign, where we created a series of animated and interactive visualizations using photos of Miss Universe winners since 1952:

https://onlinedoctor.superdrug.com/services/widgets/evolution-miss-universe/miss-universe-timeline/

The simplicity of the visuals made this content accessible to all viewers regardless of the language they spoke. Paired with the international angle, this helped the campaign gain more than 40 pickups from global sites.

As we move down the rankings, formats that relied on more text, such as infographics, were less popular internationally. No doubt this is because international audiences can’t connect with content they can’t understand.

When creating text-heavy visualizations, consider if someone who speaks a different language can understand it — would it still make sense if you removed all the text?

Pro tip: If your outreach strategy is targeting multiple countries or a country where more than one language is widely spoken, it may be worth the effort to produce text-heavy visuals in multiple languages.

3. Topics that speak to universal human interests performed best internationally.

Our top-performing international campaigns show a clear preference for topics that resonate globally. The six topics that performed best internationally were:

  1. Drugs and alcohol
  2. Health and fitness
  3. Entertainment
  4. Sex and relationships
  5. Travel
  6. Technology

Bear in mind that these topics are reflective of our client campaigns, so not every topic imaginable was included in this study.

We drilled down into this a little more and looked at the specific topics covered in our top 50 campaigns. You’ll notice many of the most popular topics would make your grandma blush.

international-data-by-topic.jpg

We know that controversial topics are highly effective in grabbing attention, and the list above confirms that pushing boundaries works on a global scale. (We weren’t exactly surprised that a campaign called “Does Size Matter?” resonated internationally.)

But don’t look at the chart above and assume that you need to make your content about sex, drugs, and rock and roll if you want to gain international attention. As you can see, even pedestrian fare performed well globally. Consider how you can create content that speaks to basic human interests, like technology, food, and … Instagram.

4. A global angle isn’t necessary.

While our top five international campaigns did have a global focus, more than half of our 50 top-performing international campaigns did not have a global angle. This tells us that a geographic angle doesn’t determine international success.

Some examples of non-geographic ideas that performed well are:

  • A tool that calculates indirect sexual exposure based on how many partners you’ve had
  • The types of white lies people commonly tell and hear
  • A performance face-off between Siri, Cortana, and Google Now
  • A sampling of how many bacteria and germs are found in hotel rooms

We also found that US-centric campaigns were, unsurprisingly, less likely to succeed. Only three of our campaigns with America-focused titles received more than 25 international placements. If your content topic does have a geographic angle, make sure to broaden it to have a multi-national or worldwide focus.

Pro tip: Consider how you can add an international twist to content ideas that already performed well domestically. The Miss Universe campaign example I shared above? That came to fruition after we successfully did a similar campaign about Miss America. Similarly, we could likely reboot our “Tolerance in America” campaign to look at racism around the world and expect it to be successful, as this topic already proved popular at home and is certainly relevant worldwide.

5. The elements of share-worthy content hold true internationally.

Over the years, we’ve seen time and time again that including certain elements in content greatly increases the chance of success. All of our content that achieved international success included some combination of the following:

  • Surprising information
  • An emotionally resonant topic
  • A universally appealing topic
  • Comparison or ranking of multiple places, things, or ideas
  • A geographic angle
  • A pop culture angle

Look back at the content examples I shared in this post, and make note of how many of the characteristics above are present in each one. To increase the likelihood that your content appeals to global audiences, be sure to read this post about the vital role these elements play in creating content that earns a lot of links and social shares.

What has your experience been like using content to attract international press and links? I’d love to hear what’s worked for you — leave a comment below!


The 7 Citation Building Myths Plaguing Local SEO

Posted by JoyHawkins

Previously, I wrote an article unveiling some of the most common myths I see in the Local SEO space. I thought I’d do a follow-up that specifically talked about the myths pertaining to citations that I commonly hear from both small business owners and SEOs alike.

Myth #1: If your citations don’t include your suite number, you should stop everything you’re doing and fix this ASAP.

Truth: Google doesn’t even recognize suite numbers for a whopping majority of Google business listings. Even though you enter a suite number in Google My Business, it doesn’t translate into the “Suite #” field in Google MapMaker — it simply gets eliminated. Google also pays more attention to the location (pin) marker of the business when it comes to determining the actual location and less to the actual words people enter in as the address, as there can be multiple ways to name a street address. Google’s Possum update recently introduced a filter for search queries that is based on location. We’ve seen this has to do with the address itself and how close other businesses in the same industry are to your location. Whether or not you have a suite number in Google My Business has nothing to do with it.

Darren Shaw from Whitespark, an expert on everything related to citations, says:

“You often can’t control the suite number on your citations. Some sites force the suite number to appear before the address, some after the address, some with a # symbol, some with “Ste,” and others with “Suite.” If minor discrepancies like these in your citations affected your citation consistency or negatively impacted your rankings, then everyone would have a problem.”

In summary, if your citations look great but are missing the suite number, move along. There are most likely more important things you could be spending time on that would actually impact your ranking.

Myth #2: Minor differences in your business name in citations are a big deal.

Truth: Say your business name is “State Farm: Bob Smith,” yet one citation lists you as “Bob Smith Insurance” and another as “Bob Smith State Farm.” As Mike Blumenthal states: “Put a little trust in the algorithm.” If Google were incapable of realizing that those 3 names are really the same business (especially when their address & phone number are identical), we’d have a big problem on our hands. There would be so many duplicate listings on Google we wouldn’t even begin to be able to keep track. Currently, I generally only see a lot of duplicates if there are major discrepancies in the address and phone number.

Darren Shaw also agrees on this:

“I see this all the time with law firms. Every time a new partner joins the firm or leaves the firm, they change their name. A firm can change from “Fletcher, McDonald, & Jones” to “Fletcher, Jones, & Smith” to “Fletcher Family Law” over the course of 3 years, and as long as the phone number and address stay the same, it will have no negative impact on their rankings. Google triangulates the data it finds on the web by three data points: name, address, and phone number. If two of these are a match, and then the name is a partial match, Google will have no problem associating those citations with the correct listing in GMB.”

Myth #3: NAP cleanup should involve fixing your listings on hundreds of sites.

Truth: SEO companies use this as a scare tactic, and it works very well. They have a small business pay them for citation cleanup. They’ll do a scan of your incorrect data and send you a list of hundreds of directories that have your information wrong. This causes you to gasp and panic and instantly realize you must hire them to spend hours cleaning all this up, as it must be causing the ranking of your listing on Google to tank.

Let’s dive into an example that I’ve seen. Local.com is a site that feeds to hundreds of smaller directories on newspaper sites. If you have a listing wrong on Local.com, it might appear that your listing is incorrect on hundreds of directories. For example, these three listings are on different domains, but if you look at the pages they’re identical and they all say “Local.com” at the top:

http://directory.hawaiitribune-herald.com/profile?listingid=108895814

http://directory.lufkindailynews.com/profile?listingid=108895814

http://flbiz.oscnewsgazette.com/profile?listingid=108895814

Should this cause you to panic? No. Fixing it on Local.com itself should fix all the hundreds of other places. Even if it didn’t, Google hasn’t even indexed any of these URLs. (Note: they might index my examples since I just linked to them in this Moz article, so I’m including some screenshots from while I was writing this):

If Google hasn’t even indexed the content, it’s a good sign that the content doesn’t mean much and it’s nothing you should stress about. Google would have no incentive or reason to index all these different URLs due to the fact that the content on them is literally the same. Additionally, no one links to them (aside from me in this article, of course).

As Darren Shaw puts it,

“This one really irks me. There are WAY more important things for you to spend your time/money on than trying to fix a listing on a site like scranton.myyellowpageclassifieds.biz. Chances are, any attempt to update this listing would be futile anyway, because small sites like these are basically unmanaged. They’re collecting their $200/m in Adsense revenue and don’t have any interest in dealing with or responding to any listing update requests. In our Citation Audit and Cleanup service we offer two packages. One covers the top 30 sites + 5 industry/city-specific sites, and the other covers the top 50 sites + 5 industry/city-specific sites. These are sites that are actually important and valuable to local search. Audit and cleanup on sites beyond these is generally a waste of time and money.”

Myth #4: There’s no risk in cancelling an automated citation service.

People often wonder what might happen to their NAP issues if they cancel their subscription with a company like Yext or Moz Local. Although these companies don’t do anything to intentionally cause old data to come back, there have been some recent interesting findings around what actually happens when you cancel.

Truth: In one case, Phil Rozek did a little case study for a business that had to cancel Moz Local recently. The good news is that although staying with them is generally a good decision, this business didn’t seem to have any major issues after cancelling.

Yext claims on their site that they don’t do anything to push the old data back that was previously wrong. They explain that when you cancel, “the lock that was put in place to protect the business listing is no longer present. Once this occurs, the business listing is subject to the normal compilation process at the search engine, online directory, mobile app, or social network. In fact, because Yext no longer has this lock in place, Yext has no control over the listing directly at all, and the business listing data will now act as it normally would occur without Yext.”

Nyagoslav Zhekov just recently published a study on cancelling Yext and concluded that most of the listings either disappear or revert back to their previous incorrect state after cancelling. It seems that Yext acts as a sort of cover on top of the listing, and once Yext is cancelled, that cover is removed. So, there does seem to be some risk with cancelling Yext.

In summary, there is definitely a risk when you decide to cancel an ongoing automated service that was previously in place to correct your citations. It’s important for people to realize that if they decide to do this, they might want to budget for some manual citation building/cleanup in case any issues arise.

Myth #5: Citation building is the only type of link building strategy you need to succeed at Local SEO.

Many Local SEO companies have the impression that citation building is the only type of backlinking strategy needed for small businesses to rank well in the 3-pack. According to this survey that Bright Local did, 72% of Local SEOs use citation building as a way of building links.

Truth: Local SEO Guide found in their Local Search Ranking Factors study that although citations are important, if that’s the only backlinking strategy you’re using, you’re most likely not going to rank well in competitive markets. They also found that links are the key competitive differentiator even when it comes to Google My Business rankings. So if you’re in a competitive industry or market and want to dominate the 3-pack, you need to look into additional backlinking strategies over and above citations.

Darren adds more clarity to the survey’s results by stating,

“They’re saying that citations are still very important, but they are a foundational tactic. You absolutely need a core base of citations to gain trust at Google, and if you don’t have them you don’t have a chance in hell at ranking, but they are no longer a competitive difference maker. Once you have the core 50 or so citations squared away, building more and more citations probably isn’t what your local SEO campaign needs to move the needle further.”

Myth #6: Citations for unrelated industries should be ignored if they share the same phone number.

This was a question that has come up a number of times with our team. If you have a restaurant that once had a phone number but then closes its doors, and a new law firm opens up down the street and gets assigned that phone number, should the lawyer worry about all the listings that exist for the restaurant (since they’re in different industries)?

Truth: I reached out to Nyagoslav Zhekov, the Director of Local Search at Whitespark, to get the truth on this one. His response was:

“As Google tries to mimic real-life experiences, sooner or later this negative experience will result in some sort of algorithmic downgrading of the information by Google. If Google manages to figure out that a lot of customers look for and call a phone number that they think belongs to another business, it is logical that it will result in negative user experience. Thus, Google will assign a lower trust score to a Google Maps business record that offers information that does not clearly and unquestionably belong to the business for which the record is. Keeping in mind that the phone number is, by design and by default, the most unique and the most standardized information for a business (everything else is less standardize-able than the phone number), this is, as far as I am concerned, the most important information bit and the most significant identifier Google uses when determining how trustworthy particular information for a business is.”

He also pointed out that users finding the phone number for the restaurant and calling it continually would be a negative experience for both the customer and the law firm (who would have to continually confirm they’re not a restaurant) so there would be added benefit in getting these listings for the restaurant marked closed or removed.

Since Darren Shaw gave me so much input for this article, he also wanted to add a seventh myth that he comes across regularly:

Myth #7: Google My Business is a citation.

“This one is maybe more of a mis-labelling problem than a myth, but your listing at Google isn’t really a citation. At Whitespark we refer to Google, Bing, and Apple Maps as ‘Core Search Engines’ (yes, Yahoo has been demoted to just a citation). The word ‘citation’ comes from the concept of ‘citing’ your sources in an academic paper. Using this conceptual framework, you can think of your Google listing as the academic paper, and all of your listings out on the web as the sources that cite the business. Your Google listing is like the queen bee and all the citations out there are the workers contributing to keep the queen bee alive and healthy.”

Hopefully that lays some of the fears and myths around citations to rest. If you have questions or ideas of other myths on this topic, we’d love to hear about it in the comments!


5 Lead Generation Ideas to Help You Increase Your Website’s Conversion Rates

Posted by lkolowich

It’s been years since the power shifted away from marketers and advertisers in favor of Internet consumers. Now more than ever, people are empowered to choose their own experiences online. They’re actively avoiding ad content — and instead of living by advertisers’ rule books, they’re deciding what to click on, what to read, what to download, and what to buy … and what not to.

And they have a lot of choices.

When inbound marketers like us are looking to generate more leads from our website, we need to think not just about how to capture people’s attention, but how to capture it in a way that makes people want to learn more from us. A smart lead generation strategy includes creating valuable offers and experiences that fit seamlessly into the context of what people already like and want to do online. It’s the consumer’s world; us marketers are just living in it.

People read calls-to-action that say things like “Sign up here!” as basically synonymous with “We’re gonna spam you.” If you’re recycling these same old lead generation tactics over and over again, it’s quickly going to become white noise. But calls-to-action that fit into the context of what a person’s doing already? That’s smart marketing.

If you want to increase the conversion rate on your website, you need to get smart and creative with your lead generation tactics. Asking for blog subscriptions and gating high-quality content like comprehensive guides, ebooks, and whitepapers behind landing pages still works, but you have to be smart about where you’re offering them on your website. And they shouldn’t be your only lead generation plays.

There are many ways to get creative with lead generation to make sure you’re reaping the benefits of the traffic you’re working so hard to get. Here are some lead generation ideas for B2B and B2C marketers to try. Test them out, tweak them according to your audience’s preferences, and share your own ideas in the comments.

1) Put your calls-to-action in people’s natural eye path.

CTA placement can have a profound effect on the number of leads you’re generating from your site. And yet, not many marketers are spending a whole lot of time thinking about, testing, and tweaking CTA placement to optimize their conversions. Many claim that as long as they place their primary CTA above the fold, they’re good to go. (Side note: Even though putting primary CTAs above the fold is often considered a best practice, even that is still up for debate.)

Start your CTA placement tests by putting them where people’s eyes naturally go on a webpage. An eyetracking study found that when people read a webpage, they naturally start by looking in the upper lefthand corner of the page, and then move their eyes in an F-shaped pattern.

f-pattern-eye-tracking.jpg

[Image credit: Nielsen Norman Group]

Here’s what that looks like:

f-pattern-wireframe.jpg

[Image credit: Envato Studio]

You can capitalize on this natural eye path by placing important information in these key spots. Here’s an example of what that might look like on a website:

f-pattern-with-content.jpg

[Image credit: Envato Studio]

Notice how the business name is placed in the top left, which is where a person would look first. The navigation bar takes over the #2 spot, followed by the value proposition at #3 and the primary CTA at #4.

Does this order look familiar to you? When you’re browsing the web, you might have noticed that many websites put the primary CTA in the top right corner — in that #2 spot. Here are a few real-life examples:

prezi-business-homepage.png

[Prezi’s homepage]

uber-homepage.png

[Uber’s homepage]

barkbox-homepage.png

[BarkBox’s homepage]

In the last example from BarkBox, you’ll notice that the secondary CTAs still follow that F-pattern.

Keep this in mind when you’re placing your CTAs, especially on your homepage and your other popular webpages — and don’t be afraid to experiment based on how your own marketing story should be told.

2) Use pop-up and slide-in forms the right way.

Pop-ups have been vilified in the last few years — and quite understandably, too. Far too many marketers use them in a way that disrupts people’s experience on their website instead of enhancing it.

But pop-ups do work — and, more importantly, when they’re used in a way that’s helpful and not disruptive, they can be a healthy part of your inbound strategy. So if you’re wondering whether you should be using pop-up forms, the short answer is yes — as long as you use them in an inbound-y way. First and foremost, that means offering something valuable and relevant to the people visiting that site page.

When you’re considering what type of pop-up to use and what action should trigger them, think about how people are engaging with your pages. When someone reads a blog post, for instance, they’re typically going to scroll down the page to read the content. In that case, you might consider using a slide-in box that appears when someone’s scrolled a certain percentage of the way down the page.

Here’s a great example from a post on OfficeVibe’s blog about how managers gain respect. While I was scrolling, a banner appeared at the bottom of the screen offering me a live report of employee engagement — an offer that was perfectly relevant, given the post was aimed at managers.

officevibe-banner-pop-up.png

It felt helpful, not disruptive. In other words, it was a responsible use of a pop-up.

Similarly, someone who’s spending time reading through a product page might find value in a time-based pop-up that appears when a visitor’s been on the page for a certain number of seconds, like this one from Ugmonk:

ugmonk-pop-up.png

The most important takeaway here is to align what you offer on a pop-up with the webpage you’re adding it to, and make sure it’s actually adding substantial value.
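
For the curious, here’s a bare-bones sketch of how a scroll-triggered slide-in can work under the hood: plain HTML plus a few lines of script, with the 50% threshold, element ID, copy, and landing page URL all invented for illustration. Most pop-up tools handle this triggering for you.

```
<div id="slide-in" style="display: none; position: fixed; right: 20px; bottom: 20px;
     max-width: 300px; padding: 16px; background: #fff; box-shadow: 0 0 8px rgba(0,0,0,0.2);">
  <p>Enjoying this post? Grab the related guide.</p>
  <a href="/example-landing-page">Get the free guide</a>
</div>

<script>
  // Reveal the slide-in once the visitor has scrolled roughly half the page.
  // The 50% threshold is arbitrary; test what feels helpful, not disruptive.
  window.addEventListener("scroll", function () {
    var scrollable = document.body.scrollHeight - window.innerHeight;
    if (scrollable > 0 && window.scrollY / scrollable >= 0.5) {
      document.getElementById("slide-in").style.display = "block";
    }
  });
</script>
```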

If you’re looking for a good free tool to get started with inbound-y pop-up forms, I’d recommend you try HubSpot Marketing Free. We built the Lead Flows feature within this free tool to help marketers generate more leads across their entire website without sacrificing user experience.

3) Add anchor texts to old blog posts that align closely with your gated offers.

It’s common for business bloggers to add an end-of-post banner CTA at the end of every one of their blog posts, like this one:

hubspot-banner-cta-example.png

In fact, you might already be including CTAs like this on your own business blog posts. At HubSpot, we include an end-of-post banner CTA on every single one of our posts, and we also add slide-in CTAs to blog posts that prove themselves to convert visitors into leads at a high rate via organic traffic.

But let’s admit it: At first glance, these types of CTAs look a little bit like ads, which can result in banner blindness from our readers. That’s why, thanks to a recent study conducted by my colleague Pam Vaughan, our blogging team has added one more highly effective lead generation tactic to its arsenal: anchor text CTAs.

In Vaughan’s study, she found that anchor text CTAs are responsible for most of our blog leads. On blog posts that included both an anchor text CTA and an end-of-post banner CTA, she found that 47–93% of a blog post’s leads came from the anchor text CTA alone, whereas just 6% of the post’s leads came from the end-of-post banner CTA.

What’s an anchor text CTA, you might be wondering? It’s a standalone line of text in a blog post, linked to a landing page and styled as an H3 or an H4 to make it stand out from the rest of the post’s body copy. On HubSpot’s blog, we’ll typically put an anchor text CTA between two paragraphs in the introduction, like this:

hubspot-anchor-text-cta-example.png
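
In plain HTML terms, an anchor text CTA is nothing fancy; it might look something like this, where the heading level, class name, and URL are placeholders to adapt to your own blog’s styles:

```
<p>...last paragraph of the post's introduction.</p>

<h3 class="anchor-cta">
  <a href="https://www.example.com/press-release-template-offer">
    Download our free press release template here.
  </a>
</h3>

<p>The body of the post continues below...</p>
```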

What makes anchor text CTAs so effective? Let’s say you search for “press release template” in Google, and you click on the first organic search result — which is currently our blog post about how to write a press release, which I’ve screenshotted above.

As a searcher, the next thing you’d probably do is quickly scan the post to see if it satisfies your search. One of the first things that’ll catch your eye is an anchor text that reads, “Download our free press release template here” — which happens to be exactly what you were looking for when you searched “press release template.” There’s a pretty good chance you’re going to click on it.

This is where relevancy becomes critical. The anchor text CTA works really well in this case because it satisfies the visitor’s need right away, within the first few paragraphs of the blog post. The more relevant the anchor text CTA is to what the visitor is looking for, the better it’ll perform. Simply adding an anchor text CTA near the top of every blog post won’t necessarily mean it’ll generate a ton more leads — and frankly, you’ll risk pissing off your loyal subscribers.

If you decide you’d like to experiment with anchor text CTAs, be selective about the posts you add them to. At HubSpot, we typically add them to old posts that rank well in search. We purposely limit our use of anchor text CTAs on brand new posts — because most of the traffic we get to those posts are already leads and some of the biggest fans of our content, whom we want to have the best possible user experience. (You can read more about anchor text CTAs here.)

4) Support the launch of a new campaign with a launch post and other blog posts on related topics.

Every time you launch a new marketing campaign, posting the good news on your blog should be a key part of your launch plan. It’s a great way to let your existing subscribers know what new content, products, and features you’re putting out there, and it also helps introduce these launches to brand-new audiences.

At HubSpot, we’ve found the best strategy for promoting campaigns on the blog is to write one official launch post, followed by a handful of follow-up posts that are relevant to the campaign but are written in the style of a normal blog post. We typically scatter these follow-up posts over the weeks and months following that initial launch.

When done correctly, launch posts and their supporting blog posts have very different formulas:

  • A launch post is 150–300 words long. It includes a captivating introductory paragraph on the general topic or pain point the campaign is about, followed by a paragraph or two describing how the offer can help and a list of 4–6 bullet points on what the offer includes. It includes one or two in-line text CTAs leading to the campaign, followed by a banner CTA at the end of the post.
  • A supplemental blog post can take on any post format and length typical of what you’d normally publish on your blog, such as a how-to post, a list-based post, or a curated collection post. It includes an end-of-post banner CTA leading to the campaign, and an anchor text CTA in the introduction, if applicable.

Let me show you an example. Earlier this year, HubSpot partnered with Iconosquare to write an ebook on how to use Instagram for business. A few days after we launched the offer online, we published a launch post on HubSpot’s Marketing Blog specifically promoting it to our own audience. Here’s what that launch post looked like:

hubspot-launch-post.png

Notice it has a brief introduction of the topic, an introduction of the ebook as a helpful resource, a bulleted list of what’s inside the ebook, two in-line text CTAs pointing toward the ebook, and an end-of-post banner CTA.

Once we published that initial post, we published a series of follow-up blog posts about the same topic — in this case, Instagram for business — that supported the launch, but promoted it much more subtly. These posts covered topics like:

In each of these cases, we used keyword research to find long-tail keyword phrases related to our offer topic, and then wrote blog posts related to those highly searched terms and included CTAs to our offer.

The goal here? Both to expose our own audience to more content related to the offer and to expose our offer to a new audience: specifically, people who were searching for related topics on search engines, as we’ve found visitors who find our posts through organic search tend to convert at higher rates.

When you’re planning out your next campaign, be sure to include both a launch post and supportive, follow-up blog posts like these — and plan them all out using a blog editorial calendar like the simple one HubSpot’s blogging team uses with Google Calendar.

5) Use social media strategically for lead generation.

Top-of-the-funnel marketing metrics like traffic and brand awareness aren’t all social media is good for. It can still be a helpful — not to mention low-cost — source for lead generation.

In addition to promoting new blog posts and content to your Twitter, Facebook, LinkedIn, and other social sites, be sure to regularly post links to blog posts and even directly to the landing pages of offers that have historically performed well for lead generation. You’ll need to do a lead generation analysis of your blog to figure out which posts perform best for lead generation.

When you link directly to landing pages, be sure the copy in your social posts sets the expectation that clicking the link will send people to a landing page, like Canva did in this Facebook post:

canva-facebook-page.png

Contests are another way to generate leads from social. Not only are they fun for your followers, but they can also teach you a whole lot about your audience while simultaneously engaging them, growing your reach, and driving traffic to your website.

In addition to posting links to lead generation forms, you’ll also want to make sure you’re using the real estate for lead generation that’s available to you on the social networks you’re using. On Facebook, for example, use the feature available for Pages that lets you put a simple call-to-action button at the top of your Facebook Page. It can help drive more traffic from your Facebook Page to lead generation forms like landing pages and contact sheets.

dollar-shave-club-facebook-CTA.png

Here are more lead generation tips for Facebook, and for Twitter.

In addition to optimizing your webpages and social presence for leads, always be looking for opportunities to increase the traffic of your highest-converting pages by optimizing these pages for the keywords they’re already ranking for, and linking to these pages internally and externally.

I hope this list has helped spark some ideas for lead generation tactics to test for your own audience. If you’ve tried any of the tactics I’ve listed above, tell us about your experiences in the comments — and feel free to add more ideas to the list.
