Yet, MSV, like a good ol’ flip phone, remains indestructible.
MSV acts as a warm blanket for anyone wanting a marketing strategy without needing more brainpower than a potato: find the topics with the most search volume, add the keywords to your content, bid on the terms, run the ads, and voila – done. ✔️
MSV isn’t a small business problem. Fortune 50 companies across the globe are still relying on this misleading metric every day to determine where to spend billions across Google & Bing.
It’s being used to build out pages & pages of junk content to drive “visibility” and “brand awareness” that we’re all subjected to online.
And MSV has also been the number one driver of the ancient fallacy that getting a lot of traffic means your marketing worked (and leads to the inevitable abuse of the phrase “brand awareness”).
Simply put, Monthly Search Volume is a metric that drives decisions based only on the potential for traffic, with zero regard for the potential for revenue.
Unless you can pay your bills in visits, this ain’t it chief.
Am I saying don’t use Monthly Search Volume, at all, ever? No, not necessarily.
Can you still use it directionally? Sure.
Can you still use a map from 1995 “directionally” for your next road trip? Sure.
Good luck. You might get where you’re going (it might take longer) or, you might end up in a Netflix documentary.
The same idea applies to MSV and Keyword Planner.
They can be useful. However, as an industry, we’ve been abusing them as a single, independent source of truth without any thought to other data sets (real pitfall examples ahead!).
So, feel free to use MSV; but take it all with a grain of salt, save your valuable marketing lettuce 💲, and for the love of God, please don’t base your entire marketing strategy on it.
OK, so which tools and metrics should I use instead?
Seer has found success, not in using just a single metric, but in using several metrics from paid search data.
From click data to conversion data, PPC metrics put ‘traditional SEO data’ to shame.
Here’s what I mean.
In 2010, if I were selling peppermint essential oils, I would have built out tons of content and bid on these terms based on their search volume alone.
With paid data in 2020, I can see they don’t directly convert and I save myself valuable time and resources by not trying to rank for these keywords.
On the flip side, let’s say it’s 2010 and I’m offering payroll services.
Using MSV, I would have prioritized my organic and paid strategies based only on major, high-volume head terms.
In 2020, with paid data, I can uncover loads of long-tail queries (with technically ‘no’ search volume) that generated clicks and conversions, which I never would have found through Keyword Planner.
Each one of these terms generates one, maybe two clicks each, but they offer major advantages because they:
convert at much higher rates (we’re talking each click often a direct conversion, people!)
are way less competitive to go after via paid, or to rank for organically
in aggregate, generate a large amount of conversions
That’s OK, too. The whole idea is to broaden our data horizons, people!
There are more resources to come but, in addition to using paid search data for keyword research, here are other great data sets we’ve had success with. You can use them to validate Monthly Search Volume and/or layer them into your strategy:
Search Console Data: to cross-reference Clicks and Impressions with MSV
Audience Research: such as customer interviews, customer reviews & forum scraping (webinar on the way!)
Site Search Data: to see how your users seek out certain content topics, and how that may contrast with demand data
Back-end Lead Data: to see the vernacular your qualified vs. unqualified leads use in form fills
Lastly, as an industry, search marketing is still struggling to be taken seriously, 10 years later. If we don’t advance how we approach search marketing, how can we expect to be taken seriously?
Cheers to upping our game in this new decade.
Want to learn more about how to leverage paid data in your search marketing strategy? Sign up for the Seer newsletter for more tutorials and cross-divisional strategies.
A virtual private network (VPN) is a technology that extends a private network across a public network. It lets you build a secure connection to another network over the internet. In other words, it shields your browsing activity from almost all kinds of snooping on a public network.
VPN is prevalent nowadays, but not for the reasons it was originally invented for. Initially, VPNs were set up to link different business networks together safely over the internet or allow people to access their business networks from home or any remote place.
Today, the virtual private network has been used for a variety of purposes, ranging from hiding/changing IP addresses, accessing blocked websites, encrypting data, and more. In this article, we will learn about how a VPN works and how you can use it for SEO purposes. However, before that, let’s have a quick look at the VPN market growth over the years.
VPN global market and users
According to Statista, 26% of global internet users accessed the internet through a VPN in the first quarter of 2018. In North America alone, 18% of people used a VPN or proxy during the same period.
In 2018, the global VPN market was worth $20.60 billion, and it has kept growing with each passing year.
How does a VPN work?
A virtual private network routes your device’s internet connection via your selected VPN’s private server instead of your internet service provider (ISP). For that reason, when your data is transferred to the internet, it comes from a VPN rather than your own device. In other words, VPN acts as an intermediator when you connect to the World Wide Web. It hides your IP address and safeguards your identity.
A virtual private network builds a secret “tunnel” from your device to the internet and keeps your crucial data protected through encryption.
Why should you use a VPN service as an SEO expert?
1. Search results across the globe
One of the most significant advantages of a VPN for SEO experts is that it helps them see what search results look like in different countries. Although there are plenty of tools that offer something similar, nothing beats a VPN. It allows you to appear as a local, surfing the internet from the location of your VPN server. So, if you need to see the search results of a specific location for SEO purposes, you can choose a VPN provider with a server in that particular location.
2. Mask your real IP address
As an SEO professional, you have to work with sensitive accounts. Without proper protection, your clients’ security is at risk. Luckily, a VPN can help with this situation.
With a VPN, it’s easy to change your IP address and stay anonymous online. Since a virtual private network gives you a provisional IP address from the chosen VPN server, it hides your real IP address. Therefore, people can’t track your internet activities. VPN allows you to surf the internet in absolute anonymity. Moreover, you can bypass filters and unblock websites. This is something that can be very handy when SEOs are up to research, especially competitor research.
3. Safe remote control
As an SEO expert, you might have to work away from your desk many times a day, which can create security risks. Therefore, you should connect to a secure network that lets you access information safely from any remote location.
A virtual private network helps remote users connect to their work network without revealing their work computer system to the entire internet. With a VPN, you can access your drives, email servers, and more safely because it creates an encrypted path for you to communicate with a server or other devices.
4. Improved internet security
Security is one of the biggest challenges most small SEO companies deal with. If you think cyber attacks are a big-business issue, you’re mistaken. According to an article published in Forbes, around 58% of cyber attack victims in 2018 were small businesses.
Therefore, you should connect to a safe network when surfing the internet for your different SEO activities. When you connect via a virtual private network, your data is encrypted, so your browsing information is protected from hackers. You won’t need to worry, as no one can track your internet and data activities. After all, a VPN gives you a temporary IP address from your chosen VPN server and hides your original IP address. Therefore, you can transmit sensitive information and other business essentials without worrying about hackers.
If you’re into any online business that requires the transmission of sensitive information or money, you must invest in good VPN software. It wouldn’t be wrong to say that it is one of the tools startups need to grow their business successfully. After all, unless you have a safe network, you can’t create successful SEO strategies.
5. Safe file sharing
An SEO expert has to share files with their clients and co-workers many times a day. As a result, it is crucial to look for a safe way to share important data.
Nowadays, there are plenty of ways to share files online. However, not all are safe. Most SEO experts feel paranoid when they share an important file that they can’t afford to get into the wrong hands.
Fortunately, a virtual private network lets you relax: you can share your files safely over a VPN connection. VPNs are commonly used for folder and file sharing. They help you securely access files and folders stored on a network computer, regardless of where that computer is physically located.
Considering everything together, we can say a virtual private network is an ideal way to mask your internet protocol address, so all your SEO activities are almost untraceable. It establishes a secure and encrypted connection to offer great privacy. Unlike most web proxies and hide IP software, VPN helps you access both websites and web applications in complete anonymity. If you’re also planning to set up a VPN, it is advisable to choose the one that is renowned and reputable. After all, not all VPN providers are equal when it comes to performance and security.
Roman Daneghyan is the Chief Marketing Officer at Renderforest. He can be found on Twitter @roman_daneghyan.
If you want to become a good SEO, you need a holistic view of all SEO-related topics, and there are some technical elements you must understand, even if they’re not quite easy to digest. One of these topics is schema markup.
Schema markup and structured data have played a role in SEO for years now, and the major search engines recommend them. But what exactly is schema markup? And, more importantly, how does it impact the SEO process?
After reading this article, you’ll know exactly: what schema markup is, how it affects SEO & search engines, how to correctly implement it on websites and how it can help you get better rankings.
1. What Is Schema Markup?
Schema markup is code (a semantic vocabulary) that you put on your website with the purpose of helping the search engines return more informative results for users. Schema markup allows you to create enhanced descriptions that appear in search results, just like in the screenshot below.
Due to its standardized semantic vocabulary, schema markup added to your site’s HTML helps the major search engines understand your page’s information better and return richer, more informative results.
Schema markup has the advantage of being easily stored, retrieved, displayed and analyzed. In a nutshell, when Google doesn’t know whether your information is about an artist or about one of the artist’s concerts, you can make things clear using structured data markup.
2. What Is Structured Data?
Structured data (or linked data) is a way of organizing information for better accessibility. It might be hard to understand for some because of its relation to coding. In simple terms, though, it’s often called metadata: the information behind the information.
It’s similar to a database, in which terms are stored in relation to other terms. Think of it as an Excel Spreadsheet, where you have the head of the columns as the terms and under them come their values. Together, this data forms a structure which defines something.
For example, you can have a product in your store. The structured data could contain a list of terms and their values. The product can be “Lenovo IdeaPad S145” and it could have a list of the following items/terms, with their values:
Name > Lenovo IdeaPad S145
Rating > 4.2
Review Users > 925
Price > $239.99
Stock > In Stock
3. The Difference Between Schema, Microdata, and Structured Data
To make things easier, let’s shortly recap what these terms mean and what the differences between them are:
Structured Data is a general term that represents binding items to values to better structure information. It can be related to SEO as much as to anything else which contains information.
Microdata is a format and it represents the way the data is structured… in a ‘visual manner’, let’s say. In simple terms, think of it as text vs audio or video. You can say the same thing in both, but it will appeal to different people. You can have the same data structured in Microdata format or in JSON-LD format, for example.
Schema is a vocabulary that defines the terms and values. There are other vocabularies such as Dublin Core. In simple terms, think of them as languages. The good thing with Schema.org is that it has been accepted by very many platforms, making it the best option for multiple scenarios. That’s why many people use Schema Markup as a synonym for implementing Structured Data.
Here are some takeaways:
You can have data structured in multiple formats, such as Microdata or JSON-LD.
You can define terms using multiple vocabularies such as Schema.org or Dublin Core.
You can use either vocabulary with either format, resulting in your markup.
When people refer to Schema Markup, they generally refer to everything related to structured data, but using the Schema.org vocabulary.
4. What Are Schema Markup & Structured Data in SEO?
When it comes to SEO, structured data represents some markup that is implemented on a website which search engines like Google can use in order to display information better. SEOs very often refer to structured data as Schema Markup because it’s one of the most popular markups used to structure data. We’ll talk about it soon.
Using that markup, Search Engines can display what are known as “rich search results” or “rich snippets”. They are called “rich results” because they contain more elements than regular results, making them stand out.
The rich snippet/rich result for the example above looks really good when Google picks up the metadata and displays it properly.
The code is a little bit uglier than that and looks something like this:
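As a reference, a minimal JSON-LD Product snippet along these lines, a sketch built from the illustrative Lenovo values from earlier rather than full production markup, could look like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Lenovo IdeaPad S145",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.2",
    "reviewCount": "925"
  },
  "offers": {
    "@type": "Offer",
    "price": "239.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

The whole block sits inside a single script tag in your HTML, invisible to visitors but readable by search engines.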
It might look complicated, but if you read it, it makes sense. You can see things like “@type”:”AggregateRating” with the rating value and the review count, and then “@type”:”Offer” with the price and availability. The code above is in the JSON-LD format, which is one of the more complicated ones to understand. We’ll talk about formats and which ones to use soon, so keep reading.
You can also use structured data to enrich a recipe search result. It can also have ratings but, instead of displaying the price, it displays how long it takes to make the dish, which is always useful when searching for a recipe.
These are just basic rich snippets, which affect the regular results you see in Google’s organic search results. However, Google supports a number of different types of rich snippets, some of which I will present soon. But first, let’s talk about vocabularies and data markup.
5. How Does Schema Markup Affect SEO & Search Engines?
To put it simply, structured data is not a ranking factor / signal. But if you’ve been doing SEO for a while, you know I’m lying and it’s not actually that simple.
You see, that’s the general consensus, or at least what Google officials tell us. In reality, opinions vary. Some say that it does affect rankings and some say it doesn’t. One thing we know for sure is that we cannot 100% trust what Google says. It’s not that they’re not transparent, but they have to keep the algorithm secret.
But let’s look at exactly how schema markup impacts SEO:
CTR (Click Through Rate)
Structured Data might not be a ranking signal, but it sure can help with rankings, at least indirectly. You see, any modification to your search result will have an impact on your CTR (click through rate). A negative one will drop your CTR and a positive one will boost it.
With a higher CTR, your rankings will actually be higher.
If more people click on your search engine result, this sends Google a signal that they want to read your content.
To honor that demand, Google might rank your article higher so that more people will see it. This happens constantly, so don’t expect your article to stay there. Tomorrow, a competitor might change their title and their CTR might be higher than yours. Google will notice that.
Structured data can help you with CTR because rich results catch the eye more easily than regular search results. Sure, those snippets often display the information directly on Google’s results page, but some of that organic traffic will still make its way to your website.
This might sound counter-intuitive but, with all the rich snippets in the search results, you might end up having a lower CTR and less organic traffic going to your website.
Why? Because a user can find the answer directly in the search results and they don’t need to click.
For example, most nutrition websites have structured data implemented, which means most of them get rich snippets. If you’re in the top 3 results and all the results display the recipe duration, if your duration is the highest, users might decide to not click your search engine result and go for a faster recipe instead.
So, while you can get higher CTR if your rich result stands out (not everyone has rich snippets), it can also lower your CTR if everyone has rich snippets and the client browses based on that info.
You shouldn’t prioritize the addition of structured data on your website unless you’ve finished dealing with other, more important issues, such as keyword research, content optimization and other OnPage SEO factors.
Why? Because Google said it understands the content and the information required to display rich snippets without structured data, although it’s recommended that you use it.
So, for example, if you have some HTML with 5 stars and the text “Rating: 4.7 – 24 Reviewers”, Google might figure that out on its own and display a Review rich snippet even without structured data.
However, if you want to have a higher chance of the reviews being displayed, then add the SEO structured data so that Google understands the content perfectly.
But remember, prioritize! Keyword research, title and content optimization, website speed and quality backlinks are much more effective in ranking you higher. So if you don’t have those in place, you can postpone the structured data markup.
Schema markup (probably) isn’t the future of Search Engine Optimization & Digital Marketing but, for now, once you have finished other, more important search engine optimization tasks, you can make good use of it. Some studies even show that implementing structured data on your website can boost CTR by up to 30%.
6. Schema Markup Types Supported by Google
You might be wondering which important types of schema markup are out there. Well, there’s pretty much a markup for anything you can imagine.
However, there are only a limited number of rich snippet types that Google has developed and improved over the years, each unique in its own way.
Organization Schema Markup
The Organization Schema markup isn’t a rich snippet on its own, but it is a very important building block because it is found in almost all the snippets. It represents the creator of the content, which can also be a single person, such as an author.
This is good for making sure the content is associated with the proper brand / name.
Breadcrumbs Schema Markup
The Breadcrumbs Schema Markup is crucial for representing website structure. The structure of the site is represented in the search result as a trail of links instead of a plain URL.
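A minimal BreadcrumbList sketch in JSON-LD, with illustrative names and URLs, could look like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "name": "Books",
      "item": "https://example.com/books"
    },
    {
      "@type": "ListItem",
      "position": 2,
      "name": "Science Fiction",
      "item": "https://example.com/books/sci-fi"
    }
  ]
}
</script>
```

Each ListItem maps to one level of the breadcrumb trail, from the top of the site down to the current page’s parent.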
Product & Review Schema Markup
The most popular markup out there is probably the review & product one. I’ve presented it at the beginning of the article. There are multiple items that can be added to the product rich snippet, from the product name and price to details such as the lowest price and highest price, or offer expiry dates.
Recipe Schema Markup
I’ve also shown an example of the recipe snippet above. You can specify things such as ingredients and how much time the recipe takes.
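A minimal Recipe markup sketch in JSON-LD, with all values illustrative, could look like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Classic Banana Bread",
  "prepTime": "PT15M",
  "cookTime": "PT50M",
  "recipeIngredient": [
    "3 ripe bananas",
    "2 cups flour",
    "1 egg"
  ],
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.8",
    "reviewCount": "120"
  }
}
</script>
```

Note that prepTime and cookTime use the ISO 8601 duration format, so PT50M means 50 minutes.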
FAQ Schema Markup
The FAQ Schema Markup lists answers to the related questions around your topic / page in a drop down format. Neil Patel used this FAQ schema technique to greatly improve his search engine traffic. However, it seems like this can be abused and Google might fix it.
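A minimal FAQPage sketch in JSON-LD, with one illustrative question, could look like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is schema markup?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A semantic vocabulary you add to your HTML so search engines can better understand and display your content."
      }
    }
  ]
}
</script>
```

Each Question/Answer pair in mainEntity becomes one expandable row under your search result.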
How-to Schema Markup
Similar to the FAQ Schema Markup: a drop-down type snippet, but with step-by-step instructions.
Q&A Schema Markup
The Q&A Schema Markup is specially designed for websites like Quora or Yahoo Answers. It can also be applied in other scenarios, of course. Google recommends linking to individual answers (via anchors, for example) to provide the best user experience.
Carousel Schema Markup
A swipeable carousel in which your article can be displayed at the top of the page, above ads and organic search results. Visible only on mobile devices.
Video Schema Markup
A visual snippet which displays the thumbnail of a video next to the title and description. It is very useful for organic video marketing.
Event Schema Markup
A visual snippet where the date is very visible and with quick access to Google Calendar bookings.
Local Business Schema Markup
If you have a local business or are doing local SEO for a client, then you might want to add local business schema markup to the website. The markup itself is formed out of multiple data items, such as Organization, Description, Logo, Address, Phone and even Reviews.
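A minimal LocalBusiness sketch in JSON-LD combining those data items, with all values illustrative, could look like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Coffee Shop",
  "description": "Specialty coffee and pastries in downtown Springfield.",
  "logo": "https://example.com/logo.png",
  "telephone": "+1-555-0123",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main Street",
    "addressLocality": "Springfield",
    "addressRegion": "IL",
    "postalCode": "62701"
  }
}
</script>
```

The nested PostalAddress object is how the address data item is expressed inside the LocalBusiness markup.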
A full list of the rich snippets that Google supports can be viewed here (browse them from the menu).
Also, note that other search engines, such as Yandex and Yahoo! (powered by Bing), might use other types of structured data or schema markup on their platforms.
However, we do know that both Yandex and Bing accept and recommend schema.org, so it’s a good idea to only implement this one, unless other 3rd party apps that you use require other types of markup.
7. How To Add Schema Markup On Your Website (The Right Way)
If you’re interested in schema markup, you’re probably also wondering how to use Google structured data on your website.
If you want to use structured data markup on your website so that Google can pick it up, you’ll either have to code it or make use of some plugins / extensions that will add the structured data for you.
The thing is, you have to implement it correctly, otherwise it might do more harm than good.
If you implement structured data wrong, your rich snippets might display the wrong information, they might not display it at all and you might even get penalized for it.
Here’s how you can implement structured data correctly on your website, on different platforms:
How to add schema markup in WordPress & Blogs
As you know, adding things to WordPress is generally very easy because there are tons of plugins you can choose from and, best of all, most of them are free. Implementing Schema Markup is no exception.
To add Schema Markup to your WordPress blog, check out the structured data & schema markup plugins in the WP repository. Choose one with the features you need and with good reviews. Many SEO plugins (Yoast SEO, for example) also add basic structured data to most of your pages, so make sure you don’t end up with duplicate markup.
Note that these plugins implement basic structured data for your articles & pages. You might want to look for something specific if you have a recipes website, for example. If you have an eCommerce store on WordPress, the WooCommerce plugin already implements products structured data for you.
How to add schema markup in Magento & eCommerce
As with WordPress, most eCommerce platforms, such as Magento, OpenCart or Prestashop, come with structured data already integrated.
If there’s no Product section in the test results, your implementation is missing. There are always plugins and extensions, so do a Google search and find what suits your platform.
Make sure to fix the warnings too, although they won’t stand in the way of your rich snippets displaying.
Local SEO structured data
If you have a local business, structured data can really help your local SEO. You can mark up your NAP (name, address, phone) so that search engines can better understand that information.
This plugin for WordPress seems to support structured data for Local Businesses.
Custom schema markup implementation
Sometimes, adding markup to your website can be more difficult. If you have a custom platform, you don’t have a plugin to simply… plug in.
Step 1: Find out the type of page you have and which type of schema markup fits it best. For example, informational pages go well with FAQ or How to schema markup. Products on eCommerce sites, on the other hand, go well with the Product schema markup.
Note that it’s important not to try to trick Google into making your result more appealing if it doesn’t make sense. So only pick what Google recommends from the types of rich snippets it supports.
Step 2: Generate the schema markup. Generating JSON LD structured data is pretty easy. You can use an online schema markup generator such as this one to easily generate your code.
However, you’ll have to manually add it to your page’s head section, which makes this a static implementation.
If you have thousands of pages, that might not be easy. You’re better off developing a dynamic system with a programmer, where the platform automatically picks up the information from the database and compiles it into a JSON format to display it in the HTML for each product/page.
So, although the template for the Product Schema Markup in the JSON-LD format stays the same, the values, such as Price, Currency, Product name or Rating, might change from page to page and website to website.
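As a sketch, such a dynamic template might look like the following, where the {{ }} placeholders stand in for whatever templating syntax your platform uses (the placeholder names are hypothetical):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "{{ product.name }}",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "{{ product.rating }}",
    "reviewCount": "{{ product.review_count }}"
  },
  "offers": {
    "@type": "Offer",
    "price": "{{ product.price }}",
    "priceCurrency": "{{ product.currency }}"
  }
}
</script>
```

At render time, the platform fills in the placeholders from the database, so every product page ships with its own correct values.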
Sometimes, you can also manually add schema into your HTML with Microdata. However, it’s best if you use the JSON-LD format, as suggested by Google.
8. Schema Markup Vocabularies
For structured data, you need two things: a list of item names and a way to display them. So we have vocabularies and formats which, together, result in markup.
The list of items is called a vocabulary. You can think of vocabularies as languages: different words can mean different things in different languages, and not everybody speaks every language.
There are multiple types of schema and vocabularies available:
Schema.org is the most popular vocabulary for structured data. Why? Well, because it has been accepted by major search engines and companies, such as Yahoo, Bing, Google, etc. It’s sort of an… international language, like English.
As I said above, because Schema Markup is so popular, SEOs often refer to structured data directly as Schema Markup. You could have structured data implemented on your website without Schema Markup, by using another vocabulary. However, you will use Schema Markup of your own free will 🙂 Got it?
The Schema.org markup supports a very wide variety of items and elements. You can view the entire list of supported items on http://schema.org. We’ll soon discuss which are the most important schema markup elements, the ones Google actually uses in the search engine results.
You might be familiar with Open Graph. It’s not used for search engines, but social media platforms, such as Facebook, use it to display titles and images.
They are useful for SEO & Facebook Marketing because you can separate the regular <title> tag used for search engines from the Facebook title. This way, you can keep the keywords in the <title>, which is important for SEO, and you can also have a catchy headline for social media, which is important for clicks & engagement.
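For example, a page head might keep a keyword-focused title for search engines while serving a catchier Open Graph headline and image to social media (all values below are illustrative):

```html
<head>
  <!-- Keyword-focused title used by search engines -->
  <title>Trail Running Shoes for Beginners | Example Store</title>
  <!-- Catchier headline and image used when the page is shared on social media -->
  <meta property="og:title" content="These Shoes Made My First Trail Run Feel Easy" />
  <meta property="og:description" content="A beginner-friendly guide to picking your first pair." />
  <meta property="og:image" content="https://example.com/images/trail-shoes.jpg" />
</head>
```

When the og: tags are absent, platforms typically fall back to the regular title tag, which is why separating the two gives you control over both channels.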
Dublin Core is another vocabulary, similar to schema.org but much more limited. It’s probably the second most popular one. Unless you have solid motives to use Dublin Core, such as a 3rd party app your site is hooked to uses it, use schema.org vocabulary instead.
9. Schema Encoding Types & Examples
First, let’s take a look at how the information for an organization’s address would look without any structured data, in plain HTML code:
The following information was taken from http://schema.org/address. You can view examples for most of the schema.org vocabulary properties there (some of them are still marked as “To Do”).
Address Structured Data Plain HTML code (source: schema.org)
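In the spirit of the schema.org example, plain HTML with no markup at all might look like this (the business details are illustrative):

```html
<div>
  <h1>Beachside Surf Shop</h1>
  <span>Boards, wetsuits and rentals right on the boardwalk.</span>
  3102 Highway 98<br>
  Mexico Beach, FL<br>
  Phone: 850-555-0100
</div>
```

A human reader understands this instantly, but to a search engine it is just undifferentiated text.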
Here’s an example of how the above information would be displayed using JSON-LD, the format Google recommends for structured data on your website. Again, it looks complicated, but you won’t have to write it yourself, as it can be generated with tools we’ll soon talk about.
Address Structured Data in JSON LD Format (source: schema.org)
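A sketch of such an address in JSON-LD, using illustrative values, would be:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Beachside Surf Shop",
  "description": "Boards, wetsuits and rentals right on the boardwalk.",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "3102 Highway 98",
    "addressLocality": "Mexico Beach",
    "addressRegion": "FL"
  },
  "telephone": "850-555-0100"
}
</script>
```

Notice that the JSON-LD block lives apart from the visible HTML: the data is duplicated in the script tag rather than woven into the page content.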
With Microdata you can specify the structured data information within the HTML code itself, using HTML tag attributes. This makes it easier for many people to understand. However, while this is easy to add manually on a case by case basis, it’s difficult to scale and automatize when required for bigger websites (such as eCommerce ones).
Address Structured Data in Microdata Format (source: schema.org)
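A Microdata sketch of the same kind of address, with the structured data woven into the HTML attributes themselves (illustrative values):

```html
<div itemscope itemtype="https://schema.org/LocalBusiness">
  <h1 itemprop="name">Beachside Surf Shop</h1>
  <div itemprop="address" itemscope itemtype="https://schema.org/PostalAddress">
    <span itemprop="streetAddress">3102 Highway 98</span>,
    <span itemprop="addressLocality">Mexico Beach</span>,
    <span itemprop="addressRegion">FL</span>
  </div>
  Phone: <span itemprop="telephone">850-555-0100</span>
</div>
```

Here itemscope opens a new item, itemtype names its schema.org type, and each itemprop labels the visible text it wraps.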
With JSON LD, you’ll have a lot of standardized plugins for different purposes on most platforms. However, if the element you’re trying to specify isn’t included in the plugin and thus doesn’t display in the outputted JSON LD code, you can add it easily in the HTML using Microdata.
RDFa is similar to Microdata, which means it’s also added through HTML tag attributes. The difference is that RDFa is older and more complex. It has other uses outside of the HTML realm and this means integration with other apps/platforms/servers is easier if they use the technology.
Address Structured Data in RDFa Format (source: schema.org)
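An RDFa sketch of the same kind of address, using RDFa Lite attributes (illustrative values):

```html
<div vocab="https://schema.org/" typeof="LocalBusiness">
  <h1 property="name">Beachside Surf Shop</h1>
  <div property="address" typeof="PostalAddress">
    <span property="streetAddress">3102 Highway 98</span>,
    <span property="addressLocality">Mexico Beach</span>,
    <span property="addressRegion">FL</span>
  </div>
  Phone: <span property="telephone">850-555-0100</span>
</div>
```

The pattern mirrors Microdata almost one to one: vocab replaces the itemtype URL prefix, typeof replaces itemscope/itemtype, and property replaces itemprop.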
Whether you want to go with RDFa or Microdata is your choice; they’ll both do just fine. However, treat them as the alternative: using JSON-LD is the recommended way to go.
Notice how all the formats above, although different, use the Schema.org data markup vocabulary.
10. Why Doesn’t My Website Display a Rich Snippet?
So you’ve finished implementing structured data on your website, but the rich snippets don’t show up in search. What do you do?
Implementing structured data on your website correctly doesn’t guarantee rich snippets.
First, make sure that your code is implemented correctly by testing it. You can do this using the following tools from Google:
Google Structured Data Testing Tool: This is the most popular tool for testing out JSON LD markup and structured data.
Rich Snippet Validator: This is still in beta, but it’s useful. You can find it here.
10.1 What is Unparsable Structured Data?
Unparsable structured data is data markup on your website that could not be properly parsed (or understood) by Google. This most likely means that you have not implemented things correctly on your site.
In programming, parsing is the process of reading a string of characters and breaking it down into a structure the program can work with. When structured data is unparsable, the strings could not be correctly read, which indicates an error in how they were written (a syntax error, in other words).
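A minimal illustration of what "unparsable" means: a JSON-LD snippet with a trailing comma (a common copy-paste mistake; the product name is made up) cannot be parsed, while the corrected version can:

```python
import json

# A JSON-LD snippet with a trailing comma -- invalid JSON, so any
# parser (including Google's) will reject it as "unparsable".
broken = '{"@type": "Product", "name": "Example Widget",}'

try:
    json.loads(broken)
    parsed_ok = True
except json.JSONDecodeError as err:
    parsed_ok = False
    print("Unparsable:", err)

# Removing the trailing comma fixes it.
fixed = '{"@type": "Product", "name": "Example Widget"}'
data = json.loads(fixed)
print("Parsed type:", data["@type"])
```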
These errors show up either in the Google validator, or in Google Search Console under Enhancements > Unparsable Structured Data.
Compared to the other validators above, Search Console is very useful because it highlights errors across multiple pages at once, although it doesn't show all the details of each issue.
Make sure you use Google’s SEO Tools to your advantage when implementing schema markup and structured data on your website.
10.2 Structured Markup Penalties
If you implement structured data wrong, you probably won’t get penalized. However, if you try to cheat, Google might apply a structured markup penalty on your website.
For example, say you want star ratings and a reviewer count in your snippet, so you add them manually to your page. Your product might be rated 3 stars, but you could mark it up as 5 stars in the search engines. You could also mark up a lower price than the real price shown on your website.
That’s not fair and the Google algorithm updates might punish you!
If you get a similar message in your Search Console or your organic traffic to all the pages with structured markup has suddenly dropped, make sure to read this article about structured markup penalties to find out how to fix things.
11. Common Schema Markup Myths
There are a few myths that go around regarding rich snippets and structured data. Most of them are simply implementation mistakes and misconceptions.
Even though we've already covered these topics above, it's a good idea to recap them quickly:
1. Schema markup guarantees rich snippets: It doesn't. Google will pick whatever it wants regardless of whether you have structured data on your website or not. That's why it's a good idea to implement other, more important things first instead of focusing on structured data.
2. Schema markup is a ranking factor: It's not. At least, that's what Google officials have stated over and over again. However, CTR is a ranking factor, and since structured data can affect CTR, your rankings might improve indirectly. But Google won't care if you simply implement markup on your website.
3. You need schema for answer boxes: You don't. Answer boxes and structured data might have something in common since Google has recently implemented Q&A markup, but that doesn't mean you can't get an answer box without structured data.
Since major search engines recommend adding structured data, go ahead and add it, especially if you have an eCommerce website. Make sure to implement it correctly and validate it with the above-mentioned tools. However, you should prioritize other important SEO tasks first.
What’s your experience with structured markup? Do you use it in your SEO & digital marketing strategy? Does it help with your clickthrough rates? Let us know in the comments section below.
Follow the 10 tips below to take your SEO to the next level.
1. Create a logical site structure
A good site structure helps users navigate your content more easily. It also helps search engines understand your site, which may help you rank higher.
How do you create a good site structure?
It’s all about hierarchy; you want your main categories at the top, followed by subcategories, followed by products.
For example, if you own an online guitar store, one of your categories might be “electric.” Under this, you might have a few subcategories, like “Ibanez” and “Jackson,” both of which are makers of electric guitars.
Under these sub-categories will be the range of products you’re selling.
How then do you organize them?
In Shopify, category pages are known as “collections,” and product pages are known as “products.”
To create a collection page, go to Products > Collections, then click “Create collection.” (You’ll learn how to fill in the titles and descriptions later on.)
To create a product page, go to Products > All Products, then click “Add product.”
While you’re creating the product page, make sure you add it to a collection:
You can also use automated collections to add products to collections automatically. This is useful if you have lots of products.
When you’re done, go to Online store > Navigation to add these collections to your main menu.
What about sub-categories?
Shopify does not differentiate between categories and sub-categories. If you want to create a sub-category page, you’ll have to create “collection” pages and nest them within a hierarchy in the menu.
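The guitar-store hierarchy described above can be sketched as a simple nested structure (all category and product names here are illustrative):

```python
# Category -> sub-category -> products, mirroring the collections
# you would nest in the Shopify navigation menu.
site_structure = {
    "electric": {                       # top-level collection
        "ibanez": ["RG550", "AZ2204"],  # sub-collection with products
        "jackson": ["Soloist", "Dinky"],
    },
    "acoustic": {
        "yamaha": ["FG800"],
    },
}

# Every product is reachable in a predictable path:
# category -> sub-category -> product.
for category, subcategories in site_structure.items():
    for subcategory, products in subcategories.items():
        for product in products:
            print(f"/{category}/{subcategory}/{product.lower()}")
```

The point of the sketch is the shape, not the code: a user (or crawler) should be able to reach any product by descending one level at a time.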
2. Do keyword research

If you want a free way to find keywords to target, search Google for a word or phrase that describes your page. If the top-ranking pages are similar to yours, look at their title tags to try to deduce the keywords they’re targeting.
Sometimes it’s obvious:
Most of these pages are clearly targeting the keyword “men’s shirts.”
Sometimes it’s not so obvious:
It’s unclear which keyword these pages are targeting. It could be “wheeled duffel bag,” “wheeled holdalls,” or something else entirely.
Either way, this method is far from foolproof. You have no idea if the keyword is actually popular or if competing sites optimized for it properly. It’s all guesswork.
A better (and more data-driven) way is to plug your best guess into Ahrefs’ Keywords Explorer, then look at the Parent topic. This tells you the most popular way people are searching for a product or type of product.
Here, our best guess of “wheeled duffel bag” gets around 200 searches a month in the US, but the Parent topic gets about 1,800.
So, in this case, “rolling duffel bag” is probably the best keyword to target.
Just remember that deducing the best keyword to target from the Parent topic still isn’t 100% foolproof. Due to the way it’s calculated, it can sometimes kick back topics that are too generic.
For example, the Parent topic for “duffel bag with wheels” is simply “duffel bag.”
This is too broad. People searching for “duffel bag” probably aren’t looking specifically for duffel bags with wheels.
If this happens, scroll down to the “SERP Overview” and look at which keywords send the most traffic to the top-ranking pages.
Choose the most relevant keyword with the highest search volume, making sure your page aligns with search intent.
What does this mean?
Google wants to show the most relevant results for a query. That’s why almost all of the results for “duffel bag with wheels” are category pages, not product pages. Google knows that searchers are looking for choice, not a specific product.
Some of the top-ranking pages for “duffel bag with wheels.”
The opposite is true for a keyword like “eagle creek no matter what rolling duffel,” where all of the top-ranking pages are product pages.
Some of the top-ranking pages for “eagle creek no matter what rolling duffel.”
Trying to rank a product page for a query where Google mostly ranks category pages will be an uphill battle. Always make sure the type of page you’re trying to rank aligns with what currently ranks.
3. Optimize your titles, meta descriptions, and URLs
Now that we know which keywords and terms to optimize pages for, it’s time to implement those findings.
You can start by optimizing the title tags, meta descriptions, and URLs of your product and category pages.
Editing them is pretty easy with Shopify.
Choose one of your collection or product pages, scroll down, and you’ll see “Search engine listing preview.” Click “Edit website SEO” to edit the metadata.
If you rename the URL slug, make sure the box is checked to redirect the old URL to the new.
Here are some best practices:
Write a unique title tag and meta description for each page.
Include your target keyword (where appropriate).
Avoid truncation—follow the character recommendation in Shopify.
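As a rough sanity check for truncation, you can test titles and descriptions against commonly cited display limits, roughly 60 characters for titles and 155 for meta descriptions. These limits are approximations (Google actually truncates by pixel width), not documented constants:

```python
# Approximate display limits; Google truncates by pixel width,
# so these character counts are only a rule of thumb.
TITLE_LIMIT = 60
DESCRIPTION_LIMIT = 155

def check_metadata(title: str, description: str) -> list[str]:
    """Return a list of warnings for metadata likely to be truncated."""
    warnings = []
    if len(title) > TITLE_LIMIT:
        warnings.append(f"Title is {len(title)} chars (limit ~{TITLE_LIMIT})")
    if len(description) > DESCRIPTION_LIMIT:
        warnings.append(
            f"Description is {len(description)} chars (limit ~{DESCRIPTION_LIMIT})"
        )
    return warnings

print(check_metadata(
    "Electric Guitars | Example Store",
    "Shop our range of Ibanez and Jackson electric guitars.",
))
```

A check like this is easy to run over an exported list of all your product and collection pages.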
4. Write unique product descriptions

Unique product descriptions help search engines understand your products. Not only that, but a good description can also entice a visitor to buy. This applies even if you’re selling a third-party product, as you can always write unique descriptions that align with your brand.
Unfortunately, most stores either don’t pay enough attention to this aspect, or they simply copy the product descriptions from manufacturers’ websites.
Here’s an example. This book entitled “How to Swear Around the World” has the exact same description on both WHSmith and Waterstones:
Waterstones’ product description
WHSmith’s product description
But Firebox—a retailer with a quirky brand voice—created a product description in line with their brand:
Looks like it’s paying off for them—Firebox outranks both WHSmith and Waterstones:
Firebox’s page has more backlinks from unique websites than the pages from WHSmith and Waterstones. Given the correlation between backlinks and search traffic, this is likely a big part of the reason they’re outranking the competition. However, writing a unique product description certainly didn’t do any harm. It may even be part of the reason they were able to attract some of those backlinks in the first place, as people are more likely to link to a unique page than a generic one.
Takeaway: be more like Firebox.
You can also do the same for your category pages, but keep it short and sweet. Here’s an example from ASOS:
5. Add alt text to your images

Google uses alt text, along with computer vision algorithms and the contents of the page, to understand the subject matter of an image.
Here’s how to add alt text in Shopify. Upload an image > Click “…” > Click “Edit alt text”
When it comes to writing good alt text, Google’s guidelines suggest some great tips:
When choosing alt text, focus on creating useful, information-rich content that uses keywords appropriately and is in context of the content of the page. Avoid filling alt attributes with keywords (keyword stuffing) as it results in a negative user experience and may cause your site to be seen as spam.
If you ran Site Audit earlier, it should have also identified images that have missing alt text. Fix them in bulk with Smart SEO.
Have you seen results in Google that look like this?
How do you get your pages to look like this? Use Schema markup.
Schema markup provides Google with structured data about the product or category, which it can then use to display rich results. These can often lead to more clicks, which may lead to more traffic and sales.
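For reference, "Product" markup of the kind a theme or app generates looks roughly like this (every product detail below is a made-up placeholder; as discussed in the penalties section earlier, the real values must match what the page actually displays):

```python
import json

# Illustrative Product markup -- every value here is a placeholder.
# The ratings and price must match what the page visibly shows.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Rolling Duffel Bag",
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.3",
        "reviewCount": "27",
    },
}

jsonld = json.dumps(product, indent=2)
print(f'<script type="application/ld+json">\n{jsonld}\n</script>')
```

The price, availability, and rating fields are what Google can surface as the rich result under your listing.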
Fortunately, plenty of Shopify themes make adding “Product” markup easy. For example, all I had to do was fill in the details on the backend and the Schema markup was added automatically in the code.
If your theme doesn’t do this for you, don’t worry. There are plenty of apps in the Shopify App Store (e.g., Smart SEO) that can help.
Schema markup as generated by the Smart SEO app.
9. Build links
Backlinks are important because they help with rankings. Our study has shown a clear correlation: the more backlinks a page has, the more organic traffic it gets from Google.
10. Start a blog

Beardbrand gets an estimated 136,000 search visits per month.
Take a closer look and you’ll see that their blog accounts for 77% of their total estimated search traffic.
Traffic aside, blogging is useful for many other reasons:
If you target the right keywords, you can easily “pitch” your product within the post (Beardbrand does this well);
People who are ready to buy make up a small percentage of the marketing funnel. Creating informational content like blog posts helps you educate readers at different stages of the buyer’s journey and increases the chances of them buying from you in the future.
Acquiring links to product or category pages is notoriously hard. It’s much easier to get links to informational content. You can then use internal links to send some of that “authority” to the pages that matter. This is known as the “Middleman Strategy.”
Your Shopify store automatically includes a blog named “News.” If you want to create a new blog, just go to Blog posts > Create a new post > Create a new blog.
Ready to start blogging? Learn the basics in this video:
PPC specialists are tasked with keyword research and expansion frequently. We use Google Keyword Planner, search query data, landing page content, and various other tools to guide strategy. However, at Seer, we like to think about search holistically – as one results page, regardless of which channel the results come from. If we have organic data, why not use it to our advantage in our PPC campaigns?
Below, I will walk you through how to use organic data to help inform your PPC keyword expansion and add value and validation to your recommendations.
Throw Your Keywords into STAT
What is STAT?
STAT is a tool that tracks the SERP, collecting data that helps SEO specialists gather insights on the search landscape, and allows you to pull various reports, including rankings, ranking trends, related searches, top 20 comparisons, and more.
For the purpose of informing and adding value to PPC keyword expansion, using STAT is a piece of cake! There are a few simple steps:
Go to getstat.com/login
Create a new project and type in your domain name
Click on your newly created project and select the “Add Keywords” option in the bottom left-hand corner under “Site Tools”
Go ahead and paste your list of new keywords into the keyword box & tag them if you wish
Select the market and location you want data from
Choose “Add to Preview” & make sure they look good
Click “Import All”
Once your keywords have been imported, you have to wait a minimum of 24 hours for data to populate. Once all of the MSV has been collected (meaning no N/A values in that column), you can download the reports of your choosing. For keyword expansion, I choose the Rankings report and the Top 20 Comparison report. To download reports, take the following steps:
Under Site Tools in the bottom left-hand corner, click “Reporting”
Click “Create Report” at the top of the page
Choose the reports that you would like to download (Rankings & Top 20 Comparison)
Run the report only once and use a date range that makes sense for your data
Once you have downloaded your reports, use Power BI, Excel, or your visualization tool of choice to look at the data. We used Power BI to view these reports side by side, and it looked like this:
We joined the two reports so that we could click on a result type and see all of the keywords showing that result type, along with their organic rankings.
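If you don't use a BI tool, the same side-by-side join can be sketched with the standard library. The column names below are assumptions about the export format, so adjust them to match your actual STAT CSVs, and the keyword data is illustrative:

```python
import csv
import io

# Stand-ins for the two downloaded STAT exports; in practice you would
# open the real CSV files. Column names are assumed, not guaranteed.
rankings_csv = io.StringIO(
    "Keyword,Rank\nrolling duffel bag,4\nwheeled holdall,14\n"
)
features_csv = io.StringIO(
    "Keyword,Result Types\nrolling duffel bag,answer_box\nwheeled holdall,organic\n"
)

rankings = {row["Keyword"]: int(row["Rank"]) for row in csv.DictReader(rankings_csv)}
features = {row["Keyword"]: row["Result Types"] for row in csv.DictReader(features_csv)}

# Join the two reports on the keyword column.
joined = {
    kw: {"rank": rankings[kw], "result_types": features.get(kw, "")}
    for kw in rankings
}
for kw, data in joined.items():
    print(kw, data)
```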
How to Use STAT Reports to Inform Keyword Expansion
Prioritize Low Organic Rankings
Prioritize keywords in your PPC campaign that are not ranking on the first page organically. Use the Rankings report from STAT to identify these keywords (rank > 10) and label them as “High Priority” when you implement them. For these high-priority keywords, you will want to ensure they are not limited by budget (to an extent) and that they maintain top Impression Share, since your (or your client’s) site is not showing on the first page of the SERP organically.
High Organic Rankings = Must Add
A keyword with a high organic ranking indicates that the keyword is organically relevant and that the landing page content is relevant to the intent of the keyword. Keywords with high organic rankings should definitely be added to your account.
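The two rules above, flagging rank > 10 as "High Priority" and keeping first-page rankers as must-adds, reduce to a simple labeling pass. The threshold follows the article; the keyword data is illustrative:

```python
# Organic rank per keyword, e.g. parsed from the STAT Rankings report.
organic_ranks = {
    "rolling duffel bag": 4,       # first page -> relevant, must add
    "wheeled holdall": 14,         # page 2 -> High Priority for PPC
    "duffel bag with wheels": 27,  # page 3 -> High Priority for PPC
}

def ppc_label(rank: int) -> str:
    """Keywords not on page 1 organically get top PPC priority."""
    return "High Priority" if rank > 10 else "Must Add"

labels = {kw: ppc_label(rank) for kw, rank in organic_ranks.items()}
print(labels)
```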
Use SERP Features to Determine Keyword Intent & Drive Future Recommendations
Use the Top 20 Comparison report to look at the SERP features ranking on the first page and what they tell you about the intent of your keywords. These features will help you determine if your keywords are too broad or if intent is misaligned, or at the very least help you understand what the SERP shows organically where your future PPC ads will appear.
Are Answer Boxes showing for your keywords? Use this information in one of two ways:
Use these keywords – Look at keywords that show Answer Boxes on the first page of the SERP and label them as such when you upload to Google Ads. If your (or your client’s) site isn’t currently winning the Answer Box, keep an eye on these keywords. If they begin to generate decent conversion volume, you can recommend that your SEO team or client optimize and build out content to win the Answer Box. That landing page should be able to answer the question being asked, which will better match the intent of the keyword.
Lose these keywords (or lower bids) – If your conversions are heavily focused on actions such as downloading content or signing up, an Answer Box or Image Carousel on the first page for a keyword may indicate that the keyword is too broad, since informational content (and SERP features) dominate page 1 of the SERP.
Job Posting Features
Are Job Posting features showing on page 1 of the SERP for your keywords?
Label these keywords – If your campaigns are not set up for recruitment purposes, looking at which keywords trigger Job Posting SERP features can help you spot misaligned intent. Even with job and job-title keywords negated across your PPC campaigns, the SERP features report can help you determine if a specific keyword is being searched with the intent of finding a job at your (or your client’s) company, without the searcher explicitly typing “job” or “careers”.
For example, we recommended implementing the keyword “____ companies near me” for a client. The keyword showed job results on the first page of the SERP, and we have separate recruitment campaigns in the same Google Ads account for this client. We decided to test the keyword:
Determining the intent: We labeled this keyword as “Job Results Show on SERP” in order to easily track conversions being driven by this keyword.
Results: After 2 months of the keyword being live, we drove a number of recruiting conversion actions, but 0 of our priority conversion actions within these campaigns.
Recommendation: We ended up recommending pausing the keyword in our primary campaigns and moving it to our recruitment campaigns, which drive users directly to a page with job postings, providing a better overall user experience.
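The feature-based labeling workflow described above can be sketched like this (feature names, labels, and keywords are illustrative):

```python
# SERP features observed on page 1 per keyword, e.g. from the
# Top 20 Comparison report. All example data is made up.
serp_features = {
    "what is a rolling duffel": {"answer_box"},
    "logistics companies near me": {"job_posting", "local_pack"},
    "buy rolling duffel": {"shopping_ads"},
}

def intent_labels(features: set[str]) -> list[str]:
    """Translate page-1 SERP features into upload labels for Google Ads."""
    labels = []
    if "answer_box" in features:
        labels.append("Answer Box on SERP")        # watch for broad/informational intent
    if "job_posting" in features:
        labels.append("Job Results Show on SERP")  # possible recruitment intent
    return labels

for kw, feats in serp_features.items():
    print(kw, "->", intent_labels(feats))
```

Once the labels are attached in Google Ads, you can segment conversion reports by label, which is exactly how the recruitment-intent keyword above was caught.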
Find this helpful? Be sure to subscribe to our newsletter to get more content like this and stay informed on how Seer is utilizing data cross-divisionally to provide our clients the most value – delivered straight to your inbox!
Google’s “People also ask” boxes are widely discussed within the SEO industry as they take a lot of SERP real estate while providing little to no organic visibility to the publishers’ sites.
That said, “People also ask” listings are probably helpful for Google’s users allowing them to get a better understanding of a topic they are researching. Yet, whether they do send actual clicks to publishers’ pages remains a huge question.
While we have no power over Google’s search engine page elements, our job as digital marketers is to find ways to take any opportunity to boost our clients’ organic visibility.
Is there any way for marketers to utilize this search feature better? Let’s see.
1. Understand your target query intent better
One of the cooler aspects of “People also ask” boxes is that they are dynamic.
When you click one question, it will take you in a new direction by generating more follow-up questions underneath. Each time you choose, you get more to choose from.
The coolest thing though is that the further questions are different (in topic, direction or intent) based on which question you choose.
Let me explain this by showing you an example. Let’s search for something like – “Is wine good for your blood?”
Now try clicking one of those questions in the box, for example, “What are the benefits of drinking red wine?” and watch more follow-up questions show up. Next, click a different question “Is red wine good for your heart and blood pressure?”. Do you see the difference?
Source: Screenshot made by the author, as of Feb 2020
Now, while this exercise may seem rather insignificant to some people, to me, it is pretty mind-blowing as it shows us what Google may know of their users’ research patterns and what may interest them further, depending on their next step.
To give you a bit of context, Google seems to rely on semantic analysis when figuring out which questions best fit each searcher’s needs. Bill Slawski did a solid job covering a related patent called “Generating related questions for search queries,” which also states that those related questions rely on search intent:
Providing related questions to users can help users who are using un-common keywords or terminology in their search query to identify keywords or terms that are more commonly used to describe their intent.
For a deeper insight into the variety of questions and the types of intent they may signal, try Text Optimizer. The tool extracts questions using a process similar to Google’s. For example, here are intent-based questions that refer to the topic of bitcoin.
Source: TextOptimizer’s search screenshot, as of Jan 2020
2. Identify important searching patterns
This one somewhat relates to the previous one but it serves a more practical goal, beyond understanding your audience and topic better. If you search Google for your target query enough, you will soon start seeing certain searching patterns.
For example, lots of city-related “People also ask” boxes will contain questions concerning the city’s safety, whether it is a good place to live, and what it is famous for:
Identifying these searching patterns is crucial when you want to:
Identify your cornerstone content
Re-structure your site or an individual landing page
Create a logical breadcrumb navigation (more on this here)
Consolidate your multiple pages into categories and taxonomies
3. Create on-page FAQs
Knowing your target users’ struggles can help in creating a really helpful FAQ section that can diversify your rankings and help bring steady traffic.
All you need to do is to collect your relevant “People also ask” results, organize them in sections (based on your identified intent/searching patterns) and answer all those questions on your dedicated FAQ page.
When working on the FAQ page, don’t forget to:
Use FAQPage schema to generate rich snippets in Google search (WordPress users can take advantage of this plugin). If you have a lot of questions in your niche, it is a good idea to build a standalone knowledge base to address them. Here are all the plugins for the job.
Set up engagement funnels to keep those readers interacting with your site and ultimately turn them into customers. Finteza is a solid option to use here, as it lets you serve custom CTAs based on the users’ referral source and landing page that brought them to your site:
4. Learn from your competitors

If you have an established competitor with a strong brand, their branded queries and the consequent “People also ask” results will give you lots of insight into the struggles their customers are facing (and how to serve them better).
When it comes to branded “People also ask” results, you may want to organize them based on possible search intent:
ROPO questions: These customers are researching a product before making a purchasing decision.
High-intent questions: Customers are closest to a sale. These are usually price- or review-related queries, for example, those that contain the word “reviews”.
Navigational questions: Customers are lost on your competitor’s site and need some help navigating. These queries can highlight usability issues for you to avoid when building your site.
Competitive questions: These queries compare two of your competitors.
Reputation questions: Those customers want to know more about your competitor’s company.
Source: A screenshot made by the author in January 2020
This information helps you develop a better product and a better site than those of your competitors.
With the changes in search algorithms over the years, the dropping and adding of key search elements, the evolution of Google’s SERPs, navigating digital marketing trends seems almost treacherous.
Yet, at the core of things, not much has really shifted and much of what we do remains the same. In fact, some of those changes have made it even easier to make an impact on the web than ever before. While we may welcome or frown upon each new change, there’s still some competitive advantage in each of them.
Our job, as digital marketers, is to distinguish that competitive advantage and make the most of it.
I hope the above ideas will help you use “People also ask” results to your advantage.
Ann Smarty is the Brand and Community manager at InternetMarketingNinjas.com.
Have you always wanted to create your own website? WordPress is the way to go! Whether you want to create your own personal blog, an online store, or a business website – with WordPress, creating your own website is easy as pie. If you know what you’re doing, of course. In this video, we’ll guide you through the steps needed to make a WordPress website.
New to WordPress? Don’t worry! Our FREE WordPress for beginners training is here to help. Find out how to set up your own site, learn the ins and outs of creating and maintaining it, and more. Soon you’ll be able to do it all by yourself!
WordPress is a content management system (CMS), which means it allows you to build a website and publish content that you want to share with the world. There are basically three elements that are central to what WordPress does:
It is a text editor that allows you to create content
It offers ways to manage and structure your content
It has options to customize how your site works and what it looks like
One of the cool things about WordPress is that there are gigantic libraries of ready-to-use templates and features that people have created for you to hand-pick and use on your site. Some of those are free, and for some you’ll have to pay.
WordPress itself is free, which means you won’t have to pay license fees to use it: you can just download it. WordPress is open-source software, meaning it is developed within a community, and everyone can use, alter, and distribute the code.
The difference between WordPress.com and WordPress.org
There are two different versions of WordPress: WordPress.com and WordPress.org. In short, WordPress.com is a service where you create an account and build your site or blog directly on their platform. WordPress.org lets you download the WordPress software and install it on your own hosting. While the latter sounds more difficult, we think it has a lot of advantages.
We recommend using WordPress.org. This version offers much better customization: you can install any theme or plugin and do just about anything with your site.
2. Getting started with WordPress
Getting WordPress up and running
When setting up a WordPress.org site, the first thing you’ll have to decide is which hosting company and hosting plan suit your needs. Consider the following elements:
the server location
the type of site you’re building
how much storage space and bandwidth you need
what type of hosting you need (shared, dedicated, or VPS)
When you’ve decided which hosting provider is fit to take care of your site, you should think about which domain name you want. A domain name is a convenient way to point people to the specific spot on the internet where you’ve built your website. Domain names are generally used to identify one or more IP addresses. For Yoast, that domain name is yoast.com.
Domain names are a very important aspect of building a brand and thus require some thinking. You want people to be able to recognize you and find you if they’re looking for you. That’s why it’s often a good idea to have a short domain name. If it’s catchy, that helps as well, of course. Ideally, you also want a domain name that’s easy to spell. If you have a company, you’ll probably want your company name as a domain name.
Another very important thing to consider is what top-level domain (TLD) you choose. A TLD is, to keep it simple, the last part of the domain name. So for yoast.com, Yoast is obviously the brand, and .com is the TLD. Generally speaking, we distinguish between several types of TLDs:
Country code TLDs (ccTLDs): Each country has its own TLD, e.g. .nl for the Netherlands, .fr for France, and .de for Germany
Generic TLDs (gTLDs): These are basically all other domains, like .com, .info, .net, .edu and .gov
Infrastructure TLDs: These are TLDs like .arpa. You can’t register a domain under .arpa; it’s only used for infrastructure purposes.
When setting up your site, you have to pay attention to picking a strong username and password for the admin account. The administrator (also: admin) account in WordPress is the first account you set up. It’s the most powerful user role, which is reserved for site owners and gives you full control of your WordPress site. Make sure that you never use admin as a username!
You also have to make sure your website is on HTTPS. When users engage with your site, they do so over what’s called an HTTP or HTTPS connection. The difference between these two connections is that, under HTTPS, all the connections between your site and the visitor are encrypted. This means no one else can read what’s going on between the visitor and the website. Thus, using HTTPS is a safer way of sending data than HTTP.
Navigating the WordPress backend
Now that you have installed WordPress, it’s time to explore the WordPress back end! You can see the WordPress back end as your control room: this is from where you can add, edit, and remove the content on your site, as well as control what your site looks like.
To access the WordPress Dashboard, you need to follow these steps:
Go to your WordPress login URL. If you haven’t changed this, this URL is www.yoursite.com/wp-admin
Enter your WordPress username and password and click ‘Login’. If you’re already logged in, you’ll be taken to your WordPress Dashboard without having to enter your login credentials.
This takes you to a screen like the image below, where you see the WordPress Dashboard (1), the admin toolbar (2), and the admin menu (3).
The WordPress Dashboard
The WordPress Dashboard gives you an overview of what’s happening with your site. It’s the control room of your site, where you have a bird’s eye view of operations. The Dashboard contains widgets, which you can customize to make the Dashboard fit your needs. You can rearrange the widgets, simply by dragging and dropping them. Here we’ll discuss the default widgets:
At a Glance As the name already implies, the At a Glance widget offers an ‘at-a-glance’ look at your site’s posts, pages, comments, and theme. As you can see in the image, it shows you the number of current posts (1), pages (2), and comments (3). It also shows you what version of WordPress you’re running (4) and which theme is active on your webpage (5).
Activity For a more detailed preview of your posts and comments, you should take a look at the Activity widget. As you can see in the image below in this widget, you see your most recent published posts (1) and the recent comments (2). In the recent comment section, you can see who commented and clicking on the user link lets you visit their profile. If you hover over a comment with your mouse, a list of actions will appear. From here, you can choose to approve or disapprove, reply, edit, see the history, mark as spam or put it to the trash. You can reply to a comment from the WordPress Dashboard. Simply click ‘reply’ and a form in which you can type your comment will appear.
Quick Draft The Quick Draft widget contains a mini post editor that allows you to create drafts instantly from the Dashboard. If you have a new idea, and you want to jot it down quickly, you can do this via the Quick Draft widget. All you have to do is enter a title, write a piece of text and click ‘Save Draft’. As the title already suggests, this is just a draft, and therefore it won’t be published on your site yet. Clicking on the link of the draft will take you to the post editor, where you can edit the draft and write a longer and complete version of your idea.
WordPress Events and News Finally, the WordPress Events and News widget offers a quick glimpse of what’s happening in the WordPress world. It shows an overview of WordPress-related news and upcoming WordPress events, like Meetups and WordCamps.
The admin toolbar
At the top of the WordPress Dashboard, you see a black bar. This is your administrator toolbar, or, in short, the admin toolbar. It’s visible only to you, not to people visiting your site.
The admin menu
The admin menu is the sidebar on the left side of your screen. As shown in the image below, you can view the admin menu expanded or collapsed. The admin menu consists of various menu items. In this lesson, we’ll go into the default menu items of the admin menu. Please note that if you have any plugins installed, there may be extra, plugin-specific, items in your admin menu. For example, in the image below, you’ll see the menu item of the Yoast SEO plugin at the bottom of the list.
3. Customizing your site
When you start with WordPress, you’ll probably want to brand your website and give it a personal touch. Eager to start customizing your website? The settings menu in WordPress is a great place to start!
A theme handles the way your WordPress site looks. It serves as a representation for your brand, but – at the same time – takes care of the visual representation of WordPress content and data, like pages and posts. Simply put, a theme is what a person will see when visiting your website.
With the Customizer, you can quickly make changes to the design of your site or the elements on your pages, such as your site branding, menus, and widgets. And you won’t have to touch any code!
A WordPress widget is a simple, pre-built block you can add to your site that serves a specific function, like a search bar, a list of your most recent posts, or an archive of your posts. Widgets can only be put into so-called widget areas. These areas are defined by your theme and are usually located in the sidebar(s) and the footer of your site.
WordPress plugins are pieces of software that you install to add extra features and functionality to your WordPress site. They allow you to go beyond what a basic WordPress installation has to offer. There are tons of plugins with a lot of different functionalities, like speeding up your site, filtering out spam comments, securing your website, or setting up an online shop.
4. Creating content in the Block Editor
The difference between posts and pages
In WordPress, there are two primary content types: posts and pages. Depending on the type of content you want to have, and how you plan to use that content, choosing between posts and pages can make a difference. Four main things set posts and pages apart:
The time of publication is relevant for posts, but not for pages.
Posts feature an author; pages don’t.
Posts are archived in WordPress and pages aren’t.
You can leave comments on posts. Comments on pages are disabled by default.
How to write a post in WordPress
Writing a post is one of the most exciting and creative things you can do in WordPress. And luckily, it’s relatively easy too. Let’s dive into how it works.
Click on the posts menu item in the admin menu.
This will open the Posts screen. See 1 in the image.
In the Posts screen, click the Add New button.
The Post editing screen will open. Note: the Post editing screen may appear slightly different, depending on the theme you’re using. See 2 in the image.
Add a title in the Add title field
See 1 in the image.
Start typing your content in the field below the Add title field
See 2 in the image.
How to write a page in WordPress
Now that you know how to write posts, writing a page will be very easy. To start writing a page, you will need to follow the steps below. Once you access the page editor, writing a page works exactly like writing a post. Just like posts, you also use blocks to create pages.
Click on the Pages menu item in the admin menu (1 in the image)
Click the Add New button in the Pages screen (2 in the image)
5. Structuring your site
Organizing your website
When creating a website, it’s crucial to think hard about how you will structure it. An organized website is one that contains posts and pages, which are grouped and linked in a structured manner. A disorganized website is just a pile of content with no apparent order. Surely, you would like to have an organized site that your visitors will find easy to navigate.
Let’s start by looking at an ideal situation: if you’re starting from scratch, how should you organize your site? We think a well-organized website looks like a pyramid with a number of levels:
Homepage
Sections or categories
Subcategories (only for larger sites)
Individual pages and posts
It goes without saying that your homepage should act as a navigation hub for your visitors. You should link to the essential pages from your homepage. By doing this:
your visitors are more likely to end up on the pages you want them to end up on;
you show search engines that these pages are essential.
Of course, if your site gets bigger, you won’t be able to link from the homepage to all of your important pages. For example, if you write a lot of articles or have a lot of essential articles, you cannot link to all of them from your homepage. That would be bad for two reasons:
It would lead to clutter, which will make it hard for your visitors to notice the pages you want them to visit.
It would weaken link value for every individual link. With too many links on a page, every link is just a little less valuable for the page it links to.
In addition to having a well-structured homepage, you should also create a clear navigation path on your site. Your site-wide navigation consists of two main elements:
Archive pages are generated when you create a category, tag, or another taxonomy (in WordPress, at least). They have their own unique URL on your site. Posts or (product) pages that belong to that taxonomy are presented in a list on these archive pages. So, these archives can be based on various things: this could be categories and tags, but also the post date and post author, or something else if you created a custom taxonomy or use a plugin that creates one.
You could be managing your WordPress site with multiple users. To make this easier, WordPress created a number of user roles. A user role determines what someone can do on your site. There are six predefined user roles to choose from: Super Admin, Administrator, Editor, Author, Contributor, and Subscriber.
Next to these predefined roles, developers can also create extra user roles. You can assign and change user roles via the Users menu item in your WordPress admin menu.
Updating WordPress core, themes and plugins is important for three reasons: it keeps your site secure, it fixes bugs, and it brings you new features.
However, updating comes with risks. To minimize risks, use high-quality plugins and themes, look at what is updated in the version details, and make a backup of your site before updating. WordPress also offers an updates overview, which you can access via the admin menu (Dashboard > Updates). Here, you’ll find all the updates available for your site.
It is important to make backups of your site because they allow you to go back to an earlier version of your site when an update causes issues, but they could also be useful when you experience trouble with your hosting provider or get hacked. There are basically three ways to create backups of your WordPress site: manually, with a backup plugin, or through your hosting provider.
By now, you have probably noticed that WordPress is relatively easy to use. Even with no prior experience in building websites, creating a website with WordPress can be a piece of cake. Still, even with all the features that make WordPress easy to use, sometimes people can make mistakes. In this post, we explore the most common beginner’s mistakes in WordPress. And, of course we’ll also give some tips on how to avoid these mistakes.
Securing your website may sound like a daunting process. But there is good news! You can improve the security of the website yourself. You do not even need to have technical or coding skills. Even some small adjustments can make the difference between a hacked site, and a happy and healthy website. You can think of the following things:
Make sure you always keep your WordPress core, plugins and themes up to date!
Make sure that your WordPress password is complex, long and unique (CLU)
We’ve discussed how you can create a great site with WordPress. But what is a great site without visitors? Creating a great site is unfortunately not enough to make people visit it. And that’s where the real challenge begins. Creating a WordPress site is only the first step on your way to stardom. For the next steps, you need to work on your SEO (Search Engine Optimization).
SEO stands for search engine optimization and it’s the process of improving websites and content to get more traffic from search engines. It has an on-page and off-page part. So, let’s go through the things that WordPress does for your site’s SEO out of the box.
A strong foundation WordPress helps you get going quickly and it’s a pretty solid platform to work on. A basic setup can provide a strong foundation – even without extensive customization, theme optimization and plugins.
Title tag WordPress also supports the title tag. This makes sure that the title you entered is also rendered in the code, so the search engines know exactly where to find the all-important title of your post.
Duplicate content Also, WordPress automatically deals with duplicate content on some pages. By that, we mean that you sometimes show the same content on different URLs.
Redirects Out of the box, WordPress also redirects posts whenever you change their titles, which is very convenient. Imagine if you decide you don’t like the way you’ve framed your post, rewrite it and then still have the old URL, which doesn’t match the content of your post anymore! It would be very confusing for visitors and search engines alike.
Site Health Lastly, a recent addition is the Site Health dashboard that shows you how your site is doing in a technical sense.
WordPress SEO is an exciting, but also gigantic, subject to tackle. That’s why we made a definitive guide to WordPress SEO. This guide gives you a lot of stuff you can do on your WordPress site. It goes from technical SEO tips, to content tips, to conversion tips, and a whole lot in between. So, make sure you check out this guide to higher rankings for WordPress sites.
That’s it! It was probably a lot to take in, which is why we recommend taking the WordPress for beginners training course. You’ll find everything you need to know if you’re a WordPress newbie. The training includes quizzes that test whether you have mastered the subject. And, it’s online and on-demand, so you can get started whenever you want! Good luck!
Episode Overview: Knowing what technical SEO adjustments to make to increase your ranking is difficult when search engines consistently publish new algorithm updates and SEO itself is always in a state of flux. Join host Ben as he speaks with Pathmonk founder Lukas Haensch to discuss how Google views technical SEO and what reliable technical adjustments you can implement to significantly boost your ranking.
Decreasing page load times is one of the most important SEO technical optimizations to make.
Leverage tools like webpagetest.org to test your page speed and Google Think to see industry-specific performance benchmarks to help optimize your website.
GUESTS & RESOURCES
Ben: Welcome to agency month on the Voices of Search podcast. I’m your host, Benjamin Shapiro, and this month we’re focusing on understanding and optimizing the relationships SEO agencies have with their in-house SEO partners. Joining us is Lukas Haensch, who is the founder of Pathmonk, which is an AI-driven technology that provides an effortless customer acquisition channel to automatically grow clients from website and social media traffic. Prior to founding Pathmonk, Lukas worked at a company called Google, on their mobile optimization team. Maybe you’ve heard of it. And today, Lukas and I are going to talk about what agencies need to know about how Google views technical SEO.
Ben: Okay. On with the show. Here’s my conversation with Lukas Haensch, founder of Pathmonk. Lukas, welcome to the Voices of Search podcast.
Lukas: Thanks, Ben. Thanks for having me. Pleasure to be here.
Ben: Excited to have you here, and excited to talk to you not only because you’re doing some interesting things in conversion rate optimization, but honestly, because you’re an ex-Googler, and being an SEO podcast, we don’t get to talk to many people that have worked at the mothership. Tell us a little bit about you and your background.
Lukas: Oh yeah, sure. So we’re just normal people, just to get it out the door first. Yeah, sure. So I’ve been with Google for about four years working in various teams, and one of the teams has been also the mobile optimization team and is probably one of the reasons why we were chatting here today. So, I basically spend time in Google working with some of Google’s … It’s funny actually. When I’m saying Google, my Google app goes off here, my assistant. One second. There you go. So basically spent time with Google’s mobile optimization team, which means we would be working together with Google’s larger advertising clients in order to optimize their websites from various angles from mobile websites, in particular, from a UX angle, as well as from a page speed, and performance angle given the different impact that it has on SEO. We’re going to chat about that, and obviously just purely conversions on the site.
Lukas: Yeah, I’ve basically seen a lot of what goes into conversion rate optimization and what goes into getting more leads from your site, if you’re a lead generation company. And then I moved on to Workday, which is another Silicon Valley-based company, as a UX lead designer for one of the products, and spent about a year there. And then, after all of those learnings and the time there, I moved into being co-founder of Pathmonk, which is about how you are able to qualify and convert more people from your website through artificial intelligence.
Ben: So, you’ve had a rich history working in some pretty notable companies. I’ll be honest, the one I care the most about is Google. And specifically, we don’t get an opportunity to talk to people at Google about how they think about SEO. And you were on the mobile team, so technical SEO is something that I specifically want to focus on. Give us the lay of the land, spill the beans. How does Google think about what we should be doing? What are the major ranking factors that matter? Tell us the secret sauce.
Lukas: Yeah, so I have brought here a paper with all the ranking factors. Just kidding. Obviously, even being in Google, there are various levels of insight you could have into only certain topics, and my focus within Google has always been on page speed optimization, performance optimization, as it is one of the ranking factors. So unfortunately I cannot come with the whole formula for you guys, but what I can do is share how we were thinking internally about page speed, what types of metrics to look at, how we could be measuring this, what are the main things you can be looking at if you’re an SEO agency, and maybe a quick check that you can do with the clients you’re working with, to point them to one or two things they can be doing on their page to increase page speed a little bit. So I have to slightly disappoint. I don’t have the whole formula, but I would guess there are actually not that many people in the world who would have a whole oversight of all of this.
Ben: I’m sure if they did, they wouldn’t be talking to me on this podcast, but boy, would I be grateful. So we all know that page speed is one of the most, if not the most, important ranking factors. And it’s one of the biggest pieces of low-hanging fruit when it comes to making drastic performance changes. When we talk about technical SEO, it’s always the first thing that we think about: how quickly is your website loading to provide relevant content to your consumers? How does Google think about page speed? And is it really the number one ranking factor that they cared about while you were there?
Lukas: So, it’s definitely an important factor, right? Is it the number one? I wouldn’t even be able to tell you because I don’t have even that full insights there. What I can tell you-
Ben: But yes, yes. Yes it is.
Lukas: What I can say is that it’s definitely growing in importance. There has been an increasing focus on page speed optimization, and it’s actually, putting the SEO ranking aside a little bit, it is actually a very natural thing to do for Google, right? Because more and more mobile searches are happening. I think the mobile traffic has surpassed the desktop traffic quite a while ago and then basically the next step is to figure out, okay, how do the pages where the visitors land on, what experience do you provide when you do that in mobile? Do you provide an experience that is a blank screen for the next 10 seconds, or do you have a continuously good experience from the different search results into the different pages? So definitely it’s an important factor. Definitely there was a team internally in Google that was just focusing on identifying what can be gaps that Google’s clients are having in terms of page speed optimization, and sort of that’s where my learnings and my history come from, and the perspective on a very high level internally on how you can think about page speed optimization because there’s many ways to think about this.
Lukas: But what we particularly looked at and focused on was how do you make sure that you serve a first meaningful experience in the fastest way possible, right? How does that above-the-fold content render the quickest and show meaningful content? Meaning a hero image, a call to action, a headline, maybe one or two other images, your icons. How long does it take until all of this is actually displayed on the mobile device? Because there are tons of metrics that you could theoretically be looking at, right? If you’re looking at page speed optimization, there’s the load time of the page, there’s the time to first byte, there’s the start of render, there’s the fully loaded time. What we were continuously looking at was something called speed index. And speed index is basically a metric that represents how long it takes until the above-the-fold content of your first view is rendered, right? How many milliseconds does it take?
Ben: Okay, so that’s an important distinction. While you can look at various metrics to understand your site speed, from your experience, Google cares first and foremost about how fast the above-the-fold content loads. So if you have heavy pages, you have to prioritize the top of the page to load first and make sure that first glance is visible as quickly as possible.
Ben: This is agency month, and so for the agency partners that are out there, or for the in-house SEOs that are thinking about how to work with their agencies: when you’re thinking about a site speed audit, it’s something that a lot of agencies are doing. They’re coming in and they’re going to say, “Hey, we’re going to evaluate and audit the technical aspect of your SEO.” What are the metrics that agencies should be thinking about, and what should the in-house partners be asking for when it comes to understanding site speed and the evaluation of their pages?
Lukas: Yeah, I mean, obviously there are Google tools where you can get your sort of ranking, or percentage points for how well your page is doing. I can tell you what we’ve been doing internally, from asking the team and talking with people. As I was describing, this idea of the page speed index is a very, very good metric to look at because it describes exactly how long it takes until that first above-the-fold view is rendered, and you can leverage tools like webpagetest.org, for example. It’s not an official Google tool, but it was actually built by the Google cracks who are the super brains on that topic, and it heavily informs internal thinking too. So webpagetest.org is a super good tool to run your page through and then check the page speed index. And that speed index will be something around, it depends, obviously, how fast you’re loading, but it’s in milliseconds, so it’s going to tell you a page speed index of around, maybe let’s say 5,000, if your above-the-fold content is loaded in five seconds. So that’s a really, really good place to start. And then from that first train of thought, there are a lot of questions kicking off from there that could be asked.
Ben: So obviously there’s the site speed index, which is how long it takes to have that first top of the page load. What are the things that you can do to affect page speed when you’re … I just loaded the martechpod.com, a website that I run for another podcast, and I have a page load time of 5.2 seconds. What can I do to decrease that?
Lukas: So, you mentioned the page speed index of 5,210 then, I would assume is what you’re seeing?
Ben: Mm-hmm (affirmative).
Lukas: Yeah, there are actually tons of benchmarks, so from a high level: the magic number that was always being used is this three-second benchmark, right? 53% of users will leave a page if it takes longer than three seconds to load. So that’s been sort of the core number that has been used. And since then, a lot of research has been published, and there is an internal team in Google that is actually publishing quite a lot of extensive research on the Google Think website, with industry-specific benchmarks for the travel industry, for the financial industry. So whatever industry you’re in, check out the Google Think resources there, where you can see very specific benchmarks for those performances.
Lukas: But on a high level, it was just three seconds, or a 3,000 page speed index, that we’ve been focusing on. And then there is supporting data around that, which looks into, okay, with every second that you’re adding to load time, the effect that it can have on conversion rates. And yeah, so that’s basically been the base thinking in order to see, okay, how fast do we have to get pages? And then obviously Google started to roll out much more with Accelerated Mobile Pages to be even faster than that. But three seconds, or a page speed index of 3,000, for your website is a pretty decent goal to have, I’d say.
Ben: So I think, as we’re reframing this in terms of agency thinking: when you’re vetting an agency, if they talk about what your site speed goals are, the three-second benchmark is one thing to look at. And if you’re working with an agency and they’re not focused on page speed, or they’re trying to get your page speed down and can’t get to that mark, they might not be the best agency. And obviously this depends on what your website is doing as well.
Ben: One other thing I noticed looking at webpagetest.org is that at the top of the page they go through a couple of different categories and actually give you grades: first byte time, keep-alive enabled, compress transfer, compress images, cache static content, and effective use of CDN. It seems that if those are the six categories at the top of the page, and you get letter grades A through F, it’s worth thinking about page speed in terms of those categories. What can you do to optimize any of those six categories, and are they the ones that you should be paying attention to?
Lukas: I think it’s a really great question, and I think it comes down to this: page speed optimization does not necessarily have to be a huge task, right? I think it’s probably one of the biggest learnings that I took from all of those years, which was that you can have a big meeting, a board meeting, you look at those metrics, and then in reality what it comes down to is that there is potentially just an old font file loading in an old font format, right? And therefore you don’t see the letters on the screen for quite a while.
Lukas: And I’m sort of sliding now into speaking about a couple of tips and tricks. I’m happy to move into that, but I think to answer your question: yes, all of those categories have their reason for existence in SEO and optimization. It’s something really important. But the page speed index is something you can sit down with for a couple of hours, find the major flaws, and push down quite a bit.
Ben: The last question I’ll ask you today: give us some tools and tips that you recommend for brands that want to quickly lower their page load times. What are some of the ways you’ve seen that done? Give us the secret sauce.
Lukas: Then there is obviously stuff like images, right? I mean, a lot has been said about images. Only a few small pointers that I want to give there. So if you are loading images on your page, what you can be doing is loading the image size that matches the device size. All right? So you have an image in a size of maybe 450 by 400, and 768 by 400, and 1,200 by 800, so different versions of your image, and you load it depending on which device the user is coming from. That’s what the srcset attribute is for. That’s something you guys can be doing in order to decrease the load time of the image. Obviously, compressing images is another thing.
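The variant-per-device idea can be sketched in a few lines of Python. The widths below are just the hypothetical sizes from the example; in the browser itself, the HTML `srcset` attribute does this selection for you.

```python
# Available image widths, in pixels -- hypothetical sizes from the example.
# In real pages the browser picks among these via the HTML srcset attribute.
VARIANTS = [450, 768, 1200]

def pick_variant(device_width: int) -> int:
    """Return the smallest available image width that still covers the
    device, falling back to the largest variant for very wide screens."""
    for width in sorted(VARIANTS):
        if width >= device_width:
            return width
    return max(VARIANTS)

def srcset_value(base_name: str) -> str:
    """Build the value of an HTML srcset attribute from the variants,
    e.g. 'hero-450.jpg 450w, hero-768.jpg 768w, hero-1200.jpg 1200w'."""
    return ", ".join(f"{base_name}-{w}.jpg {w}w" for w in sorted(VARIANTS))
```

A 400px-wide phone would get the 450px file instead of the full 1,200px one, which is exactly the load-time saving Lukas describes.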
Lukas: Another thing that we’ve been doing quite a bit with the images, it’s a little bit debatable, but it’s the idea of Base64 encoding. You take one hero image, you Base64 encode it, Base64 encoders are all around online, which means you turn your image into a code string, you take the code string, you place it into your HTML, which means it’s going to be pushed with the first request because, as you said just before, in the critical rendering path the HTML is the first file that is pulled and parsed. So that’s one to think through.
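That encoding step can be sketched in Python; the file path and MIME type here are placeholders, not anything from the episode.

```python
import base64

def image_to_data_uri(path: str, mime: str = "image/png") -> str:
    """Base64-encode an image file into a data URI that can be inlined
    directly into the HTML (e.g. <img src="data:...">), so the hero
    image ships with the very first request instead of a separate one."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

Note the trade-off that makes this debatable: Base64 inflates the payload by roughly a third, and an inlined image can no longer be cached separately from the HTML.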
Lukas: And then there’s lazy loading, which is another way. You load your pictures on demand, basically, as somebody’s scrolling the page, and you can work with libraries such as lazysizes, which is a nice one to load pictures on demand instead of pre-loading them. Because most of the pictures on your page are images you only need once the user starts interacting, and not in those first couple of seconds. So that’s a really great thing to check.
Lukas: There are tons more of these small things. Something that really fascinated me was the fonts, the font files. Super easy trick: if you’re not particularly worried about which fonts you’re using, it’s recommendable to use something like Open Sans or Roboto. Why? For the very simple reason that they are the most used fonts on the internet, which means the likelihood that a user has already downloaded and cached that font is super high. It’s something like 70-80%, about that, which means they’re already going to have your font ready on the screen, per default, and don’t have to wait for a load. Another small one, and I can go on and on and on-
Ben: Don’t stop now.
Lukas: Another small one will be, okay, you’re loading your icon fonts, right? What I’ve been seeing again and again is you’re using something like IcoMoon or any other package of icons, and you actually use three or four icons on the above-the-fold screen, but you’re loading about 600 icons in the package for that first load, right? Because you just put in that file, which means, again, you’re loading something that you don’t need. So you can take those three or four icons that you need for the above-the-fold content, Base64 encode them as well, and send them with the first request, and then later load the other icons that you don’t need right away. So it’s really all about peeling away the layers and layers of files that are blocking the rendering but that you don’t really need at that very moment.
Lukas: Yeah, so there is a lot you can do. I think the key perspective is to have a look at your own waterfall from webpagetest.org, see how your files are loading: what files are being loaded while the screen is still blank, or while the screen is there but just the hero image is missing. So it’s really down to that level, and I think that makes it a little bit more tangible and a little bit more actionable. So page speed optimization can be done in a matter of a couple of hours.
Ben: Lots of great advice, lots of low-hanging fruit for our agency partners, and for our in-house SEOs as well. And that wraps up this episode of the Voices of Search podcast. Thanks for listening to my conversation with Lukas Haensch, founder of Pathmonk. We’d love to continue this conversation with you, so if you’re interested in contacting Lukas, you can find a link to his LinkedIn profile in our show notes, or you could visit his company’s website, which is pathmonk.com, P-A-T-H-M-O-N-K.com.
Ben: If you didn’t have a chance to take notes while you were listening to this podcast, head over to voicesofsearch.com, where we have summaries of all of our episodes and the contact information for our guests. You can also send us your topic suggestions, or apply to be a guest speaker on the Voices of Search podcast. Of course, you can always reach out on social media. Our handle is voicesofsearch on Twitter, and my personal handle is BenJShap, B-E-N-J-S-H-A-P.
Ben: If you haven’t subscribed yet and you want a regular stream of SEO and content marketing insights in your podcast feed, in addition to part two of our conversation with Lukas Haensch, founder of Pathmonk, we’re going to publish episodes multiple times during the week. So hit that subscribe button in your podcast app and check back in your feed soon. All right, that’s it for today, but until next time, remember: the answers are always in the data.
So, I admit it: when we started looking at our own blog traffic, we realized this was one of the most historically popular blog posts on the Seer domain. After a brief moment of reflection, and a swell of enthusiasm for the ever-present greatness of the Screaming Frog SEO Spider, a tool that’s been a loyal companion in our technical SEO journey, we realized we were doing a disservice, both to our readers and to the many leaps forward from the great Screaming Frog.
Though this original guide was published in 2015, in the years since, Screaming Frog has evolved to offer a whole suite of new features and simplified steps to conduct technical audits, check a site’s health, or simply get a quick glimpse of info on a selection of URLs. Below, you’ll find an updated guide to how SEOs, PPC professionals, and digital marketing experts can use the tool to streamline their workflow.
To get started, simply select what it is that you are looking to do:
When starting a crawl, it’s a good idea to take a moment and evaluate what kind of information you’re looking to get, how big the site is, and how much of the site you’ll need to crawl in order to access it all. Sometimes, with larger sites, it’s best to restrict the crawler to a sub-section of URLs to get a good representative sample of data. This keeps file sizes and data exports a bit more manageable. We go over this in further detail below. For crawling your entire site, including all subdomains, you’ll need to make some slight adjustments to the spider configuration to get started.
By default, Screaming Frog only crawls the subdomain that you enter. Any additional subdomains that the spider encounters will be viewed as external links. In order to crawl additional subdomains, you must change the settings in the Spider Configuration menu. By checking ‘Crawl All Subdomains’, you will ensure that the spider crawls any links that it encounters to other subdomains on your site.
In addition, if you’re starting your crawl from a specific subfolder or subdirectory and still want Screaming Frog to crawl the whole site, check the box marked “Crawl Outside of Start Folder.”
By default, the SEO Spider only crawls forward from the subfolder or subdirectory you start in. If you want to crawl the whole site while starting from a specific subdirectory, be sure that the configuration is set to crawl outside the start folder.
If you wish to limit your crawl to a single folder, simply enter the URL and press start without changing any of the default settings. If you’ve overwritten the original default settings, reset the default configuration within the ‘File’ menu.
If you wish to start your crawl in a specific folder, but want to continue crawling to the rest of the subdomain, be sure to select ‘Crawl Outside Of Start Folder’ in the Spider Configuration settings before entering your specific starting URL.
If you wish to limit your crawl to a specific set of subdomains or subdirectories, you can use RegEx to set those rules in the Include or Exclude settings in the Configuration menu.
In this example, we crawled every page on seerinteractive.com excluding the ‘about’ pages on every subdomain.
Go to Configuration > Exclude; use a wildcard regular expression to identify the URLs or parameters you want to exclude.
Test your regular expression to make sure it’s excluding the pages you expected to exclude before you start your crawl:
In the example below, we only wanted to crawl the team subfolder on seerinteractive.com. Again, use the “Test” tab to test a few URLs and ensure the RegEx is appropriately configured for your inclusion rule.
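Before starting the crawl, you can also sanity-check an include or exclude pattern outside the tool. A quick Python sketch (the /about/ rule and URLs here are purely illustrative), assuming the pattern is matched against the full URL:

```python
import re

# Hypothetical exclude rule: skip the 'about' section on any subdomain
exclude_pattern = re.compile(r".*/about/.*")

urls = [
    "https://www.seerinteractive.com/about/team/",
    "https://www.seerinteractive.com/blog/",
    "https://careers.seerinteractive.com/about/",
]

# A URL is excluded when the pattern matches the whole URL
kept = [u for u in urls if not exclude_pattern.fullmatch(u)]
print(kept)  # only the /blog/ URL survives
```

If your pattern keeps (or drops) the wrong URLs here, it will do the same in the crawler's Test tab.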
Running the spider with these settings unchecked will, in effect, provide you with a list of all of the pages on your site that have internal links pointing to them.
Once the crawl is finished, go to the ‘Internal’ tab and filter your results by ‘HTML’. Click ‘Export’, and you’ll have the full list in CSV format.
If you tend to use the same settings for each crawl, Screaming Frog now allows you to save your configuration settings:
How to find all of the subdomains on a site and verify internal links.
There are several different ways to find all of the subdomains on a site.
Use Screaming Frog to identify all subdomains on a given site. Navigate to Configuration > Spider, and ensure that “Crawl all Subdomains” is selected. As with crawling your whole site above, this will crawl any subdomain that is linked to from within the site. However, it will not find subdomains that are orphaned or unlinked.
Use Google to identify all indexed subdomains.
By using the Scraper Chrome extension and some advanced search operators, we can find all indexable subdomains for a given domain.
Start by using a site: search operator in Google to restrict results to your specific domain. Then, use the -inurl search operator to narrow the search results by removing the main domain. You should begin to see a list of subdomains that have been indexed in Google that do not contain the main domain.
Use the Scraper extension to extract all of the results into a Google Sheet. Simply right-click the URL in the SERP, click “Scrape Similar” and export to a Google Sheet.
In your Google Sheet, use the following function to trim the URL to the subdomain:
The formula above strips off any subdirectories, pages, or file names at the end of a URL. It tells Sheets or Excel to return everything to the left of the trailing slash. The start number of 9 is significant, because we are asking it to start looking for a trailing slash after the 9th character, which accounts for the protocol https:// being 8 characters long.
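If it helps to see the logic spelled out, here is the same trimming step as a small Python function (a sketch of the spreadsheet logic described above, not the original formula):

```python
def trim_to_subdomain(url: str) -> str:
    """Return everything to the left of the first slash after the protocol.

    Mirrors the spreadsheet logic described above: start looking for a
    slash after the 9th character so that "https://" is skipped.
    """
    slash = url.find("/", 8)  # 0-indexed equivalent of starting at character 9
    return url if slash == -1 else url[:slash]

print(trim_to_subdomain("https://careers.seerinteractive.com/jobs/123"))
# → https://careers.seerinteractive.com
```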
De-duplicate the list, and upload it into Screaming Frog in List Mode: you can paste the list of domains manually or upload a CSV.
Enter the root domain URL into tools that help you look for sites that might exist on the same IP, or into search engines designed especially to search for subdomains, like FindSubdomains. Create a free account to log in and export a list of subdomains. Then, upload the list to Screaming Frog using List Mode.
Once the spider has finished running, you’ll be able to see status codes, as well as any links on the subdomain homepages, anchor text and duplicate page titles among other things.
Screaming Frog was not originally built to crawl hundreds of thousands of pages, but thanks to some upgrades, it’s getting closer every day.
The newest version of Screaming Frog has been updated to rely on database storage for crawls. In version 11.0, Screaming Frog allowed users to opt to save all data to disk in a database rather than just keep it in RAM. This opened up the possibility of crawling very large sites for the first time.
In version 12.0, the crawler automatically saves crawls to the database. This allows them to be accessed and opened using “File > Crawls” in the top-level menu–in case you panic and wonder where the open command went!
While using database crawls helps Screaming Frog better manage larger crawls, it’s certainly not the only way to crawl a large site.
Until recently, the Screaming Frog SEO Spider might have paused or crashed when crawling a large site. Now, with database storage as the default setting, you can recover crawls and pick up where you left off. You can also access queued URLs, which may give you insight into additional parameters or rules you may want to exclude in order to crawl a large site.
In some cases, older servers may not be able to handle the default number of URL requests per second. We recommend setting a limit on the number of URLs crawled per second to be respectful of the site's server. It's also best to let a client know when you're planning to crawl their site, since they may have protections in place against unknown User Agents and may need to whitelist your IP or User Agent before you crawl. In the worst case, you could send too many requests to the server and inadvertently crash the site.
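If you're curious what that kind of rate limiting looks like under the hood, here is a minimal Python sketch of a throttle (purely illustrative; Screaming Frog's Speed settings handle this for you):

```python
import time

def throttled(urls, max_per_second=2.0):
    """Yield URLs no faster than max_per_second, as a courtesy to the server."""
    min_interval = 1.0 / max_per_second
    last = 0.0
    for url in urls:
        wait = min_interval - (time.monotonic() - last)
        if wait > 0:
            time.sleep(wait)
        last = time.monotonic()
        yield url  # in a real crawler, fetch the URL here

start = time.monotonic()
crawled = list(throttled(["/a", "/b", "/c"], max_per_second=10))
elapsed = time.monotonic() - start  # three URLs at 10/sec take at least ~0.2s
```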
To change your crawl speed, choose ‘Speed’ in the Configuration menu, and in the pop-up window, select the maximum number of threads that should run concurrently. From this menu, you can also choose the maximum number of URLs requested per second.
If you find that your crawl is resulting in a lot of server errors, go to the ‘Advanced’ tab in the Spider Configuration menu, and increase the value of the ‘Response Timeout’ and of the ‘5xx Response Retries’ to get better results.
When the Screaming Frog spider comes across a page that is password-protected, a pop-up box will appear, in which you can enter the required username and password.
To manage authentication, navigate to Configuration > Authentication.
In order to turn off authentication requests, deselect ‘Standards Based Authentication’ in the ‘Authentication’ window from the Configuration menu.
Once the spider has finished crawling, use the Bulk Export menu to export a CSV of ‘All Links’. This will provide you with all of the link locations, as well as the corresponding anchor text, directives, etc.
All inlinks can be a big report. Be mindful of this when exporting. For a large site, this export can sometimes take minutes to run.
For a quick tally of the number of links on each page, go to the ‘Internal’ tab and sort by ‘Outlinks’. Anything over 100 might need to be reviewed.
Once the spider has finished crawling, sort the ‘Internal’ tab results by ‘Status Code’. Any 404’s, 301’s or other status codes will be easily viewable.
Upon clicking on any individual URL in the crawl results, you’ll see information change in the bottom window of the program. By clicking on the ‘In Links’ tab in the bottom window, you’ll find a list of pages that are linking to the selected URL, as well as anchor text and directives used on those links. You can use this feature to identify pages where internal links need to be updated.
To export the full list of pages that include broken or redirected links, visit the Bulk Export menu. Scroll down to Response Codes, and look at the following reports:
No Response Inlinks
Redirection (3xx) Inlinks
Redirection (Meta Refresh) Inlinks
Client Error (4xx) Inlinks
Server Error (5xx) Inlinks
Reviewing all of these reports should give us an adequate picture of which internal links should be updated to ensure they point to the canonical version of the URL and efficiently distribute link equity.
After the spider is finished crawling, click on the ‘External’ tab in the top window, sort by ‘Status Code’ and you’ll easily be able to find URLs with status codes other than 200. Upon clicking on any individual URL in the crawl results and then clicking on the ‘In Links’ tab in the bottom window, you’ll find a list of pages that are pointing to the selected URL. You can use this feature to identify pages where outbound links need to be updated.
To export your full list of outbound links, click ‘External Links’ on the Bulk Export tab.
For a complete listing of all the locations and anchor text of outbound links, select ‘All Outlinks’ in the ‘Bulk Export’ menu. The All Outlinks report will include outbound links to your subdomains as well; if you want to exclude your domain, lean on the “External Links” report referenced above.
After the spider has finished crawling, select the ‘Response Codes’ tab from the main UI, and filter by Status Code. Because Screaming Frog uses Regular Expressions for search, submit the following criteria as a filter: 301|302|307. This should give you a pretty solid list of all links that came back with some sort of redirect, whether the content was permanently moved, found and redirected, or temporarily redirected due to HSTS settings (this is the likely cause of 307 redirects in Screaming Frog). Sort by ‘Status Code’, and you’ll be able to break the results down by type. Click on the ‘In Links’ tab in the bottom window to view all of the pages where the redirecting link is used.
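The same 301|302|307 filter can be applied to an exported CSV if you'd rather slice the data outside the tool. A sketch with made-up rows shaped like a ‘Response Codes’ export:

```python
import csv
import io
import re

# Rows shaped like a Screaming Frog 'Response Codes' export (illustrative data)
export = io.StringIO(
    "Address,Status Code,Redirect URL\n"
    "https://example.com/old,301,https://example.com/new\n"
    "https://example.com/ok,200,\n"
    "https://example.com/temp,307,https://example.com/secure\n"
)

redirect_filter = re.compile(r"301|302|307")  # same pattern as the UI filter
redirects = [row for row in csv.DictReader(export)
             if redirect_filter.fullmatch(row["Status Code"])]
print([row["Address"] for row in redirects])
```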
If you export directly from this tab, you will only see the data that is shown in the top window (original URL, status code, and where it redirects to).
To export the full list of pages that include redirected links, you will have to choose ‘Redirection (3xx) In Links’ in the ‘Advanced Export’ menu. This will return a CSV that includes the location of all your redirected links. To show internal redirects only, filter the ‘Destination’ column in the CSV to include only your domain.
Use a VLOOKUP between the 2 export files above to match the Source and Destination columns with the final URL location.
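In programming terms, that VLOOKUP is just a dictionary lookup keyed on the redirecting URL. A sketch with illustrative rows standing in for the two exports:

```python
# Rows standing in for the 'Redirection (3xx) In Links' export
inlinks = [
    {"Source": "https://example.com/blog", "Destination": "https://example.com/old"},
]
# Rows standing in for the 'Response Codes' tab export
chains = [
    {"Address": "https://example.com/old", "Redirect URL": "https://example.com/new"},
]

# Build the lookup table, then map each Destination to its final URL
final_by_url = {row["Address"]: row["Redirect URL"] for row in chains}
for link in inlinks:
    link["Final URL"] = final_by_url.get(link["Destination"], link["Destination"])

print(inlinks[0]["Final URL"])  # → https://example.com/new
```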
After the spider has finished crawling, go to the ‘Internal’ tab, filter by HTML, then scroll to the right to the ‘Word Count’ column. Sort the ‘Word Count’ column from low to high to find pages with low text content. You can drag and drop the ‘Word Count’ column to the left to better match the low word count values to the appropriate URLs. Click ‘Export’ in the ‘Internal’ tab if you prefer to manipulate the data in a CSV instead.
Pro Tip for E-commerce Sites:
While the word count method above will quantify the actual text on the page, there’s still no way to tell whether the text found is just product names or part of a keyword-optimized copy block. To figure out the word count of your text blocks, use ImportXML2 by @iamchrisle to scrape the text blocks on any list of pages, then count the characters from there. If xPath queries aren’t your strong suit, the xPath Helper or Xpather Chrome extensions do a pretty solid job of figuring out the xPath for you. Obviously, you can also use these scraped text blocks to begin to understand the overall word usage on the site in question, but that, my friends, is another post…
If you’ve already crawled a whole site or subfolder, simply select the page in the top window, then click on ‘Image Info’ tab in the bottom window to view all of the images that were found on that page. The images will be listed in the ‘To’ column.
Right-click on any entry in the bottom window to copy or open a URL.
Alternatively, you can also view the images on a single page by crawling just that URL. Make sure that your crawl depth is set to ‘1’ in the Spider Configuration settings, then once the page is crawled, click on the ‘Images’ tab, and you’ll see any images that the spider found.
First, you’ll want to make sure that ‘Check Images’ is selected in the Spider Configuration menu. After the spider has finished crawling, go to the ‘Images’ tab and filter by ‘Missing Alt Text’ or ‘Alt Text Over 100 Characters’. You can find the pages where any image is located by clicking on the ‘Image Info’ tab in the bottom window. The pages will be listed in the ‘From’ column.
Finally, if you prefer a CSV, use the ‘Bulk Export’ menu to export ‘All Images’ or ‘Images Missing Alt Text Inlinks’ to see the full list of images, where they are located and any associated alt text or issues with alt text.
Additionally, use the right sidebar to navigate to the Images section of the crawl; here, you can easily export a list of all images missing alt text.
Alternatively, you can use the ‘Advanced Export’ menu to export a CSV of ‘All Links’ and filter the ‘Destination’ column to show only URLs with ‘jquery’.
Not all jQuery plugins are bad for SEO. If you see that a site uses jQuery, the best practice is to make sure that the content that you want indexed is included in the page source and is served when the page is loaded, not afterward. If you are still unsure, Google the plugin for more information on how it works.
In the Spider Configuration menu, select ‘Check SWF’ before crawling, then when the crawl is finished, filter the results in the ‘Internal’ tab by ‘Flash’.
It is increasingly important to identify content that is being delivered by Flash and to suggest alternate code for that content. Chrome is in the process of deprecating Flash across the board, so this check is really about highlighting whether any critical content on a site depends on Flash.
To find pages that contain social sharing buttons, you’ll need to set a custom filter before running the spider. To set a custom filter, go into the Configuration menu and click ‘Custom’. From there, enter any snippet of code from the page source.
In the example above, I wanted to find pages that contain a Facebook ‘like’ button, so I created a filter for facebook.com/plugins/like.php.
After the spider has finished crawling, go to the ‘Page Titles’ tab and filter by ‘Over 60 Characters’ to see the page titles that are too long. You can do the same in the ‘Meta Description’ tab or in the ‘URI’ tab.
After the spider has finished crawling, go to the ‘URI’ tab, then filter by ‘Underscores’, ‘Uppercase’ or ‘Non ASCII Characters’ to view URLs that could potentially be rewritten to a more standard structure. Filter by ‘Duplicate’ and you’ll see all pages that have multiple URL versions. Filter by ‘Parameters’ and you’ll see URLs that include parameters.
Additionally, if you go to the ‘Internal’ tab, filter by ‘HTML’ and scroll to the ‘Hash’ column on the far right, you’ll see a unique series of letters and numbers for every page. If you click ‘Export’, you can use conditional formatting in Excel to highlight the duplicated values in this column, ultimately showing you pages that are identical and need to be addressed.
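The duplicate-hash check can also be done in a few lines of Python if you'd rather skip the conditional formatting. A sketch with made-up (URL, Hash) pairs:

```python
from collections import defaultdict

# Illustrative (URL, Hash) pairs from an 'Internal' tab export
rows = [
    ("https://example.com/a", "f3a1"),
    ("https://example.com/b", "9c2e"),
    ("https://example.com/a-copy", "f3a1"),
]

# Group URLs by content hash; any hash with more than one URL is a duplicate
pages_by_hash = defaultdict(list)
for url, page_hash in rows:
    pages_by_hash[page_hash].append(url)

duplicates = {h: urls for h, urls in pages_by_hash.items() if len(urls) > 1}
print(duplicates)
```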
After the spider has finished crawling, click on the ‘Directives’ tab. To see the type of directive, simply scroll to the right to see which columns are filled, or use the filter to find any of the following tags:
By default, Screaming Frog will comply with robots.txt. As a priority, it will follow directives made specifically for the Screaming Frog user agent. If there are no directives specifically for the Screaming Frog user agent, then the spider will follow any directives for Googlebot, and if there are no specific directives for Googlebot, the spider will follow global directives for all user agents. The spider will only follow one set of directives, so if there are rules set specifically for Screaming Frog it will only follow those rules, and not the rules for Googlebot or any global rules. If you wish to block certain parts of the site from the spider, use the regular robots.txt syntax with the user agent ‘Screaming Frog SEO Spider’. If you wish to ignore robots.txt, simply select that option in the Spider Configuration settings.
To find every page that contains Schema markup or any other microdata, you need to use custom filters. Simply click on ‘Custom’ → ‘Search’ in the Configuration Menu and enter the footprint that you are looking for.
To find every page that contains Schema markup, simply add the following snippet of code to a custom filter: itemtype=http://schema.org
To find a specific type of markup, you’ll have to be more specific. For example, using a custom filter for ‹span itemprop=”ratingValue”› will get you all of the pages that contain Schema markup for ratings.
As of Screaming Frog 11.0, the SEO spider also offers us the ability to crawl, extract, and validate structured data directly from the crawl. Validate any JSON-LD, Microdata, or RDFa structured data against the guidelines from Schema.org and specifications from Google in real-time as you crawl. To access the structured data validation tools, select the options under “Config > Spider > Advanced.”
There is now a Structured Data tab within the main interface that will allow you to toggle between pages that contain structured data, that are missing structured data, and that may have validation errors or warnings:
You can also bulk export issues with structured data by visiting “Reports > Structured Data > Validation Errors & Warnings.”
After the spider has finished crawling your site, click on the ‘Sitemaps’ menu and select ‘XML Sitemap’.
Once you have opened the XML sitemap configuration settings, you are able to include or exclude pages by response codes, last modified, priority, change frequency, images etc. By default, Screaming Frog only includes 2xx URLs but it’s a good rule of thumb to always double-check.
Ideally, your XML sitemap should only include a 200 status, single, preferred (canonical) version of each URL, without parameters or other duplicating factors. Once any changes have been made, hit OK. The XML sitemap file will download to your device and allow you to edit the naming convention however you’d like.
Creating an XML Sitemap By Uploading URLs
You can also create an XML sitemap by uploading URLs from an existing file or pasting manually into Screaming Frog.
Change the ‘Mode’ from Spider to List and click on the Upload dropdown to select either option.
Hit the Start button and Screaming Frog will crawl the uploaded URLs. Once the URLs are crawled, you will follow the same process that is listed above.
You can easily download your existing XML sitemap or sitemap index to check for any errors or crawl discrepancies.
Go to the ‘Mode’ menu in Screaming Frog and select ‘List’. Then, click ‘Upload’ at the top of the screen, choose either Download Sitemap or Download Sitemap Index, enter the sitemap URL, and start the crawl. Once the spider has finished crawling, you’ll be able to find any redirects, 404 errors, duplicated URLs and more. You can easily export any of the errors identified.
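If you'd rather extract the URL list from a sitemap yourself before pasting it into List Mode, the standard library can parse one directly. A sketch with an illustrative sitemap snippet:

```python
import xml.etree.ElementTree as ET

# An illustrative sitemap snippet; parse it into a URL list for List Mode
sitemap = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/blog/</loc></url>
</urlset>"""

# Sitemap files use the sitemaps.org namespace, so register it for the query
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
urls = [loc.text for loc in ET.fromstring(sitemap).findall(".//sm:loc", ns)]
print(urls)
```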
Identifying Missing Pages within XML Sitemap
You can configure your crawl settings to discover and compare the URLs within your XML sitemaps to the URLs within your site crawl.
Go to ‘Configuration’ -> ‘Spider’ in the main navigation; at the bottom, there are a few options for XML sitemaps: auto-discover XML sitemaps through your robots.txt file, or manually enter the XML sitemap link into the box. *Important note: if your robots.txt file does not contain proper destination links to all of the XML sitemaps you want crawled, you should enter them manually.
Once you’ve updated your XML Sitemap crawl settings, go to ‘Crawl Analysis’ in the navigation then click ‘Configure’ and ensure the Sitemaps button is ticked. You’ll want to run your full site crawl first, then navigate back to ‘Crawl Analysis’ and hit Start.
Once the Crawl Analysis is complete, you’ll be able to see any crawl discrepancies, such as URLs that were detected within the full site crawl that are missing from the XML sitemap.
Wondering why certain pages aren’t being indexed? First, make sure that they weren’t accidentally put into the robots.txt or tagged as noindex. Next, you’ll want to make sure that spiders can reach the pages by checking your internal links. A page that is not internally linked somewhere on your site is often referred to as an Orphaned Page.
In order to identify any orphaned pages, complete the following steps:
Go to ‘Configuration’ -> ‘Spider’ in the main navigation; at the bottom, there are a few options for XML sitemaps: auto-discover XML sitemaps through your robots.txt file, or manually enter the XML sitemap link into the box. *Important note: if your robots.txt file does not contain proper destination links to all of the XML sitemaps you want crawled, you should enter them manually.
Go to ‘Configuration → API Access’ → ‘Google Analytics’ – using the API, you can pull in analytics data for a specific account and view. To find orphan pages from organic search, make sure to segment by ‘Organic Traffic’.
You can also go to General → ‘Crawl New URLs Discovered In Google Analytics’ if you would like the URLs discovered in GA to be included within your full site crawl. If this is not enabled, you will only be able to view any new URLs pulled in from GA within the Orphaned Pages report.
Go to ‘Configuration → API Access’ → ‘Google Search Console’ – using the API you can pull in GSC data for a specific account and view. To find orphan pages you can look for URLs receiving clicks and impressions that are not included in your crawl.
You can also go to General → ‘Crawl New URLs Discovered In Google Search Console’ if you would like the URLs discovered in GSC to be included within your full site crawl. If this is not enabled, you will only be able to view any new URLs pulled in from GSC within the Orphaned Pages report.
Crawl the entire website. Once the crawl is completed, go to ‘Crawl Analysis –> Start’ and wait for it to finish.
View orphaned URLs within each of the tabs or bulk export all orphaned URLs by going to Reports → Orphan Pages
If you do not have access to Google Analytics or GSC you can export the list of internal URLs as a .CSV file, using the ‘HTML’ filter in the ‘Internal’ tab.
Open up the CSV file, and in a second sheet, paste the list of URLs that aren’t being indexed or aren’t ranking well. Use a VLOOKUP to see if the URLs in your list on the second sheet were found in the crawl.
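The same comparison can be done as a set difference in Python. A sketch with illustrative URL sets standing in for the two lists:

```python
# URLs found by the site crawl (the 'Internal' tab HTML export)
crawled = {
    "https://example.com/",
    "https://example.com/blog/",
}

# URLs you expect to be indexed or ranking (your second sheet)
should_rank = {
    "https://example.com/blog/",
    "https://example.com/landing-page/",  # never linked internally
}

# Anything expected but not crawled is a candidate orphan page
orphans = sorted(should_rank - crawled)
print(orphans)
```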
@ipullrank has an excellent Whiteboard Friday on this topic, but the general idea is that you can use Screaming Frog to check whether or not old URLs are being redirected by using the ‘List’ mode to check status codes. If the old URLs are throwing 404’s, then you’ll know which URLs still need to be redirected.
First, you’ll need to identify the footprint of the malware or the spam. Next, in the Configuration menu, click on ‘Custom’ → ‘Search’ and enter the footprint that you are looking for.
You can enter up to 10 different footprints per crawl. Finally, press OK and proceed with crawling the site or list of pages.
When the spider has finished crawling, select the ‘Custom’ tab in the top window to view all of the pages that contain your footprint. If you entered more than one custom filter, you can view each one by changing the filter on the results.
So, you’ve harvested a bunch of URLs, but you need more information about them? Set your mode to ‘List’, then upload your list of URLs in .txt or .csv format. After the spider is done, you’ll be able to see status codes, outbound links, word counts, and of course, meta data for each page in your list.
First, you’ll need to identify the footprint. Next, in the Configuration menu, click on ‘Custom’ → ‘Search’ or ‘Extraction’ and enter the footprint that you are looking for.
You can enter up to 10 different footprints per crawl. Finally, press OK and proceed with crawling the site or list of pages. In the example below, I wanted to find all of the pages that say ‘Please Call’ in the pricing section, so I found and copied the HTML code from the page source.
When the spider has finished crawling, select the ‘Custom’ tab in the top window to view all of the pages that contain your footprint. If you entered more than one custom filter, you can view each one by changing the filter on the results.
Below are some additional common footprints you can scrape from websites that may be useful for your SEO audits:
youtube.com/embed/|youtu.be|<video|player.vimeo.com/video/|wistia.(com|net)/embed|sproutvideo.com/embed/|view.vzaar.com|dailymotion.com/embed/|players.brightcove.net/|play.vidyard.com/|kaltura.com/(p|kwidget)/ – Find pages containing video content
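You can verify that a footprint like the video pattern above behaves as expected before dropping it into a custom search. A quick Python check (the sample HTML is illustrative):

```python
import re

# The video footprint from above, used as a Screaming Frog custom search
video_pattern = re.compile(
    r"youtube.com/embed/|youtu.be|<video|player.vimeo.com/video/"
    r"|wistia.(com|net)/embed|sproutvideo.com/embed/|view.vzaar.com"
    r"|dailymotion.com/embed/|players.brightcove.net/|play.vidyard.com/"
    r"|kaltura.com/(p|kwidget)/"
)

html = '<iframe src="https://www.youtube.com/embed/dQw4w9WgXcQ"></iframe>'
print(bool(video_pattern.search(html)))  # → True
```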
If you are pulling product data from a client site, you could save yourself some time by asking the client to pull the data directly from their database. The method above is meant for sites that you don’t have direct access to.
To identify URLs with session ids or other parameters, simply crawl your site with the default settings. When the spider is finished, click on the ‘URI’ tab and filter to ‘Parameters’ to view all of the URLs that include parameters.
To remove parameters from being shown for the URLs that you crawl, select ‘URL Rewriting’ in the configuration menu, then in the ‘Remove Parameters’ tab, click ‘Add’ to add any parameters that you want removed from the URLs, and press ‘OK.’ You’ll have to run the spider again with these settings in order for the rewriting to occur.
To rewrite any URL that you crawl, select ‘URL Rewriting’ in the Configuration menu, then in the ‘Regex Replace’ tab, click ‘Add’ to add the RegEx for what you want to replace.
Once you’ve added all of the desired rules, you can test your rules in the ‘Test’ tab by entering a test URL in the space labeled ‘URL before rewriting’. The ‘URL after rewriting’ will be updated automatically according to your rules.
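A Regex Replace rule is just a regular-expression substitution, so you can prototype one in Python before adding it to Screaming Frog (the utm_source rule here is a hypothetical example):

```python
import re

# Hypothetical Regex Replace rule: strip a tracking parameter out of URLs
pattern = re.compile(r"[?&]utm_source=[^&]*")

def rewrite(url: str) -> str:
    """Apply the substitution, like the 'URL after rewriting' preview."""
    return pattern.sub("", url)

print(rewrite("https://example.com/page?utm_source=newsletter"))
# → https://example.com/page
```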
If you wish to set a rule that all URLs are returned in lowercase, simply select ‘Lowercase discovered URLs’ in the ‘Options’ tab. This will remove any duplication by capitalized URLs in the crawl.
Remember that you’ll have to actually run the spider with these settings in order for the URL rewriting to occur.
Generally speaking, competitors will try to spread link popularity and drive traffic to their most valuable pages by linking to them internally. Any SEO-minded competitor will probably also link to important pages from their company blog. Find your competitor’s prized pages by crawling their site, then sorting the ‘Internal’ tab by the ‘Inlinks’ column from highest to lowest, to see which pages have the most internal links.
To view pages linked from your competitor’s blog, deselect ‘Check links outside folder’ in the Spider Configuration menu and crawl the blog folder/subdomain. Then, in the ‘External’ tab, filter your results using a search for the URL of the main domain. Scroll to the far right and sort the list by the ‘Inlinks’ column to see which pages are linked most often.
Drag and drop columns to the left or right to improve your view of the data.
After the spider has finished running, look at the ‘Meta Keywords’ tab to see any meta keywords found for each page. Sort by the ‘Meta Keyword 1’ column to alphabetize the list and visually separate the blank entries, or simply export the whole list.
If you’ve scraped or otherwise come up with a list of URLs that needs to be vetted, you can upload and crawl them in ‘List’ mode to gather more information about the pages. When the spider is finished crawling, check for status codes in the ‘Response Codes’ tab, and review outbound links, link types, anchor text and nofollow directives in the ‘Outlinks’ tab in the bottom window. This will give you an idea of what kinds of sites those pages link to and how. To review the ‘Outlinks’ tab, be sure that your URL of interest is selected in the top window.
Of course you’ll want to use a custom filter to determine whether or not those pages are linking to you already.
You can also export the full list of out links by clicking on ‘All Outlinks’ in the ‘Bulk Export Menu’. This will not only provide you with the links going to external sites, but it will also show all internal links on the individual pages in your list.
So, you found a site that you would like a link from? Use Screaming Frog to find broken links on the desired page or on the site as a whole, then contact the site owner, suggesting your site as a replacement for the broken link where applicable, or just offer the broken link as a token of good will.
Upload your list of backlinks and run the spider in ‘List’ mode. Then, export the full list of outbound links by clicking on ‘All Out Links’ in the ‘Advanced Export Menu’. This will provide you with the URLs and anchor text/alt text for all links on those pages. You can then use a filter on the ‘Destination’ column of the CSV to determine if your site is linked and what anchor text/alt text is included.
Set a custom filter that contains your root domain URL, then upload your list of backlinks and run the spider in ‘List’ mode. When the spider has finished crawling, select the ‘Custom’ tab to view all of the pages that are still linking to you.
Did you know that by right-clicking on any URL in the top window of your results, you could do any of the following?
Copy or open the URL
Re-crawl the URL or remove it from your crawl
Export URL Info, In Links, Out Links, or Image Info for that page
Check indexation of the page in Google, Bing and Yahoo
Check backlinks of the page in Majestic, OSE, Ahrefs and Blekko
Look at the cached version/cache date of the page
See older versions of the page
Validate the HTML of the page
Open robots.txt for the domain where the page is located
Search for other domains on the same IP
Likewise, in the bottom window, with a right-click, you can:
Copy or open the URL in the ‘To’ or ‘From’ column for the selected row
SERP Mode allows you to preview SERP snippets by device to visually show how your meta data will appear in search results.
Upload URLs, titles and meta descriptions into Screaming Frog using a .CSV or Excel document
If you already ran a crawl for your site you can export URLs by going to ‘Reports → SERP Summary’. This will easily format the URLs and meta you want to reupload and edit.
Mode → SERP → Upload File
Edit the meta data within Screaming Frog
Bulk export updated meta data to send directly to developers to update
Change rendering preferences depending on what you’re looking for. You can adjust the timeout and the window size (mobile, tablet, desktop, etc.).
Hit OK and crawl the website
Within the bottom navigation, click on the Rendered Page tab to view how the page is being rendered. If your page is not being rendered properly, check for blocked resources or extend the timeout limit within the configuration settings. If neither option helps resolve how your page is rendering, there may be a larger issue to uncover.
You can view and bulk export any blocked resources that may be impacting crawling and rendering of your website by going to ‘Bulk Export’ → ‘Response Codes’
View Original HTML and Rendered HTML
If you’d like to compare the raw HTML and rendered HTML to identify any discrepancies or ensure important content is located within the DOM, go to ‘Configuration’ → ’Spider’ –> ‘Advanced’ and hit store HTML & store rendered HTML.
Within the bottom window, you will be able to see the raw and rendered HTML. This can help identify issues with how your content is being rendered and viewed by crawlers.