How To Implement Facebook Pixel Using Google Tag Manager

Using Google Tag Manager (GTM) to implement the Facebook Pixel lets you consistently and easily track conversions and events from your website, allowing you to prove the success (or failure) of your advertising while building valuable data inside of Facebook that can be used for future targeting.

What Is Facebook Pixel?

Facebook Pixel is an analytics tool that helps you measure the effectiveness of your Facebook marketing campaigns by understanding what users do on your site. The Facebook Pixel is just pixel code that you generate within your Facebook Ads Manager account. The default pixel will help you in three main areas:

  • Conversion tracking: The Facebook Pixel ties conversions on your site back to the specific Facebook ads users clicked through.
  • Optimization: After installation, you can set up automatic bidding to target people who are more likely to convert.
  • Remarketing: Create custom audiences based on groups of users who came from certain ads, then remarket to them.

Event Tracking

Tying conversions back to your Facebook ads is not the only thing the Facebook Pixel can do. You can also track events – actions taken on your site – with the Facebook Pixel. To do this, you need to generate extra pixel code within Ads Manager. You can choose from a list of standard events Facebook provides for you, or create your own custom events based on URLs. For a complete description of Facebook Pixel events, check out this document written by Facebook.

This sounds pretty great, right? Let’s talk a bit about why you should use the Facebook Pixel before we begin our walk-through of how to get the Facebook Pixel code, and how we can use GTM to implement it.

Why Should You Use Facebook Pixel?

Facebook is a major social media platform that can drive a great deal of traffic to your website. By using the Facebook Pixel, you will collect insightful data regarding actions and conversions that result from Facebook traffic.

Just like in Google Analytics, we can use that data to:

  • Better justify our ad spend
  • Better target future audiences

Google Analytics is great, and through audiences and remarketing, we can target people around the web.
But it’s not the only platform. There will certainly be crossover, but Facebook can provide you with:

  • A completely new audience
  • Different experiences for promotions and ads that may perform differently

One of the great things about Facebook Pixel is that it can be implemented on your site in many different ways, especially with Google Tag Manager!

Using Google Tag Manager

Default Code

Our first step is to generate our default Facebook Pixel in the Facebook Events Manager interface. A popup window will appear when you create your pixel. You will be given a few options for how to implement your Facebook Pixel, and we want to choose the first option: Use an integration or tag manager.

Modal popup window displaying options for how to implement the Facebook Pixel

Choose Google Tag Manager as your solution, and then you will be asked if you want to do a quick install, or manual install. You can choose to do a quick install, but for the purpose of this post, we will do a manual install. The next window will present us with instructions for implementing the base pixel code and any event code we wish to use. Copy the base code for use in GTM.

Facebook Pixel base code in the configuration popup

Over in GTM, we will need to use a Custom HTML tag for our pixel code. Why are we using a Custom HTML tag, you ask? Great question! While GTM has many built-in tags, there is not yet one for the Facebook Pixel, so we must resort to the Custom HTML tag.

Paste the base pixel code into the tag and name it CU – Default Facebook Pixel. For our trigger, we will use the All Pages trigger, because we want our Facebook Pixel present on every page. This lets us know which pages users from Facebook viewed on our site. When you’re finished with the tag, it should look like this:

Finished Default Facebook Pixel Tag with All Pages Trigger
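
If you want to sanity-check what you pasted, here is a sketch of what the base code inside the Custom HTML tag typically looks like (always use the exact snippet copied from Events Manager; YOUR_PIXEL_ID below is a placeholder for your own pixel ID):

 <!-- Facebook Pixel base code (sketch) -->
 <script>
 !function(f,b,e,v,n,t,s){if(f.fbq)return;n=f.fbq=function(){n.callMethod?
 n.callMethod.apply(n,arguments):n.queue.push(arguments)};if(!f._fbq)f._fbq=n;
 n.push=n;n.loaded=!0;n.version='2.0';n.queue=[];t=b.createElement(e);t.async=!0;
 t.src=v;s=b.getElementsByTagName(e)[0];s.parentNode.insertBefore(t,s)}(window,
 document,'script','https://connect.facebook.net/en_US/fbevents.js');
 fbq('init', 'YOUR_PIXEL_ID'); // placeholder for your own pixel ID
 fbq('track', 'PageView');
 </script>
 <noscript><img height="1" width="1" style="display:none"
 src="https://www.facebook.com/tr?id=YOUR_PIXEL_ID&ev=PageView&noscript=1"/></noscript>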

Event Code

For this next bit we need to go back to the popup we left open in the Facebook Events Manager interface and get the pixel code for the event we want to track. You will have a few standard events to choose from, plus custom events that you can create on your own. The recommended add to cart event code comes with the value and currency parameters, while the advanced version adds descriptive parameters such as content_ids and content_name (i.e. product ID and product name). You can choose any of the three versions of this code and customize it by adding extra parameters. Check out this document to get a detailed look at the different parameters available for your use. Let’s say we want to track add to cart events on our site. The pixel code for an add to cart event will look something like this:

Add to cart event in the Facebook Events Manager interface

In GTM, create a Custom HTML tag, and – you guessed it – paste the pixel code into the tag. Let’s name the tag CU – Facebook Pixel Event – Add To Cart and provide it some extra parameters.

*Note: you can use variables to dynamically pull in information regarding parameters like price, content_name, etc. as needed.

Facebook Pixel AddToCart Event Custom HTML Tag
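
As a minimal sketch – assuming you have already created data layer variables for the product details (the {{DLV - ...}} names below are placeholders, not built-in variables) – the contents of this Custom HTML tag might look something like this:

 <!-- Facebook Pixel AddToCart event (sketch; variable names are placeholders) -->
 <script>
 fbq('track', 'AddToCart', {
   value: {{DLV - Product Price}},        // e.g. 29.99, pulled from the data layer
   currency: 'USD',
   content_name: '{{DLV - Product Name}}',
   content_ids: ['{{DLV - Product ID}}'],
   content_type: 'product'
 });
 </script>

Because this tag calls fbq(), it assumes the base pixel tag has already fired on the page – which is exactly what the All Pages trigger on CU – Default Facebook Pixel takes care of.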

Since we are tracking a specific event, we want this tag to fire when the user clicks the “add to cart” button. We need a trigger that tells GTM to execute the contents of this Custom HTML tag only when a user clicks an “add to cart” button. The good news is that you can reuse the trigger you are already using for Google Analytics. Reusing triggers ensures that events are defined the same way in both platforms. Here is an example of what your trigger configuration should look like:

Add To Cart Link Click Trigger Configuration For Add To Cart Event

Testing

Our default pixel tag and pixel event tag have been configured and saved; now we need to take them for a test run. To test out our two new tags, we can click the Preview button in the top right corner of the GTM interface.

Google Tag Manager Preview Button 

We can now go to our website and check to be sure the CU – Default Facebook Pixel tag is firing on every page, and that our add to cart buttons fire the CU – Facebook Pixel Event – Add To Cart tag. Once we see that our tags are firing, we can publish our container and start collecting data.

You can also use the Facebook Pixel Helper extension for Google Chrome to debug any issues with your pixel. The extension will tell you if there is a Facebook Pixel present on the page, and provide information such as the Facebook Pixel ID. When you navigate to a page on your website, you can click on the Facebook Pixel Helper extension, and you should see this:

Facebook Pixel Helper Chrome Extension Interface

If you see your Facebook Pixel ID, and pageview data, then you know your Facebook Pixel is present on the current page. Navigate to a few others to make sure it is present everywhere. To test your add to cart event, you will need to add an item to your cart, and you should see your pixel event tag fire in the GTM Debug panel, as well as in the Facebook Pixel Helper extension.

Final Thoughts

Facebook Pixel is an analytics tool that helps measure the effectiveness of your Facebook campaigns by understanding what users do on your site. You will not see this data in Google Analytics, but rather in your Facebook Ads Manager account. Using the Facebook Pixel gives you the opportunity to create custom audiences based on users who came from specific ads, set up automatic bidding to target those who are more likely to convert, and tie conversions back to your Facebook ads. Using GTM to implement your Facebook Pixel is an easy way to do this, and you can reuse triggers you have already created.


Maximize Your AdWords Budget for Small Businesses

Most small businesses have only a small chunk of change for their advertising, which means you cannot afford to waste your money in ineffective areas. This post will help anyone with a small advertising spend in AdWords narrow their target audience in Google search results. Following the three sections will make sure your ad is there at the right time and place for the right customers.

Setter-and-forgetters can skip this post. This is for those that want to take actionable steps to up their advertising game and maximize their precious, hard-earned moola.

There are certain features you want to avoid while running Google AdWords, unless you want Google making the decisions for you. Many of the suggestions in this article can only be implemented when not using these settings. If you want to take that next step with your AdWords account, it’s best to steer clear of the following.

AdWords Express Account

Google offers AdWords Express for quick account setups for those who do not have much time on their hands. You give Google three lines of information about your business and an ad, and they handle the rest. Sounds appealing, no?

This is the ultimate no-no if you want to have control over your marketing strategy. Although you get up and running fast, you have no say in the keywords you will be advertising on and cannot use useful features like ad scheduling and location targeting.

If you already have an AdWords Express account and would like to upgrade, you will need to get in contact with Google. Here is a link to get you started.

Search Network with Display Select Campaign Type

This campaign type works similarly to “Search Network only”, but it takes your search ads and places them on web pages relevant to your keywords across the internet. There are two issues with this campaign type:

  1. You have no way to limit your budget between Display and Search ads, because budget is controlled at the campaign level. Display ads typically use up their budget much faster than Search ads and bring in less relevant traffic.
  2. Reporting and analyzing the data is more difficult with the overlap. This is key in making smart marketing decisions.

To keep it straight, use the “Search Network only” and “Display Network only” campaign types, instead of using “Search Network with Display Select”.

Standard Campaign Subtype

After separating your Display and Search campaigns, you will want to be sure that you are not using the “Standard” campaign subtype, and instead are using the “All Features” subtype. As the name suggests, “All Features” allows you to use all the tools that AdWords makes available. “Standard” is meant for easier setup, but does not give you the ability to maximize your account’s potential.

Google AdWords is an endless wall of advertising space. You have so many places that you could hang your ad on the wall, but every person who inquires about it costs you money, even if what they were looking for is in a different city or they are using a homograph of your service to describe what they want.

Luckily there are ways to curtail these inquiries and make sure you only reach serious buyers.

Only the most serious of buyers

Ad Schedule

Within each campaign setting, you can create bid adjustments for different times of the day and days of the week. If your business only runs during specific times, then you can use this feature to run your ads only during these times. You do this by decreasing your bid adjustment by 100% during the times you do not want to show your ad. This could be helpful if you are a B2B company that only wants to run your ads during regular business hours, when you have people near the phone, ready to pick up for interested customers.

Alternatively, this feature can also be beneficial during busier times of the day when you want your ads to be more aggressive. You can bid higher during the times and days of the week that your business is busiest. Just be careful: if you have other bid adjustments stacked on top of each other, they will be multiplied together, up to a maximum of +900%.
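
For example, with hypothetical numbers: a +50% busy-hours schedule adjustment stacked on a +30% mobile device adjustment multiplies out to 1.5 × 1.3 = 1.95, or a combined +95% on top of your base bid.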

Location Targeting

Many small businesses do not operate worldwide, so it does not always make sense to target someone on the other side of the globe. Similar to the ad schedule, you can select specific locations to target in the campaign settings.

Here you can target cities, regions, states, countries and more. You can even target a specific radius around a location. So if you were running a local ice cream store, you could limit your ad to only those within five miles of your store.

Device Targeting

Every good advertiser has a strong call-to-action in their ad. Whether it is to fill out a form, to make a purchase, or to sign up for a newsletter, there should be an action you want a user to make once they have clicked your ad.

The ability to complete this action can be impeded by the device the user is on. For example, a long application is unlikely to be completed on a mobile device, while a phone call for a restaurant reservation is more likely to happen on a cell phone.

We can use bid adjustments at either the campaign or ad group level to prefer either computers, mobile phones, or tablets. These bid adjustments can also be used to exclude a device type entirely, just like the ad scheduling bid adjustments.

If you are unsure which device works best for your conversions, let all the device types run and use the Segment feature to see how your campaigns/ad groups/keywords perform for each of the device types.

Long-Tail Keywords

Choosing the right keywords can make or break your AdWords account. For Search campaigns, they are the tool of communication with Google to place your ad on the expansive advertising wall. To use your AdWords budget effectively, it is best to give Google keywords that have high intent and are directly relevant to your business.

Keywords with high intent tend to be long-tail keywords. These are keywords that include additional detail about a product or service. Here is an example, where the keywords on the left of the graph are generic, high volume keywords and the ones on the right are long-tail keywords:

Long-tail keywords not only tend to convert at a higher rate, they also tend to cost less per click because they have less competition. This is how you are able to get more bang for your buck with long-tail keywords compared to high volume, short-tail keywords. Ubersuggest.io and Answer the Public are great tools to help you find long-tail keywords.

Keyword Match Types

Another way to keep out lower performing traffic is by using more specific keyword match types. Keyword match types tell Google how broadly it can extrapolate a keyword. For example, the broad match keyword coffee mug might match search queries about coffee beans or the average consumption of cups of coffee. Broad match keywords can be a great way to find new keywords for the account, but unless you have a large budget, you will feel better knowing you did not spend money on a click from someone searching for national statistics on coffee consumption.

With a limited budget it is best to stick to exact, phrase, and modified broad match types. If you stick to these three match types, you ensure that your keywords have to be contained somewhere within the user’s search query.

Don’t forget to add negative keywords whenever you find search terms that you don’t want to spend your budget on!

Now for the neverending story of AdWords: testing. The best way to improve your account is to continually test ad copy, landing pages, bidding strategies, and more, and then compare the results to past performance. The more you learn, the better, even if you find out the test did not work. At least you learned something.

Don’t let your account get stuck in a performance rut

There are many important metrics that you will want to improve when trying to maximize your budget’s effectiveness, like conversion rate, clickthrough rate, and cost-per-click, but one you might not be aware of is your keywords’ Quality Score. This metric determines how much you have to bid for specific positions on the page. An advertiser with a Quality Score of 10 would have to bid 1/10th the amount of an advertiser with a Quality Score of 1 for the same ad position. Every point you can boost your Quality Score makes you that much more competitive and saves you some extra dough for more clicks.

You can increase your Quality Score by improving your ad copy’s relevance to the keywords you are targeting, increasing the expected clickthrough rate of your ads, and improving the landing page experience. Simply put, you can increase it by experimenting with different ad copy and by testing new landing pages.

Whatever you are testing within the account, always try to keep all variables within the experiment constant besides the variable you are testing. So for example, if you are testing ad copy, try to only test either a Headline or the Description Line, never both at the same time. By doing this you can attribute the differences in the performance to a specific change and take action from what you have learned from it.

If you are spending money with AdWords, you might as well take full advantage of its capabilities. Setting up the bare minimum is better than nothing, but you are not going to make your account more effective by letting the account sit there unchanged month over month. Hopefully, you can take a few of the suggestions from here, apply them, and take your new, hard-earned conversions and learned customer behavior to the bank!


Launch Experiments Faster With Google Optimize Custom Objectives

Google Optimize just added a new, highly-requested feature called Custom Objectives that will make creating and launching experiments much easier, capitalizing on existing Analytics implementations and reducing the learning curve for new testers. For those totally new to Google Optimize, we recommend reading this introductory guide first – it’s the crash course you’ll need for this newly announced feature to make sense.

What Are Objectives In Google Optimize?

Google refers to Objectives as “the website functionality you wish to optimize.” Put another way, what actions do you want users to take on your website? That likely depends on your business.

  • Ecommerce companies typically want users to buy things.
  • Lead generation companies typically want users to submit forms.
  • Publishers typically want users to view content and subscribe.

All of those are common conversions and engagement metrics that marketers already track in Google Analytics. This post does a great job discussing different types of goals. For the previous examples, our Google Analytics goals (and Optimize Objectives) might look like this.

  • Ecommerce = Transactions and revenue
  • Lead generation = Page destination goal based on submitting a contact form
  • Publishers = Maximize pageviews or goal based on email subscription

Until today, system objectives included only engagement metrics (pageviews, session duration, bounces), ecommerce tracking (transactions, revenue), and Goals imported from a connected View in Google Analytics. Here is a screenshot.

This sample experiment is connected to a Google Analytics View with two Goals: Contact (#2) and Checkout (#3). If the website also uses events to track scrolling, email subscriptions, or video views, each would require a corresponding Goal set up in Google Analytics before it could be used as an Objective in Google Optimize. Not insurmountable, but it clearly requires extra steps, potentially for each experiment.

But that’s changing.

What Are Custom Objectives?

Custom Objectives are created in the Google Optimize user interface, which means neither a Google Analytics goal nor additional tracking code is required. These click-and-go Objectives can be configured in seconds.

The two options are designed around Events in Google Analytics and pageviews. Event-based Objectives match on Event Category, Action, Label, and Value. The matching rules will be familiar to Google Analytics users, including equals, regex, and starts with. Adding an Event-based Objective is easy – it automatically uses the linked Google Analytics View to auto-complete as you type, so you can make sure you’re choosing the right events.

The second option is pageviews, and includes matching rules similar to the other option. Think of this as an on-the-fly destination goal.

Pageview Objectives are incredibly helpful because not every test is designed to convert or generate revenue. Often a test is to help users progress through the conversion funnel, moving from navigational content to informational content to transactional content.

Tell Me More

Let’s talk through an example.

LunaMetrics offers training and consulting in conversion optimization. It’s unrealistic to assume that an experiment on our blog would immediately lead to training registrations or contact form submissions.

A better Objective for a blog experiment might be to focus on getting users to navigate to the training page or the consulting page. On those pages, we might craft another experiment to increase the likelihood that users engage with the training content or add a training seat to their cart.

In that scenario, historically, we would have had to create several new destination goals to use as Objectives. Not any longer. Now we have Custom Objectives.

Anything Else?

You bet! In addition to allowing you to target non-goals inside of Google Optimize, they’ve also added a new way to count objectives. Goals are session-based, meaning they ask whether the specific event, pageview, or threshold occurred at least once in the session. Did the person complete “at least one purchase,” or did they download “at least one PDF?” We just care that it happens at least once; then we count that session as converted.

By opening up the experiments to focus on Events or Pageviews, they’ve also enabled more granular reporting. When you choose a Custom Objective, you also will choose if you want it to count Once per session or Many per session.

Once per session will work just like Goals, and will typically be used for larger goals like completing a lead form.

Many per session doesn’t just open the door for more opportunity, it takes the door off the hinges. You’ll now be able to optimize for wholesale site changes that can affect users on many pages and encourage consistent changes to behavior. Need some ideas?

Increase Content Engagement

  1. Add scroll tracking to your site through our scroll tracking GTM recipe or the new trigger options.
  2. Create an experiment designed to increase content consumption. Target the experiment to your blog and create variations with different elements removed or rearranged.
  3. See which variation encouraged the most engagement by setting your Objective to the events that indicated a user scrolled to the 75% point, counting Many Per Session.

Increase File Downloads

  1. Add file download tracking to Google Tag Manager.
  2. Create an experiment designed to encourage more clicks on download links. Make variations sitewide to the way the Download links are styled using custom CSS changes.
  3. See which variation encouraged the most clicks on download links by setting your Objective to the download event, counting Many Per Session.

Are You Done Yet?

One last thing. In perhaps the most intuitive example of the two tools working together, as you create a Custom Objective Google Optimize will continue to update a new section called Rule Validation, giving you instant feedback for how frequently your Objective occurred over the past 7 days. It seems simple, but it’s great to see Google Optimize making use of the connection with Google Analytics.

Why Does This Change Everything?

This is a big deal that has a lot of industry experts talking. Let’s review why they matter and, more importantly, how they will make your life better.

The new Custom Objectives in Google Optimize:

  • Expedite experiment building. Marketers no longer have to create goals based on events and page destinations for an individual experiment. Using the training example from above, the optimization team at LunaMetrics can quickly create Pageview Objectives for the /training/ and /services/ pages in a matter of seconds.
  • Help to avoid goal bloat. Creating new destination-based Goals in Google Analytics for each experiment quickly fills your Views with Goals, which you cannot delete. That’s an issue. While LunaMetrics recommends creating a new View for testing and all of the Goals related to testing, Custom Objectives save you from having to do this as often.
  • Level the playing field. We have used Optimizely, another A/B testing tool, for a long time. The ability to create click-based objectives in Optimizely was a serious perk, for all of the reasons mentioned. This update to Google Optimize levels the playing field, while keeping Optimize’s huge advantage of a native integration with Google Analytics.

We are, as you might be able to tell, super excited about Custom Objectives and eager to see which features are added next. For newbies, we still highly recommend this introductory guide and this detailed explanation of installation best practices. For event-based objectives, you still need to be tracking events into Google Analytics first, so check out our Google Tag Manager Recipes to get you off the ground quickly.


8 Reasons Podcast Ads Should Be In Your 2018 Marketing Plan


Chances are if you’ve ever heard of SquareSpace, Blue Apron, or Audible, you probably first heard about them on a podcast. These brands sponsor quite a few shows, and in return have gained a decent amount of brand recognition. What makes podcasts so appealing to these brands, and why should more marketers (like you) consider this not-so-niche-anymore segment? This list, created with the help of our resident podcast enthusiast, Maddie Goodwin,  will explain why she and other listeners are more likely to buy products they’ve heard advertised on podcasts. It will also show the value for companies to advertise in this growing space.

 

1. More than 67 million people listen to podcasts every month.

Bar graph showing the change in the percentage of the total population ages 12 and up who listen to podcasts monthly from 2008 to 2017

And that’s up nearly 18% from 2016, so it’s safe to say the number of consumers listening to podcasts is on a steady growth trajectory. The number of diehard fans, those who listen to podcasts on a weekly basis, has jumped up to 42 million people from an estimated 35 million in 2016.

Bar graph showing the change in the percentage of the total population ages 12 and up who listen to podcasts weekly from 2013 to 2017

And while it may have been their first taste of this medium, these listeners aren’t just frequenting Serial or its spin-offs. Top ranked podcasts are as varied and diverse as they are addictive, with hosts ranging from nerd culture kings to past Bachelor contestants and NPR personalities. There really is a podcast out there for everyone!

 

2. The “unreachables” are listening, too.

You know the type: cord-cutters who skip YouTube ads and have ad blockers enabled. People who aren’t paying for cable anymore are paying more for ad-free subscription services, like Spotify, just to escape the barrage of unwanted advertisements. And according to Sarah Van Mosel, chief podcast sales and strategy officer of Market Enginuity, 70% of podcast fans use ad blockers online. They’re not seeing or hearing many traditional ads, but with podcasts … (see #3)

 

3. People actually listen to the ads (and the entire podcast).

Listeners are less likely to skip an ad read on a podcast because they don’t want to miss part of the show. And with the right host + product combo, host-read ads can feel like a natural part of the podcast episode. Additionally, ads are typically read at the beginning, middle, and (sometimes) end of the podcast. With 40% of people listening to entire podcasts and 45% listening to most of each episode, that gives you multiple opportunities to leave an impression.

Pie chart showing the percentage of people who listen to podcasts entirely, most of the way through, less than halfway through, or just the beginning

“What makes podcast ads stand out? A few things. The host read/native spots. And the active type of listening that makes the ads more memorable. If I take the time to listen to a podcast I am activating my brain to follow the story or learn something. It’s different than the passive, background listening that is true in music and other types of audio.”

– Maggie Taylor, Public Radio Exchange

4. It’s a platform for word-of-mouth marketing.

There’s nothing quite like a recommendation from a friend. That’s what great podcast ad reads feel like. Listeners invest so much time in podcasts that they tend to feel a connection with their hosts. These relationships mean listeners trust the hosts and value their opinions. So when all ads are: A) read by the hosts, and B) purposely infused with their personality, it feels like “genuine” word-of-mouth, not just someone selling you a product. In fact, the relationship between the hosts and the brands themselves plays a big part in this. When hosts and brands are a perfect match, everything feels natural, and listeners can tell. Chelsea said it best in her post about brand personas:

We want real, relatable content. We want brands that represent qualities that align with ourselves and our beliefs. Brands need to infuse their Social content with relatable qualities to maximize the brand power.

Just replace “Social content” with “podcast ads” and you have the same message. Pick podcasts that are relevant to your audience with hosts that mesh with your product or service for that natural, relatable feel.

 

5. The target demographic research is precise.

Emotions aside, the data is there. If you want to target college educated women between 25-44 who buy gluten-free products every month that get delivered to the home they own, you can do it.

Podcast listeners take part in regular surveys because they realize it’s a symbiotic relationship. They tell podcast production and media companies information about themselves so those companies can offer relevant brand ads. They share the kinds of things they buy, their interests, income, and much more.

 

6. There are easy ways to pay and measure ROI.

Podcast ads are typically purchased through podcast advertising networks, like Midroll and AdvertiseCast, on a CPM (cost per thousand) basis. The “thousand” in this case is measured by guaranteed user downloads, which are based on the number of subscribers a podcast has. According to AdvertiseCast, the industry average rates for a podcast with 5,000+ listeners are:

  • $15 for a 10-second Ad (CPM)
  • $18 for a 30-second Ad (CPM)
  • $25 for a 60-second Ad (CPM)
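
To make the math concrete with hypothetical numbers: a 60-second spot at a $25 CPM on a show guaranteeing 20,000 downloads per episode would cost (20,000 ÷ 1,000) × $25 = $500 for that episode.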

But how can you tell if the ads are working? The three most popular ways to track success are through unique checkout codes, vanity URLs offered in the ad, or the old “how did you hear about us?” prompt at checkout. While some of these options have flaws, such as coupon codes being spread on sites like RetailMeNot, they offer a few easy ways to gauge how your podcast ads are performing for specific shows.

 

7. They offer an elevator pitch for your product.

Podcasts are the perfect medium for companies who have a great product but need some help selling the idea of it. If your goals center on driving awareness for your product or service, or you have a proprietary product that requires a little bit of explanation, look no further. Ad spots are short, so your message should be succinct and to the point while highlighting your product’s best features.

 

8. Over 80% of listeners can recall the brand.

Midroll Media conducted a recall study of 11,123 podcast listeners and found that:

  • 80% could recall at least one brand advertised in an episode
  • 67% could recall a specific product feature or promotion
  • 51% were more likely to buy from the target brand

They also found that longer campaigns perform better than shorter ones: campaigns with ads on five or more episodes of a podcast lead to 39% more listener recall than campaigns with one ad spot. This means consumers are actually listening to the ads and not tuning out as they might with other mediums, like TV or radio.

 

I’m Interested…Now What?

 

Now that podcast advertising has piqued your interest, you need to decide if it’s right for your company. Here’s what you need to consider if you want to convince your boss or client that this is a valuable addition to your marketing strategy and get started immediately:

  • Finances: $10k is the recommended amount of ad spend a month for podcasts. Does this line up with realistic customer acquisition costs, lifetime value, and other important metrics?
  • Audience: Is your target audience there? The largest age group for podcasts overall is 25-54, but this varies by podcast (as will other demographics). Is there potential for a new audience here?
  • Marketing Mix: Will this be a game changer for your business, or just a supporting piece in your mix?

 

If you determine that advertising on podcasts would be a good fit and your client or boss gives the okay, here are a few tips for adding them into your strategy:

  • Start small. One ad on one podcast won’t cut it, but you can try running a few ads on several relevant podcasts to test the waters before going all in. When you start to see results, branch out.
  • Test. Test your ad messaging. Test different shows. Test the number of spots you run on each show (but remember that five or more ad spots run on a show leads to better recall). Use these tests to tweak and refine when necessary until you find your stride.
  • Test against radio ads. If you or your client are running radio ads, test them against your podcast ads to see how they compare. Depending on the results, you may even begin to slowly replace your radio ads.

“Know that there is some trial and error in the podcast advertising space.  Sometimes you find a podcast with what you believe is a perfect audience with a great host and the campaign doesn’t perform as expected…and then sometimes you are surprised the other way!”

– Dave Hanley, AdvertiseCast

Update: Barriers to Entry

After speaking with a few brands, podcast networks, and advertising companies, we uncovered more information about the barriers to entry involved with podcast advertising. For many subscription companies, podcast ads are not a viable option at the moment. It is either something they do not have the budget for, or the reach is too broad. As Kim Keitner at Fresh City Farms explained to us:

“I’ve noticed a prevalence of meal kit companies on podcasts and we’ve discussed it, but the podcast audience is too wide for our purposes. We’re GTA-based (Greater Toronto Area), and while there are definitely many podcast listeners in the GTA, we just don’t feel this avenue of advertising is targeted enough for a smaller, local company like ours.”

Additionally, many brands aren’t convinced that listeners are actually listening. Public Radio Exchange’s Maggie Taylor argues that “No metrics prove viewing/listening. TV ratings don’t take into account when someone leaves the room, skips the commercials, etc. Most metrics other than CPA depend on a bit of a shared myth.”

Some brands also worry about the freedom they have to hand over to podcast hosts in order for them to create natural-feeling ad reads. AdvertiseCast’s VP of Business Development, Dave Hanley, notes that advertisers can request a pre-air check to review the spot before it goes live, which helps them ease into giving hosts more creative control. He also suggests giving hosts a set of talking points structured around a personal story, the outcomes of your product or service, and a call-to-action to ensure they deliver the important points while still having the flexibility to get creative. Dave also recommends leaning on experts within the podcast advertising industry to find out what does and doesn’t work:

“These folks will not only provide you with great suggestions for content, but also recommend the shows that they believe will best represent the brand. Being a relatively new space for many marketers, many of us have resigned ourselves to the fact that education is a key part of our role as podcast advertising advocates.”



Combining Google Analytics with Other Data Sources

Google Analytics can collect quite a lot of data on its own, from user behavior, to traffic sources, to interactions, to demographics. It can also integrate with other Google products, allowing for easy and seamless combination of data.

But sometimes you’ll have another source of data about the visitors to your website, whether it’s from your customer database, a third-party survey tool, a campaign management tool, or anything else. And naturally, you’ll want to combine that with the rich interaction data available in Google Analytics. Maybe you’ll want to build user segments in GA based on survey results, or maybe you’ll want your CRM to include a customer’s original traffic source or how often they visit the site.

Despite the breadth and variety of data sources, there is a general approach that allows you to combine your Google Analytics data with almost any other data source you may have available. Specific products may have their own best practices or gotchas, but almost all of them follow a similar pattern. Setting up a connection with your favorite third-party tool (hereafter “Tool X”) requires answering the following three questions:

  1. How is the data being combined?
  2. What is the “key” that connects the data sources?
  3. How do I put the data from one system into the other?

How Is the Data Being Combined?

Visitors interact with websites in complicated ways, and as a result, Google Analytics data is complicated. The GA interface does a good job of getting you the information you need without bogging you down in details, but when you’re dealing with data connections, you need to pay more attention to the nuts and bolts than you otherwise would. Getting this right is the most important step to making sure that your combined reports are sensible and accurate.

Do you want data from Tool X in your GA reports, or data from GA in your Tool X dashboards, or both? Sometimes one system is a “source of truth” (often a Business Intelligence tool), and data flows into it. Other times, you want to take advantage of the unique reporting and analytic capabilities of both tools. Decide which direction(s) you need to pull your data.

What Is the Scope of Your Data Connection?

Google Analytics has four scopes that data can live at: User, session, hit (page and/or event), or product. A data connection will also exist at one of these four scopes. Picking the right scope is critical to making your reports work correctly.

Marketing tools like campaign management software or email remarketing will almost always want to connect at the Session level. In Google Analytics, traffic sources and campaign data are session-scoped.

User data such as a CRM or a customer database will almost always want to connect at the User level. A/B tests are usually user-scoped as well, since the same user should be served the same test on consecutive visits. Surveys may be user-scoped or session-scoped, depending on the type of questions being asked and whether it’s specific to the user’s current visit to the site.

Data about content on your site, such as from your CMS or ad-serving platform, will almost always be hit-scoped. Most tools are not hit-scoped, either because they have their own notion of a session, or because your users don’t interact with them on every page view. For example, information about form submissions should usually connect at the session level. While the form only exists on one particular page, the goal of the data connection is usually to understand the whole series of interactions leading up to the form submission, which is session-level data.

Data about products should be product-scoped. Occasionally you may want product data to be scoped to a pageview hit, but if you have Enhanced Ecommerce it’s usually better for such data to be scoped to the product on a product detail view.

If you are pulling GA data into a business intelligence tool, you may want to combine data at several scopes, such as session-scoped traffic data and user-scoped customer lifetime value. It’s usually best to do this by setting up separate connections for each scope. GA may give surprising and inaccurate results if you attempt to combine several scopes into a single report or export.

Tool X may or may not have its own concept of scope. You will have to figure that out on your own.

What Is the “Key” That Connects the Data Sources?

In database terminology, a “key” is a value in a data store that uniquely identifies a single record. If another data store holds a reference to that key, then those two data stores can be “joined,” meaning combined at the level of individual records. For example, your social security number is a “key” that uniquely identifies you. This lets other data sources like taxes, bank records, and credit scores be uniquely associated with you as an individual, rather than some other person who might have the same name or birthdate.

The easiest and best way to combine data sources is to find a key in one data source that you can import into the other data source. A unique key helps prevent a lot of problems that arise from data not matching up exactly, or different tools using different definitions of “page” or “user.” It also gives the flexibility to adjust your connection later on. As long as the key exists in both data sets, you can always pull down more data from one tool and upload it into the other.

Above, you answered the question about what direction your data is flowing. The “key” in your data must go in that same direction. So if you are pulling data from Tool X into GA, then you need to find a key value in Tool X, and bring that value into Google Analytics.

Choosing the Right Key

There are two considerations for choosing the right key: Scope, and granularity.

It’s important to make sure your key exists at the right scope. A key may be unique at one scope but not another, or it may be ambiguous at the wrong scope. For example, campaign ID is unique at the session level but not the user level; and a product SKU is ambiguous at the session level if a user purchases more than one product.

Granularity asks: What are you tracking? If you are tracking campaigns, then you want your key to refer to an individual campaign. If you are tracking A/B tests, then you want your key to refer to a specific variation with a specific experiment. Page-level data usually refers to individual pages and product-level data to individual products, but sometimes it refers to groups or categories of these.

How Do I Put the Data from One System into the Other?

This is the part that varies the most between tools and may require some coding. Google Tag Manager is an awesome help for pulling data from one location and pushing it to another.

Pulling Data out of Google Analytics

Google Analytics stores a value called the Client ID in a first-party cookie named _ga. This is the perfect value to use as a user-scoped key because it’s the same key that Google uses in its own processing… except that it’s not available in the reports! Fortunately, it’s easy enough to pull the value of this cookie from Tag Manager and store it in a Custom Dimension.
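
As a minimal sketch (the variable name and the parsing are ours, not an official snippet), a Custom JavaScript Variable in GTM that extracts the Client ID from the _ga cookie could look like this:

 // GTM Custom JavaScript Variable, e.g. "JS - GA Client ID" (hypothetical name)
 function() {
   // The _ga cookie looks like "GA1.2.1234567890.1509999999";
   // the Client ID is the last two dot-separated segments.
   var match = document.cookie.match(/(?:^|;\s*)_ga=([^;]*)/);
   return match ? match[1].split('.').slice(-2).join('.') : undefined;
 }

You would then reference that variable in the Custom Dimensions settings of your Google Analytics tag to populate the user-scoped dimension.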

Google Analytics does not provide a session ID value to the browser. A session can be uniquely identified by combining the Client ID with another piece of data, like the visit number from the old __utm cookies. You can also approximate sessions by combining the Client ID with the date. Fortunately, very few tools have the concept of a session, so this issue tends not to show up in practice.

The key for most hit-level data is the Page Path, and the key for most product-level data is the SKU. If you are using these as keys, it’s important to be aware of any transformations you may be applying, either through GTM or through Filters in GA. For example, if you are removing certain query parameters, or applying a lower-case filter, then the URL that GA reports is not the exact same one that the visitor saw. You will need to apply the same transformation in your other tool to get the data to match up.

Once you have the key from Google Analytics, what you do with it depends on the tool. If your key is a URL or SKU, it probably already exists in Tool X. If you are using the Client ID or something else, you will have to figure out a way to pass it along. Common solutions include adding it as a field in a tag in GTM, appending it as a query parameter to a URL, or inserting it into a hidden field in a form.
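
For instance, a hedged sketch of the hidden-field approach, placed in a GTM Custom HTML tag that fires on pages containing your form (the ga_client_id field name and the {{JS - GA Client ID}} variable are assumptions, not standard names):

 <script>
 // Copies the GA Client ID into a hidden input so it is submitted along with the form
 var field = document.querySelector('input[name="ga_client_id"]'); // assumed field name
 var clientId = {{JS - GA Client ID}}; // Custom JavaScript Variable sketched earlier
 if (field && clientId) {
   field.value = clientId;
 }
 </script>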

Once the key is in your other tool, then you can create a Custom Report in Google Analytics based on that key, and export the data you want. Then upload it into Tool X, and follow Tool X’s instructions for how to match data.

Putting Data into Google Analytics

First, unless your key is already a built-in dimension in Google Analytics (such as product SKU), you will need to create a custom dimension. You should already know what scope to configure it to.

Second, you need to populate that key. How you do this depends on the specific tool and how it makes that key available. Common approaches are URL query parameters for campaign data or A/B tests, or cookies for most types of customer management systems. Some systems have an API that you need to interface with using custom JavaScript.
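
As one hedged example, suppose a campaign tool appends a hypothetical xcid query parameter to your landing pages and you have reserved custom dimension slot 1 (session-scoped) as the key. With plain analytics.js the key could be attached to the pageview like this; in GTM you would accomplish the same thing with a URL query variable and the custom dimension settings on your pageview tag:

 // Sketch for analytics.js; "xcid" and dimension slot 1 are assumptions.
 // Assumes the standard ga('create', ...) snippet has already run.
 var match = window.location.search.match(/[?&]xcid=([^&]*)/);
 var campaignKey = match ? decodeURIComponent(match[1]) : null;
 if (campaignKey) {
   ga('set', 'dimension1', campaignKey); // session-scoped custom dimension holding the key
 }
 ga('send', 'pageview');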

Finally, you need to use this key to integrate the rest of your data. The easiest way to do this is with Data Import. For extra style points, this process can be automated, making the data connection appear seamless after you set it up.
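
For instance, if the key lives in custom dimension 1 and you want to attach two columns of CRM data to it, the Data Import upload file is just a CSV whose header row uses the GA field names from your Data Set schema (the slots and values below are hypothetical):

 ga:dimension1,ga:dimension2,ga:dimension3
 CRM-10492,Gold Tier,Northeast
 CRM-10533,Silver Tier,Midwest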

Related Reading

This general outline should guide how you set up connections between different platforms. If this sounds familiar, it should! We’ve outlined the specific process for several systems, and you’ll notice a lot of crossover between posts. Here’s a quick rundown of related posts:

Conclusion

Congratulations! Now your web data lives in the same tool as other data that you’ve been using. This allows for much more powerful reports, like end-to-end tracking that ties campaign impressions to conversions on your website, or connecting your traffic source data in Google Analytics with offline customer acquisition reports from your CRM.


Prep for the Holidays: Tackling the Dreaded Forecast


Forecasting. Some account managers love it, some hate it. Others believe it should be owned by an analytics team.

For the record, dear reader, I LOVE forecasting. Judge me as you see fit.

Regardless of your position on the matter, a Q4 forecast is essential to building a solid holiday plan. It sets your client’s expectations, helps you determine which tactics are affordable, and makes pitching new testing opportunities easier.

The question I get asked the most by newer AMs is where to even begin formulating a Q4 forecast. Simple: ask your client what their goal or budget is. Knowing your bounds – spend and/or revenue – is key. For example, which question does your forecast need to support?

  • Do they want to achieve 20% YoY Q4 growth?
  • Do they have $250,000 to spend in Q4?
  • Do they want 30% YoY growth AND only have a $500,000 budget?

On occasion, you won’t get a specific budget or revenue guideline from the client; they want YOU to tell them how much to spend and what revenue to expect. Sounds daunting, but it’s doable using the same process as forecasting with known bounds.

 

Top Down, Bottom Up – Where’s the Middle Ground?

When teaching new Account Managers forecasting, I like to start with the basics. There are two main schools of forecasting: Top Down and Bottom Up. Top Down means starting with the highest-level goal, e.g., 20% YoY growth, and then divvying it down into contributions by channel and the budget needed to support such growth. With Bottom Up, you start at the individual channel level, then combine the channels to reach your overarching goal.

My preferred forecasting process blends the two approaches together to avoid some of the common pitfalls associated with each method: with Bottom Up, miscalculations become magnified and forecasts can quickly become over-complex; with Top Down, you can end up with unrealistic goals for each marketing channel.

I lean more towards Bottom Up in my approach as it pulls in more historical data and channel nuance, which I find absent in Top Down approaches. This leads me to my usual method:

Hybrid Approach: Determine each channel’s performance potential based on the client goal/spend + historical data. Then layer in site trends to reverse engineer into specific monthly site and channel goals.

 

What You Need

As with everything we do, your forecast needs to be based in data. Two years of data is a good groundwork to identify trends and outliers, as well as solidify the inevitable assumptions/constants that fall into the forecasting computations. Questions I like to ask when looking at historical data include:

  • What is my current year to date (YTD) growth rate versus last year’s (LY)?
  • How much did I spend in Q4 last year and the year prior?
    – If there was a big change, how did that change channel allocations and results?
  • Were there any unexpected spikes or dips caused by tracking issues, larger site issues, etc.?
  • What metrics were consistent YoY, and are they similar to how the last three months are trending?
    – For me these are usually average order value (AOV), cost per click (CPC), and click-through rate (CTR)
  • How did peak day performance change YoY?
    – Did Black Friday sales hit earlier last year? Why?
    – When did peak offers ‘leak’ or hit the site?
    – Do I know if competitors’ offers were more or less competitive than mine LY?

This list isn’t exhaustive, but it should provide a decent starting point for knowing whether your upcoming holiday season has the potential to be strong or weak versus prior years, and whether a revenue/growth goal is attainable. For example, is your site trending at 15% growth in 2017 while your goal is 20% growth in Q4? That might be difficult. But what if Q4 2016 grew at 30% YoY while the rest of that year was pacing at 10% growth? Since Q4 2016 grew 20 percentage points faster than the rest of the year, you have a pretty good chance of hitting 20% growth in Q4 2017.

Historical data makes forecasts more informed.

Even better, some of the questions around specific peak day results may spark fresh strategies and tactics for the next 3 months.

Constant Rate Variables

Next, choose your constant variables. These should NOT be the same for all your marketing channels, and they should NOT be the site averages. This is where we bring in the individual channel nuance found in a Bottom Up approach. Using rate metrics as constants is better than using hard numbers. Additionally, if a marketing channel has an ROI or CPA goal from the client, you would fold that in here as well.

One word of caution: try not to use more than four constants per channel. Any more than that and your forecast runs the risk of not being based in reality. Perfect-world scenarios don’t help anyone.

 

Percent of Site Contribution

My favorite place to start a forecast uses each channel’s percent of site contribution for revenue. This is where we bring in elements of a Top Down forecasting approach to our process.


Using the last two years of data, how much revenue does each channel make up of the total? For example: 25% of revenue is Email, 18% is PPC, 16% is Affiliate, 15% is Organic, 13% is Direct, 8% is Display (Remarketing + Prospecting), and 5% is Social. Compare the last two years of contribution breakouts against the most recent three months to ensure they are on par with current channel trends. Expect minor variations, which could be a few percentage points up or down.

Similarly, look at how each of the months in Q4 makes up the whole: October drives 15% of revenue, while November and December drive 45% and 35%, respectively.

At this point you have all the necessary inputs to begin formulating your forecast.

 

Numbers Numbers, Math Math Math

 

Step 1: Begin by forecasting the Q4 total revenue and spend for the marketing channels using the channel % of site contribution and KPI constants.

Determining the overall revenue goals per channel.

Backing into spend per channel using the revenue goals and constants.
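
A quick illustration with made-up numbers: if the Q4 site revenue goal is $1,000,000 and Email historically contributes 25% of revenue, Email’s Q4 goal is $1,000,000 × 25% = $250,000. If PPC contributes 18% and carries a 400% ROI constant (revenue ÷ spend = 4), its revenue goal is $180,000 and its spend backs out to $180,000 ÷ 4 = $45,000.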

 

Step 2: Ensure that total spend equals your client’s set budget. Only applicable if you have a known budget bound.

Confirming calculated budget is within client’s specified budget.

 

Step 3: Break down the Q4 site revenue goal into monthly goals based on each month’s % of site revenue contribution.

Use the previous year’s monthly contributions to allocate the site revenue goals by month.

Step 4: Layer in monthly channel revenue goals using the results from Step 1 combined with monthly breakdowns and seasonality trends from historical data.


 

Step 5: Make sure the total revenue and spend equal budget/goals.

Check to ensure the totals match budget and goals.

 

Tracking Forecast Success & Accuracy

You’ll want to know how well the site, marketing channels, etc. are trending toward your forecast. I recommend implementing a monthly tracker, updated daily, so you know the budget pacing and revenue trend. The best thing is to have all your data in a simple view to make recapping the holiday season in January a breeze. Click below to download a template I use for yearly forecasts. It can be adapted to suit many client and forecast needs: full year, Q4, and so on.

Download the Forecast Tracking template

Tracking is also important because you may need to re-forecast or shift revenue projections and budget between months. Last-minute promo changes from the client, unexpected issues like the site crashing on Black Friday or the Email Service Provider getting caught in a spam trap on Cyber Monday, or increased competition in the search space driving CPCs up and organic rankings down can all force a revision. I could go on, but you get the point.

Life happens, forecasts aren’t perfect. But they can be thoughtfully put together.

My parting tip: the best time to get started on a forecast is September for holiday and November/December for the lengthy full year forecasts. So go download that template and get started!



How to Implement Page Speed Recommendations at Server Level

There are several tools that analyze page speed and show how quickly users can see and interact with content. These tools will identify areas that need improvement, and most of them analyze the same areas. There is a great article by GTmetrix.com that explains why different tools might show different results when analyzing page speed on sites.
In this blog post, I’ll review the items that are analyzed at the server level and provide examples of how you can fix or improve your site speed at the server level.

Items Fixed on Server

Leverage browser caching:

Browsers can store webpage resources on computers so that they don’t have to download these resources every time they visit a site. By leveraging browser caching, you can instruct browsers how often resources are updated on your website so that they know when these resources should be downloaded again.

Leveraging browser caching is different depending on the type of server your site is using. We will discuss how to fix these items on Apache but will provide resources for other servers:

Note: Editing your .htaccess file could break your site if not done correctly. If you are not comfortable doing this, please check with your web host first.

APACHE

To enable browser caching on Apache servers you need to edit your .htaccess file using Notepad or any basic text editor.
In this file you can set your caching parameters to tell the browser what types of files to cache or store. Below is an example of the code:

 IfModule mod_expires.c>
 ## EXPIRES CACHING ##
 ExpiresActive On
 ExpiresByType image/jpg "access plus 1 year"
 ExpiresByType image/jpeg "access plus 1 year"
 ExpiresByType image/gif "access plus 1 year"
 ExpiresByType image/png "access plus 1 year"
 ExpiresByType text/css "access plus 1 month"
 ExpiresByType application/pdf "access plus 1 month"
 ExpiresByType text/x-javascript "access plus 1 month"
 ExpiresByType application/x-shockwave-flash "access plus 1 month"
 ExpiresByType image/x-icon "access plus 1 year"
 ExpiresDefault "access plus 2 days"
 ## EXPIRES CACHING ##
 </IfModule>

You can see in the code above that images are cached for a year, CSS is cached for a month, and so on.
Depending on your website’s files, you can set different expiry times. If certain types of files are updated more frequently, you would set an earlier expiry time on them (e.g. CSS files).
Before adding expire dates, make sure that you are not setting this up for every resource on the site. For ecommerce sites, it is important not to cache resources on the shopping cart because that can lead to serious issues. There is a great article on this topic.
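
As a minimal sketch of what that exclusion could look like (assuming Apache with mod_setenvif and mod_headers enabled, and hypothetical /cart and /checkout paths that you would swap for your own), you could tell browsers not to store those pages:

 # Hypothetical example: prevent caching of cart and checkout pages
 # Adjust the URL pattern to match your own shopping cart paths
 <IfModule mod_setenvif.c>
  SetEnvIf Request_URI "^/(cart|checkout)" NO_CACHE=1
 </IfModule>
 <IfModule mod_headers.c>
  Header set Cache-Control "no-store, no-cache, must-revalidate" env=NO_CACHE
 </IfModule>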

When you are done, save the file as-is (not as a .txt file).

NGINX

How to quickly leverage browser caching on Nginx
Stack Overflow – Nginx with Plesk

IIS
Microsoft Library
Stack Overflow – FAQ

Specify a cache validator:

When you see a “Specify a cache validator” warning while analyzing your site’s speed in one of the tools, it’s because you are missing HTTP caching headers such as Last-Modified, Cache-Control, and ETag.

These headers should be included in every page’s server response, as they both validate and set the length of the cache. If the headers aren’t found, the browser will generate a new request for the resource every time, which increases the load on your server. Utilizing caching headers ensures that subsequent requests don’t have to be loaded from the server, thus saving bandwidth and improving performance for the user.

Last-Modified is a weak caching header; the current preference is to use Cache-Control headers. Below we’ve included how Google defines these headers:

  • Cache-Control defines how, and for how long the individual response can be cached by the browser and other intermediate caches.
  • ETag provides a revalidation token that is automatically sent by the browser to check if the resource has changed since the last time it was requested. To learn more, see HTTP caching.
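
Here is a minimal sketch of setting Cache-Control in .htaccess (assuming Apache with mod_headers enabled; the file types and max-age values are illustrative and should be tuned to how often your resources change):

 # Hypothetical example: long-lived Cache-Control for images, shorter for CSS/JS
 <IfModule mod_headers.c>
  <FilesMatch "\.(jpg|jpeg|png|gif|ico)$">
   Header set Cache-Control "public, max-age=31536000"
  </FilesMatch>
  <FilesMatch "\.(css|js)$">
   Header set Cache-Control "public, max-age=2592000"
  </FilesMatch>
 </IfModule>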

Below you can see an example of how to handle the ETag in your .htaccess file (this snippet removes the ETag header so that caching relies on your other cache headers).

APACHE

ETag

<IfModule mod_headers.c>
 Header unset ETag
</IfModule>
FileETag None

NGINX
https://stackoverflow.com/questions/24549377/how-to-configure-etag-on-nginx
IIS
https://technet.microsoft.com/en-us/library/ee619764(v=ws.10).aspx

Enable gzip compression / Compress components with gzip

As discussed above, there are different methods of configuring servers depending on their type and gzip is no different. Below are examples of how you can enable gzip on different servers:

APACHE
You will need to add the following lines to your .htaccess file:

# Compress HTML, CSS, JavaScript, Text, XML and fonts
 AddOutputFilterByType DEFLATE application/javascript
 AddOutputFilterByType DEFLATE application/rss+xml
 AddOutputFilterByType DEFLATE application/vnd.ms-fontobject
 AddOutputFilterByType DEFLATE application/x-font
 AddOutputFilterByType DEFLATE application/x-font-opentype
 AddOutputFilterByType DEFLATE application/x-font-otf
 AddOutputFilterByType DEFLATE application/x-font-truetype
 AddOutputFilterByType DEFLATE application/x-font-ttf
 AddOutputFilterByType DEFLATE application/x-javascript
 AddOutputFilterByType DEFLATE application/xhtml+xml
 AddOutputFilterByType DEFLATE application/xml
 AddOutputFilterByType DEFLATE font/opentype
 AddOutputFilterByType DEFLATE font/otf
 AddOutputFilterByType DEFLATE font/ttf
 AddOutputFilterByType DEFLATE image/svg+xml
 AddOutputFilterByType DEFLATE image/x-icon
 AddOutputFilterByType DEFLATE text/css
 AddOutputFilterByType DEFLATE text/html
 AddOutputFilterByType DEFLATE text/javascript
 AddOutputFilterByType DEFLATE text/plain
 AddOutputFilterByType DEFLATE text/xml
 # Remove browser bugs (only needed for really old browsers)
 BrowserMatch ^Mozilla/4 gzip-only-text/html
 BrowserMatch ^Mozilla/4\.0[678] no-gzip
 BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
 Header append Vary User-Agent

After you’ve saved your .htaccess file, test your site again on one of the speed tools to make sure it has been properly compressed.

IIS
If you are running on IIS, there are two different types of compression, static and dynamic. We recommend checking out Microsoft’s guide on how to set up gzip.

NGINX
If you are running on NGINX, simply add the following to your nginx.conf file.

 gzip on;
 gzip_disable "MSIE [1-6]\.(?!.*SV1)";
 gzip_vary on;
 gzip_types text/plain text/css text/javascript image/svg+xml image/x-icon
 application/javascript application/x-javascript;

Avoid landing page redirects

If you see a warning on avoiding landing page redirects, this means that you have more than one redirect from the given URL to the final landing page. Reducing the number of redirects from one URL to another cuts out additional round trip times to the server and wait time for users.

Google provides some examples of redirect patterns that can harm your site:

  • example.com uses responsive web design, no redirects are needed – fast and optimal!
  • example.com → m.example.com/home – multi-roundtrip penalty for mobile users.
  • example.com → www.example.com → m.example.com – very slow mobile experience.
  • http://mydomain.com/ → https://mydomain.com → https://www.mydomain.com – very slow experience.

So, make sure that your domain redirects only once, and only when needed, in order to avoid multiple RTTs to the server by:

  • Using responsive layouts to avoid sending users to a mobile version of the site on a subdomain, as shown above.
  • Identifying redirects to non-HTML resources such as images and CSS.
  • Performing the redirection server-side rather than client-side.

Depending on the landing page redirects in question, you’ll need to add different rules to your .htaccess in the case of an Apache server.
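
As one hedged illustration (assuming mod_rewrite is available and using example.com as a stand-in for your domain), the goal is to send http and non-www requests straight to the final https://www version in a single 301 rather than chaining several hops:

 # Hypothetical example: collapse http and non-www variants into one 301 to https://www.example.com
 <IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteCond %{HTTPS} off [OR]
  RewriteCond %{HTTP_HOST} !^www\. [NC]
  RewriteRule ^(.*)$ https://www.example.com/$1 [L,R=301]
 </IfModule>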

Enable Keep-Alive

Enabling Keep-Alive allows the same TCP connection to handle several requests between the web server and the browser.
Apache enables Keep-Alive connections by default; however, you can explicitly turn them on by adding the following line to your httpd.conf file.

KeepAlive On

It is important to know that you should not set up keep-alive headers through .htaccess since it can send misleading information about the server’s capabilities.
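
If you want to go a step further, here is a small httpd.conf sketch with related tuning directives (the values shown are illustrative, not recommendations for every server):

 # Hypothetical httpd.conf snippet: enable Keep-Alive and tune how connections are reused
 KeepAlive On
 # Maximum number of requests served over one persistent connection
 MaxKeepAliveRequests 100
 # Seconds Apache waits for the next request before closing the connection
 KeepAliveTimeout 5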

Use a Content Delivery Network (CDN)

To summarize, a CDN is a network of servers in different locations. Each of these servers stores static site content like images and CSS/JS files. Most of the page load time is commonly spent retrieving this type of content, which is why making it available on servers in different geographical regions allows browsers to pull data from servers closer to them. The shorter distance the data has to travel provides a faster site experience.

Using a CDN that can cache the static HTML of your website’s homepage, and not just dependent resources like images, JavaScript, and CSS, can help improve latency, which will also have a positive impact on your time to first byte (TTFB).

There are several companies that offer CDN services.

Specify a Vary: Accept-Encoding header

Issues on some public proxies may lead to compressed versions of your resources being served to users that don’t support compression. Specifying the Vary: Accept-Encoding header instructs the proxy to store both a compressed and uncompressed version of the resource.

To fix this issue on Apache, add the following code to your .htaccess:

<IfModule mod_headers.c>
 <FilesMatch "\.(js|css|xml|gz|html)$">
  Header append Vary Accept-Encoding
 </FilesMatch>
</IfModule>

NGINX
http://nginx.org/en/docs/http/ngx_http_gzip_module.html

IIS
https://support.microsoft.com/en-us/help/2877816/vary-header-is-overwritten-as-accept-encoding-after-you-enable-dynamic

If you have any other resources or suggestions for improving server speed, please feel free to add them in the comments below.


12 Valuable Lessons on Redesigns, Replatforming & Migrations


If you asked me 1,000+ days ago whether I thought I would consistently have some redesign, replatform, or migration on my to-do board, I would have laughed. Not because redesigns make me laugh like Bill #Sebald stories, but because I “left” design for content and marketing back in 2011.

Even though I’m not a designer, I still get a lot of joy out of redesigns. They’re exciting but can easily become a nightmare.

I’ve talked to a number of peers over the years and found that redesigns (and the lot) are a big challenge. Most had some negative experience with this. Agency friends talked about redesigns coming and going. And in-house friends were all too familiar with that one time they redesigned or tried to at least (usually followed by eye rolls, fearful faces, etc.).

Site updates are a fact of life, especially if you work in an agency and/or plan on staying in the digital space for a while. That’s why we wanted to share some of our most important lessons learned.

Disclaimer: This is not a checklist

There are some truly incredible migration, redesign, and similar checklists out there, and I encourage you/your team to start your own. I’ll link to a few from my resource archive at the end of this post, and I’ll even throw in Greenlane’s tech-SEO list.

There’s a lot of value in these checklists, but we know our work is more than following a list of items. The gold comes from what we discover via the IFTTT (if this, then that) approach. The lessons listed below are all the things we learned that couldn’t be captured in a list.

 

What Do These Terms Mean?

Let’s take a step back and get to know the language because this is where it starts to get muddy. It’s very easy to use “redesign”, “replatform”, and “migration” interchangeably, but it’s not that simple.

It’s important to confirm that the terminology your team uses matches the scenario you have in mind. Otherwise, you could end up with wicked gaps in the estimated resources required or, even worse, organic visibility getting crushed overnight. By default, some terms may encompass others. For example, a replatform generally includes some form of visual redesign since you’re switching to brand new software.

The most important differences to be aware of are:

Replatform – This is when the software delivering/managing the site is about to switch. URLs, user flow, navigation — just about everything tech-related will change. In easier situations, we’re talking about ditching something terrible for WordPress or a similar content management system (CMS). In more complex scenarios, you could be looking at switching from/to Magento, AngularJS, and/or some proprietary setup.

Redesign – This is when we update the visual appearance of a website. URLs, user flow, and navigation can change or remain the same. This can be the least intensive option since you’re not necessarily changing every single thing like a replatform. There’s less relearning for crawlers to do in cases where URLs/navigation remain the same. This goes by a lot of names – site refresh, facelift, reskin, etc.

Migration – There are a few different types of migrations, so it’s important that everyone knows what exactly is migrating because lots of things migrate – geese, the Belted Kingfisher, URLs, and more. I encourage you to present a simple one-sheeter that breaks it down so everyone, in-house and external support alike, is on the same page. Examples of what migration can be used to reference include:

  • URLs (from moving to a brand new domain or changing to https)
  • Hosting packages and/or providers
  • Platform migration (see “Replatforming”)

In addition to knowing when to use these terms, there are other important considerations:

What is the size, complexity, and intent of the organization?
Removing all the tech stuff, understanding operations and business objectives can go a long way. Just think about the differences between a large (or small) e-commerce site versus a doctor’s office with a dozen locations.

What is your level of involvement?
Team members balance a number of roles (no different at Greenlane). It is important to stay clear on expectations and areas of focus.

Are there staging/dev environments?
Get out ahead of this question by understanding how many dev environments there are, how they work together, and what it will take to get access outside of HQ.

1. Start as early as you can

“Fix-it SEO”, where you’re called in after the fact, is the worst kind. From the first project to the most recent, we’ve pushed to get involved earlier and earlier in the planning process for any type of site overhaul. You’ll also want to know what’s expected of your team. For Greenlane, we have a variety of team members that can get brought in to fill gaps when needed – analytics, content, UX, etc.

In a Moz article Bill wrote back in 2014 about finding the perfect client, he included a question about internal teams that offers a spot-on match here:

Who will you be working with?  What skill sets?  Will you be able to sit at the table with other vendors too?  If you’re being hired to fill in the gaps, make sure you have the skills to do so.

 

2. Ask a lot of questions

You can never be too sure. Part of the reason to call upon your entire team is to cover every angle possible. This process isn’t only happening in an SEO bubble. It’s important to ask questions and get answers throughout the process so the bigger picture isn’t lost. There’s no time like the present, so figure out the best means for an ongoing Q&A (Basecamp, Google Doc/Sheet, etc.).

Let’s take content for example. Maddie, Greenlane’s Content Specialist, has shared loads of insight as we’ve discussed redesigns.

What stuck with me most are the questions she asks while auditing content. The shortlist resembles something like:

  • What’s the site currently doing?
  • What do we want it to be doing?
  • What’s the audience saying (surveys, user testing, customer service insight, etc.)?
  • What are the gaps?
  • What’s the action plan for fixing/post-launch?

 

3. QA comes with the territory

Get familiar with what it means to QA, how sprints work, and project managing in a dev world. Not a redesign has gone by where there wasn’t:

  • Tech recommendation to split into multiple tasks
  • Something missed in the release that took a follow-up sprint

When the site goes live your work isn’t done. That’s when it’s critical to switch gears to QA support.

Pro Tip: Speaking of sprints, if you aren’t using one already, implement a post-sprint checklist a few sprint releases ahead of a big redesign/replatform. This will lay the required workflow foundation for redesign review (and educate the client/decision makers on this often missing piece).

 

4. Impeccable research or bust

Alongside the checklist items are topics that require special consideration, the IFTTT stuff. Since most site changes last at least 18 – 24 months, it’s important to make sure every recommendation is rooted in research.

Never build your site in a bubble based purely off of internal/vendor assumptions. It’s too easy to get it wrong. Here are 5 ways to do it:

  1. Heatmap and ask questions to non-converting users with tools like Hotjar
  2. Test templates and A/B test your assumptions outright. (Consider a partial launch of a template or element from the pending redesign)
  3. Mine your analytics account (conversions, user flow, etc.)
  4. Survey past customers and those that have converted
  5. Use Treejack or other tools to label navigation elements clearly

If there is one sleeping giant, it’s your architecture and internal link structure. Below is an example of what can happen when you take primary search landing pages out of the navigational hierarchy, essentially creating orphan pages:

Graph of what can happen when primary search landing pages are out

Introducing a new architecture and prioritizing/deprioritizing pages can erase months of forward progress.

For a recent brand consolidation project, we gathered information and presented a handful of options, two of which were the most ideal. We couldn’t take it down to the single best decision without the client’s input. The data stays the same, but their input provided the final nudge:

  • Which brands/areas are most important (and by how much)?
  • Risk tolerance vs status quo?
  • What do users coming to page X, Y, Z expect?

Remember to present this information well – source your research, make it easy to consume, provide background, etc. This document will likely get passed around internally on the client side.

 

5. There is no solo path/user flow

Speaking of impeccable research — don’t listen to any design house that focuses on a single user path/flow. That’s 100% fake news.

The truth is that I still blackout rage when I think about this, so Krista (Greenlane’s Director of Analytics) had to help me here:

“There is no solo user path just like there is no one user type for any website. The thought of a single user path is immediately flawed by even the most basic segmentation, such as a new versus returning user. I mean, unless you never want anyone to come back to your site since they may use it differently.”

6. Benchmark and crawl all the [meaningful] things

This point is every checklist’s darling, but I added it here because this point cannot be stressed enough. Begin identifying, prioritizing, and discussing when to begin gathering data (e.g. weekly site speed checks) for benchmarking purposes. This is very much at the intersection of technical, speed, and analytics/tracking.

The real lesson here is to be intentional with the data you gather. Sure, you can benchmark and crawl all of the things, but remember that it has to be meaningful. Need a few ideas to kickstart your list? Sean (Director of Tech) had these must-do points to consider:

  • If any URLs will change, build out redirects before launch. List out all old URLs and match ones that will change or be removed to equivalent pages on the new site.
  • Test page speed before launch, and retest after launch. Compare speeds and make sure the new site is at least as fast as the old.
  • Test staging environment with a crawl to make sure no infinite loops from parameters, meta hrefs, etc. exist.
  • Check rankings report from SEMrush, and make sure important pages will have the same or similar text content after launch, in addition to any needed redirects.

I’d also add that finding historical crawls, audits, and/or site architectures is one of the best ways to help future-you. Platform and URL changes often re-reveal historical kinks in the chain.

And don’t forget to ensure basic tracking remains intact. This can get lost in the excitement of launch. Review in staging/dev and once again immediately following the push live. Sometimes it can be a challenge to ensure the standard code snippets have been transferred and are correct.

 

7. Don’t be too high level

What do you do when you have to convey a complicated concept to a dozen folks, half of whom are hearing about it for the first time? You, of course, “give ‘em the high level.”

The challenge is that the majority of the conversations that happen with big-ticket changes include a variety of people, teams, and departments. This need to leave no one behind makes it very easy to only deliver mass appeal messages. Here are a few tips:

  1. Identify the best way to dig in. Consider separate calls with smaller teams and/or focused calls where the goal is to address only three or fewer big topics.
  2. High level ≠ watered down. I repeat, don’t water down recommendations. A botched redesign becomes an uphill battle to come back from, especially if you plan on sticking with your partner for a while.
  3. Be as detailed as possible without losing sight of the bigger picture and bandwidth limitations.

 

8. Every redesign timeline is wrong

Let me rephrase – the first timeline you get is always wrong. Everyone is still just trying to figure it all out. And this isn’t any different from any other industry/career – timelines have been and always will be a huge challenge project managers and “doers” face, but it’s painfully obvious with redesigns. Why? Because the majority of digital operations fizzle. It is easy to fall into the trap of making little or no progress because everything is judged against whether or not it should be “addressed after the redesign.”

As an auxiliary service provider to “the big redesign”, you should care for a few reasons:

  1. Disrupting Peak – All industries have some peak periods through the year or at least time blocks you’d call non-peak. See what I’m going for here? Do your best to guide timelines away from your bread and butter. Be the realistic voice with a plan, and move things along as best you can.
  2. Bandwidth Planning – Looking back at a couple years of agendas, time tracker reports, and memories, the tasks associated with the site switch come in waves. This feast or famine can best be addressed by getting a “surge” plan in place along with identifying a non-core action item list.
  3. Temporarily Obsolete – Things dragging on beyond your control is a real shame. Don’t get me wrong, there are times when pausing is the best play. But pausing everything on your end is often just the easy choice. Instead, leverage the lull:
    • Incomplete projects
    • Housekeeping
    • Test on the existing site because “it’s just going to be taken down anyway”
      1. Emotional triggers
      2. Page layout
      3. Long-form content
      4. CTAs
    • Pitch something big and exciting
    • Life after the redesign

 

9. Educate, educate, educate

The biggest truth to understand about working in an agency is that we only interact with partners for a fraction of their work week and priority list. It doesn’t matter how integrated you are. Take a look at the simple math:

5% of Your Client's Time is You (the Agency)

There should/will be more than just SEO that contributes to important decisions. This means your messages have to be easily heard among the noise, but more importantly, your primary contacts will have to carry the loudspeaker.

Education is key. Identify gaps in knowledge and mission-critical considerations that are going to require discussion. Schedule discussions, share links, write POVs, and/or share your screen for a live Q&A. Do what it takes to empower your partner to own it with or without you there.

 

10. Shiny object syndrome is real

So so so so real. I’m never dismissive since new technology and wishlist discussions often lead to great outcomes, but it’s important to remain on task. Develop your own pre-research process with questions to help you identify whether a topic is worth pursuing further or should be shot down. Some questions to consider:

  • What is the real why behind this ask?
  • What thought/research has gone into this so far?
  • Do link data, industry listening, and search data support it?
  • How much effort goes into this on all sides?
  • What’s the estimated impact (and how could we measure it)?

The best partners will pick up on the process (see: educate, educate, educate) so they can begin defusing bombs as well.

 

11. It’s not over when it’s over

Womp womp, sorry to burst your bubble. This is when we keep our sleeves rolled up, as a large portion of Greenlane’s effort begins after the release.

If you’re working with an external vendor for the project, this is often their cue to transition governance. Whether internal/external, this is usually the cue for IT/dev to get involved with the QA.

Between transition flux and reviewing pretty much everything you reviewed prior to launch and in dev/staging, this will be a feast time for areas requiring attention.  It’s worth saying again – make sure tracking and other codes (like testing/heat-mapping) remain intact.

Links and Downloads

Beyond the lessons learned above, consider bookmarking the checklist links below to help with building your own master checklist. All of these lists have shaped how I approach redesigns, replatforms, and the like.  

A special shoutout and thanks to the sites/authors behind each. While not specific to site changes, Annie Cushing’s technical audit checklist is a must if you don’t have this box ticked. In addition to the new site, it’s always wise to conduct a technical audit of your current build to understand the strengths/weaknesses/opportunities (early in the process is even better).

 

Bonus #12: Don’t Stop Believing Making It Better

The entire Greenlane team rallies behind a phrase/idea: “Make it better.” It’s crucial to have that mindset during a redesign. Looking beyond making a better end product, embody this motto for the entire process. There are so many steps that could be made better, this time and the next.

Because whether you work agency (sooner) or in-house (later), there will be a next time. We’ve had some partners long enough to have been through second and third replatforms/redesigns. And we’ve seen first hand just how important it has been to archive past information for easy access.

Archived resources

It’s always a win when you can call back on your notes, even if you do have a typo in the email 😉

Well, there we have it. Are there any other lessons you would add? I encourage everyone reading to contribute via the comments below, Twitter, or email (jon@greenlanemarketing.com).



A Simple Tool For Saving Google Search Console Data To BigQuery

For a while now we have been wanting to find an easy way to log Google Search Console (GSC) Search Analytics data for managed websites. Google has mentioned several times that more data is coming to GSC, but has been elusive when pinning down a date. There are many reasons to want to collect GSC data for yourself, including:

  • Google Search Console only returns 1,000 rows and has a 90-day limit of historical data.
  • Making data available to other tools to manipulate the data.
  • Who knows what projects / data Google will decide to sunset or reduce access to.
  • Just being a cool Technical SEO and having what other people don’t.

In reviewing options, it was suggested that it would be fairly easy to just move the GSC data to BigQuery. The obvious advantages of BigQuery are:

  • Same Python API library as GSC API.
  • BigQuery data is available to Google Data Studio.
  • The pricing of BigQuery storage is ~ $24/mo for 1TB of data.
  • It is easy to interact with BigQuery via their user interface with SQL-style queries, CSV, and Google Sheets integration.

We began by researching the various tools of Google Cloud (we are much more familiar with AWS) and quickly landed on using Google App Engine (GAE) along with their very sweet integration of cron actions.  In addition, GAE has credential-based access via service accounts, which means that we were able to build a tool without the need for browser-based authentication.

The solution we came up with is located here (we are giving it away for free to the SEO community): https://github.com/jroakes/gsc-logger.

From the Readme file, the script:

This script runs daily and pulls data as specified in config.py file to BigQuery. There is little to configure without some programming experience.
Generally, this script is designed to be a set-it-and-forget-it in that once deployed to app engine, you should be able to add your service account email as a full user to any GSC project and the data will be logged daily to BigQuery. By default the data is set to pull from GSC 7 days earlier every day to ensure the data is available.

The tool stores the following fields (currently restricted to US traffic, but easy to update in config.py) daily:

  • date
  • query
  • page
  • device
  • clicks
  • impressions
  • ctr
  • position

It will also try to grab 5,000 rows in each gulp and keep pulling and saving until fewer than 5,000 rows are returned, signaling that all the data has been retrieved.

With all that said, let’s get started showing you how to implement it. As a warning, to follow the info below, you should have some development experience.

Setting Up Google App Engine Project

Screenshot of Google Cloud Platform - Google Search Console Logger

  1. Navigate to Google Cloud Platform Console and Create a project.
  2. In this example, we named it gsc-logger-demo.
  3. At this point, go ahead and link a billing account with your project.  You can find billing by using the hamburger menu on the top left.
  4. Click on APIs and services from the same hamburger menu. Search for and enable the BigQuery API and the Google Search Console API.
  5. Then create a Service Account by going to APIs and services > Credentials and clicking on Create credentials > Service account key. Select New service account and give it a succinct name (we used GSC Logger Demo for this demo). Select the Project Owner role, and leave JSON selected as the key type. Then click Create. A JSON file will be downloaded by your browser; save this for later.

Getting into the code

Google Cloud Shell for GSC logger

Most of the steps below can be done from within Google Cloud Platform using the built-in Cloud Shell. After launching Google Cloud Shell, follow these steps:

  1. Download the repo:
    git clone https://github.com/jroakes/gsc-logger.git
    
  2. Upload your credentials file (the one you downloaded earlier when creating a service account): Upload file to Google Cloud Shell
  3. Move this file into your credentials directory:
    mv gsc-logger-demo-e2e0d97384ap.json gsc-logger/appengine/credentials/
    
  4. Move to the appengine directory:
    cd gsc-logger/appengine
    
  5. Open the config file:
    nano config.py
    
  6. Edit the CREDENTIAL_SERVICE file name to match the file you just uploaded.
  7. Update the DATASET_ID to something you like.  Only use letters and underscores.  No spaces.
  8. Edit GSC_TIMEZONE to match your current timezone.
  9. There are two other editable items here, ALLOW_CRON_OPEN and HIDE_HOMEPAGE. These are commented for what they do, but this should ideally be adjusted after testing.
  10. After editing, hit CTRL+x, y to save modified, and enter to keep the same file name.
  11. While still in the appengine directory, type the below to initialize your project. Use the project name you selected earlier (ours was gsc-logger-demo):
    gcloud config set project <your-project-id>
    
  12. Then type the below to install requirements:
    pip install -t lib -r requirements.txt
    
  13. Then create a new Google App Engine App:
    gcloud app create
    

    Select the region of your choice. We chose us-east4.

  14. Finally, you are ready to deploy your app:
    gcloud app deploy app.yaml cron.yaml index.yaml
    
  15. Answer Y to continue.
  16. The app should take a minute or so to deploy and should output a URL where your app is deployed:
    Updating service [default]...done.
    Waiting for operation [apps/gsc-logger-demo/operations/9310c527-b744-4b7c-b6b6-00a79b6c28de] to complete...done.
    Updating service [default]...done.
    Deployed service [default] to [https://gsc-logger-demo.appspot.com]
    Updating config [cron]...done.
    Updating config [index]...done.
    
  17. You should now be able to navigate to the Deployed service url in your browser (ours in this demo is: https://gsc-logger-demo.appspot.com)
  18. Try going to your apps homepage (image below) and /cron/ (ours in this demo is: https://gsc-logger-demo.appspot.com/cron/) page once.  The /cron/ page should return:
    {"status": "200", "data": "[]"}
    

    It is important to hit the /cron/ page once so that your service email can be initialized with your Google account.

Adding Sites

Google Search Console Logger Main Screen

If all went well, you should see the screen above when navigating to your deployed App Engine URL.  You will notice there are no sites active.  To add sites to pull GSC data for, simply add your service account email as a full user in GSC.  For convenience, the email is listed on your app’s homepage.

To add a user, navigate to your Dashboard for a GSC account that you have ownership access to, click on the gear icon in the upper right, and click Users and Property Owners. Then add a new user according to the image below.

Google Search Console Add User

Once the user is connected, you should see the site listed when you refresh your app page.

Site added to GSC Logger

Next Steps

Now that the app has been deployed, it should download your GSC data to BigQuery every 24 hours based on the cron functionality in Google App Engine. A few things to explore next:

  • Explore your data in BigQuery: https://bigquery.cloud.google.com. From BigQuery, you can run database queries and save to CSV, or save to Google Sheets.  You can also access the historical data in your own platforms via the API.
  • Try hooking up your BigQuery data to Google Data Studio.  Google provides easy integration with BigQuery from their data sources.  Simply add a BigQuery data source and make it available in your reports.
  • Verify your cron jobs in GAE: https://console.cloud.google.com/appengine/taskqueues/cron. You can run your cron job from this link or you can manually go to /cron/ from your browser.

For security, you will want to go back and edit the config.py file using the steps above and adjust the settings for ALLOW_CRON_OPEN and HIDE_HOMEPAGE (primarily ALLOW_CRON_OPEN). Setting this to False means that only Google App Engine will be able to execute the cron function, and direct calls to your /cron/ endpoint should result in an Access Denied response. robots.txt is set to disallow: / for this repo, so it should not be findable in Google, but you want to be careful about exposing your managed sites, so the homepage visibility is up to you.

If you want to say thank you, please share on Twitter, follow Adapt Partners on Twitter, and/or suggest improvements via Github.

Thanks to Russ Jones and Derk Seymour for giving great feedback on the repo.

Update: I emailed John Mueller (John is amazing, BTW) to ask if there was anything to be concerned about from Google’s standpoint in running this tool. He said, “Go for it,” and “For the next version, you might want to grab the crawl errors & sitemaps too (with their indexed URL counts). I think that’s about it with regards to what you can pull though.”