Combining Google Analytics with Other Data Sources

Google Analytics can collect quite a lot of data on its own, from user behavior, to traffic sources, to interactions, to demographics. It can also integrate with other Google products, allowing for easy and seamless combination of data.

But sometimes you’ll have another source of data about the visitors to your website, whether it’s from your customer database, a third-party survey tool, a campaign management tool, or anything else. And naturally, you’ll want to combine that with the rich interaction data available in Google Analytics. Maybe you’ll want to build user segments in GA based on survey results, or maybe you’ll want your CRM to include a customer’s original traffic source or how often they visit the site.

Despite the breadth and variety of data sources, there is a general approach that allows you to combine your Google Analytics data with almost any other data source you may have available. Specific products may have their own best practices or gotchas, but almost all of them follow a similar pattern. Setting up a connection with your favorite third-party tool (hereafter “Tool X”) requires answering the following three questions:

  1. How is the data being combined?
  2. What is the “key” that connects the data sources?
  3. How do I put the data from one system into the other?

How Is the Data Being Combined?

Visitors interact with websites in complicated ways, and as a result, Google Analytics data is complicated. The GA interface does a good job of getting you the information you need without bogging you down in details, but when you’re dealing with data connections, you need to pay more attention to the nuts and bolts than you otherwise would. Getting this right is the most important step to making sure that your combined reports are sensible and accurate.

Do you want data from Tool X in your GA reports, or data from GA in your Tool X dashboards, or both? Sometimes one system is a “source of truth” (often a Business Intelligence tool), and data flows into it. Other times, you want to take advantage of the unique reporting and analytic capabilities of both tools. Decide which direction(s) you need to pull your data.

What Is the Scope of Your Data Connection?

Google Analytics has four scopes that data can live at: User, session, hit (page and/or event), or product. A data connection will also exist at one of these four scopes. Picking the right scope is critical to making your reports work correctly.

Marketing tools like campaign management software or email remarketing will almost always want to connect at the Session level. In Google Analytics, traffic sources and campaign data are session-scoped.

User data such as a CRM or a customer database will almost always want to connect at the User level. A/B tests are usually user-scoped as well, since the same user should be served the same test on consecutive visits. Surveys may be user-scoped or session-scoped, depending on the type of questions being asked and whether it’s specific to the user’s current visit to the site.

Data about content on your site, such as from your CMS or ad-serving platform, will almost always be hit-scoped. Most tools are not hit-scoped, either because they have their own notion of a session, or because your users don’t interact with them on every page view. For example, information about form submissions should usually connect at the session level. While the form only exists on one particular page, the goal of the data connection is usually to understand the whole series of interactions leading up to the form submission, which is session-level data.

Data about products should be product-scoped. Occasionally you may want product data to be scoped to a pageview hit, but if you have Enhanced Ecommerce it’s usually better for such data to be attached to the product itself via a product detail view.

If you are pulling GA data into a business intelligence tool, you may want to combine data at several scopes, such as session-scoped traffic data and user-scoped customer lifetime value. It’s usually best to do this by setting up separate connections for each scope. GA may give surprising and inaccurate results if you attempt to combine several scopes into a single report or export.

Tool X may or may not have its own concept of scope. You will have to figure that out on your own.

What Is the “Key” That Connects the Data Sources?

In database terminology, a “key” is a value in a data store that uniquely identifies a single record. If another data store holds a reference to that key, then those two data stores can be “joined,” meaning combined at the level of individual records. For example, your social security number is a “key” that uniquely identifies you. This lets other data sources like taxes, bank records, and credit scores be uniquely associated with you as an individual, rather than some other person who might have the same name or birthdate.

The easiest and best way to combine data sources is to find a key in one data source that you can import into the other data source. A unique key helps prevent a lot of problems that arise from data not matching up exactly, or different tools using different definitions of “page” or “user.” It also gives the flexibility to adjust your connection later on. As long as the key exists in both data sets, you can always pull down more data from one tool and upload it into the other.

Above, you answered the question about what direction your data is flowing. The “key” in your data must go in that same direction. So if you are pulling data from Tool X into GA, then you need to find a key value in Tool X, and bring that value into Google Analytics.

Choosing the Right Key

There are two considerations for choosing the right key: scope and granularity.

It’s important to make sure your key exists at the right scope. A key may be unique at one scope but not another, or it may be ambiguous at the wrong scope. For example, campaign ID is unique at the session level but not the user level; and a product SKU is ambiguous at the session level if a user purchases more than one product.

Granularity asks: What are you tracking? If you are tracking campaigns, then you want your key to refer to an individual campaign. If you are tracking A/B tests, then you want your key to refer to a specific variation within a specific experiment. Page-level data usually refers to individual pages and product-level data to individual products, but sometimes it refers to groups or categories of these.

How Do I Put the Data from One System into the Other?

This is the part that varies the most between tools and may require some coding. Google Tag Manager is an awesome help for pulling data from one location and pushing it to another.

Pulling Data out of Google Analytics

Google Analytics stores a value called the Client ID in a first-party cookie named _ga. This is the perfect value to use as a user-scoped key because it’s the same key that Google uses in its own processing… except that it’s not available in the reports! Fortunately, it’s easy enough to pull the value of this cookie from Tag Manager and store it in a Custom Dimension.
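
As a rough sketch (not an official snippet), a Custom JavaScript Variable in GTM that returns the Client ID from the _ga cookie might look like this; the variable name you give it, e.g. “GA Client ID,” is up to you:

 function() {
   // The _ga cookie looks like "GA1.2.1234567890.1609459200";
   // the Client ID is the last two dot-separated fields.
   var match = document.cookie.match(/(?:^|;\s*)_ga=([^;]+)/);
   if (!match) { return undefined; }
   return match[1].split('.').slice(-2).join('.');
 }

Set that variable as the value of a user-scoped Custom Dimension on your pageview tag and the Client ID will start showing up in your reports.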

Google Analytics does not provide a session ID value to the browser. A session can be uniquely identified by combining the Client ID with another piece of data, like the Visit Number from the old __utma cookie. You can also approximate sessions by combining the Client ID with the date. Fortunately, very few tools have the concept of a session, so this issue tends not to show up in practice.
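
If you do need a rough session-level key, one sketch (assuming the “GA Client ID” variable above exists in your container) is another Custom JavaScript Variable that appends the date:

 function() {
   // Approximate session key: Client ID plus today's date (UTC, YYYY-MM-DD).
   // Not a true session ID, but close enough for most joins.
   var clientId = {{GA Client ID}};
   if (!clientId) { return undefined; }
   return clientId + '_' + new Date().toISOString().slice(0, 10);
 }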

The key for most hit-level data is the Page Path, and the key for most product-level data is the SKU. If you are using these as keys, it’s important to be aware of any transformations you may be applying, either through GTM or through Filters in GA. For example, if you are removing certain query parameters, or applying a lower-case filter, then the URL that GA reports is not the exact same one that the visitor saw. You will need to apply the same transformation in your other tool to get the data to match up.

Once you have the key from Google Analytics, what you do with it depends on the tool. If your key is a URL or SKU, it probably already exists in Tool X. If you are using the Client ID or something else, you will have to figure out a way to pass it along. Common solutions include adding it as a field in a tag in GTM, appending it as a query parameter to a URL, or inserting it into a hidden field in a form.
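
For example, here’s a minimal sketch of the hidden-field approach; the field name ga_client_id is hypothetical and should match whatever your form and CRM expect:

 // Copy the GA Client ID into a hidden form field so it reaches
 // your CRM along with the form submission.
 document.addEventListener('DOMContentLoaded', function() {
   var field = document.querySelector('input[name="ga_client_id"]');
   var match = document.cookie.match(/(?:^|;\s*)_ga=([^;]+)/);
   if (field && match) {
     field.value = match[1].split('.').slice(-2).join('.');
   }
 });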

Once the key is in your other tool, then you can create a Custom Report in Google Analytics based on that key, and export the data you want. Then upload it into Tool X, and follow Tool X’s instructions for how to match data.

Putting Data into Google Analytics

First, unless your key is already a built-in dimension in Google Analytics (such as product SKU), you will need to create a custom dimension. You should already know what scope to configure it to.

Second, you need to populate that key. How you do this depends on the specific tool and how it makes that key available. Common approaches are URL query parameters for campaign data or A/B tests, or cookies for most types of customer management systems. Some systems have an API that you need to interface with using custom JavaScript.
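
As an illustration only, here’s what the query-parameter approach might look like with analytics.js; the parameter name testVariation and the dimension7 slot are hypothetical, and in GTM you would accomplish the same thing with a URL variable mapped to a custom dimension index:

 // Read the key from the URL, e.g. ?testVariation=B, and attach it
 // to the hit before the usual pageview is sent.
 var match = window.location.search.match(/[?&]testVariation=([^&]+)/);
 if (match) {
   ga('set', 'dimension7', decodeURIComponent(match[1]));
 }
 ga('send', 'pageview');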

Finally, you need to use this key to integrate the rest of your data. The easiest way to do this is with Data Import. For extra style points, this process can be automated, making the data connection appear seamless after you set it up.
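
As a rough sketch, the upload for a custom Data Import set is just a CSV whose header uses the API names of your key and imported dimensions; the dimension slots and values below are hypothetical:

 ga:dimension5,ga:dimension6
 abc123,Promoter
 def456,Detractor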

Related Reading

This general outline should guide how you set up connections between different platforms. If this sounds familiar, it should! We’ve outlined the specific process for several systems, and you’ll notice a lot of crossover between posts. Here’s a quick rundown of related posts:

Conclusion

Congratulations! Now your web data lives in the same tool as the other data that you’ve been using! This allows for much more powerful reports, like end-to-end tracking that ties campaign impressions to conversions on your website, or connecting your traffic source data in Google Analytics with offline customer acquisition reports from your CRM.


Prep for the Holidays: Tackling the Dreaded Forecast

Reading Time: 6 minutes

Forecasting. Some account managers love it, some hate it. Others believe it should be owned by an analytics team.

For the record, dear reader, I LOVE forecasting. Judge me as you see fit.

Regardless of your position on the matter, a Q4 forecast is essential to building a solid holiday plan. It sets your client’s expectations, helps you determine which tactics are affordable, and makes pitching new testing opportunities easier.

The question I get asked the most by newer AMs is where to even begin formulating a Q4 forecast. Simple: ask your client what their goal or budget is. Knowing your bounds, spend and/or revenue, is key. For example, which question does your forecast need to support?

  • Do they want to achieve 20% YoY Q4 growth?
  • Do they have $250,000 to spend in Q4?
  • Do they want 30% YoY growth AND only have a $500,000 budget?

On occasion, you won’t get a specific budget or revenue guideline from the client; they want YOU to tell them how much to spend and what revenue to expect. Sounds daunting, but it’s doable using the same process as forecasting with known bounds.

 

Top Down, Bottom Up – Where’s the Middle Ground?

When teaching new Account Managers forecasting, I like to start with the basics. There are two main schools of forecasting approaches: Top Down and Bottom Up. Top Down means starting with the highest-level goal, e.g. 20% YoY growth, and then divvying down into contributions by channel and the budget needed to support such growth. With Bottom Up, you start at the individual channel level, then combine channels to reach your overarching goal.

My preferred forecasting process blends the two approaches together to avoid some of the common pitfalls associated with each method: miscalculations become magnified and forecasts can quickly become over-complex (Bottom Up), or goals become unrealistic for each marketing channel (Top Down).

I lean more towards Bottom Up in my approach as it pulls in more historical data and channel nuance, which I find absent in Top Down approaches. This leads me to my usual method:

Hybrid Approach: Determine each channel’s performance potential based on the client goal/spend + historical data. Then layer in site trends to reverse-engineer specific monthly site and channel goals.

 

What You Need

As with everything we do, your forecast needs to be based in data. Two years of data is a good groundwork to identify trends and outliers, as well as solidify the inevitable assumptions/constants that fall into the forecasting computations. Questions I like to ask when looking at historical data include:

  • What is my current year to date (YTD) growth rate versus last year’s (LY)?
  • How much did I spend in Q4 last year and the year prior?
    • If there was a big change, how did that change channel allocations and results?
  • Were there any unexpected spikes or dips caused by tracking issues, larger site issues, etc.?
  • What metrics were consistent YoY, and are they similar to the last 3 months’ trending?
    • For me this is usually average order value (AOV), cost per click (CPC), and click-through rate (CTR)
  • How did peak day performance change YoY?
    • Did Black Friday sales hit earlier last year? Why?
    • When did peak offers ‘leak’ or hit the site?
    • Do I know if competitors’ offers were more or less competitive than mine LY?

This list isn’t exhaustive, but it should provide a decent starting point for knowing whether your upcoming holiday season has the potential to be strong or weak versus prior years and whether a revenue/growth goal is attainable. For example, is your site trending at 15% growth in 2017 while your goal is 20% growth in Q4? That might be difficult. But what if Q4 2016 grew at 30% YoY while the rest of that year was pacing at 10% growth? Since Q4 2016 outpaced the rest of the year by 20 percentage points, you have a pretty good chance of hitting 20% growth in Q4 2017.

Historical data makes forecasts more informed.

Even better, some of the questions around specific peak day results may spark fresh strategies and tactics for the next 3 months.

Constant Rate Variables

Next, choose your constant variables. These should NOT be the same for all your marketing channels and should NOT be the site averages. This is where we bring in the individual channel nuance found in a Bottom Up approach. Using rate metrics as constants is better than using hard numbers. Additionally, if a marketing channel has an ROI or CPA goal from the client, you would fold that in here as well.

One word of caution: try not to use more than four constants per channel. Any more than that and your forecast runs the risk of not being based in reality. Perfect-world scenarios don’t help anyone.

 

Percent of Site Contribution

My favorite place to start a forecast uses each channel’s percent of site contribution for revenue. This is where we bring in elements of a Top Down forecasting approach to our process.


Using the last two years of data, how much revenue does each channel make up of the total? For example: 25% of revenue is Email, 18% is PPC, 16% is Affiliate, 15% is Organic, 13% is Direct, 8% is Display (Remarketing + Prospecting), 5% is Social. Compare the last two years of contribution breakouts against the most recent three months to ensure they are on par with current channel trends. Expect minor variations, which could be a few percentage points up or down.

Similarly, look at how each of the months in Q4 makes up the whole; for example, October drives 15% of revenue, while November and December drive 45% and 35%, respectively.

At this point you have all the necessary inputs to begin formulating your forecast.

 

Numbers Numbers, Math Math Math

 

Step 1: Begin by forecasting the Q4 total revenue and spend for the marketing channels using the channel % of site contribution and KPI constants.

Determining the overall revenue goals per channel.

Backing into spend per channel using the revenue goals and constants.
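
To make the math concrete with purely hypothetical numbers: if the Q4 site revenue goal is $1,000,000 and Email historically drives 25% of revenue, Email’s Q4 goal is $250,000. If PPC drives 18% of revenue ($180,000) and its ROAS constant is 5:1, you back into roughly $36,000 of PPC spend for the quarter.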

 

Step 2: Ensure that total spend equals your client’s set budget. Only applicable if you have a known budget bound.

Confirming calculated budget is within client’s specified budget.

 

Step 3: Break down the Q4 site revenue goal into monthly goals based on each month’s % of site revenue contribution.

Use the previous year’s monthly contributions to allocate the site revenue goals by month.

Step 4: Layer in monthly channel revenue goals using the results from Step 1 combined with monthly breakdowns and seasonality trends from historical data.


 

Step 5: Make sure the total revenue and spend equal budget/goals.

Check to ensure the totals match budget and goals.

 

Tracking Forecast Success & Accuracy

You’ll want to know how well the site, marketing channels, etc. are trending toward your forecast. I recommend implementing a monthly tracker, updated daily, so you know the budget pacing and revenue trend. The best thing is to have all your data in a simple view to make recapping the holiday season in January a breeze. Click below to download a template I use for yearly forecasts. It can be adapted to suit many client and forecast needs: full year, Q4, and so on.

Download the Forecast Tracking template

Tracking is also important because you may need to re-forecast or shift revenue projections and budget between months depending on last-minute promo changes from the client, or unexpected issues like the site crashing on Black Friday, the Email Service Provider getting caught in a spam trap on Cyber Monday, or increased competition in the search space driving CPCs up and organic rankings down. I could go on, but you get the point.

Life happens, forecasts aren’t perfect. But they can be thoughtfully put together.

My parting tip: the best time to get started on a forecast is September for holiday and November/December for the lengthy full year forecasts. So go download that template and get started!



How to Implement Page Speed Recommendations at Server Level

There are several tools that analyze page speed and show how quickly users can see and interact with content. These tools will identify areas that need improvement, and most of them analyze the same areas. There is a great article by Gtmetrix.com that explains why different tools might show different results when analyzing page speed on sites.
In this blog post, I’ll review the items that are analyzed at the server level and provide examples of how you can fix or improve your site speed at the server level.

Items Fixed on Server

Leverage browser caching:

Browsers can store webpage resources on computers so that they don’t have to download these resources every time they visit a site. By leveraging browser caching, you can instruct browsers how often resources are updated on your website so that they know when these resources should be downloaded again.

Leveraging browser caching is different depending on the type of server your site is using. We will discuss how to fix these items on Apache but will provide resources for other servers:

Note: Editing your .htaccess file could break your site if not done correctly. If you are not comfortable doing this, please check with your web host first.

APACHE

To enable browser caching on Apache servers you need to edit your .htaccess file by using notepad or any form of basic text editor.
In this file you can set your caching parameters to tell the browser what types of files to cache or store. Below is an example of the code:

 <IfModule mod_expires.c>
 ## EXPIRES CACHING ##
 ExpiresActive On
 ExpiresByType image/jpg "access plus 1 year"
 ExpiresByType image/jpeg "access plus 1 year"
 ExpiresByType image/gif "access plus 1 year"
 ExpiresByType image/png "access plus 1 year"
 ExpiresByType text/css "access plus 1 month"
 ExpiresByType application/pdf "access plus 1 month"
 ExpiresByType text/x-javascript "access plus 1 month"
 ExpiresByType application/x-shockwave-flash "access plus 1 month"
 ExpiresByType image/x-icon "access plus 1 year"
 ExpiresDefault "access plus 2 days"
 ## EXPIRES CACHING ##
 </IfModule>

You can see in the code above that images are cached for one year, CSS for one month, and so on.
Depending on your website’s files, you can set different expiry times. If certain types of files are updated more frequently, you would set a shorter expiry time on them (e.g. CSS files).
Before adding expiry dates, make sure that you are not setting this up for every resource on the site. For ecommerce sites, it is important not to cache resources on the shopping cart because it can lead to serious issues. There is a great article on this topic.

When you are done, save the file as is, not as a .txt file.

NGINX

How to quickly leverage browser caching on Nginx
Stack Overflow – Nginx with Plesk

IIS
Microsoft Library
Stack Overflow – FAQ

Specify a cache validator:

A “Specify a cache validator” warning from one of the speed analysis tools means that you are missing HTTP caching headers such as Last-Modified, Cache-Control, and ETag.

These headers should be included on every page server response, as they both validate and set the length of the cache. If the headers aren’t found, it will generate a new request for the resource every time, which increases the load on your server. Utilizing caching headers ensures that subsequent requests don’t have to be loaded from the server, thus saving bandwidth and improving performance for the user.

The Last-Modified header is a weak caching header; the current preference is to use Cache-Control headers. Below we’ve included how Google defines these headers:

  • Cache-Control defines how, and for how long the individual response can be cached by the browser and other intermediate caches.
  • ETag provides a revalidation token that is automatically sent by the browser to check if the resource has changed since the last time it was requested. To learn more, see HTTP caching.
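
For reference, a cacheable response from your server might carry headers roughly like these (the values are illustrative only):

 Cache-Control: public, max-age=2592000
 Last-Modified: Tue, 05 Sep 2017 17:00:00 GMT
 ETag: "5c1f-5588d6a8f0e2b"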

Below you can see an example of how to configure ETag handling in your .htaccess file.

APACHE

Etag

<IfModule mod_headers.c>
 Header unset ETag
 </IfModule>
 FileETag None

NGINX
https://stackoverflow.com/questions/24549377/how-to-configure-etag-on-nginx
IIS
https://technet.microsoft.com/en-us/library/ee619764(v=ws.10).aspx

Enable gzip compression / Compress components with gzip

As discussed above, there are different methods of configuring servers depending on their type and gzip is no different. Below are examples of how you can enable gzip on different servers:

APACHE
You will need to add the following lines to your .htaccess file:

 <IfModule mod_deflate.c>
 # Compress HTML, CSS, JavaScript, Text, XML and fonts
 AddOutputFilterByType DEFLATE application/javascript
 AddOutputFilterByType DEFLATE application/rss+xml
 AddOutputFilterByType DEFLATE application/vnd.ms-fontobject
 AddOutputFilterByType DEFLATE application/x-font
 AddOutputFilterByType DEFLATE application/x-font-opentype
 AddOutputFilterByType DEFLATE application/x-font-otf
 AddOutputFilterByType DEFLATE application/x-font-truetype
 AddOutputFilterByType DEFLATE application/x-font-ttf
 AddOutputFilterByType DEFLATE application/x-javascript
 AddOutputFilterByType DEFLATE application/xhtml+xml
 AddOutputFilterByType DEFLATE application/xml
 AddOutputFilterByType DEFLATE font/opentype
 AddOutputFilterByType DEFLATE font/otf
 AddOutputFilterByType DEFLATE font/ttf
 AddOutputFilterByType DEFLATE image/svg+xml
 AddOutputFilterByType DEFLATE image/x-icon
 AddOutputFilterByType DEFLATE text/css
 AddOutputFilterByType DEFLATE text/html
 AddOutputFilterByType DEFLATE text/javascript
 AddOutputFilterByType DEFLATE text/plain
 AddOutputFilterByType DEFLATE text/xml
 # Remove browser bugs (only needed for really old browsers)
 BrowserMatch ^Mozilla/4 gzip-only-text/html
 BrowserMatch ^Mozilla/4\.0[678] no-gzip
 BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
 Header append Vary User-Agent
 </IfModule>

After you’ve saved your .htaccess file, test your site again on one of the speed tools to make sure it has been properly compressed.

IIS
If you are running on IIS, there are two different types of compression, static and dynamic. We recommend checking out Microsoft’s guide on how to set up gzip.

NGINX
If you are running on NGINX, simply add the following to your nginx.conf file.

 gzip on;
 gzip_disable "MSIE [1-6]\.(?!.*SV1)";
 gzip_vary on;
 gzip_types text/plain text/css text/javascript image/svg+xml image/x-icon
 application/javascript application/x-javascript;

Avoid landing page redirects

If you see a warning on avoiding landing page redirects, this means that you have more than one redirect from the given URL to the final landing page. Reducing the number of redirects from one URL to another cuts out additional round trip times to the server and wait time for users.

Google provides some examples on redirect patterns that can harm your site:

  • example.com uses responsive web design, no redirects are needed – fast and optimal!
  • example.com → m.example.com/home – multi-roundtrip penalty for mobile users.
  • example.com → www.example.com → m.example.com – very slow mobile experience.
  • http://mydomain.com/ → https://mydomain.com → https://www.mydomain.com – very slow experience.

So, make sure that your domain redirects only once, when needed, in order to avoid multiple RTTs to the server by:

  • Using responsive layouts to avoid sending users to a mobile version of the site on a subdomain like shown above.
  • Identifying redirects to non-HTML resources such as images and CSS.
  • Performing the redirection server-side rather than client-side.

Depending on the landing page redirects, you’ll need to add different rules to your .htaccess in the case of an Apache server.
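
For example, here is a minimal sketch for Apache with mod_rewrite, assuming your server terminates HTTPS itself and your canonical host is https://www.example.com; it collapses http:// and non-www requests into a single 301:

 RewriteEngine On
 # example.com is a placeholder for your own domain
 RewriteCond %{HTTPS} off [OR]
 RewriteCond %{HTTP_HOST} !^www\. [NC]
 RewriteRule ^ https://www.example.com%{REQUEST_URI} [L,R=301]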

Enable Keep-Alive

Enabling keep-alive allows the same TCP connection to handle several requests between the web server and the browser.
Apache enables Keep-Alive connections by default; however, you can explicitly turn them on by adding the following line to your httpd.conf file.

KeepAlive On

It is important to know that you should not set up keep-alive headers through .htaccess since it can send misleading information about the server’s capabilities.

Use a Content Delivery Network (CDN)

To summarize, a CDN is a network of servers in different locations. Each of these servers saves static content from sites, like images and CSS/JS files. Most of the page load time is commonly spent retrieving this type of content, which is why making it available on servers in different geographical regions allows browsers to pull data from servers closer to them. The shorter distance for the data to travel provides a faster site experience.

Using a CDN that can cache the static HTML of your website’s homepage, and not just dependent resources like images, JavaScript, and CSS, can help improve latency, which will also make a positive impact on your time to first byte (TTFB).

There are several companies that offer CDNs; below are three of the more popular options:

Specify a Vary: Accept-Encoding header

Issues on some public proxies may lead to compressed versions of your resources being served to users that don’t support compression. Specifying the Vary: Accept-Encoding header instructs the proxy to store both a compressed and uncompressed version of the resource.

To fix this issue on Apache, add the following code on your .htaccess:

<IfModule mod_headers.c>
 <FilesMatch "\.(js|css|xml|gz|html)$">
 Header append Vary Accept-Encoding
 </FilesMatch>
 </IfModule>

NGINX
http://nginx.org/en/docs/http/ngx_http_gzip_module.html

IIS
https://support.microsoft.com/en-us/help/2877816/vary-header-is-overwritten-as-accept-encoding-after-you-enable-dynamic

If you have any other resources and suggestions for improving server speed, please feel free to add them on the comments below.


12 Valuable Lessons on Redesigns, Replatforming & Migrations

Reading Time: 11 minutes

If you asked me 1000+ days ago whether I thought I would consistently have some redesign, replatform, or migration on my to-do board, I would have laughed. Not because redesigns make me laugh like Bill #Sebald stories, but because I “left” design for content and marketing back in 2011.

Even though I’m not a designer, I still get a lot of joy out of redesigns. They’re exciting but can easily become a nightmare.

I’ve talked to a number of peers over the years and found that redesigns (and the lot) are a big challenge. Most had some negative experience with this. Agency friends talked about redesigns coming and going. And in-house friends were all too familiar with that one time they redesigned or tried to at least (usually followed by eye rolls, fearful faces, etc.).

Site updates are a fact of life, especially if you work in an agency and/or plan on staying in the digital space for a while. That’s why we wanted to share some of our most important lessons learned.

Disclaimer: This is not a checklist

There are some truly incredible migration, redesign, and similar checklists out there, and I encourage you/your team to start your own. I’ll link to a few from my resource archive at the end of this post, and I’ll even throw in Greenlane’s tech-SEO list.

There’s a lot of value in these checklists, but we know our work is more than following a list of items. The gold comes from what we discover via the IFTTT (if this, then that) approach. The lessons listed below are all the things we learned that couldn’t be captured in a list.

 

What Do These Terms Mean?

Let’s take a step back and get to know the language because this is where it starts to get muddy. It’s very easy to use “redesign”, “replatform”, and “migration” interchangeably, but it’s not that simple.

It’s important to confirm that the terminology your team uses matches the scenario you have in mind. Otherwise, you could end up with wicked gaps in the estimated resources required or, even worse, organic visibility could get crushed overnight. By default, some terms may encompass others. For example, a replatform generally includes some form of visual redesign since you’re switching to brand new software.

The most important differences to be aware of are:

Replatform – This is when the software delivering/managing the site is about to switch. URLs, user flow, navigation — just about everything tech-related will change. For easier situations, we’re talking about ditching something terrible for WordPress or a similar content management system (CMS). In more complex scenarios, you could be looking at switching from/to Magento, angularjs, and/or some proprietary setup.

Redesign – This is when we update the visual appearance of a website. URLs, user flow, and navigation can change or remain the same. This can be the least intensive option since you’re not necessarily changing every single thing like a replatform. There’s less relearning for crawlers to do in cases where URLs/navigation remain the same. This goes by a lot of names – site refresh, facelift, reskin, etc.

Migration – There are a few different types of migrations, so it’s important that everyone knows what exactly is migrating because lots of things migrate – geese, the Belted Kingfisher, URLs, and more. I encourage you to present a simple one-sheeter that breaks it down so everyone, in-house and external support alike, is on the same page. Examples of what migration can be used to reference include:

  • URLs (from moving to a brand new domain or changing to https)
  • Hosting packages and/or providers
  • Platform migration (see “Replatforming”)

In addition to knowing when to use these terms, there are other important considerations:

What is the size, complexity, and intent of the organization?
Removing all the tech stuff, understanding operations and business objectives can go a long way. Just think about the differences between a large (or small) e-commerce site versus a doctor’s office with a dozen locations.

What is your level of involvement?
Team members balance a number of roles (no different at Greenlane). It is important to stay clear on expectations and areas of focus.

Are there staging/dev environments?
Get out ahead of this question by understanding how many dev environments there are, how they work together, and what it will take to get access outside of HQ.

1. Start as early as you can

“Fix-it SEO”, where you’re called in after the fact, is the worst kind. From the first to the most recent project, we’ve pushed to get involved earlier and earlier in the planning process for any type of site overhaul. You’ll also want to know what’s expected of your team. For Greenlane, we have a variety of team members who could get brought in to fill gaps when needed – analytics, content, UX, etc.

In a Moz article Bill wrote back in 2014 about finding the perfect client, he included a question about internal teams that offers a spot-on match here:

Who will you be working with?  What skill sets?  Will you be able to sit at the table with other vendors too?  If you’re being hired to fill in the gaps, make sure you have the skills to do so.

 

2. Ask a lot of questions

You can never be too sure. Part of the reason to call upon your entire team is to cover every angle possible. This process isn’t only happening in an SEO bubble. It’s important to ask questions and get answers throughout the process so the bigger picture isn’t lost. There’s no time like the present, so figure out the best means for an ongoing Q&A (Basecamp, Google Doc/Sheet, etc.).

Let’s take content for example. Maddie, Greenlane’s Content Specialist, has shared loads of insight as we’ve discussed redesigns.

What stuck with me most are the questions she asks while auditing content. The shortlist resembles something like:

  • What’s the site currently doing?
  • What do we want it to be doing?
  • What’s the audience saying (surveys, user testing, customer service insight, etc.)?
  • What are the gaps?
  • What’s the action plan for fixing/post-launch?

 

3. QA comes with the territory

Get familiar with what it means to QA, how sprints work, and project managing in a dev world. Not a redesign has gone by where there wasn’t:

  • Tech recommendation to split into multiple tasks
  • Something missed in the release that took a follow-up sprint

When the site goes live your work isn’t done. That’s when it’s critical to switch gears to QA support.

Pro Tip: Speaking of sprints, if you aren’t using one already, implement a post-sprint checklist a few sprint releases ahead of a big redesign/replatform. This will lay the required workflow foundation for redesign review (and educate the client/decision makers on this often missing piece).

 

4. Impeccable research or bust

Alongside the checklist items are topics that require special consideration, the IFTTT stuff. Since most site changes last at least 18 – 24 months, it’s important to make sure every recommendation is rooted in research.

Never build your site in a bubble based purely off of internal/vendor assumptions. It’s too easy to get it wrong. Here are 5 ways to do it:

  1. Heatmap and ask questions to non-converting users with tools like Hotjar
  2. Test templates and A/B test your assumptions outright. (Consider a partial launch of a template or element from the pending redesign)
  3. Mine your analytics account (conversions, user flow, etc.)
  4. Survey past customers and those that have converted
  5. Use Treejack or other tools to label navigation elements clearly

If there is one sleeping giant, it’s your architecture and internal link structure. Below is an example of what can happen when you take primary search landing pages out of the navigational hierarchy, essentially creating orphan pages:

Graph of what can happen when primary search landing pages are out

Introducing a new architecture and prioritizing/deprioritizing pages can erase months of forward progress.

For a recent brand consolidation project, we gathered information and presented a handful of options, two of which were the most ideal. We couldn’t take it down to the single best decision without the client’s input. The data stays the same, but their input provided the final nudge:

  • Which brands/areas are most important (and by how much)?
  • Risk tolerance vs status quo?
  • What do users coming to page X, Y, Z expect?

Remember to present this information well – source your research, make it easy to consume, provide background, etc. This document will likely get passed around internally on the client side.

 

5. There is no solo path/user flow

Speaking of impeccable research — don’t listen to any design house that focuses on a single user path/flow. That’s 100% fake news.

The truth is that I still blackout rage when I think about this, so Krista (Greenlane’s Director of Analytics) had to help me here:

“There is no solo user path just like there is no one user type for any website. The thought of a single user path is immediately flawed by even the most basic segmentation, such as a new versus returning user. I mean, unless you never want anyone to come back to your site since they may use it differently.”

6. Benchmark and crawl all the [meaningful] things

This point is every checklist’s darling, but I added it here because this point cannot be stressed enough. Begin identifying, prioritizing, and discussing when to begin gathering data (e.g. weekly site speed checks) for benchmarking purposes. This is very much at the intersection of technical, speed, and analytics/tracking.

The real lesson here is to be intentional with the data you gather. Sure, you can benchmark and crawl all of the things, but remember that it has to be meaningful. Need a few ideas to kickstart your list? Sean (Director of Tech) had these must-do points to consider:

  • If any URLs will change, build out redirects before launch. List out all old URLs and match ones that will change or be removed to equivalent pages on the new site.
  • Test page speed before launch, and retest after launch. Compare speeds and make sure the new site is at least as fast as the old.
  • Test staging environment with a crawl to make sure no infinite loops from parameters, meta hrefs, etc. exist.
  • Check rankings report from SEMrush, and make sure important pages will have the same or similar text content after launch, in addition to any needed redirects.

I’d also add that finding historical crawls, audits, and/or site architectures is one of the best ways to help future-you. Platform and URL changes often re-reveal historical kinks in the chain.

And don’t forget to ensure basic tracking remains intact. This can get lost in the excitement of launch. Review in staging/dev and once again immediately following the push live. Sometimes it can be a challenge to ensure the standard code snippets have been transferred and are correct.

 

7. Don’t be too high level

What do you do when you have to convey a complicated concept to a dozen folks, half of whom are hearing about it for the first time? You, of course, “give ‘em the high level.”

The challenge is that the majority of the conversations that happen with big-ticket changes include a variety of people, teams, and departments. This need to leave no one behind makes it very easy to only deliver mass appeal messages. Here are a few tips:

  1. Identify the best way to dig in. Consider separate calls to smaller teams and/or focused calls where the goal is to only address three or less big topics.
  2. High level ≠ watered down. I repeat, don’t water down recommendations. A botched redesign comes back as an uphill battle, especially if you plan on sticking with your partner for a while.
  3. Be as detailed as possible without losing sight of the bigger picture and bandwidth limitations.

 

8. Every redesign timeline is wrong

Let me rephrase – the first timeline you get is always wrong. Everyone is still just trying to figure it all out. And this isn’t any different than any other industry/career – timelines have been and always will be a huge challenge project managers and “doers” face, but it’s painfully obvious with redesigns. Why? Because the majority of digital operations fizzle. It is easy to fall into the trap of making little or no progress because everything is judged against whether or not it should be “addressed after the redesign.”

As an auxiliary service provider to “the big redesign”, you should care for a few reasons:

  1. Disrupting Peak – All industries have some peak periods through the year or at least time blocks you’d call non-peak. See what I’m going for here? Do your best to guide timelines away from your bread and butter. Be the realistic voice with a plan, and move things along as best you can.
  2. Bandwidth Planning – Looking back at a couple years of agendas, time tracker reports, and memories, the tasks associated with the site switch come in waves. This feast or famine can best be addressed by getting a “surge” plan in place along with identifying a non-core action item list.
  3. Temporarily Obsolete – Things dragging on beyond your control is a real shame. Don’t get me wrong, there are times when pausing is the best play. But pausing everything on your end is often just the easy choice. Instead, leverage the lull:
    • Incomplete projects
    • Housekeeping
    • Test on the existing site because “it’s just going to be taken down anyway”
      1. Emotional triggers
      2. Page layout
      3. Long-form content
      4. CTAs
    • Pitch something big and exciting
    • Life after the redesign

 

9. Educate, educate, educate

The biggest truth to understand about working in an agency is that we only interact with partners for a fraction of their work week and priority list. It doesn’t matter how integrated you are. Take a look at the simple math:

5% of Your Client's Time is You (the Agency)

There should/will be more than just SEO that contributes to important decisions. This means your messages have to be easily heard among the noise, but more importantly, your primary contacts will have to carry the loudspeaker.

Education is key. Identify gaps in knowledge and mission-critical considerations that are going to require discussion. Schedule discussions, share links, write POVs, and/or share your screen for a live Q&A. Do what it takes to empower your partner to own it with or without you there.

 

10. Shiny object syndrome is real

So so so so real. I’m never dismissive since new technology and wishlist discussions often lead to great outcomes, but it’s important to remain on task. Develop your own pre-research process with questions to help you identify whether a topic is worth pursuing further or should be shot down. Some questions to consider:

  • What is the real why behind this ask?
  • What thought/research has gone into this so far?
  • Do link, industry listening, and search support it?
  • How much effort goes into this on all sides?
  • What’s the estimated impact (and how could we measure it)?

The best partners will pick up on the process (see: educate, educate, educate) so they can begin defusing bombs as well.

 

11. It’s not over when it’s over

Womp womp, sorry to burst your bubble. This is when we keep our sleeves rolled up, as a large portion of Greenlane’s effort begins after the release.

If you’re working with an external vendor for the project, this is often their cue to transition governance. Whether internal/external, this is usually the cue for IT/dev to get involved with the QA.

Between transition flux and reviewing pretty much everything you reviewed prior to launch and in dev/staging, this will be a feast time for areas requiring attention.  It’s worth saying again – make sure tracking and other codes (like testing/heat-mapping) remain intact.

Links and Downloads

Beyond the lessons learned above, consider bookmarking the checklist links below to help with building your own master checklist. All of these lists have shaped how I approach redesigns, replatforms, and the like.  

A special shoutout and thanks to the sites/authors behind each. While not specific to site changes, Annie Cushing’s technical audit checklist is a must if you don’t have this box ticked. In addition to the new site, it’s always wise to conduct a technical audit of your current build to understand the strengths/weaknesses/opportunities (early in the process is even better).

 

Bonus #12: Don’t Stop Believing Making It Better

The entire Greenlane team rallies behind a phrase/idea: “Make it better.” It’s crucial to have that mindset during a redesign. Looking beyond making a better end product, embody this motto for the entire process. There are so many steps that could be made better, this time and the next.

Because whether you work agency (sooner) or in-house (later), there will be a next time. We’ve had some partners long enough to have been through second and third replatforms/redesigns. And we’ve seen first hand just how important it has been to archive past information for easy access.

Archived resources

It’s always a win when you can call back on your notes, even if you do have a typo in the email 😉

Well, there we have it. Are there any other lessons you would add? I encourage everyone reading to contribute via the comments below, Twitter, or email (jon@greenlanemarketing.com).



A Simple Tool For Saving Google Search Console Data To BigQuery

For a while now we have been wanting to find an easy way to log Google Search Console (GSC) Search Analytics data for managed websites. Google has mentioned several times that more data is coming to GSC, but has been elusive when pinning down a date. There are many reasons to want to collect GSC data for yourself, including:

  • Google Search Console only returns 1,000 rows and has a 90-day limit of historical data.
  • Making the data available to other tools for further manipulation.
  • Who knows what projects / data Google will decide to sunset or reduce access to.
  • Just being a cool Technical SEO and having what other people don’t.

In reviewing options, we realized it would be fairly easy to just move the GSC data to BigQuery. The obvious advantages of BigQuery are:

  • Same Python API library as GSC API.
  • BigQuery data is available to Google Data Studio.
  • The pricing of BigQuery storage is ~ $24/mo for 1TB of data.
  • It is easy to interact with BigQuery via their user interface with SQL-style queries, CSV, and Google Sheets integration.

We began by researching the various tools of Google Cloud (we are much more familiar with AWS) and quickly landed on using Google App Engine (GAE) along with their very sweet integration of cron actions.  In addition, GAE has credential-based access via service accounts, which means that we were able to build a tool without the need for browser-based authentication.

The solution we came up with is located here (we are giving it away free to the SEO community): https://github.com/jroakes/gsc-logger.

From the Readme file, the script:

This script runs daily and pulls data as specified in config.py file to BigQuery. There is little to configure without some programming experience.
Generally, this script is designed to be a set-it-and-forget-it in that once deployed to app engine, you should be able to add your service account email as a full user to any GSC project and the data will be logged daily to BigQuery. By default the data is set to pull from GSC 7 days earlier every day to ensure the data is available.

The tool stores the following fields (currently restricted to US traffic, but easy to update in config.py) daily:

  • date
  • query
  • page
  • device
  • clicks
  • impressions
  • ctr
  • position

It will also try to grab 5,000 rows in each gulp and keep pulling and saving until fewer than 5,000 rows are returned, signaling that all the data has been retrieved.

With all that said, let’s get started showing you how to implement it. As a warning, to follow the info below, you should have some development experience.

Setting Up Google App Engine Project

Screenshot of Google Cloud Platform - Google Search Console Logger

  1. Navigate to Google Cloud Platform Console and Create a project.
  2. In this example, we named it gsc-logger-demo.
  3. At this point, go ahead and link a billing account with your project.  You can find billing by using the hamburger menu on the top left.
  4. Click on APIs & Services from the same hamburger menu.  Search for and enable the BigQuery API and the Google Search Console API.
  5. Then create a Service Account by going to APIs & Services > Credentials and clicking Create credentials > Service account key. Select New service account and give it a succinct name (we used GSC Logger Demo in this demo). Select the role Project Owner, and leave JSON selected as the key type.  Then click Create.  A JSON file will be downloaded by your browser; save this for later.

Getting into the code

Google Cloud Shell for GSC logger

Most of the steps below can be done from within Google Cloud Platform using the built-in Cloud Shell.  After launching Google Cloud Shell, follow these steps:

  1. Download the repo:
    git clone https://github.com/jroakes/gsc-logger.git
    
  2. Upload your credentials file (the one you downloaded earlier when creating a service account):Upload file to Google Cloud Shell
  3. Move this file into your credentials directory:
    mv gsc-logger-demo-e2e0d97384ap.json gsc-logger/appengine/credentials/
    
  4. Move to the appengine directory:
    cd gsc-logger/appengine
    
  5. Open the config file:
    nano config.py
    
  6. Edit the CREDENTIAL_SERVICE file name to match the file you just uploaded.
  7. Update the DATASET_ID to something you like.  Only use letters and underscores.  No spaces.
  8. Edit GSC_TIMEZONE to match your current timezone.
  9. There are two other editable items here, ALLOW_CRON_OPEN and HIDE_HOMEPAGE. These are commented for what they do, but this should ideally be adjusted after testing.
  10. After editing, hit CTRL+x, y to save modified, and enter to keep the same file name.
  11. While still in the appengine directory, type the below to initialize your project.  Use the project name you selected earlier (ours was gsc-logger-demo):
    gcloud config set project <your-project-id>
    
  12. Then type the below to install requirements:
    pip install -t lib -r requirements.txt
    
  13. Then create a new Google App Engine App:
    gcloud app create
    

    Select the region of your choice. We chose us-east-4.

  14. Finally, you are ready to deploy your app:
    gcloud app deploy app.yaml cron.yaml index.yaml
    
  15. Answer Y to continue.
  16. The app should take a minute or so to deploy and should output a URL where your app is deployed:
    Updating service [default]...done.
    Waiting for operation [apps/gsc-logger-demo/operations/9310c527-b744-4b7c-b6b6-00a79b6c28de] to complete...done.
    Updating service [default]...done.
    Deployed service [default] to [https://gsc-logger-demo.appspot.com]
    Updating config [cron]...done.
    Updating config [index]...done.
    
  17. You should now be able to navigate to the Deployed service url in your browser (ours in this demo is: https://gsc-logger-demo.appspot.com)
  18. Try going to your app’s homepage (image below) and the /cron/ page (ours in this demo is: https://gsc-logger-demo.appspot.com/cron/) once.  The /cron/ page should return:
    {"status": "200", "data": "[]"}
    

    It is important to hit the /cron/ page once so that your service email can be initialized with your Google account.

Adding Sites

Google Search Console Logger Main Screen

If all went well, you should see the screen above when navigating to your deployed App Engine URL.  You will notice there are no sites active.  To add sites to pull GSC data for, simply add your service account email as a full user in GSC.  For convenience, the email is listed on your app’s homepage.

To add a user, navigate to your Dashboard for a GSC account that you have ownership access to, click on the gear icon in the upper right, and click Users and Property Owners.  Then add a new user according to the image below.

Google Search Console Add User

Once the user is connected, you should see the site listed when you refresh your app page.

Site added to GSC Logger

Next Steps

Now that the app has been deployed, it should download your GSC data to BigQuery every 24 hours based on the cron functionality in Google App Engine.  A few things to explore next:

  • Explore your data in BigQuery: https://bigquery.cloud.google.com. From BigQuery, you can run database queries and save to CSV, or save to Google Sheets.  You can also access the historical data in your own platforms via the API.
  • Try hooking up your BigQuery data to Google Data Studio.  Google provides easy integration with BigQuery from their data sources.  Simply add a BigQuery data source and make it available in your reports.
  • Verify your cron jobs in GAE: https://console.cloud.google.com/appengine/taskqueues/cron. You can run your cron job from this link or you can manually go to /cron/ from your browser.

For security, you’ll want to go back and edit the config.py file using the steps above and adjust the settings for ALLOW_CRON_OPEN and HIDE_HOMEPAGE, primarily ALLOW_CRON_OPEN.  Setting this to False means that only Google App Engine will be able to execute the cron function, and direct calls to your /cron/ endpoint should result in an Access Denied response.  robots.txt is set to disallow: / for this repo, so it should not be findable in Google, but you want to be careful about exposing your managed sites, so the homepage visibility is up to you.

If you want to say thank you, please share on Twitter, follow Adapt Partners on Twitter, and/or suggest improvements via Github.

Thanks to Russ Jones and Derk Seymour for giving great feedback on the repo.

Update: I emailed John Mueller (John is amazing BTW) to ask if there was anything to be concerned with from Google’s standpoint in running this tool.  He said, “Go for it,” and “For the next version, you might want to grab the crawl errors & sitemaps too (with their indexed URL counts). I think that’s about it with regards to what you can pull though.”


“My Best Career Advice” from the Analytics Influencers

No one in my generation dreamed of a career in digital analytics. It wasn’t an option for pre-Urchin children. We dreamed of being firefighters and doctors and, if you were like me, backup dancers for Michael Jackson.

Lucky for music lovers, my aspirations moved away from ruining the King of Pop’s entourage. Instead, I grew enamored with the internet. An infinite creative canvas, uniquely accessible and measurable, with digital metrics — hits, sessions, users — that quantified and, thereby, empowered the impact of our online investments.

In other words, I started to become a digital marketer.

That should sound familiar — after all, you’re reading this. Maybe you were more into Springsteen or Swift, but the premise was the same. Your interests led you down a path that eventually manifested itself as web analytics. And you aren’t alone.

Young professionals are flocking to careers in web and mobile analytics for the same reason that I did. This article is designed to help them start or continue their journey. It includes a collection of career advice from some of the biggest names and influencers in analytics.

  • Krista Seiden, Google

    Krista Seiden is an Analytics Advocate and Product Manager at Google. Krista’s resume speaks for itself. But she speaks for herself, too, at conferences around the world and on her blog, Digital Debrief.

  • Alex Moore, LunaMetrics

    Alex Moore is Director of Analytics & Insight at LunaMetrics. Alex leads consulting initiatives in analytics and data science and is a national trainer in Google Analytics and Google Tag Manager.

  • Matt Petrowski, United States Postal Service

    Matt Petrowski is Digital Analytics Program Manager at United States Postal Service. Matt and his team transform website traffic metrics from USPS.com into meaningful, decision-making marketing insights.

  • Annie Cushing, Annielytics.com

    Annie Cushing is Chief Data Officer at Outspoken Media and founder of Annielytics. Annie is a usual suspect at digital marketing conferences and frequent contributor to industry publications, including Search Engine Land and Moz Blog.

  • Adam Singer, Google

    Adam Singer is an Analytics Advocate at Google and editor of The Future Buzz. Adam presents 15-20 times a year at the most prestigious conferences on digital marketing, PR, and analytics.

  • Michael Bartholow, LunaMetrics

    Michael Bartholow is Manager of Digital Marketing Strategy at LunaMetrics. Michael is an industry advocate of data-driven marketing, presenter at Inbound and SMX, and national Google AdWords trainer.

  • Khalid Saleh, Invesp

    Khalid Saleh is CEO of Invesp, a usability and conversion optimization firm, and co-author of “Conversion Optimization: The Art and Science of Converting Prospects into Customers.”

  • Elena Alikhachkina, Johnson & Johnson

    Elena Alikhachkina is Global Head of Analytics at Johnson & Johnson. Her insight was originally published here.

  • Russell Walker, Kellogg School of Management

    Russell Walker is a professor at the Kellogg School of Management and author of “From Big Data to Big Profits: Success with Data and Analytics” and other books. His insight was originally published here.

  • Avinash Kaushik, Google

    Avinash Kaushik is Digital Marketing Evangelist for Google and author of “Web Analytics 2.0” and “Web Analytics: An Hour A Day.” His insight was originally published here.

If You Were Starting Your Career in 2017, What Would You…

To help guide future analysts young and old who are interested in the industry, I’ve asked for direct feedback from industry leaders as well as curated existing advice around two important points:

  • If you were starting your career in 2017, what would you do exactly the same?
  • If you were starting your career in 2017, what would you do totally differently?

Two simple questions with tremendous impact. Here is a summary of their advice, with some of my analysis and thoughts along the way.

Get Technical

We can’t analyze what we can’t track, and tracking requires a technical infrastructure. Anyone can look at a graph, but only analysts with strong technical skills can cull the data to create it or understand the underlying processes to interpret it.

But developing those skills is intimidating. Analytics was not an available course or major on university campuses, so most of us were self-taught. That’s one of the things that many of the experts referenced.

“My path led from web development to SEO to paid search then, finally, Analytics. That continuum provided a broad context to the digital field, and at least an entry point for just about any conversation.”

Alex Moore

“I often feel limited by not having a development background, which can be frustrating. If I had a college do over I would absolutely study data science or computer science.”

Annie Cushing

There are many schools of analytics, and for a long time the skills necessary for web analytics focused mainly on collection: how do we get the information off the page and into a tool like Google Analytics? Changes over the years have made this part of analytics easier, as website platforms have risen in popularity and tag management tools like Google Tag Manager have lowered the technical barriers to entry.

While knowledge of front-end technologies is still vital, the focus is shifting toward analysis and evaluation, or mining the data for results, which overlaps more broadly with other analytics disciplines. While this shift couldn’t necessarily have been predicted, many commented on the need for deeper technical skills.

“I would begin my development-to-marketing path, not with HTML and JavaScript, but with Python, R, and Java. I wish I had seen the machine learning revolution coming ten years ago. Machine learning will be the great litmus test among agencies between those providing mere “reporting” and insight. With a solid foundation in these technologies, a young aspiring analytics professional in 2017 will be able to crack open data in ways that a human being literally cannot, and that puts these newcomers at a huge advantage.”

Alex Moore

“I would have had a much better start in the digital field with a more technical background. I don’t think I would have enjoyed a major in computer science, because it’s not really my strong suit, but a minor would be so incredibly helpful to what I do these days.”

Krista Seiden

Most of us learned technical skills the hard way: break it, read Stack Overflow threads, attempt to fix it. Someone could have saved us the frustration by encouraging experimentation with functions during daycare.

Sidenote: This book is actually a great primer for a non-technical person looking to get started with Google Tag Manager and HTML!

Understanding how things work is so important, even if you’re not the one writing code. Especially if you’re not writing the code. Regardless of your role, you will need to be able to work with others to evangelize analytics and empathize with their goals.

“I would have spent more time actively engaging development teams about the importance of what we do. As we continue to type and talk our way through the IoT, we need to ensure that information can be found, adapted, and interpreted by all stakeholders.”

Matt Petrowski

Marketers and developers have an infamously complicated relationship that can feel more like a House of Cards episode than anything exhibiting teamwork. Analysts with a technical background can be valuable mediators between the two, serving as a trusted advisor with expertise in both areas.


Crave Experience

Resumes in a stack begin to blur after the second dozen. Intangibles like enthusiasm and thought leadership don’t jump off the page like black-inked experience, so recruiters take shortcuts to uncover them.

LunaMetrics’ About Us page reveals many parallels amidst such diverse backgrounds. Almost all of us have personal projects that have added valuable experience with experimentation, promotion, and skills beyond our official job titles.

More importantly, varied experience leads to wisdom. Anyone can learn to write a line of code or create a Google Analytics filter. It’s the stories surrounding the lesson that add value to your career and extend your trajectory.

“You can learn everything there is to learn about fishing in a book, or at a University. You won’t actually get any good unless you grab that pole and sit for hours on end on the water… Go get a site. Your mom’s. Favorite charity’s. Your friend’s business. Your spouse’s sibling on whom you have a crush. Or. . . start your own!”

Avinash Kaushik

“I am continually curious about new technology, new digital platforms, and new ideas. I sign up for new products, implement them and play around with them, and then compare and contrast to what I know. Understanding the digital landscape is key — it’s how I got into Google Analytics in the first place. While I was working at Adobe, I decided to implement Google Analytics on my blog to expand my analytics knowledge set.”

Krista Seiden

“I spend 25% of my time learning something new and it used to be a lot more. You have to be constantly learning and adding to your arsenal.”

Khalid Saleh

Khalid is always reading something: new books, new blog posts. His company also keeps a weekly meeting where every team member is expected to bring in some piece of new information they learned and share it with everyone else.

Many professionals are quick to point out that knowledge beyond traditional analytics is essential, too.

“Almost all of your career success will not be sourced from your ability to build pivot tables in Excel… rather it will come from two abilities: a) your business knowledge [and] b) your emotional Intelligence.”

Elena Alikhachkina

“If your goal is to participate in leading and transforming an organization, it will require more than writing code and doing analysis… If you work for a company that manufactures goods, go visit the factory. Learn how things get done. Learn about the processes that you are modeling.”

Russell Walker

Acquiring knowledge is a science. Turning it into experience is an art, and that is a learned skill that takes most people years to develop. Often it’s not something a professional can do on their own, in their own head. It requires the right environment. Sometimes it’s the right peer network. Sometimes it’s the right company. Sometimes it’s the right clients.

“I would 100%, without question, start my career again at an agency — the diversity of projects and clients, and the expertise available for osmosis from every colleague and every smart client.”

Alex Moore

“Starting my career on the agency side was really a great decision. Agencies are the ones who execute much of the actual hands on, tactical marketing work, which you need to spend years working on before you truly understand developing higher level strategy work.”

Adam Singer

Agencies provide a tremendous amount of variety and flexibility. But that’s not the only way to learn. I’ve also found that entrepreneurial and nonprofit environments have similar advantages. Resource scarcity, while typically not something we strive for, can force us to be especially creative. Some of my most influential professional experience was derived from desperation — “Well, this has to be done and there’s no one else to do it…”


Pursue Passion

Career passion is a calling for some people — they can’t imagine doing anything else. For others, it’s a search. Our analytics experts spoke to both sides.

“Agencies give you a view into many different industries and companies so you can figure out what work you’re really passionate about to pursue.”

Adam Singer

“I’m extremely passionate about what I do. Analytics, and more broadly, digital marketing, is something I get excited about, to the point that I love to talk to my family and friends about it (likely too much so, in their opinions).”

Krista Seiden

Passion, and standing for something generally, goes a long way. Krista is a great example. She has put a tremendous amount of work into the #WomenInAnalytics movement to help elevate women in this field and make it a more inclusive space.

“I’m driven by my passion for the field, but also by the knowledge that many people out there haven’t had the opportunities I have had to dive into it, and I want to help them do so any way I can.”

Krista Seiden

Passionate people live and breathe what they do, all day every day. Whether starting a career or looking to take it to the next level, remember that people (especially recruiters!) are drawn to enthusiasm.

“The best digital marketers advocate 24 hours a day. What are you advocating for in your spare time? Do you run Facebook ads for a family business or do volunteer work with a nonprofit’s website? Do you run an Etsy shop or a YouTube channel? Passion in the evening and weekends translates to passion for in-house and client work.”

Michael Bartholow

Last winter I spent a cold February weekend hacking together Google Assistant, Google Analytics and my FitBit in an effort to gamify my life. Now I’ll be the first to admit that normal people don’t do that. Normal people participate in winter sports and watch Game of Thrones.

Normal people also don’t love their job.


Take Risks

A scan of the experts’ LinkedIn profiles reveals something interesting. Nearly all of them include career paths (or detours) that are not linear. They don’t follow a “perfect career progression” that you might see in a Tony Robbins seminar. Most of these thought leaders pursued passion projects, donated their time to causes, and contributed to the conversations around them. They took risks. And, whether arguably selfless or selfish or silly, this experience advanced them.

“I took a lot of risks early on in my career but, looking back, I think I could have done even more to push the envelope and try things no one else had done yet. When you are young your mind is basically free of the little voice telling you ‘This is a crazy idea, I shouldn’t do it.’ Take advantage of that!”

Adam Singer

Following someone else’s lead without questions or falling into your own routine can be dangerous. As an analyst, your goal should always be why and how, instead of simply what.

“I focused a lot on reporting and, boy, I could get very creative generating more and more reports with tons of data. But I lost track of what these reports are telling me. Reports without actionable insights from them are useless.”

Khalid Saleh

“I learned early the importance of anticipating questions before they are asked. There should be very few ‘whys’ and ‘hows’ left unanswered when it comes to discussions across teams.”

Matt Petrowski

This next quote, or quote of a quote, is perhaps the best advice of the group, and the best way I can think of to end this roundup.

“Even the things I tried that failed ended up providing such good lessons — success is a horrible teacher. As one of my former clients used to tell me, ‘Regret what you do, not what you don’t.’”

Adam Singer



Our comment section is typically filled with troubleshooting questions and technical caveats. If you’re a professional in the analytics industry, like many of our readers, I’d encourage you to share your own career advice below!


How I Explain Ranking Algorithms In SEO


Simplifying a massively complex algorithm sounds like a fool’s game. Google’s search algorithm is like a rope. A thick rope full of fibers. Some of the fibers (or smaller algorithms) are old, and some are new. Some are made for on-the-fly determinations, and some are looking for old school signals. A jungle of computation!

Think an SEO can deconstruct that algorithm? Even Google engineers are surprised by results these days. For some, it’s fun to chase algorithm updates and report on them. I definitely appreciate their helpful contributions to the SEO scene, but nobody has found the full cheat codes to rankings. In the old days, we got close. Today? We’re further away than ever.

But those of us who’ve been in SEO for a long time have become used to the unknowns. We can have opaque conversations about ranking algorithms because we collectively know where the blanks are. The missing pieces are part of our language. New SEOs, however, and especially companies who don’t speak SEO, could use a shortcut.

Here’s What We Know

We know Google factors more than 200 signals into a ranking, evaluating many of them in mere milliseconds. Those signals change often. Public patents suggest some signals they could be looking for, but for the most part, we don’t know anything for sure. By design, everything is in flux.

That’s part of the game. SEO is not for the faint of heart.

We know the standard signals Google has spilled throughout the last two decades (yes – it’s been that long). Keywords in title tags. Page load speed. The context in URLs. Backlinks. And several more, any of which Google reserves the right to remove, dampen, or amplify at any point without even a whisper. Need a refresher? Check out Moz’s most recent ranking factors study.

In the SEO industry, we test.  We try new things and come up with theories.  When you hire an SEO, you are hiring someone who speaks the language, learns from the trades, and ideally experiments constantly.  I don’t think an SEO can actually be an expert.  It’s like being an expert on snowflake patterns.

My Analogy

So with such a broad science, I still find it valuable to paint a broad picture with broad strokes.  Because a little understanding is better than no understanding.

Imagine, if you will, a checklist of items that Google has in its system to check each time it processes a page for rankings.  One signal may look like this (using a scale of 1 – 10 for the sake of easy math):

Signal 1

Now, consider Google is measuring you against other websites:

Signal 2

If the internet were only 3 web pages, and Google only used one signal, your site would ideally be seen as the best result.

Unfortunately, Google is factoring in many signals (pretend I added more than 200 below; I simply don’t have the patience to draw that many):

Signal 3

And, Google is comparing your signals to all of the other websites out there:

Signal 4

You can see where this is going.  To add to the complexity, Google overlays dampening, conditional rules, and “if this then that” mechanics to finely tune the results, keeping it much more complex than my very simple illustrations.  Google has told us content, links, and machine learning (called RankBrain) are the biggest factors.  So, we’re looking at a much bigger calculation at this point.  Below is an example where specific rules would impact certain signals differently.

Signal 5
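If you prefer numbers to pictures, here is a toy sketch of the same analogy: each page gets a 1–10 score per signal, each signal carries its own weight, and the highest weighted total wins. The signals, weights, and scores below are entirely made up for illustration – this is the analogy in code, not Google’s actual algorithm.

# Toy illustration of the analogy only -- not Google's actual algorithm.
# Each page scores 1-10 per signal; each signal has its own (made-up) weight.
signal_weights = {'content': 3.0, 'links': 2.5, 'page_speed': 1.0, 'title_keywords': 0.5}

pages = {
    'your-site.com':    {'content': 9, 'links': 6, 'page_speed': 8, 'title_keywords': 10},
    'competitor-a.com': {'content': 7, 'links': 9, 'page_speed': 5, 'title_keywords': 8},
    'competitor-b.com': {'content': 8, 'links': 7, 'page_speed': 9, 'title_keywords': 6},
}

def total_score(signals):
    # Weighted sum of the page's signal scores.
    return sum(signal_weights[name] * score for name, score in signals.items())

# "Rank" the pages by weighted total, best first.
for page in sorted(pages, key=lambda p: total_score(pages[p]), reverse=True):
    print(page, total_score(pages[page]))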

Get the picture? Is your head starting to hurt? The complexities continue layering upon each other. Those Stanford boys (Larry Page and Sergey Brin) were brilliant, and they handed their original Google ranking algorithms over to more brilliant people with the request to keep improving their idea.

What’s The Simple, Actionable Takeaway?

With all this in mind, the simple takeaway is… do your best to get a 10 in every single category, no matter how minute you may believe it is.

Seriously – that’s really my takeaway.

When you boil it down, SEO is really about being better in the appropriate signals than your competitors. Our ultimate goal is to score as high as possible in every area, thus knocking competitors down a few pegs in the ranking results. Since we’ll never know the exact signal mix for any query, I find less value in forensic SEO, and more value in simply pushing forward with creating the best web page possible on every front.

If you’re a company working with an SEO, lean on them to show you all the signals the industry either knows about or believes could be real.  And work with them to implement the highest quality changes in each category.

If you’re an SEO, continue to listen, practice, and learn all the great things the industry provides, as well as discovering your own theories.  SEO is an experiment, so treat it as such.

And lean on the earlier analogy when you’re facing tough questions:

“Bill, we implemented all the site speed recommendations to the absolute best of our ability.  It’s been 4 weeks and we’re not ranking better.  Why?”

“Well client, let me give you an analogy…. [describe visualization above].   And that, client, is why we still have a lot of work to do until our overall signal mix is better than everyone else’s on the internet who ranks for a given keyword.  Strap in, Bucko – the ride’s not over.”

See folks, this is why SEO has gotten harder in the last few years. The signal complexity has grown! This is why it often takes much longer to achieve goals, and why the website owner’s participation is vital.



New AdWords Features: How To Stay Up To Date

Paid Search, and Google AdWords in particular, moves fast. Just when you think your account build is complete now that Customer Match lists and Dynamic Search Ads have been added, you open Twitter and find a new feature you didn’t even know about yet! Keeping up to speed on changes is a big part of the job. Here are the go-to sites to check out so that you’re never in the dark.

Search Engine Land
“When You Need to Know Right Now”

There are tons of news sites out there that cover Paid Search, but I always turn to Search Engine Land. Ginny Marvin does an excellent job of keeping readers abreast of changes in AdWords minutiae, like in a recent piece on changes to the mobile product card unit ads.

Google AdWords Official Blog
“When You Need to Learn More About a Product Announcement”

Inside AdWords, the official blog for Google AdWords, has been around for years via the Blogspot platform. This is the spot to learn about official product announcements and features.

Google AdWords New Features Page
“When You Need a Recap”

AdWords makes updates basically all the time. The running news items can start to feel jumbled in your head. Thankfully, the product team recognized this and launched a New Features section in the Fall of 2016. This helpful page gives you a timeline of expanded features and a link to the support documentation for each item. Invaluable!

LunaMetrics
“When You Need a Deep Dive”

Shameless self promotion here. We have two main ways that we help companies that use AdWords learn more and get value out of the tool. This applies if you manage AdWords yourself or if you lean on an agency to help manage your accounts.

LunaMetrics Blog

We don’t write up-to-the-minute news about AdWords changes on the LunaMetrics blog as there are better outlets around for that. We do have a knack for instructive posts that exhaust a particular topic. Here are a few highlights:

Google AdWords Search Term Report Action Items

Learn how to effectively use the AdWords Search Term Report to cut wasteful spending and optimize your AdWords campaigns by asking yourself some questions.

Increase Revenue with Strategic Audiences in Google Analytics & Google AdWords

Learn how to create & bid on targeted audiences to reduce wasted spend and increase revenue in this definitive guide. Maximize your AdWords spend today!

Digging Deeper with Attribution Reports in Google AdWords

Not sure about the tools available for attribution modeling and analysis in Google AdWords? Check out our handy guide to attribution reports in AdWords!

Google AdWords Cross-Device Reporting & Attribution

Visit the AdWords attribution section more frequently. The device attribution reports there are a valuable resource. Let’s look at three device reports.

LunaMetrics In-Person Training

Need more? We’ve got our two-day Google AdWords training that we hold around the country to both ease people into AdWords and make sure they’ve got a strong footing when they leave. There are many other trainings out there, so you’ll understandably want to shop around and compare reviews and agendas to make sure the course you take is worth the investment. We don’t offer online courses, but we certainly recognize that there are pros and cons to both online and offline courses.

As a trainer, I enjoy taking my years of experience with clients big and small and using that to help people see the potential in AdWords for their unique business. No two courses are exactly the same, and it’s great to watch attendees making business connections and sharing tips during the breaks and workshop segments of the class.

Here are a few reviews we’ve received:

“The course was rich, relevant and clearly up-to-date on the latest Google AdWords topics and settings. I appreciated the examples, discussions and answers provided during the session.”

—Boston – Google AdWords 101

“So much information! I knew that there was stuff that I did not know but now I definitely am well-versed in all that’s available—now I have some practicing to do!”

—Chicago – Google AdWords 201

Google AdWords Best Practices Guide
“When You Need a Refresher”

This one is a favorite of those who have already attended the LunaMetrics AdWords training seminars. This single Best Practices page has some of the best planning tools out there for your AdWords account.

When you pop open any topic on that page, you get a great list of guides and checklists that can all be downloaded as PDFs for you to work through. Super handy!

Think With Google
“When You Need to Be Inspired”

Going beyond using the tool and the latest product updates, it’s important to actually see successes with the different features. Think with Google is my personal favorite site on today’s list. Packed with case studies, videos, downloads and ideas, this site is where Google docks its best advertising efforts and biggest client successes. For example, see what big brands are doing with RLSA and get inspired to try your own Remarketing Lists for Search Ads campaigns.


Google Analytics API v4: Histogram Buckets

Back in April of last year, Google released version 4 of their reporting API. One of the new features they’ve added is the ability to request histogram buckets straight from Google, instead of binning the data yourself. Histograms allow you to examine the underlying frequency distribution of a set of data, which can help you make better decisions with your data. They’re perfect for answering questions like:

  • Do most sessions take about the same amount of time to complete, or are there distinct groups?
  • What is the relationship between session count and transactions per user?

How It Really Works

Here’s how to use this new Histogram feature yourself with the API.

Note: we’re assuming you’ve got the technical chops to handle authorizing access to your own data and issuing the requests to the API.

Here’s what a typical query looks like with the new version of the API:

{ "reportRequests": [ { "viewId": "VIEW_ID", "dateRanges": [ { "startDate": "30daysAgo", "endDate": "yesterday" } ], "metrics": [ { "expression": "ga:users" } ], "dimensions": [ { "name": "ga:hour" } ], "orderBys": [ { "fieldName": "ga:hour", "sortOrder": "ASCENDING" } ] } ]
}

This query will return a row for each hour, with the number of users that generated a session during that hour for each row; simplified, it’d be something like this:


[
  ['0', 100],
  ['1', 100],
  ['2', 100],
  ['3', 110],
  ['4', 120],
  ['5', 140],
  ['6', 220],
  ['7', 300],
  ...
]

Wouldn’t this data be more useful if it were dayparted? Let’s use the histogram feature to bucket our data into traditional TV dayparts:

Early Morning 6:00 AM – 10:00 AM
Daytime 10:00 AM – 5:00 PM
Early Fringe 5:00 PM – 8:00 PM
Prime Time 8:00 PM – 11:00 PM
Late News 11:00 PM – 12:00 AM
Late Fringe 12:00 AM – 1:00 AM
Post Late Fringe 1:00 AM – 2:00 AM
Graveyard 2:00 AM – 6:00 AM

To request our data be returned in these new buckets, we’ll need to make two modifications to our query from before. The first change is to add a histogramBuckets array to the ga:hour object in our dimensions array. We’ll populate it with ["0", "2", "6", "10", "17", "20", "22", "24"]. Each number in this sequence marks the beginning of a new histogram bin.

The end of each bin is inferred from the number that follows it, and if values exist below the first bin’s minimum, an additional bin will be tacked on at the beginning to contain those values. For example, if we had started our histogramBuckets with "2" instead of "0", the API would add a new bucket named "<2" to the beginning, and it would contain the values for matching rows where the ga:hour dimension was 0 or 1. The second change we need to make is to add "orderType": "HISTOGRAM_BUCKET" to the orderBys portion of our request.

{ "reportRequests": [ { "viewId": "70570703", "dateRanges": [ { "startDate": "30daysAgo", "endDate": "yesterday" } ], "metrics": [ { "expression": "ga:users" } ], "dimensions": [ { "name": "ga:hour", "histogramBuckets": [ "0", "2", "6", "10", "17", "20", "22", "24" ] } ], "orderBys": [ { "fieldName": "ga:hour", "orderType": "HISTOGRAM_BUCKET", "sortOrder": "ASCENDING" } ] } ]
}

Here’s what the response for that query looks like for some data from a personal site:

{ "reports": [ { "columnHeader": { "dimensions": [ "ga:hour" ], "metricHeader": { "metricHeaderEntries": [ { "name": "ga:users", "type": "INTEGER" } ] } }, "data": { "rows": [ { "dimensions": [ "0-1" ], "metrics": [ { "values": [ "31" ] } ] }, { "dimensions": [ "2-5" ], "metrics": [ { "values": [ "113" ] } ] }, { "dimensions": [ "6-9" ], "metrics": [ { "values": [ "155" ] } ] }, { "dimensions": [ "10-16" ], "metrics": [ { "values": [ "247" ] } ] }, { "dimensions": [ "17-19" ], "metrics": [ { "values": [ "52" ] } ] }, { "dimensions": [ "20-21" ], "metrics": [ { "values": [ "25" ] } ] }, { "dimensions": [ "22-23" ], "metrics": [ { "values": [ "21" ] } ] } ], "totals": [ { "values": [ "644" ] } ], "rowCount": 7, "minimums": [ { "values": [ "21" ] } ], "maximums": [ { "values": [ "247" ] } ], "isDataGolden": true } } ], "queryCost": 1
}
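If you’d rather issue that request from a script, here’s a minimal Python sketch using the google-api-python-client library with a service account. The key file path and view ID are placeholders to swap for your own; the request body mirrors the dayparted query above.

# Minimal sketch: sending the dayparted histogram request with the Python
# client library. Assumes a service account key with read access to the view;
# the key file path and view ID below are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

credentials = service_account.Credentials.from_service_account_file(
    'service-account-key.json',
    scopes=['https://www.googleapis.com/auth/analytics.readonly'])
analytics = build('analyticsreporting', 'v4', credentials=credentials)

daypart_query = {
    'reportRequests': [{
        'viewId': 'VIEW_ID',
        'dateRanges': [{'startDate': '30daysAgo', 'endDate': 'yesterday'}],
        'metrics': [{'expression': 'ga:users'}],
        'dimensions': [{'name': 'ga:hour',
                        'histogramBuckets': ['0', '2', '6', '10', '17', '20', '22', '24']}],
        'orderBys': [{'fieldName': 'ga:hour',
                      'orderType': 'HISTOGRAM_BUCKET',
                      'sortOrder': 'ASCENDING'}],
    }]
}

response = analytics.reports().batchGet(body=daypart_query).execute()
for row in response['reports'][0]['data']['rows']:
    print(row['dimensions'][0], row['metrics'][0]['values'][0])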

Some Downsides

As of this writing, the chief advantage of this feature is that it can save you a little logic and time when your own application wants to use histograms with your Google Analytics data. There’s no “give me X buckets” though – you have to know the range of your data ahead of time. Additionally, data is coerced into an integer, so floats are out.

That means if you want to generate bins dynamically (like we’re doing in our example), you need to first get the range of the data from Google Analytics, then calculate those buckets and send a second request. You may wish to simply request the raw data and calculate the histogram yourself.
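If you take the do-it-yourself route, the client-side math is simple enough. Here’s a rough sketch in Python, assuming you’ve already pulled rows for a single numeric dimension (ga:sessionCount, for example) with an ordinary, non-histogram query; the number of buckets is up to you.

# Sketch: building equal-width buckets client-side from an ordinary report,
# instead of making two round trips to the API. 'rows' is the
# report['data']['rows'] list from a query with one numeric dimension and
# one metric; bucket width is derived from the observed range.
from collections import Counter

def client_side_histogram(rows, num_buckets=5):
    values = [int(row['dimensions'][0]) for row in rows]
    counts = [int(row['metrics'][0]['values'][0]) for row in rows]
    low, high = min(values), max(values)
    width = max(1, (high - low + 1) // num_buckets)
    histogram = Counter()
    for value, count in zip(values, counts):
        bucket_start = low + ((value - low) // width) * width
        histogram['%d-%d' % (bucket_start, bucket_start + width - 1)] += count
    return dict(histogram)

Buckets computed this way won’t always be as tidy as hand-picked boundaries, but they save the second API call.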

Hopefully Google will add some more functionality to this feature to simplify dynamic binning, too. I’d also welcome the ability to create histograms within the Google Analytics interface! Hopefully this API feature is a sign that something like that is in the works.

There is a limited set of dimensions that can be queried in this manner; here’s the complete list:

Count of Sessions ga:sessionCount
Days Since Last Session ga:daysSinceLastSession
Session Duration ga:sessionDurationBucket
Days to Transaction ga:daysToTransaction
Year ga:year
Month of the year ga:month
Week of the Year ga:week
Day of the month ga:day
Hour ga:hour
Minute ga:minute
Month Index ga:nthMonth
Week Index ga:nthWeek
Day Index ga:nthDay
Minute Index ga:nthMinute
ISO Week of the Year ga:isoWeek
ISO Year ga:isoYear
Hour Index ga:nthHour
Any Custom Dimension ga:dimensionX (where X is the Custom Dimension index)

Great Example Use Cases

Wondering how you might use this feature? Here are some more examples to get your juices flowing:

  • Use Events to capture more accurate page load times and store the time in the label, then bin the times using the API.
  • Capture blog publish dates and see when blog posts peak in engagement
  • Look at months and transactions to identify seasonality
  • Compare Session Count and Revenue to see, in general, the number of sessions required to drive your highest revenue (a request sketch for this one follows the list).
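For instance, that last idea might be expressed as a request along these lines; the view ID and bucket boundaries are illustrative only, so adjust them to your own purchase cycle:

# Sketch: bucketing users by Count of Sessions against revenue.
# The view ID and bucket boundaries below are illustrative only.
session_revenue_query = {
    'reportRequests': [{
        'viewId': 'VIEW_ID',
        'dateRanges': [{'startDate': '90daysAgo', 'endDate': 'yesterday'}],
        'metrics': [{'expression': 'ga:transactionRevenue'}],
        'dimensions': [{'name': 'ga:sessionCount',
                        'histogramBuckets': ['1', '2', '5', '10', '25']}],
        'orderBys': [{'fieldName': 'ga:sessionCount',
                      'orderType': 'HISTOGRAM_BUCKET',
                      'sortOrder': 'ASCENDING'}],
    }]
}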

Have a clever use case of your own? Let me know about it in the comments.