The December roundup is a little later than usual, due to the December/January holiday. Here is what happened during the last month of 2017.

Chrome’s Ad-blocker

Benefit: High
Difficulty: Medium

Back on 1 June 2017, the Chromium team stated that it would stop showing all ads in Chrome (including those delivered by Google) on websites that are non-compliant with the Better Ads Standards. Google also published an extensive guide on better ad practices to coincide with the announcement.

The move forms part of the Coalition for Better Ads’ push to enforce a better ad experience for users; Chromium has already started restricting some ads for some users.

Google has since confirmed that Chrome’s built-in ad-blocker will be activated on 15 February 2018.

Action Point

Ensure that any advertisements served comply with the Better Ads Standards before the activation date. Violations are reported to webmasters via the Ad Experience Report in Google Search Console; once a violation has been fixed, the webmaster can submit the site for a review.

Failing to resolve violations will lead to ads on the site being blocked in Chrome, resulting in a loss of revenue.

Robots Directive Update

Benefit: Medium
Difficulty: Low

In a webmaster hangout, John Mueller of Google explained that a long-term noindex, follow robots directive will eventually equate to a noindex, nofollow directive.

When a noindex robots directive is placed on a webpage, Google lowers the page’s importance and reduces its crawl frequency. If Googlebot eventually stops crawling the webpage at all:

  • PageRank will not pass to any linked webpages from the original webpage.
  • The webpage will be removed from Google’s index.
Action Point

Review all webpages for the noindex robots directive.

If the links on noindex webpages are not crawled, the website’s overall PageRank may be affected.
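As a starting point, a simple check along the lines below can flag pages that carry a noindex directive, either in the X-Robots-Tag response header or in a robots meta tag. This is a minimal sketch: the URLs are illustrative, and the regex only covers the most common meta tag ordering.

```python
# Minimal sketch: flag pages carrying a noindex directive, either in the
# X-Robots-Tag response header or in a <meta name="robots"> tag.
# The URL list is illustrative; adapt the parsing to your own stack.
import re
import requests

URLS = [
    "https://www.example.com/",
    "https://www.example.com/category/widgets/",
]

for url in URLS:
    response = requests.get(url, timeout=10)
    header_directive = response.headers.get("X-Robots-Tag", "")
    meta_match = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
        response.text,
        re.IGNORECASE,
    )
    meta_directive = meta_match.group(1) if meta_match else ""
    if "noindex" in header_directive.lower() or "noindex" in meta_directive.lower():
        print(f"noindex found: {url} (header: '{header_directive}', meta: '{meta_directive}')")
```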

Check Log Files For Mobile-First Index

Benefit: High
Difficulty: Low

John Mueller confirmed in a webmaster hangout that server logs will indicate whether a website has been included in Google’s mobile-first index. If around 80% of Googlebot requests originate from the smartphone user agent, it is likely that the website is in the mobile-first index. Googlebot (smartphone) uses the same Googlebot user agent token as the desktop crawler, so to identify it, check the full user agent string:

Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Action Point

Review your server log files and compare the two user agents. Screaming Frog offers a simple-to-use Log File Analyser for low Googlebot traffic (fewer than 200,000 Googlebot page visits per month).
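For a quick first pass without a dedicated tool, a short script along the following lines can estimate the smartphone share of Googlebot requests. The log path and format are assumptions; for accuracy, you should also verify that the hits really come from Googlebot (for example via reverse DNS).

```python
# Minimal sketch: estimate the share of Googlebot requests coming from the
# smartphone crawler in an access log (common/combined log format assumed).
LOG_PATH = "/var/log/nginx/access.log"  # illustrative path

total_googlebot = 0
smartphone_googlebot = 0

with open(LOG_PATH, encoding="utf-8", errors="ignore") as log:
    for line in log:
        if "Googlebot/2.1" not in line:
            continue
        total_googlebot += 1
        # The smartphone crawler's user agent string contains "Mobile Safari"
        # alongside the Googlebot token; the desktop crawler's does not.
        if "Mobile Safari" in line:
            smartphone_googlebot += 1

if total_googlebot:
    share = 100 * smartphone_googlebot / total_googlebot
    print(f"Googlebot requests: {total_googlebot}, smartphone share: {share:.1f}%")
```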

If you need assistance with a high traffic Googlebot server log analysis, send us an email.

Getting Ready for Mobile-First Indexing

Benefit: High
Difficulty: Low

Typically, crawling, indexing, and ranking systems analyse the desktop version of a page, which has caused issues for users when the mobile version of a website is different to the desktop version. Mobile-first indexing will see the mobile version used for crawling, indexing, and ranking to improve the experience for mobile users.

There will be a significant increase in crawling by the Smartphone Googlebot user agent, and both the snippets in search results and the content on Google cache pages will be taken from the mobile version of a page.

Google will be evaluating sites independently on their readiness for mobile-first indexing and transitioning them once they are prepared. This process has already started for a handful of sites and it is being closely monitored by the search team.

There is no timeline for when it will be completed but Google provided the following tips:

  • Make sure the mobile version of the site also has the important, high-quality content.
  • Structured data is important for indexing and search features that users love; it should therefore feature both on the mobile and desktop version of the site.
  • Metadata should also be present on both versions of the site.
  • No changes are necessary for interlinking with separate mobile URLs (m.-dot sites). For sites using separate mobile URLs, keep the existing link rel="canonical" and link rel="alternate" elements between these versions.
  • Check hreflang links on separate mobile URLs. When using link rel="hreflang" elements for internationalisation, link between mobile and desktop URLs separately.
  • Ensure the servers hosting the site have enough capacity to handle potentially increased crawl rate.
Action Point

Sites that use responsive web design, or that correctly implement dynamic serving (including all desktop content and markup), generally don’t have to do anything in this instance.
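For sites that do use separate mobile URLs, a quick spot-check along the lines below can confirm that the canonical and alternate annotations between the desktop and mobile versions are in place. The URL pairs and regexes are illustrative, not production-ready, and can be extended to cover metadata and structured data parity as well.

```python
# Minimal sketch: for a separate mobile URL (m-dot) setup, verify that the
# desktop page declares rel="alternate" pointing at the mobile URL and the
# mobile page declares rel="canonical" pointing back at the desktop URL.
import re
import requests

URL_PAIRS = [
    ("https://www.example.com/products/", "https://m.example.com/products/"),
]

def find_link_href(html, rel):
    # Assumes the rel attribute appears before href; good enough for a spot-check.
    match = re.search(
        rf'<link[^>]+rel=["\']{rel}["\'][^>]+href=["\']([^"\']+)["\']',
        html,
        re.IGNORECASE,
    )
    return match.group(1) if match else None

for desktop_url, mobile_url in URL_PAIRS:
    desktop_html = requests.get(desktop_url, timeout=10).text
    mobile_html = requests.get(mobile_url, timeout=10).text
    alternate = find_link_href(desktop_html, "alternate")
    canonical = find_link_href(mobile_html, "canonical")
    print(f"{desktop_url} -> alternate: {alternate} ({'OK' if alternate == mobile_url else 'check'})")
    print(f"{mobile_url} -> canonical: {canonical} ({'OK' if canonical == desktop_url else 'check'})")
```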

16-Month Data Retention for Google Search Console Queries (Beta)

Benefit: High
Difficulty: Low

The search query date range now extends beyond the previous 90-day retention limit, up to 16 months. We noticed that the “Full duration” option currently returns around 12 months of data, but expect it to fill out to the full 16 months going forward.

The 90-day restriction still applies to Search Console API access.

Action Point

If you do not currently have access to the Google Search Console beta, check the top left corner of the profile for a link to “Try the new Search Console”.

You can also use the extended date range to analyse seasonal demand and to see how queries have changed over time.

If you need to store the data for longer than 16 months, send us an email for more information on how to do so.
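If you prefer to build your own archive, one approach is to export query data regularly via the Search Analytics API, which is itself still limited to the 90-day window at any one time, so exports need to run on a schedule. The sketch below assumes you already have OAuth credentials for the Search Console (webmasters v3) API; the property URL and output file are illustrative.

```python
# Minimal sketch: export Search Analytics query data for one day and append it
# to a local CSV, so history accumulates beyond Google's retention window.
# Assumes existing OAuth credentials ("creds") and google-api-python-client.
import csv
from googleapiclient.discovery import build

SITE_URL = "https://www.example.com/"   # illustrative property
OUTPUT_CSV = "search_queries.csv"       # illustrative output file

def export_day(creds, date):
    service = build("webmasters", "v3", credentials=creds)
    body = {
        "startDate": date,
        "endDate": date,
        "dimensions": ["query", "page"],
        "rowLimit": 5000,
    }
    response = service.searchanalytics().query(siteUrl=SITE_URL, body=body).execute()
    with open(OUTPUT_CSV, "a", newline="", encoding="utf-8") as handle:
        writer = csv.writer(handle)
        for row in response.get("rows", []):
            query, page = row["keys"]
            writer.writerow([date, query, page, row["clicks"],
                             row["impressions"], row["ctr"], row["position"]])
```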

Google Increases Length of SERP Snippets

Benefit: Medium
Difficulty: Low

Google has increased the length of snippets within search results, from around 160 characters to around 230 characters. Google does not appear to have published the change on the webmaster blog; instead, the news came through Search Engine Land.

A Google spokesperson did, however, confirm:

“We recently made a change to provide more descriptive and useful snippets, to help people better understand how pages are relevant to their searches. This resulted in snippets becoming slightly longer, on average.”

The meta description may therefore pull additional information from the webpage’s content (and markup), if it is deemed relevant to the user’s query.

Action Point

Danny Sullivan, Google’s public Search liaison, recommended not increasing meta description lengths, stating that snippet generation is “more of a dynamic process.” Instead, monitor rank tracking and check Google Search Console for changes in rank position and click-through rate.

Deprecating Old AJAX Crawling in Q2 2018

Benefit: Low
Difficulty: Medium

The old AJAX crawling scheme was introduced back in 2009 to make AJAX/JavaScript applications crawlable. The scheme accepted pages with either a hashbang (#!) in the URL or a “fragment meta tag”, and then crawled them via an ?_escaped_fragment_= version of the URL. That escaped version needed to be fully rendered and created by the website itself.
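For context, the scheme mapped the hashbang fragment into the _escaped_fragment_ query parameter, roughly as sketched below. This is a simplified illustration of the mapping, not the full specification.

```python
# Simplified illustration of how the old AJAX crawling scheme mapped a
# hashbang URL to the _escaped_fragment_ equivalent that crawlers requested.
from urllib.parse import quote

def escaped_fragment_url(url):
    if "#!" not in url:
        return url
    base, fragment = url.split("#!", 1)
    separator = "&" if "?" in base else "?"
    return f"{base}{separator}_escaped_fragment_={quote(fragment)}"

print(escaped_fragment_url("https://www.example.com/app#!page=products&id=42"))
# -> https://www.example.com/app?_escaped_fragment_=page%3Dproducts%26id%3D42
```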

Following JavaScript rendering improvements in Googlebot, Google stated in November 2015 that it was deprecating the old AJAX scheme. Google engineers have continued to improve JavaScript rendering since the deprecation notice and stated in December 2017 that Google will render pages using the old AJAX crawling scheme on its own servers, rather than relying on the website to provide a pre-rendered version.

This change means that Googlebot will render the older AJAX scheme’s #! URLs directly, making it unnecessary for the website owner to provide a pre-rendered version of the page. We still recommend updating to the current approach where possible, as relying on a deprecated scheme can still be problematic.

When Google makes the transition to rendering all JavaScript itself, monitor Google Search Console for the number of pages crawled by Googlebot, and check the number of URLs requested in the server logs.

One option would be to use server-side rendering. John Mueller, in the JavaScript Sites in Search Working Group, said:

“We realize this isn’t trivial to do. If you can only provide your content through a “DOM-level equivalent” pre-rendered version served through dynamic serving to the appropriate clients, then that works for us too.”

The Google testing tools have been updated to accept #! URLs:

  • The Mobile-Friendly Test: a quick way to fetch a page and render it with Googlebot (the smartphone version); you can do this even if you don’t have the site verified.
  • Fetch & Render in Search Console: check the desktop and smartphone versions (check that the full page loads, and watch for lazy-loading content).
Action Point

If a website is using AJAX-crawling, or a service that relies upon it, such as a pre-rendering service, take a sample of URLs from different page templates and test them directly.

For each page template test, set Chrome’s user-agent to Googlebot and check the Document Object Model (DOM) for rendered HTML anchor (a) elements, rel="nofollow" attributes, the title element, the meta description, the canonical link element, and structured data.

AJAX requests are performed after the initial DOM load and rely on a data source. There is an additional overhead in loading AJAX content on page load, which can bloat the initial page load time and reduce the overall number of pages crawled within the allocated budget.
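As a rough automated complement to the manual checks above, a script like the one below fetches each sample template with the Googlebot smartphone user agent and reports whether key elements are present in the returned HTML. The URLs are illustrative, and note that this only inspects the server response; for a fully rendered DOM you still need a headless browser or Google’s own testing tools.

```python
# Minimal sketch: fetch sample template URLs with the Googlebot smartphone
# user agent and report whether key SEO elements appear in the returned HTML.
# This does not execute JavaScript; it only checks the raw server response.
import re
import requests

GOOGLEBOT_SMARTPHONE = (
    "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile Safari/537.36 "
    "(compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)

SAMPLE_URLS = [
    "https://www.example.com/",
    "https://www.example.com/category/widgets/",
]

CHECKS = {
    "title element": r"<title>.*?</title>",
    "meta description": r'<meta[^>]+name=["\']description["\']',
    "canonical link": r'<link[^>]+rel=["\']canonical["\']',
    "anchor elements": r"<a\s[^>]*href=",
    "structured data (JSON-LD)": r'<script[^>]+application/ld\+json',
}

for url in SAMPLE_URLS:
    html = requests.get(url, headers={"User-Agent": GOOGLEBOT_SMARTPHONE}, timeout=10).text
    for label, pattern in CHECKS.items():
        found = bool(re.search(pattern, html, re.IGNORECASE | re.DOTALL))
        print(f"{url}: {label}: {'present' if found else 'missing'}")
```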

Rich Results & Testing Tools

Benefit: High
Difficulty: Low

There are many ways to mark up a website’s content to provide structured data that can trigger rich snippets, rich cards, or enriched results in the SERPs. Having multiple names for the same objects, however, has become confusing.

To consolidate all of the prior “rich” related terminologies, Google will now refer to rich snippets and other variants as “rich results”. The company is also introducing a new rich results testing tool, with the aim of simplifying the structured data testing. The new tool provides a more accurate reflection of the page’s appearance on Search and includes improved handling for structured data found on dynamically loaded content.

The new testing tool focuses on the structured data types that are eligible to be shown as rich results. It allows you to test all data sources on your pages, such as JSON-LD (Google’s recommendation), Microdata, and RDFa.

Action Point

Use the new rich results testing tool to help with ongoing maintenance. If you are looking to get started with rich results, here are Google’s guidelines on marking up website content.
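If you are just getting started, the snippet below sketches what a minimal JSON-LD block (Google’s recommended format) might look like for a hypothetical article page. The values are placeholders; real markup should follow Google’s guidelines for the specific rich result type you are targeting.

```python
# Minimal sketch: generate a JSON-LD structured data block for a hypothetical
# article page. All values are illustrative placeholders.
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "December SEO Roundup",
    "datePublished": "2018-01-15",
    "author": {"@type": "Person", "name": "Jane Doe"},
}

script_tag = f'<script type="application/ld+json">{json.dumps(article, indent=2)}</script>'
print(script_tag)
```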

WordPress Brute Force Attack

Benefit: High
Difficulty: Medium

A massive distributed brute force attack targeting WordPress sites occurred on 18 December at 03:00 UTC. The attack was broad and used a large number of attacking IPs, with each one generating a vast number of attacks. This was the most aggressive campaign seen to date and peaked at over 14 million attacks per hour.

Action Point

Many articles and plugins have been created to help protect WordPress sites from brute force attacks. Here are some simple measures to get you started:

1) Enable two-factor authentication on all accounts.
2) Limit the login attempts per account (see the sketch after this list for the basic idea).
3) Hide your login page and return an error header response.

More advanced:

4) Set up a VPN and only allow logins from specific IP addresses.
5) Use a Content Delivery Network such as Cloudflare.
6) Use a honeypot on the login page.
7) Enable a login time delay (4+ seconds).
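To illustrate the idea behind limiting login attempts, the sketch below tracks failed attempts per IP in a rolling window and locks out after a threshold. WordPress plugins implement this (and much more) in PHP; this is just the concept, with hypothetical names and thresholds.

```python
# Illustrative sketch of login-attempt limiting: track failed attempts per IP
# in a rolling window and lock out after a threshold is reached.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 15 * 60   # look at the last 15 minutes
MAX_FAILURES = 5           # lock out after 5 failed attempts

failures = defaultdict(deque)

def is_locked_out(ip):
    now = time.time()
    attempts = failures[ip]
    # Drop failures that fall outside the rolling window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) >= MAX_FAILURES

def record_failure(ip):
    failures[ip].append(time.time())

# Example usage in a hypothetical login handler:
# if is_locked_out(client_ip): return "429 Too Many Requests"
# if not password_ok: record_failure(client_ip)
```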

Improving Search & Discovery on Google

Benefit: High
Difficulty: High

In Google’s quest to keep users searching, three core additions have been made to the search experience:

There are now more images and related searches inside select Featured Snippets to help users discover even more about a topic. This appears to cover quite a large portion of the search results on mobile devices.

Knowledge Panels have also been updated to show related content. For example, while looking at the Knowledge Panel about skiing, related searches for similar sports and activities appear directly inside the result.

When a user is researching a certain topic, footballers for instance, suggestions for related topics in the same vein (to use Google’s terminology) will appear at the top of the search results page, so the user can continue to discover other athletes during the search session.

If a user researching football players for this summer’s World Cup searched for Neymar, followed by Messi, they are likely to be shown other players featured in the competition, such as those who play for Barcelona.

Action Points

Monitor rank tracking and check Google Search Console for changes in rank position and click-through rate. With Knowledge Panels becoming ever more important, well built content strategies and markup will help competitiveness in the knowledge landscape.

Bartosz Góralewicz wrote an interesting article; you need to look past the click-bait title, Everything You Know About JavaScript Indexing is Wrong.

John Mueller clarified that there is no difference between a root domain URL with and without a trailing slash (with no subdirectory). Find the tweet here.