Development News

JavaScript SEO basics added to Google’s Search Developer’s Guide

Google added a new section to its Search Developer’s Guide on 18 July. The page describes how Google processes JavaScript and provides best practice tips, including:

  • How JavaScript allows SEOs to set or change meta titles and descriptions.
  • How to write compatible code, bearing in mind API and JavaScript limitations.
  • How to use meaningful HTTP status codes to communicate with Googlebot.
  • How to ensure that Googlebot does not skip rendering and JavaScript execution.
  • How to fix lazy-loaded images and content so that they remain visible to Googlebot while still improving performance (a brief sketch follows this list).
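
As a minimal sketch of the first and last points (not taken from Google’s guide; the description text, selectors and data-src convention are illustrative assumptions), the TypeScript below sets a meta description from JavaScript and lazy-loads images in a way that still exposes them to crawlers that execute JavaScript:

  // Ensure the page has a meta description, creating the tag if it is missing.
  let description = document.querySelector('meta[name="description"]');
  if (!description) {
    description = document.createElement('meta');
    description.setAttribute('name', 'description');
    document.head.appendChild(description);
  }
  // Placeholder copy; in practice this would describe the rendered page.
  description.setAttribute('content', 'Hand-finished oak furniture, delivered across the UK.');

  // Lazy-load images marked with a data-src attribute once they approach the viewport.
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        const img = entry.target as HTMLImageElement;
        img.src = img.dataset.src ?? img.src;
        obs.unobserve(img);
      }
    }
  });
  document.querySelectorAll<HTMLImageElement>('img[data-src]').forEach((img) => observer.observe(img));
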
Action Point

Read through the guide and check whether you’re missing out on any opportunities to make the JavaScript on your site more accessible to Googlebot.
Martin Splitt has also published a JavaScript SEO series on YouTube, offering a digestible way to learn how to implement JavaScript in a way that benefits your site’s presence in search.

Request for Comments to Formalise the Robots Exclusion Protocol Specification

On 1 July Google announced that it had submitted a Request for Comments to the Internet Engineering Task Force (IETF) to formalise the Robots Exclusion Protocol (REP) specification.

Google said: “Together with the original author of the protocol, webmasters, and other search engines, we’ve documented how the REP is used on the modern web, and submitted it to the IETF.”

It continued: “The proposed REP draft reflects over 20 years of real-world experience of relying on robots.txt rules, used both by Googlebot and other major crawlers, as well as about half a billion websites that rely on REP.”

Action Point

Immediate change is unlikely, but it’s worth bearing in mind that all of the major search engines treat robots.txt rules as crawling directives. An official standard would benefit webmasters and search engines alike.
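
For context, the rules the draft seeks to standardise are the familiar robots.txt directives that most sites already use. A minimal illustrative file (the paths and sitemap URL are hypothetical) might look like this:

  User-agent: *
  Disallow: /checkout/
  Allow: /checkout/help

  User-agent: Googlebot
  Disallow: /internal-search/

  Sitemap: https://www.example.com/sitemap.xml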

Robots.txt Parser Becomes Open Source

Also on 1 July, Google announced that it has open-sourced its C++ library for parsing and matching rules in robots.txt files.

The search engine said: “This library has been around for 20 years and it contains pieces of code that were written in the 90’s [sic]. Since then, the library evolved; we learned a lot about how webmasters write robots.txt files and corner cases that we had to cover for, and added what we learned over the years also to the internet draft when it made sense.”

Google has also added a testing tool to the library so that developers can test new rules:

robots_main <robots.txt content> <user_agent> <url>
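
As an illustration, checking whether Googlebot may fetch a particular URL against a local robots.txt file might look something like the line below; the file path, user agent and URL are placeholders, and the exact argument format is described in the repository’s README.

  robots_main robots.txt Googlebot https://www.example.com/checkout/
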
Action Point

As Google looks to make the Robots Exclusion Protocol an internet standard, the library will be useful for developers who want to understand exactly how Google products parse and match rules in robots.txt files.

Google News

Google Drops Support for the Noindex Directive in Robots.txt

Following up on its posts from the previous day, Google published an update on the Webmasters blog to announce that it will stop supporting certain rules in the Robots Exclusion Protocol.

The change takes effect from 1 September. The company said: “In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we’re retiring all code that handles unsupported and unpublished rules (such as noindex)”.

There are, however, several alternatives available to webmasters to control crawling or indexing of content:

  • Use noindex in robots meta tags (see the example after this list).
  • Return 404 or 410 HTTP status codes for removed content.
  • Use password protection on pages you don’t want in the index.
  • Use Disallow rules in robots.txt to block crawling.
  • Use the Remove URLs tool in Search Console for temporary removal while deploying a more permanent method.
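
For the first of these, the page-level equivalent of the retired robots.txt rule is a robots meta tag in the page’s head, or the X-Robots-Tag HTTP response header for non-HTML resources such as PDFs:

  In the page’s <head>:        <meta name="robots" content="noindex">
  As an HTTP response header:  X-Robots-Tag: noindex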

On 29 July Google began sending notifications in Google Search Console asking site owners to remove noindex rules from their robots.txt files, as shown in the tweet below from Bill Hartzer.

First time I’ve seen this message from google. pic.twitter.com/FiMR0VtSKw— Bill Hartzer (@bhartzer) July 29, 2019

Action Point

If you are using the noindex directive in your robots.txt file, ensure that you replace it with a different exclusion method before 1 September. Additionally, check whether you are using nofollow or crawl-delay rules in robots.txt, as these are also unsupported.

Swipe to Visit Rolls Out on Google Image Search

As previewed at Google I/O, Google announced on 25 July that the Swipe to Visit feature had rolled out for mobile users conducting image searches.

Available for pages with AMP markup, the feature shows a preview of the site’s header once a user selects an image result.

By simply swiping up or down, the user can then instantly load the previewed page.

According to Google, a faster perceived page load speed helps to reduce bounce rate and improves the overall user experience.

Google also stated that publishers will soon be able to view traffic data for AMP in Google Images within the Search Console Performance report, under a new search appearance named “AMP on image result”.

Action Point

Publishers that already feature AMP versions of their content do not need to make additional changes to appear in Swipe to Visit. If you don’t yet feature AMP, you can find out more in this guide.
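
As a quick reminder of how the pairing is declared (standard AMP markup rather than anything specific to Swipe to Visit; the URLs are placeholders), the canonical page links to its AMP counterpart and the AMP page links back:

  <!-- On the canonical page -->
  <link rel="amphtml" href="https://www.example.com/article.amp.html">

  <!-- On the AMP page -->
  <link rel="canonical" href="https://www.example.com/article.html">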

Card Layout Rolled Out on Google News Desktop

In the second week of July, it was reported that Google had begun rolling out card layouts on search results within Google News.

On desktop, the new design replaces the previous collections of stories on a topic with single cards.

Google News Card updated on desktop

One noteworthy aspect is that the layout as it stands provides only a single source for each story. Users now need to click the “View full coverage” button to find alternative sources.

Action Point

The new style means that some sites may see a decline in traffic if they are not regularly chosen as the featured card for stories. We recommend ensuring that your site adheres to Google News content policies and complies with the algorithmic ranking criteria discussed in this guide.

Search Console Data to Become Available to Third-Party Platforms

In a Google Webmasters YouTube video on 3 July, Martin Splitt mentioned that the search engine is working towards making Search Console data available to third-party platforms.

Discussing the move with Dion Almaer, Splitt said: “We are working on ways of integrating external parties and external content providers and platforms to get the data that we are collecting already for search console.”

He continued: “We don’t want [webmasters] to have to specifically go into Search Console and now they deal with the interface they’re used to and then deal with something new.”

Splitt said that Google is currently determining how and what data is being used by webmasters before opening it up to external parties.

Action Point

Reducing the need for webmasters to go directly into Search Console should help users consolidate data and create more comprehensive performance dashboards, and Splitt states that it is going to happen soon. The conversation takes place at the 5:41 mark.

Interview Reveals How Search Algorithm Reacts to Real-Life Events

In conversation with The Guardian, Pandu Nayak, a senior search engineer at Google, revealed that the search engine’s algorithm can react to real-life crises.

When an event is taking place, the algorithm increases the weight of authority signals, which favour high-quality content from well-known sources, to reduce the prevalence of misinformation (so-called “fake news”).

Nayak makes it clear that the authority signals he mentions are the ones defined within the Search Quality Evaluator Guidelines.

Action Point

If you are a news publisher, read through the Search Quality Evaluator Guidelines (linked above) to see how you can improve your site so that it garners more authority in the eyes of Google’s raters. Ensure article headlines, straplines and copy don’t tip over the line from optimised clickbait to misinformation.

Bing News

Bing Requests Feedback on Webmaster Guidelines

On 30 July Frédéric Dubut, web ranking and quality project manager for Bing, requested feedback on the search engine’s webmaster guidelines.

SEO friends! 👋 I’m kicking off an effort to refresh our @Bing webmaster guidelines, both the spirit and the letter. Any shady tactics you think are not penalized enough? Any feedback on the document itself? https://t.co/Md2iZECrjQ— Frédéric Dubut (@CoperniX) July 30, 2019

Action Point

This is a good opportunity for webmasters to help influence Bing’s indexing, ranking and webmaster guidelines in the future. If you have any suggestions or comments, you can reply to the above tweet or click the feedback button at the bottom right of the Webmaster Guidelines site.

Additional reading

Danny Sullivan Explains How Google Keeps Search Relevant for Users

In an article published in The Keyword on 15 July, Danny Sullivan discusses how Google’s various SERP features help to keep search results useful and relevant to users.

Google to Launch New Q&A Video Series with John Mueller

Taking to Twitter, Google announced that it is to launch a new YouTube series with John Mueller, aimed at webmasters. Subscribe to the new Q&A series here.

Adobe Survey Finds That 48% of Consumers Use Voice Assistants for General Web Searches

In a survey of 1,000 adults, Adobe found that more people are using voice assistants, with 44% of people claiming that they use voice technology daily.

Quote Button Added to Google Business Local Knowledge Panels

Service level businesses can now add call-to-action buttons within local knowledge panels. The feature is currently only available to select businesses in certain countries; find out how it works in this Google My Business help guide.

Could Using Stock Photography Be Hurting Your Rankings?

Shai Ahorony investigates for Reboot whether using stock photography online could be a kind of duplicate content, and how it might impact a site’s potential to rank.

Actionable WordPress Page Speed Masterclass

Nick LeRoy provides a rundown of how he achieved a PageSpeed Insights score of 100 on his WordPress site, and how you can replicate his results.