To make things more digestible, we’re now breaking down our technical updates by individual platform. We actively monitor:

  • Google
  • Bing
  • Yandex
  • Baidu
  • Facebook
  • Pinterest
  • YouTube

You will only see the relevant sections, depending on which platforms have provided updates during a given month.


Text to Code Ratio

For some time, webmasters have pondered whether Google’s ranking algorithms take the text-to-code ratio of web pages into account.

In a recent Google Webmaster Central hangout, this popular question was put to John Mueller, who said that they do not.

Even so, there are technical aspects worth considering when evaluating his answer.

Although the text-to-code ratio may not seem important, a bloated Document Object Model (“DOM”) structure can increase page loading time, as the browser has to parse more elements while applying styles and executing JavaScript.

Long class names can also lead to larger HTML file sizes, and WordPress and Drupal themes often carry lengthy class values. This matters most on web servers that do not support Gzip compression.

With the Bootstrap framework, for example, we often see HTML written like this:

<div class="row">
  <div class="col-12">
    <div class="row">
      <div class="col-12">
      </div>
    </div>
  </div>
</div>

The code above allows Bootstrap’s grid system to set the margin gutters. However, the nested elements produce the exact same margin as:

<div class="row">
  <div class="col-12">
  </div>
</div>

Another variation we see is:

<div class="pull-left">

<div class="pull-right">

A <div> renders as a block-level element by default, and so does a <p>. Where the content is simply a paragraph, the wrapper <div> can be removed and the class applied directly:

<p class="pull-left">

<p class="pull-right">

If you have 30 Bootstrap cards on a single page, each pulling content left and right with wrapper <div> elements, removing those wrappers is a reduction of 60 elements that need parsing and then storing in the browser’s memory.

Action Point

Review web page HTML templates, CSS and JavaScript to see if it is possible to remove any unused classes and data attributes. It may be possible to create an equivalent DOM structure in a more efficient manner (remove that technical debt).

PageRank Patent Updated

On 24 April, a continuation patent for an updated version of PageRank was granted, which could affect how sites are ranked in the future and could explain why some rank better than others.

The updated algorithm in the patent could be significant, as the “distance” between an authoritative site and those that it links to is calculated.

The links themselves can be classified by topic once the linked web page has been crawled and indexed. The best possible links are those considered closest to an authoritative site within the same niche. This of course differs for every niche, which changes what could be considered an authoritative site. The actual PageRank score could be largely irrelevant in obscure niches, even if Google still showed the PageRank of a particular site (it no longer does).

Because there are not many sites within small niches from which to build a large PageRank, small sites within those niches could theoretically enjoy a boost in rankings.

The new algorithm, if implemented, could allow sites within small niches to outrank larger sites that have more links, which would affect link building, as it would change what is required to rank.

That said, with no visible figure, PageRank is not something that can be directly measured or controlled. John Mueller touched on this in his recent AMA:

“You can pick a dampening factor and iteratively calculate the theoretical values based on the papers, but it has nothing to do with what happens within Google. It’s a fun exercise, if you like a little bit of math but usually you’d use many more pages to make it interesting (e.g. a smaller language version of Wikipedia).”

With that in mind, it’s worth remembering that just because Google has a patent for something, it does not mean that it will use it in practice.

Broad Core Update

Broad core updates happen on a semi-regular basis and they can cause fluctuations within rankings. On 20 April Google confirmed via Twitter that it had released a broad core update four days earlier, an announcement that many SEOs had been waiting for.

Google was kind enough to remind webmasters that if they did experience drops, there is no “fix” for pages that perform less well.

That said, the search engine did reiterate that although a page may fall in rankings, it does not necessarily mean that it is of low quality, rather that it could just not be as relevant for what it previously ranked well for.

Google reminded webmasters to have patience and that over time “content may rise relative to other pages.”

Action Point

Continue creating quality content that is relevant to your site and pay attention to rankings so that you can deduce what Google considers relevant right now.

Snippets by Category

On 28 April an SEO remarked on Twitter that they saw detailed answers in a featured snippet (a collection of boxes) for questions relevant to the main search query.

It may be possible to view the featured snippet yourself by searching for “garage conversion” on a mobile device.

The featured snippet provides both images and text as well as expandable categories that consider the cost, plans, regulations, and insulation of garage conversions.

For the most part, each section originates from a different site, and the extension seems to originate from the multifaceted Google featured snippets rolled out in March.

Action Point

We recommend revisiting existing content, adding the role attribute to headings and using IDs so that URL fragments can be used for internal linking:

<h2 role="heading" id="planning-permission">Garage Conversion Planning Permission</h2>

This can then be linked to throughout the same page:

When considering a garage conversion, make sure you check with your local council for <a href="#planning-permission">planning permission</a>

Job Posting Guidelines

On 27 April, Google announced that it had changed its job posting guidelines to improve the experience of job seekers.

One of the first topics it covered was the placement of structured data on a job posting’s detail page.

To avoid confusion around job lists, Google recommends placing structured data on the most detailed leaf page. However, it cautioned not to “add structured data to pages intended to present a list of jobs”, and to add it only to the most specific page describing a single job.
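
As an illustrative sketch (all values below are example data, not taken from Google’s guidelines), JobPosting structured data on a job detail page might look like this:

```html
<!-- Illustrative JobPosting markup for a single job detail page.
     Every value (title, dates, organisation, location) is example data
     and must mirror what is visible on the page itself. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "JobPosting",
  "title": "Junior Web Developer",
  "datePosted": "2018-04-20",
  "validThrough": "2018-06-30T00:00",
  "employmentType": "FULL_TIME",
  "hiringOrganization": {
    "@type": "Organization",
    "name": "Example Ltd",
    "sameAs": "https://www.example.com"
  },
  "jobLocation": {
    "@type": "Place",
    "address": {
      "@type": "PostalAddress",
      "addressLocality": "London",
      "addressCountry": "GB"
    }
  }
}
</script>
```

Note that setting validThrough to a date in the past is also one of Google’s accepted ways of marking a position as filled.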

The search engine also noted that it had encountered sites that include information in the JobPosting structured data that is not present anywhere else.

To avoid jobseeker confusion, Google reminded webmasters to ensure that information within JobPosting structured data always matches what is within the job posting page.

If a posting violates Google’s Job Posting guidelines, the search engine could take a manual action against a site.

The search engine also encouraged webmasters to delete filled positions, claiming that doing so might drive more traffic as job seekers gain confidence that the positions listed on the site are still vacant.

Google suggests using one of the following to remove a job posting:

  1. Ensure the validThrough property is populated and in the past.
  2. Remove the page entirely (so that requesting it returns a 404 or 410 status code).
  3. Remove JobPosting structured data from the page.

Action Point

Of the three recommendations that Google suggests, we see one viable option:

1) Add the validThrough property if you want to keep the listing in place, and update the web page to let prospective readers know the job is no longer available. Perhaps the job role becomes available often?

Removing the page entirely will inflate 404 errors on the site, and option three is best avoided: removing the JobPosting markup and serving a generic “this job is no longer available” page could be treated as a soft 404, while a 301, 302 or 307 redirect would not allow Google to see the page and its source code.

Remember to update the XML sitemap and ping Google.

Google’s Podcast Strategy

In an interview with Pacific Content, Zach Reneau-Wedeen of Google’s podcast team mentioned that Google plans to prioritise audio in the same way that it does text, images, and video.

He said that:

“Right now Google is really good at giving you text and video related to your search query. With all the amazing work podcasters are publishing each day, there’s no good reason why audio isn’t a first-class citizen in the same way.”

Interestingly, he mentioned that Google plans to incorporate podcast metadata into search results so that podcasts could be returned in search results when topics and/or people are searched for (in addition to podcast titles).

This would imply, however, that Google would have to find a way to learn what each podcast is about and to understand the content within each individual episode.

This opens up an entirely different vertical for SEOs in the form of audio SEO, which could involve spoken word editions of articles, sound clips, or the answering of questions.

Action Point

Revise content to see if it could be curated into podcast material. Remember to optimise for iTunes too.

Facebook Updates

Author Information

To help people better assess the stories that they see in their news feed, Facebook announced that it is adding authorship features that provide more context around the stories people read and share.

This means that when a user views a news story, they will be able to access the publisher’s Wikipedia entry, alongside related articles about the same topic.

If a publisher does not have a Wikipedia entry, the platform will state that the information is unavailable.

An option will also be given to follow the publisher’s page, as well as allowing people to see more articles from the same publisher and whether or not their friends have shared the same piece.

Facebook is adding the features throughout the US, although it shouldn’t be too long before we see the authorship features roll out across Europe.

This should not be confused with Google’s previous attempt with authorship markup.

Google is now using rich cards for authorship and not <meta> elements.

Action Point

Ensure that all relevant pages have the Open Graph author markup. This <meta /> element appears to be only used by Facebook.
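
As a sketch, Open Graph article metadata sits in the page <head>; the profile URLs below are illustrative placeholders:

```html
<head>
  <!-- Open Graph article metadata; both URLs are illustrative examples -->
  <meta property="og:type" content="article" />
  <meta property="article:author" content="https://www.facebook.com/example.author" />
  <meta property="article:publisher" content="https://www.facebook.com/examplepublisher" />
</head>
```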

Yandex Updates


HREFLANG Annotation Methods

HREFLANG annotations are traditionally implemented in the <head> of an HTML web page. However, each annotation method has benefits and drawbacks.

Method: Head

  • Resides within the web page HTML markup.
  • Most common method. Lots of examples and tools for debugging.
  • Only read by search engine crawlers, yet all browsers must download the kilobytes of additional response body data on every web page.
  • Must be downloaded for every HREFLANG check, running into millions of pages and terabytes of bandwidth for many websites.
  • Heavy cost on memory caching servers and database calls.
  • Reliant on the <head> data downloading without interruption or blocking elements.
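
For reference, the <head> method looks like this (URLs illustrative); each language version of a page lists every alternate, including itself:

```html
<head>
  <!-- hreflang annotations: one link per language/region alternate -->
  <link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/" />
  <link rel="alternate" hreflang="ru-ru" href="https://www.example.com/ru-ru/" />
  <link rel="alternate" hreflang="x-default" href="https://www.example.com/" />
</head>
```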

Method: HTTP Header

  • Resides as an HTTP response header.
  • Can be used for file types that are not HTML based. For example: PDF, CSV, XLSX, DOCX.
  • An HTTP HEAD request can be made, which returns only the header data.

Drawbacks: Only used by Google, yet all browsers must download kilobytes of additional response header data on every web page. Fewer tools are available for debugging.
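
The HTTP header method places the annotations in the Link response header, which is useful for non-HTML files such as PDFs (URLs illustrative):

```
Link: <https://www.example.com/en-gb/guide.pdf>; rel="alternate"; hreflang="en-gb",
      <https://www.example.com/ru-ru/guide.pdf>; rel="alternate"; hreflang="ru-ru"
```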

Method: XML Sitemap

  • Resides within XML sitemap(s), using a sub-element.
  • One-off cost of writing XML sitemaps to static files.
  • Only read by crawlers, reducing bandwidth and improving page load speed.
  • Easy to audit and cross reference without having to download entire web page (suitable for risk mitigation).
  • Can be used for non-HTML file types.
  • Sitemaps have a maximum file size, which is reached faster, leading to a proliferation of sitemaps. However, Google does not have a maximum limit on the number of XML sitemaps or XML sitemap index files that can be submitted.
  • Sitemaps may take a few minutes to rebuild if URL changes are required.
  • Yandex does not support HREFLANG XML sitemaps (affects Russian markets).
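
The XML sitemap method uses the xhtml namespace, with each URL entry listing all of its alternates, including itself (URLs illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://www.example.com/en-gb/</loc>
    <xhtml:link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/" />
    <xhtml:link rel="alternate" hreflang="ru-ru" href="https://www.example.com/ru-ru/" />
  </url>
  <url>
    <loc>https://www.example.com/ru-ru/</loc>
    <xhtml:link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/" />
    <xhtml:link rel="alternate" hreflang="ru-ru" href="https://www.example.com/ru-ru/" />
  </url>
</urlset>
```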

Overall, the XML sitemap implementation offers the most benefits with very few drawbacks.

We recently contacted the Yandex team to understand their roadmap for HREFLANG. Although there is no official announcement on their blog, they were able to confirm that HREFLANG XML sitemaps are going to be supported in the near future, though no specific date was given.

Action Point

Businesses currently using the <head> method for HREFLANG annotation should consider implementing XML sitemaps.

Yandex currently supports only the <head> markup, which can be problematic for businesses that serve Russian markets.

Turbo Pages Debugger

Making the announcement on the Yandex Blog for Webmasters on 28 April, the Russian search engine introduced its Turbo Pages Debugger, which allows webmasters to build template Turbo Pages with ease.

Webmasters can assemble a page by transferring its elements into the “debugging” code block, where they can then make amendments and instantly observe any changes, allowing them to check how an individual element will look within a Turbo page.

Action Point

By using the Turbo Pages Debugger, webmasters should be able to create and verify CDATA for multiple web pages within the entire RSS feed.
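
As a heavily hedged sketch of the format involved, Turbo pages are delivered via an RSS feed in which each item carries its content inside a CDATA block; the namespace and fields below follow Yandex’s published Turbo feed format, but the URLs and content are illustrative and should be checked against the official documentation:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:turbo="http://turbo.yandex.ru" version="2.0">
  <channel>
    <title>Example site</title>
    <link>https://www.example.com/</link>
    <item turbo="true">
      <link>https://www.example.com/article/</link>
      <turbo:content>
        <![CDATA[
          <header><h1>Example article title</h1></header>
          <p>Article body.</p>
        ]]>
      </turbo:content>
    </item>
  </channel>
</rss>
```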

Algorithm Change to Combat Misleading Ads

On 24 April Yandex announced that it was releasing an algorithm change to specifically combat misleading advertisements.

After conducting research, the search engine concluded that advertising hampering the “perception” of primary content, or interfering with site navigation, causes a negative user experience and leads to people spending less time on sites.

The Search Team therefore announced that it had updated the algorithm that determines the impact of advertising on the usability of sites. The team also noted that it had improved the definition of misleading ads and those that it considered to interfere with site navigation.

Action Point

Webmasters can check if their site is in violation of the search license in Yandex.Webmaster, in the “Security and violations” section within “Diagnostics”.

Helping New Brands Become Discoverable Through Virtual Assistant, Alice

Six months after its release, Yandex has introduced a way for its virtual assistant, used on Android, iOS and Windows operating systems, to understand brand terminology.

Posting on the Yandex Blog for Webmasters, the development team behind Alice admitted that it sometimes took the assistant a few days to understand new brand terms.

It is therefore now possible for webmasters to submit a new word to the team so that Alice can expand its vocabulary.

Action Point

If you are introducing new brand terminology, you can submit new words using this form so that they can be used and understood by Alice.

New Site Notifications

In what it called a “small gift”, the Yandex.Webmaster team announced on 4 April that it had redesigned its site notifications with an array of new alerts.

The team reassured webmasters that notifications can still be specified on a site-by-site basis or for all sites at once, and that they can now learn about:

  • Delegation of rights to manage the site and changes to the list of users managing it.
  • Status changes for applications to change the site’s regions.
  • Status changes for applications to change the site’s register.
  • The appearance of data in the “Recommended Queries” section.
  • The appearance of data in the “Trends” section.
  • Notifications about the binding of the Yandex.Metrica counter.

Webmasters should now be able to monitor data changes more conveniently, and there is also a longer list of notifications than was previously available. These include:

  • Global messages and news services for webmasters.
  • A weekly summary of the site.
  • The updating of the search base.
  • The updating of the main mirror.
  • Detection of fatal problems.
  • Detection of critical problems.
  • Detection of possible problems.
  • Availability of recommendations for the site.

Action Point

Take a look through the new notification settings but note that it is more convenient to set general notification settings, as they will act immediately on all sites.

Specific alerts can be set for individual sites by clicking “set notifications for each site” at the bottom of the “Notifications settings” section.

Additional Reading

DNS Privacy

Internet users are becoming more savvy about their privacy, especially with GDPR and Cambridge Analytica dominating the news.

Cloudflare and APNIC have created a solution to help prevent Internet Service Providers from listening in at the Domain Name System (“DNS”) level. Consider switching your DNS resolver to this new service.

Conversion Probability

Google released Conversion Probability in Google Analytics on 19 April, which helps webmasters understand the likelihood of a user converting. This new information can be found in the Audience Behaviour reports.

Amazon Blueprints

Amazon now allows users to create their own skills and responses for Alexa, in what the giant is calling Alexa Blueprints. Without writing any code, users can create voice apps, and teach the virtual assistant to respond to questions that they design.

Instagram Limits

In a bid to help restore Facebook’s flagging reputation for privacy, Instagram has cut off unofficial apps built on its platform. The platform also shrank its API limit from 5,000 calls per hour down to just 200.

Facebook Updates Number of Users Affected by Cambridge Analytica Leak to 87 Million

Although only 250,000 Facebook users were found to have downloaded the Cambridge Analytica psychology app, the firm was able to collect data on more than 87 million people, far higher than the 50 million estimate that was previously reported.