Every year Google releases thousands of algorithm updates, tweaking its search engine results pages to increase the reliability of its search results and, in turn, the trust of its users.
How many times exactly in 2018 did they do this?
- 1.1 updates every 3 hours.
- 3.3 updates every average UK and US working day (9 hours).
- 8.8 updates every day.
- 62 updates every week.
- 269.5 updates every month.
- 3234 updates in total in 2018.
As you can see above, Google makes a constant, concerted effort to make its SERPs (Search Engine Results Pages) more trustworthy by tweaking which webpages are shown on its results pages, based on how well they fulfil searchers' queries.
Crucially, the trust in and success of Google's search engine is completely dependent on the SERPs fulfilling users' search criteria, whilst also creating a positive experience that leads to consumers trusting Google with more search queries. Why is this important? Well, the more users, the greater the chance of Google's ads being clicked on, meaning more profit to be made!
We would be wise to remember this: if we want consistent traffic from Google, we need to provide satisfying answers to the questions searchers ask and, most importantly, not get pinged by Google updates.
In our guide below we’ve documented all of the significant Google algorithm updates since day zero. Check back here daily for any new updates that we may have added to this guide. If there’s an update we will cover it and we will provide actionable advice to avoid getting hit by the update and/or recover from it.
What makes our Google update guide so good?
Unlike other sites, we have compiled all the information on every single update in one place for you to read! You won’t have to open loads of tabs to try and understand an update or try to piece together information from different sources!
Furthermore, with almost every update we have tried to explain: what the update is about, why it came about, how it can affect you and, if necessary, how you can recover!
We believe this is the most important information to help you learn how to rank continuously with Google!
We hope this guide becomes your go-to resource for anything and everything Google updates.
If you think we’ve missed any updates out then drop us a message in live chat and we’ll look into it, and credit you if you’d like when we post it up 🙂
That’s enough background information for now.
So, let’s get down to the nitty-gritty of it, and take a look at the most prominent Google updates from day zero and most importantly, how you can avoid your website getting slammed by one of them.
2019 ALGORITHM UPDATES:
Google B.E.R.T Update – October 25th 2019
Last week Google began rolling out its biggest update since RankBrain was released back in 2015. The announcement on the Google Webmasters blog by VP of Search Pandu Nayak claims that this update will affect 1 in 10 of all searches!
What is B.E.R.T and how does it work?
B.E.R.T stands for Bidirectional Encoder Representations from Transformers.
Sounds complicated, right? Don't worry, you don't need to fully understand the science behind it to understand what it means.
Basically, it is a technique that has been developed for natural language processing.
They wrote about this open-sourced technique back in November 2018 on their AI blog.
Essentially, these transformers are models that process the words in a phrase in relation to one another, rather than looking at one word after another, or simply as separate words. This means Google can look at the full context of a search phrase, as these systems learn how different words interact with each other to create different contexts.
In fact, this new system is so advanced and complex that it needed new Cloud TPU hardware to take on some of the strain, as Google's traditional serving hardware struggled to cope.
What does this mean for users?
As we mentioned previously, Google claims this update will affect 1 in 10 of all searches, and it should be particularly helpful for queries Google hasn't seen before. (15% of the searches Google receives every day are brand new!)
At the moment this works in the US for English searches, but over time it will be rolled out in other languages and other countries.
It is predicted this will particularly help more long-tail phrases, especially for those searches which use filler words such as ‘to’ or ‘for’.
Here is an example Google gives on how it can help return more relevant results, by better understanding the context behind the query.
As you can see in this example, previously Google had surfaced a related article containing the word ‘stand’, but the context behind it was incorrect: the search query was a question about the physical act of standing, but the result was about ‘stand-alone’ esthetician schools.
As we can see, when using B.E.R.T the system returned a result that was relevant to the context behind the search query. Even though the word ‘stand’ wasn't in the second example, B.E.R.T had understood this was related to the ‘physical demands’ of the job. Essentially, it had understood the nuances of the query, as a human would.
This update will also affect featured snippet results, as you can see here in this example given by Google.
Google has also clarified that B.E.R.T has rolled out for all 25 countries that support featured snippets.
B.E.R.T vs RankBrain
B.E.R.T is not a replacement for RankBrain but rather an addition to Google's NLP (natural language processing) toolkit. Google will use a number of different algorithms depending on the nature of the search query. For some searches RankBrain will be used, and for others, B.E.R.T.
In fact, multiple methods can be used at once to understand a search query. For example, if it's a query that hasn't been seen before, Google may apply B.E.R.T; if it's also misspelt, Google can apply its spelling system to return what it believes you meant, and so on.
How to optimise for B.E.R.T?
We thought this tweet summed up how optimising for B.E.R.T works!
Our advice remains the same as it was for the RankBrain update. Just create well-written NATURAL content.
Don’t keyword stuff, don’t use bots to create content and try to write in a language that is native to you, or if not hire someone who can!
These give you the best chance of even getting on the ladder, but of course, we all know there’s still plenty more that needs to be done to climb it 🙂
What has the effect of B.E.R.T been so far?
It’s still early days, and we’ll update more information as we get it, but for now, it seems that about 10% of all search queries have been affected.
Of course, with an update like this there will be fluctuations, but as SEOs the impact won't feel as big compared to what we may see with core updates, or updates such as Panda or Penguin, because realistically there is not much we can do about it apart from continuing to create naturally-written content.
Broad Core Algorithm Update – September 24th 2019
Google has yet again announced a core algorithm update on the 24th of September, as you can see by these tweets;
What is a broad core algorithm update?
A broad core update refers to a non-specific update: Google changes a wide range of factors in order to improve its search engine results.
It may be that sites that have suffered in one update may begin to see improvements, especially if they have made improvements in certain areas.
In general, these updates are not about ‘punishing’ a site. It’s a case of Google assessing the new content put out to see which is better, and of course, this means there will be winners and losers.
What has happened so far?
First off, the Daily Mail, which was a big ‘loser’ in the June broad core update, has seen some nice gains back:
And here are the biggest winners according to Sistrix:
And the losers:
As you can see there are several health sites in the winners, as well as some media sites.
Glenn Gabe also published some of his findings here:
As he says, there have been a lot of changes in the health and wellness sector; again, this could be sites gaining back rankings they had lost a little of last time. This feeds into the idea that this update has been about levelling out some of the big drops that sites saw in the last broad core update.
Lily Ray also pointed out the success story that is whattoexpect.com:
And when you look through their site it’s clear that they are doing great things when it comes to E-A-T!
How can you recover/ avoid being hit in these updates?
As we said previously, the main aim of broad core updates is quality: Google tweaks its algorithms to make sure it offers up the best results. This means some sites fall for others to gain.
However, you still want to make sure that your site isn’t one that falls.
We’ll take the example of whattoexpect.com. This is a site that falls into the YMYL group of sites.
But by showing off their E-A-T to the maximum they’ve used this to their advantage and seen consistent gains in the past year as this chart shows us.
When looking through their site, we found a few examples of what they’re doing right:
As you can see, each article is reviewed to ensure its medical validity.
They follow the HONcode, for which they have to apply for certification to ensure the site meets its requirements:
As this image shows this is a great (trusted) external resource that goes a long way to showing the expertise, authoritativeness and trustworthiness of the sites that receive certification from them. Automatically this is a big green tick to Google.
Not only this but the fact they link out to sources (see below images) to back up the validity of their statements is yet again a huge tick for their site. As we’ve said previously Google loves pages that pretty much represent a college degree essay, especially in YMYL industries, where E-A-T is so key. Having this information backed up by peer-reviewed journals is as good as it gets in terms of E-A-T.
So, what’s the take-away?
If you have a YMYL site (or even if you don’t) look at what the ‘winners’ are doing! Find ways you can show off your E-A-T! We’ve discussed this at length before in previous posts about E-A-T.
Controlling Preview Content For Google Search – September 24th 2019
Google announced through their webmaster blog today that they are adding new features to allow webmasters to control the content seen in their snippets.
Previously Google would pull whatever text or images it saw as relevant to include in the snippet, but it will now give more freedom to the webmaster.
How does it work?
Webmasters can now control what is seen in the SERPs for their page previews in two ways:
The robots meta tag:
This is added to an HTML page's <head>, or specified via the x-robots-tag HTTP header. The robots meta tag directives addressing the preview content for a page are:
- nosnippet – an existing option specifying that you don’t want any textual snippet shown for this page.
- New! max-snippet:[number] – specify a maximum text length, in characters, of a snippet for your page.
- New! max-video-preview:[number] – specify a maximum duration, in seconds, of an animated video preview.
- New! max-image-preview:[setting] – specify a maximum size of image preview to be shown for images on this page, using either “none”, “standard”, or “large”.
They can be combined, for example:
<meta name="robots" content="max-snippet:50, max-image-preview:large">
Google has clarified that these preview settings will become available in mid-to-late October.
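For non-HTML resources, or if you prefer server-level control, the same directives can be sent via the x-robots-tag HTTP header mentioned above. A minimal sketch for a hypothetical Apache setup (the filename is just an example):

```
<Files "whitepaper.pdf">
  # Send the same preview directives as the meta tag, but via an HTTP header
  Header set X-Robots-Tag "max-snippet:50, max-image-preview:large"
</Files>
```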
Data-nosnippet HTML attribute
Google has also introduced a new way to limit which part of a page is eligible to be shown as a snippet: the “data-nosnippet” HTML attribute on span, div, and section elements.
By using this, you will be able to prevent that part of an HTML page from being shown within the textual snippet on the page.
This is the example given by Google:
<p><span data-nosnippet>Harry Houdini</span> is undoubtedly the most famous magician ever to live.</p>
The data-nosnippet HTML attribute will start coming into effect later this year.
Google has said you can read more about this in their documentation.
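Since the attribute also works on div and section elements, a whole block can be kept out of the textual snippet while the rest of the page remains eligible. A hypothetical sketch (the content is invented for illustration):

```html
<p>Our opening hours and address can appear in the snippet as normal.</p>
<div data-nosnippet>
  <!-- This block can still be crawled and indexed,
       but won't be shown in the textual snippet -->
  <p>Members-only discount code: see your account page.</p>
</div>
```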
How will this affect featured snippets and rich results?
Google has said that featured snippets and rich results will not use the above settings. That is to say, the best way to control what may be seen in a potential featured snippet is to edit the structured data.
For example, if you put a recipe in structured data, it is likely this will be seen in the featured snippet; if you wanted to change what or how much is shown in that featured snippet, you would have to change or limit what you put in the structured data on your page.
They also warn against over-limiting what Google can show as a preview, as oftentimes many of their special features require a minimum word count.
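To illustrate the point about structured data driving the snippet: what you include in (or omit from) your Recipe markup is effectively what Google has available to show. A hypothetical, trimmed-down JSON-LD example (all values invented):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "Recipe",
  "name": "Classic Pancakes",
  "description": "A quick five-ingredient pancake recipe.",
  "recipeIngredient": ["200g flour", "2 eggs", "300ml milk"],
  "totalTime": "PT20M"
}
</script>
```

If you wanted less detail to be eligible for a featured snippet, you would trim properties such as recipeIngredient here, rather than relying on the meta robots settings above.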
Formatting for AMP
For publishers using AMP formatting who do not want Google to use larger thumbnail images when their AMP pages are shown in Search and Discover, Google has said they can use the meta robots settings posted above to specify a max-image-preview of “standard” or “none”.
Overall this update is nothing to write home about, but it does show that Google is trying to give a little more control over to the webmaster.
Review Rich Results Update – September 16th 2019
Google announced via their webmaster’s blog that they would be algorithmically updating the reviews we see in snippets to make them even more helpful for the users.
They gave this as an example of a rich review which may be less than helpful:
So, what have they done?
They are now limiting the schema types that can trigger rich review results in the SERPs as they believe that not all the types used before were of real use to the user.
This is the new list of schema types (and subtypes) that can trigger a rich review:
Self-serving reviews will no longer be used in rich reviews.
Google classifies self-serving reviews as those which are controlled by the webmaster of the page.
They judge this by whether the reviews are put directly into the markup of the page or added via a third-party widget. Either way, because the webmaster can control these, they are not seen as ‘useful’, which basically means they’re not seen as trustworthy, because the webmaster can cherry-pick the reviews that are seen.
They specify that for the schema types LocalBusiness and Organization they will no longer show rich reviews for sites that embed the reviews in their own markup or via third-party widgets.
Why was this implemented?
Google explained this update was put in place to stop webmasters using reviews in a self-serving way, or basically stopping them from being able to potentially ‘dupe’ the user. By disallowing this type of review it adds another level of objectivity to the process, meaning more trustworthiness for Google’s users.
What can you do if you use rich results?
Google specifies that if you use ‘self-serving’ ratings they will no longer show up on the results page.
But there won’t be any sort of manual action if you continue to keep your reviews on your page.
They’ve also specified that this does not affect Google My Business, it only affects organic search.
However, perhaps the biggest change is that Google now requires the name property of the item being reviewed to be included.
For many plugins this may not be possible at the moment, so we recommend checking whether an update is available that can include the “name” property, or, if not, contacting the creators of the plugin to see if they can make it possible.
Whilst Google hasn’t outright specified that the “name” is now needed or the review won’t be shown, what they did say is:
“With this update, the name property is now required, so you’ll want to make sure that you specify the name of the item that’s being reviewed.”
So we can take from this that if the name isn’t included there’s a chance the review won’t be shown.
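As a sketch of what this looks like in practice, here is a hypothetical Review markup that includes the now-required name property on the item being reviewed (all values invented, and Book is one of the schema types still eligible):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "Review",
  "itemReviewed": {
    "@type": "Book",
    "name": "An Example Novel"
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "4"
  },
  "author": {
    "@type": "Person",
    "name": "Jane Reader"
  }
}
</script>
```

The key line is the "name" inside itemReviewed: leaving it out is what now risks the review not being shown.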
Google algorithm tweak for term “Lesbienne” – June 2019
So, what happened?
French website Numerama pointed out in mid-June to Google that search terms such as ‘Lesbienne’ were returning pornographic results on the first page underneath their pride banner.
And then a few days later the pride banner had disappeared completely.
They also noted that the pride banner appeared for the search term ‘homosexuel’ (the masculine form) but not for ‘homosexuelle’ (the feminine form).
Numerama pointed out that it was only this feminine term, ‘lesbienne’, that seemed to return porn results on page one, and that terms such as ‘gay’ or ‘trans’ returned blogs, Wikipedia pages, news articles etc.
It’s also important to note that this was affecting the French search term ‘lesbienne’ and not the English term ‘lesbian’.
It was argued that these search results only added to the over-sexualisation that lesbians face, treating them as sexual fetishes for entertainment rather than as people first.
What did Google have to say?
Pandu Nayak, VP of Search at Google, responded first by saying:
“I find that these [search] results are terrible, there is no doubt about it,”
“We are aware that there are problems like this, in many languages and for different searches. We have developed algorithms to improve these searches, one after the other.”
They also pointed out that they have seen these issues before with other innocent search terms such as ‘girls’ and ‘teen’ which also used to link to porn sites before changes to the algorithm were made.
In the end, they confirmed that an algorithmic update had occurred and that pornographic results would no longer be returned for the term ‘lesbienne’.
“We work hard to prevent potentially shocking or offensive content from rising high in search results if users are not explicitly seeking that content.”
Freshness update for Featured Snippets – February 2019 (announced August 2019)
This update was actually released way back in February 2019, according to the Google powers that be. However, Google’s VP of Search, Pandu Nayak, only announced it in a Google blog post on the 1st of August.
So, what does this update do?
This update aimed to ensure that the results found in featured snippets were as fresh as possible, where necessary!
The new system aimed to understand what information would need to be updated regularly and what information may remain accurate for a longer period of time.
This would be particularly useful for featured snippets, where the freshness of the information may be an important factor!
Of course, in this instance, the information HAS to be relevant and up to date, otherwise, it is of little importance or use to the user!
Barry over at Search Engine Roundtable did point out last month that they may not have perfected this yet, as the search term “best smartphone for product photography” still showed some pretty ancient results!
However, when we checked back on this exact same search term today, the featured snippet has now been updated to show a much more relevant range of mobiles.
This may be that the new system is learning, and developing over time, or could be that webmasters have caught onto the need for fresh content for a range of long-tail keywords that offer them the opportunity to rank in snippets – who knows 🙂
Who will this update impact?
Well, it won’t really impact anyone in particular, but what it does offer is an opportunity to rank in the snippets for a range of keywords that require updating often. As we saw with the example above, there are a number of ways to word “best smartphone for product photography” if you can create an up to date article regularly targeting those keywords then you will have the opportunity to rank.
Essentially, this update offers opportunities to sites willing to put in the work to create fresh content regularly!
Site Diversity Update – June 6th 2019
The diversity update was announced on the 6th of June and yet again Google bucked trends by actually announcing it as it was released.
This, of course, sent SEO professionals and webmasters into even more of a meltdown after the June core algorithm update was announced only a few days before.
There were concerns that any findings, changes in rankings etc would be harder to attribute to one specific update, but nonetheless we tried anyway. 🙂
So, what were the aims of this update?
As Google laid out in their tweets, they wanted to stop certain bigger domains from being able to rank multiple URLs for one search. After this update, domains should not be able to occupy more than two spots.
Of course, there would be caveats to this dependent on the type of search and the relevancy of the results. But the overall aim was to reduce the dominance of certain sites for certain search terms.
Impact of the Diversity update
Searchmetrics did a great study into this update to find out what effect it had on traffic.
They found that the share of searches showing more than four results from one domain was down from 1.8% to effectively zero.
The share of searches showing three URLs from one domain was down from 6.7% to 3.5% (effectively halved).
The proportion of searches with two URLs from one domain had actually risen slightly, from 43.6% to 44.2%, and likewise for searches with just one URL per domain.
This reduction at the higher URL counts means that, overall, the share of searches returning multiple URLs from one domain has decreased from 53.2% to 47.9%.
Relevance is still the most important factor.
The idea of diverse results often isn’t the most useful when a user’s search term has a navigational intent.
What does this mean?
Well, say for example I am looking for a dress from Asos and I type ‘Asos Dresses’ into Google:
As you can see the results are all for Asos and the different sub-categories of the dresses they offer.
This is useful because, realistically, if a user types in Asos and Google returns 10 different domains, it will not satisfy the user’s query as well and will not be as relevant. This search term is purely navigational in intent.
This shows us that Google hasn’t sacrificed relevancy in order to show diverse results.
Who has this update targeted?
Overall, when Searchmetrics analysed the effect on informational and transactional queries, it found that transactional queries had seen the more profound effect in this update.
And of course; the navigational searches had seen the least effect, as we discussed previously.
Many informational queries remain unchanged; for example, Searchmetrics pointed out that the search term ‘anise seed recipes’ still returns its top 3 results from allrecipes.com (a popular US cooking site).
This may mean that Google is not OVER-favouring diversity where it would have an effect on relevancy.
As they do say in their tweet:
“However, we may still show more than two in cases where our systems determine it’s especially relevant to do so for a particular search….”
The effects of this update.
The result for many big brands, such as Amazon, which have previously been able to dominate transactional search queries with multiple URLs, may be a need to increase their ad spend with Google.
As well as the need to focus on snippets, videos, images etc., which have not been affected by this update and whose importance will therefore have only increased!
The good news is that this may give smaller, niche sites a chance to rank where before they couldn’t! 🙂
If you have any more questions about the diversity update, don’t hesitate to drop us a message in live chat!
June Broad Core Algorithm Update – June 2019
Note: we have written an extended update to the Google June core algorithm update here.
On June 2nd 2019 Google followed in its own footsteps from a few months previous and announced that a new Google core algorithm update would be rolling out over the next few days.
They kept us informed over the days that the update was rolled out…
And yet again we waited with bated breath to see what the effects on our clients would be.
Why was this update important?
Well, it followed on from the recently updated Quality Rater Guidelines, and we normally recognise that when new QRGs are released, an update isn’t far behind. And that was certainly the case this time.
We suspected that the update would, perhaps, have an effect on a slightly broader range of sites than simply YMYL sites alone, as the QRGs had focused on not only E-A-T this time, but also page quality.
And this seems to have been the case.
Who did it target?
Whilst we can’t say that an update ever ‘targets’ anyone specifically, just by the nature of the update, certain industries seem to be more affected than others.
As this chart by Sistrix shows:
Yet again, industries that can do severe damage or harm to the well-being of Google’s users were targeted, such as gambling, finance and health.
But not only this, we saw a lot of British newspaper media seeing either big losses or big gains.
E-A-T is still a huge factor in ensuring quality journalism when we consider the far-reaching influence that the media can have.
You can read more about our discoveries on why we believe these sites either lost traffic or gained here.
A major gainer in the health and wellness industry from this update was Healthline. When we looked into their site we found that they exhibited incredibly high levels of E-A-T and were likely rewarded in this update because of it. Yet again we have written a more in-depth report on this here.
So, what can you do to recover or avoid being hit in the first place?
Of course, if you are in an industry where your site’s advice can do direct harm to the health or financial well-being of your users, you must show Google WHY you are expert, authoritative and trustworthy.
In-depth author pages which show areas of speciality, awards, degrees-achieved and to what level, work and study experience, past papers published, mentions in media etc! All of these will help Google build a picture of your level of authority.
Make sure where possible to always reference out to other authoritative sources also in order to back up your work.
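While the QRGs describe how human raters assess expertise rather than any markup requirement, one hypothetical way to make an author page’s credentials machine-readable is schema.org Person markup. A sketch, with every value invented for illustration:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "Person",
  "name": "Dr. Jane Example",
  "jobTitle": "Consultant Cardiologist",
  "alumniOf": "Example Medical School",
  "award": "Example Cardiology Prize 2017",
  "sameAs": [
    "https://example.com/janes-published-papers",
    "https://example.org/media-mentions"
  ]
}
</script>
```

The sameAs URLs point to profiles and published work, mirroring the kind of evidence (papers, media mentions, qualifications) the author page itself should present in prose.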
If you are in a less ‘risky’ industry, still think about ways you can show off your E-A-T.
Google’s QRGs specifically mention that ‘everyday expertise’ in certain industries is perfectly acceptable.
By this they mean; having built up a good following, having positive comments made about you or your site, having good engagement, consistently producing quality content.
All of these will be beneficial if you are in less risky industries where qualifications may not be necessary to be ‘the best’.
Note: that isn’t to say that writing up why you’re trustworthy still isn’t worth it. If you are a hairdresser that has their own blog or website selling their products, an author page showing WHY you are good at what you do is still useful; that can be awards won, a link to your own salon, a link to online reviews, showing the qualifications you have undergone in order to get to where you are.
BUT these factors are not as important as they would be if you were in an industry where health and financial well-being is at direct risk by untrustworthy advice.
John Mueller also specifically mentioned looking at the 23 questions used to create the Panda update:
These are brilliant for taking an outsider look at your own site.
You can even ask someone else to look at your site and go through these questions. If your site fails on any of these points dramatically, chances are that may be an area that needs improving upon. If you’re interested in reading more, we have written an in-depth analysis of the June 2019 core update here.
March Broad Core Algorithm Update – March 2019
Why is it important?
Well for once Google actually confirmed an update:
This meant that SEO professionals and webmasters could watch this update happening in real-time. A welcome departure from the norm!
So, what happened?
Following on from the Medic update in August 2018, this broad-core algorithm update continued to fine-tune the health websites it ranked.
As you can see from this chart:
Because of this, many health sites were knocked down the rankings again. However, as per usual, when one site falls down the rankings, another must rise to take its place. And as you can see, the site draxe.com saw an uptick in traffic of over 300% following the release of this broad core update.
User signals gaining in importance
Searchmetrics found that the March update increased the importance of user signals, such as bounce rate and dwell time, as well as the number of pages visited while on the site.
As you can see here in this chart, the sites that did well in this update had higher levels of time spent on the page, more pages visited and lower bounce rate.
These findings are really interesting; however, it can be inherently difficult for some sites to improve these metrics if their queries are quick to answer, meaning users do not need to stay on the page for long.
How to resolve any drops in traffic?
Well, as usual, Google has been relatively tight-lipped. All they will say is that sites doing what they should (e.g. no spam techniques, producing quality content etc.) shouldn’t be affected. However, if Google now finds a page that outperforms your site in certain areas, you may see a drop in the rankings.
They don’t specify that there is anything in particular that can be changed! It is just a case of better sites having the chance to be rewarded.
Whilst nothing much to write home about, it is still nice to see Google actually forewarning (or post-warning) us when they are performing a broader core algorithm update.
Other than that, these are the actionable ways to combat against this update;
1. Continue to build on E-A-T.
You can find our recommendations on how to do that here. Trust and expertise are becoming more and more important, especially since the 2018 Quality Rater Guidelines were released, which focus on the importance of E-A-T.
2. Ensure your content will fulfil user intent.
As user signals become more important, so does the need to ensure your page answers the user query you are ranking for. Make sure you cover the topic in the most in-depth way possible, without straying too far from the keywords you are ranking for.
Also, look at ways to keep your user on the page – we discussed this in our recent guide to SEO in 2019.
And here’s an example of A.P.P
If you follow these tips you will have a better chance of keeping your site storm-proof against upcoming Google updates!
Good luck and if you have any other questions about the March 2019 update, don’t hesitate to drop us a message 🙂
2018 ALGORITHM UPDATES:
Medic update – August 1st 2018
Coined the Medic update because it had a disproportionate effect on ‘Your Money or Your Life’ (YMYL) sites.
Who was impacted?
Google claimed that this was a broad algorithm update, however, it seems to have deeply affected sites in the medic/YMYL industry. This graph by Moz shows the 30 biggest losers, and as you can see they lost hard with this update!
But why did some sites crash so hard?
There was some conjecture that this update was linked to the new Quality Rater Guidelines, and although quality guidelines wouldn’t dictate an entire algorithmic update, it does show that Google was changing its ethos regarding who it ranks, especially in industries that can directly affect the health and well-being of users.
In Google’s ongoing quest for trust and reliability, it would seem that sites in the YMYL (Your Money or Your Life) sector with a mixed-to-low reputation would automatically be considered low-trust by Google.
Now, this is incredibly important: Marie Haynes found that in areas such as keto diets, many sites pushing a product with little to no scientific backing suffered, and inversely, sites with a good reputation in the scientific community saw gains.
Any diet, nutrition sites, as well as sites selling medical products that didn’t have high E-A-T (expertise, authority, trust) were affected.
By that rationale, affiliate sites, or sites created to sell a certain diet or medical product, that had little to no E-A-T in the field were demoted by Google.
In the quest for trustworthiness, Google understands that sites trying to sell a product may give undeserved praise to a product in order to increase sales. Whereas companies, doctors, scientists and so on that have created a good reputation for themselves and have research backed up by scientific reports, have won awards and so on, are less likely to put their name to a questionable product.
How to recover from the Medic update
The main thing is to show off as much E-A-T as possible, as a site, author and product. That might mean creating bios for the various authors writing on a site, stating awards they've won, citing scientific journals that back up what they're saying or selling, and highlighting recognition in the media.
And then, of course, authoritative linking, as we discussed in the PageRank update, is of the utmost importance, as are reviews and testimonials.
Also, evaluate what you sell: if you know the product is questionable, doesn't really work, is harmful or has a low reputation, then it may be time to change what you're selling, as Google is getting harder and harder to fool!
This advice shouldn't just apply to the scientific or YMYL community. Any site that wants to be trusted by Google should be showing off its E-A-T as much as possible! When we look at Google's guidelines for search evaluators on what a trustworthy site should look like, we see a variety of industries, from news and media to e-commerce. No one is truly safe from the judgement of Google; the best we can do as webmasters is show off our E-A-T and gain as many authoritative links as possible.
This update also came at an interesting time when the rise of fake news has meant that Google is being more vigilant than ever with the sites that it gives ranking to.
Page Rank Patent – April 2018
A game-changer for link building and a follow-up to the Penguin update. It focuses on the link distance between authoritative sites (also known as seed sites) and the sites they link out to; essentially, the closer a site is to a seed site in terms of linking, the better!
How does the Page Rank update work?
The author thought to have created this update is also the patent holder of the 'Scalable system for determining short paths within weblink network'. Sound interesting?
This is how the patent is described:
Systems and methods for finding multiple shortest paths. A directed graph representing web resources and links are divided into shards, each shard comprising a portion of the graph representing multiple web resources. Each of the shards is assigned to a server, and a distance table is calculated in parallel for each of the web resources in each shard using a nearest seed computation in the server to which the shard was assigned
Sounds complicated? Basically, different niches are assigned to different servers, and those servers calculate the link distance between the most authoritative sites and the sites they link out to. The work is distributed so that if one server goes down, another will pick it up.
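As a rough illustration of the idea, here's a minimal sketch (the sites, links and single seed below are entirely hypothetical): a breadth-first search from the seed sites assigns every page its shortest link distance to a trusted seed, which is the quantity the patent's sharded servers compute at scale.

```python
from collections import deque

def nearest_seed_distances(graph, seeds):
    """Breadth-first search from all seed sites at once.
    Returns each page's link distance to its nearest seed."""
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        page = queue.popleft()
        for linked in graph.get(page, []):
            if linked not in dist:  # first visit = shortest path
                dist[linked] = dist[page] + 1
                queue.append(linked)
    return dist

# Hypothetical mini web: seed.com is a trusted seed site
web = {
    "seed.com": ["nichesite.com"],
    "nichesite.com": ["blog.com"],
    "blog.com": ["spam.com"],
}
print(nearest_seed_distances(web, ["seed.com"]))
# {'seed.com': 0, 'nichesite.com': 1, 'blog.com': 2, 'spam.com': 3}
```

The shorter a page's distance to a seed, the more of that seed's authority reaches it, which is exactly the pyramid picture described below.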
What does this mean for ranking?
This update can have a profound impact on smaller niche sites.
Originally the TrustRank algorithm was biased towards bigger sites, but by creating topical trusted seed sites, this division into topics means that more obscure niche sites have a chance to rank, as they may hold the authority in that area.
This PageRank update is not a trust-rank algorithm.
Sites and webpages are not assigned a trust ranking themselves. Google simply starts with trusted sites, meaning sites with no spam that produce legitimate content. If we think of it as a pyramid, the trusted seed sites sit at the top; any webpage they link to directly receives the most link juice, and so on, with the power of a link decreasing as it gets towards the bottom of the pyramid.
Trust is not the factor being ranked, but the distance between a seed site and the sites it links out to.
This makes link building more important than ever!
Especially links from authoritative sites! Even if those links are nofollow, Google will still look incredibly favourably on a direct link from an authoritative site.
Mobile-First Indexing – March 2018
It was a big step forward when Google announced that it would now use mobile-first indexing for a large number of sites. It had previously mentioned that a smaller number of sites had been moved over to mobile indexing, but this new announcement indicated a much larger wave was moving over.
So, what is Mobile-first indexing?
Google reads your site by crawling it and then indexing it. Previously, Google would look at the desktop version of a site before anything else; now it looks at the mobile version first.
So in Google Search Console, you may see that your site is now being crawled by Googlebot smartphone instead.
They were careful to specify that:
“Sites that are not in this initial wave don’t need to panic. Mobile-first indexing is about how we gather content, not about how content is ranked. Content gathered by mobile-first indexing has no ranking advantage over mobile content that’s not yet gathered this way or desktop content. Moreover, if you only have desktop content, you will continue to be represented in our index.
Having said that, we continue to encourage webmasters to make their content mobile-friendly. We do evaluate all content in our index — whether it is desktop or mobile — to determine how mobile-friendly it is. Since 2015, this measure can help mobile-friendly content perform better for those who are searching on mobile. Related, we recently announced that beginning in July 2018, content that is slow-loading may perform less well for both desktop and mobile searchers.”
So, what does this mean for users?
Well, whilst Google is clearly stating that this new form of indexing will not in itself affect rankings, it does show us a change in Google's ethos.
It is separate from the mobile-friendly update, which looks at page layout, speed and so on to determine whether a page is user-friendly on mobile or not.
But, we do know that if you are continuing to create content and pages that aren’t mobile-friendly, then chances are it will not be doing you any favours! Especially as this update rolls out on a wider scale!
If Google can't index your page properly then it is likely to cause your site issues, before you even consider the fact that it is also a negative ranking signal.
What advice is there for non-mobile friendly sites?
Here is some of the advice that Google laid out in its documentation pages:
They basically suggested a number of basic good practices for mobile-friendliness. However, we have to reiterate that for ranking benefits your site should only have one version, and it should be mobile-friendly! M-dot sites are no longer seen as favourably by Google and can therefore be a ranking disadvantage.
Updates to Mobile-first Indexing
Since July 2019, every new site has been indexed mobile-first. This means anyone creating a new website has to keep mobile-first indexing and mobile-friendliness at the forefront of their mind.
Broad core algorithm update – March 9th 2018
On March 12th Google confirmed a broad algorithm update, the main message being that this update was not aimed at low-quality sites, but rather on helping the algorithm to bring in more relevant, better results.
What was the Broad Core Update?
In order to improve its results, Google looked at two areas for improvement.
Understanding user intent: this relates to what we saw in the Hummingbird and RankBrain updates, where Google tries to understand what the user wants to see in a 'things not strings' manner, i.e. a semantic understanding of the search term rather than viewing it as a string of keywords.
Understanding content as well as possible: i.e. Google being able to judge the quality of written work as a human would. For example, if an article was stuffed with synonyms of a keyword it was trying to rank for, yet was low quality and of no real value, Google would understand that it does not deserve to rank as highly as a piece of content that delved deeper into the subject.
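As a toy illustration of the synonym-stuffing signal described above (the keyword set and snippets are invented for illustration, and this is not Google's actual method), a crude density check already separates a stuffed snippet from natural writing:

```python
import re

def synonym_density(text, terms):
    """Share of words that are the target keyword or one of its synonyms.
    A high ratio is the kind of signal a stuffed article might give off."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for w in words if w in terms)
    return hits / len(words)

# Hypothetical example: 'cheap', 'budget', 'affordable' as synonyms
stuffed = "cheap flights budget flights affordable flights cheap cheap"
natural = "how to find good value flights without hidden fees this summer"
print(synonym_density(stuffed, {"cheap", "budget", "affordable"}))  # 0.625
print(synonym_density(natural, {"cheap", "budget", "affordable"}))  # 0.0
```

Google's real evaluation is far more sophisticated, but the point stands: a page saturated with one keyword and its synonyms reads very differently from a page that actually answers the question.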
Google's argument was that there is no way to 'fix' this issue: if your site has dropped, it may simply be that other great sites have been boosted.
As previously mentioned, Google claimed there is no real way to 'fix' this. The update simply helps Google's algorithm better understand the human intent behind searches and the content that has been written.
And this is pertinent in SEO because there tends to be a belief that Google is trying to penalise lower quality sites, and therefore SEOs need to fix that issue.
However, Google's number-one concern is user experience: if they find a way to give users a better, more relevant experience, they will take it.
If one site is demoted, it doesn't necessarily mean that site was low quality; it may be that another site that deserves a higher spot has pushed it down.
And the only real answer to that is to look at what those top sites are doing, and then continue to produce high quality, long content that ensures you really answer the question rather than just focusing on the keyword and its synonyms. That isn’t to say keywords now have no importance, of course, they always will, BUT the chances are if you’re writing an in-depth, authoritative piece of content on a given subject then those keywords will appear naturally.
At the end of the day, if you answer a user's question and they spend time reading your content, that is how Google classifies great content.
And that is without even delving into the top ranking signal for Google, which continues to be backlinks!
2017 ALGORITHM UPDATES:
Maccabees update – December 12th 2017
In early December 2017 there were rumblings of another update in the works. Google confirmed it had made a few updates to its core algorithm over the previous months; as it was a core algorithm update and therefore had no official name, Barry Schwartz dubbed it Maccabees.
A core algorithm update can alter the algorithms that decide how relevant a web page is to a search query, or change the type of content deemed useful for a certain query (for example, an informational query will favour longer informative pages over e-commerce sites). It can also change the kind of links deemed valuable to a site, which can of course cause fluctuations for many sites.
Who was affected?
Sites that use an array of keyword permutations on a given landing page.
For example, in the travel industry they would target:
(a given city/destination name) + (an activity), for example 'Bordeaux paragliding'.
But it could also be on blogs, e-commerce sites, review sites and so on.
Google targeted sites that had lower-quality content and focused on monetisation through affiliate links or heavy advertising, whilst offering little to no real content.
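To make the permutation pattern concrete, here's a hypothetical sketch of how such doorway pages were generated (the cities and activities are invented for illustration): one thin landing page per keyword combination.

```python
from itertools import product

# Hypothetical doorway-page generator of the kind Maccabees targeted:
# one thin landing page per (city x activity) keyword permutation.
cities = ["Bordeaux", "Lyon", "Nice"]
activities = ["paragliding", "kayaking"]

pages = [f"{city} {activity}" for city, activity in product(cities, activities)]
print(pages)
# ['Bordeaux paragliding', 'Bordeaux kayaking', 'Lyon paragliding',
#  'Lyon kayaking', 'Nice paragliding', 'Nice kayaking']
```

A handful of lists can spawn thousands of near-identical pages this way, which is exactly why sites built on this pattern looked thin to Google.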
Recovery from Maccabee
As we've seen countless times, quality is key! Don't plaster pages in ads, don't offer thin content, don't use shady link schemes; the list goes on and on!
Fred – March 7th 2017
Fred is different from any update we'd seen before in that it wasn't really a single algorithmic update. It became the name for any update related to the overall quality of a site.
How did the Google Fred update get its name?
In 2015-2017 there were numerous quality-related updates, such as the quality update, and many other small updates that hadn’t earned themselves names.
Then in March 2017, Gary Illyes of Google was asked to name a recent prominent update by Google and he decided to call it Fred.
Apparently this is what he names anything he doesn't know what to call, but the name took off and became shorthand for any Google quality update.
When interviewed on ‘Fred’ in 2017 Gary Illyes had this to say:
Gary Illyes: Right, so the story behind Fred is that basically I’m an asshole on Twitter. And I’m also very sarcastic which is usually a very bad combination. And Barry Schwartz, because who else, was asking me about some update that we did to the search algorithm. And I don’t know if you know, but in average we do three or two to three updates to the search algorithm, ranking algorithm every single day. So usually our response to Barry is that sure, it’s very likely there was an update. But that day I felt even more sarcastic than I actually am, and I had to tell him that. Oh, he was begging me practically for a name for the algorithm or update, because he likes Panda or Penguin and what’s the new one. Pork, owl, shit like that. And I just told him that, you know what, from now on every single update that we make – unless we say otherwise – will be called Fred; every single one of them.
Interviewer: So now we’re in a perpetual state of Freds?
Gary Illyes: Correct. Basically every single update that we make is a Fred. I don’t like, or I was sarcastic because I don’t like that people are focusing on this. Every single update that we make is around quality of the site or general quality, perceived quality of the site, content and the links or whatever. All these are in the Webmaster Guidelines. When there’s something that is not in line with our Webmaster Guidelines, or we change an algorithm that modifies the Webmaster Guidelines, then we update the Webmaster Guidelines as well. Or we publish something like a Penguin algorithm, or work with journalists like you to publish, throw them something like they did with Panda.
Interviewer: So for all these one to two updates a day, when webmasters go on and see their rankings go up or down, how many of those changes are actually actionable?
Can webmasters actually take something away from that, or is it just under the generic and for the quality of your site?
Gary Illyes interview Brighton SEO
Gary Illyes: I would say that for the vast majority, and I’m talking about probably over 95%, 98% of the launches are not actionable for webmasters. And that’s because we may change, for example, which keywords from the page we pick up because we see, let’s say, that people in a certain region put up the content differently and we want to adapt to that. […] Basically, if you publish high quality content that is highly cited on the Internet – and I’m not talking about just links, but also mentions on social networks and people talking about your branding, crap like that. [audience laughter] Then, I shouldn’t have said that right? Then you are doing great. And fluctuations will always happen to your traffic. We can’t help that; it would be really weird if there wasn’t fluctuation, because that would mean we don’t change, we don’t improve our search results anymore. “
Essentially the message was: we publish hundreds of quality updates monthly and don't have time to name them all; if something important changes we will discuss it, so all updates will now be referred to as Fred, since nearly all updates relate to the quality of sites anyway. And if you're producing quality content and have great links and social signals, then your site should be fine; sites will naturally fluctuate as Google makes changes!
The impact of Fred
On March 7th a quality update (another Fred) was rolled out.
It affected some sites enormously: many lost between 60-90% of their traffic immediately, while others saw their traffic increase by 125%.
It seemed that the update was hitting sites that used aggressive monetising techniques, and yet again the impact depended on the quality and trustworthiness that the site offered its users. Now, where have we heard that before?
How to recover from Fred (and yes, we mean ALL the Fred’s)
Surprise, surprise: the simple answer is to improve content quality, continue to produce long, relevant content, and keep obtaining quality backlinks and social signals. If you're unsure, Google lists its guidelines here.
However, it is important to remember that these guidelines are only what you HAVE to do for Google to even give you a chance of ranking; they are not SEO practices in themselves. The number-one ranking signal for Google is still backlinks.
Intrusive interstitials – January 10th 2017
Another step forward for mobile search, Google announced an update where sites that use intrusive interstitial pop-ups would be pushed down the rankings.
What is an intrusive interstitial?
Google defines an intrusive interstitial as a pop-up that blocks the content on the screen. These interstitials usually take up the whole screen, which is obviously something Google doesn't want, as it means a poorer experience for the user.
Who will be affected by intrusive interstitials?
Google is aiming their update at pages where the interstitials pop up straight away or whilst scrolling.
It will also be aimed at pages where above the fold is made to look like an interstitial and the user must scroll down the page to the content.
Pop-ups that ask users to choose their country are also considered intrusive; however, pages that have to use interstitials for cookie notices or age verification will not be affected.
The impact of the update
In a study completed by Search Engine Journal, 51% of users said they had not felt an effect, 43% were unsure and 6% said they had seen an effect. Google stated that those who were affected, and who have done the necessary work to remove the intrusive pop-ups, will have to wait until Google re-crawls their site to be re-ranked.
2016 ALGORITHM UPDATES:
Possum Update – September 2016
This update was never confirmed by Google, but around the 1st of September 2016, SEO professionals and webmasters began to notice sites falling and rising. It seemed to predominantly affect the 3-pack and local finder you see in the Google Maps results. It is not to be confused with an unconfirmed update, seen at the same time, that affected organic results.
So, what was the update about?
The update seemed to be attempting to diversify, and widen the scope of results you would see in the map pack.
For example, many businesses outside city limits had until this point been unable to rank for local keywords, as Google would not deem them to be in that city!
This evidently caused issues for many local businesses, and many local SEO specialists didn't see a way around it.
For example, a study on Search Engine Land at the time showed that one SEO professional, who had struggled to rank a client for local keywords in Sarasota because the business was technically outside the city limits, saw it jump from #31 to #10 after this update!
That's a huge leap, and it showed that Google was changing its ranking factors to include businesses within a reasonable distance, not just those inside the city limits.
Google also started going deeper to ensure that businesses weren’t having multiple listings on the map pack.
For example, before the update Google would filter out local results that shared a telephone number or domain, since one business can have several listings: a dentist's office, say, plus the individual practitioners within that clinic. Google wants to ensure you are not seeing duplicate listings.
After this update, however, Google seemed to use its wider understanding of who ran the businesses to ensure duplicates could not both be seen in the local results.
So if you owned two or more businesses in the same industry and the same town, you would be unlikely to see both of them in the 3-pack or local search at the same time, as Google began to view this too as duplication.
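A minimal sketch of that filtering logic, using hypothetical listing data (this illustrates the observed behaviour, not Google's actual code): where listings share a phone number, or post-Possum an owner, only the first is shown.

```python
# Hypothetical local listings; phone numbers and owners are invented.
listings = [
    {"name": "Smith Dental Clinic", "phone": "555-0101", "owner": "Smith"},
    {"name": "Dr. A. Smith, DDS",   "phone": "555-0101", "owner": "Smith"},
    {"name": "Riverside Dentistry", "phone": "555-0102", "owner": "Jones"},
]

def filter_duplicates(listings, key):
    """Show at most one listing per distinct value of `key`."""
    seen, shown = set(), []
    for biz in listings:
        if biz[key] not in seen:
            seen.add(biz[key])
            shown.append(biz["name"])
    return shown

print(filter_duplicates(listings, "phone"))
# ['Smith Dental Clinic', 'Riverside Dentistry']
```

Swapping the key from "phone" to "owner" gives the same result here, which is the Possum shift in a nutshell: dedupe by who runs the business, not just by contact details.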
Why the name Possum?
Many businesses thought they had lost their GMB (Google My Business) listings, when in fact they were just being filtered out of view. The sites were playing possum 🙂
This update seemed to be the biggest of its kind since Pigeon in 2014! Its main aim was to strengthen the reliability of local search results, and it was obviously welcomed by those who had struggled to rank purely because of their address!
2015 ALGORITHM UPDATES:
RankBrain – October 26th 2015
RankBrain was a first for Google: the first live artificial intelligence Google used in its search results. Here's how it was described:
RankBrain uses artificial intelligence to embed vast amounts of written language into mathematical entities — called vectors — that the computer can understand. If RankBrain sees a word or phrase it isn’t familiar with, the machine can make a guess as to what words or phrases might have a similar meaning and filter the result accordingly, making it more effective at handling never-before-seen search queries.
Bloomberg – Google Turning Its Lucrative Web Search Over to AI Machines
What is RankBrain?
Some described it as one of the most important ranking signals after content and links, but it isn't really a ranking signal at all.
RankBrain is a way of figuring out what a user 'means' by a search Google has never seen before, and when we consider that 15% of each day's searches have never been seen by Google before, its importance becomes clear!
It's another step towards 'humanising' the way Google's algorithm thinks, as we saw with Hummingbird.
A given query can be phrased a million different ways that Google may never have seen before. RankBrain's job is to figure out the meaning behind the words.
This becomes even more pertinent with the move into the voice search era.
Over time its use has expanded from 15% of all queries to nearly all.
However, of course, RankBrain is not used in queries where it already understands the meaning well.
RankBrain's relation to Hummingbird
As we discussed before, when Hummingbird was implemented Google went from understanding a search query as a string of keywords to understanding the 'thing' (the context/meaning behind the query).
Google used databases such as Wikidata and Freebase to feed its systems with known relationships, so its AI could figure out the relationships between places, people and things and offer what it thinks is the most relevant result; now a data-fed system does the learning.
What this meant….
Say, for example, you search 'apple' on Google. Previously, Google would have understood what an apple is by looking at H1 tags, inbound anchor text and so on, essentially treating it as a keyword or 'string'.
What RankBrain and Hummingbird have done is teach Google's algorithm that an apple is a 'thing': it begins to understand the concept of an apple as an item rather than a word, and is therefore more likely to give you an accurate result.
If Google is unsure, it will offer up variants in its SERPs and can then analyse which result performs better! For example with Apple, it may show results for both the computer and the fruit.
And this is the essence of RankBrain.
Over time it figures out whether a user entering a query like 'apple' meant the fruit or the company. Its machine-learning algorithm learns from user interactions in the SERPs: if most people click the link to Apple the technology company, it will begin to show more results for the company instead of the fruit.
This is yet again very pertinent for never-before-seen searches. At first Google may offer a plethora of results related to what it believes the user meant, but over time users show which pages are most relevant through the pages that get the most interaction in the SERPs, and RankBrain can learn what the query means in order to offer better results in the future.
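As a toy illustration of the vector idea in the Bloomberg quote (the 3-dimensional vectors below are invented; real embeddings have hundreds of dimensions and are learned, not hand-written): an unseen query can be matched to the known meaning whose vector is closest.

```python
import math

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 'meaning' vectors for known senses
vectors = {
    "apple (fruit)":   [0.9, 0.1, 0.0],
    "apple (company)": [0.1, 0.9, 0.2],
    "banana":          [0.7, 0.0, 0.3],
}
query = [0.85, 0.05, 0.05]  # an unseen query that is 'about' fruit
best = max(vectors, key=lambda k: cosine(query, vectors[k]))
print(best)  # apple (fruit)
```

Nothing here is Google's actual model; it just shows how "guess what an unfamiliar phrase might mean" becomes a nearest-vector lookup.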
How to work with RankBrain
As always, Google recommends writing naturally; otherwise RankBrain may get confused. Conversational, natural language is what it is optimised for.
There is no real way to get on top of RankBrain other than that.
Its focus on finding the best results for unknown queries means that:
- Those queries don’t receive much traffic in the first place
- As Google constantly changes the top rankings while the AI learns the meaning of the search, there is little point investing time or effort in them anyway.
Quality update – May 3rd 2015
Also known as the Phantom update, it arrived one day out of nowhere, with no warning and not a word said by Google. All data pointed towards it being another update focused on site quality, the first of its kind since Panda in 2011.
So, what did it do?
Like Panda, its aim was yet again to demote lower-quality sites that were untrustworthy and unreliable in the eyes of Google. This image released with the Google search optimisation starter guide spelt it out pretty clearly.
Who was impacted by the quality update?
Articles with little to no content and clickbait titles were hit hard. The kind of pages that lure you in and leave you feeling disappointed and unsatisfied! Whether the content isn’t relevant or is far too vague, it can be frustrating for users and of course is something Google wants to avoid.
Any pages that over-advertise.
Like an extension of the page layout update, which focused on annoying, ad-laden above-the-fold layouts, this targeted another complete nuisance for users: having to trawl through ads to find content, something Google yet again wants to be rid of.
Content farming and how-to pages.
As we discussed with Panda, the content-farming business had been booming, and whilst Panda put a huge dent in it, many farms still lurked beneath the surface: thin, niche articles where the quality of the content was of little importance but the ability to bring millions of eyes to ads was paramount.
Demand Media was still paying writers next to nothing to pump out thousands of articles, as you can see in this quote from an editor who worked for them.
One major site, HubPages, lost 22% of its traffic across the whole site after the update! eHow also saw a drop in traffic (worth mentioning that both are owned by Demand Media). wikiHow and Answers.com also saw drops, but less severe ones.
Yet again, due to Google’s sheer dominance of the search engine market, if sites didn’t fall in line at some point they were bound to get bitten.
The customer (or user) is king with Google: they are the ones who click on ads and bring in the money. Lose their faith and Google loses their business.
How to recover
Yet again the advice being given was similar to what was said for Panda.
- Start straight away with making an improvement, don’t wait to see if the site starts going back up naturally.
- Get rid of excessive advertising and pop-ups, anything that would annoy a user.
- Rewrite (don’t get rid of) any thin content, and going forward only put out high quality (which usually also means longer) and relevant content.
- Do an entire site audit for any other annoyances that could affect a user, such as deceptive links.
Mobilegeddon (mobile-friendly update) – April 21st 2015
This was the start of the mobile taking centre stage, by becoming a ranking signal for Google.
What was the mobile-friendly update?
This update was binary: your site was either mobile-friendly or it wasn't!
This image released by Google on its webmasters' blog showed the difference. If you didn't make the changes, your site would be impacted.
Why was the Mobile update needed?
Google knew there was a shift towards mobile browsing, and as always Google's primary interest is ensuring the user has the best experience possible. When a large percentage of its users are searching on mobile, Google must ensure that the sites it surfaces are optimised for it. If people stopped using Google and stopped clicking on its ads, it would all go to pot, so keeping up to date with consumer needs is of the utmost importance.
What was the impact of Mobilegeddon?
Google gave a list of the areas this update would impact:
- That it would affect rankings only on mobile devices.
- It would affect search results in all languages.
- It would apply to individual pages, not sites as a whole.
Everyone feared the impact would be worse than anything seen before (hence the apocalyptic name), but after a few short days people realised it wasn't as bad as first feared. In general the update worked as planned: non-mobile-friendly sites fell down the rankings as mobile-friendly sites rose. The update did what it said on the tin!
There were also murmurings that the speed and loading of a site's pages were still more important than whether the page was mobile-friendly; Colin Guidi of 3Q Digital argued that, after looking at many pages, the speed and responsiveness of a page outweighed the importance of mobile-friendliness.
It seems that Mobilegeddon's effects were minor, but for once Google gave webmasters the chance to prepare, hopefully mitigating any issues. Not only that, but by offering a ranking boost to sites that became mobile-friendly, Google gave webmasters the proverbial 'kick up the butt' to get started if they hadn't already.
Mobilegeddon 2.0 update
Google announced that it would be improving upon their original update on March 16th 2016. They would be increasing mobile-friendliness as a ranking signal to ensure users had the best mobile experience possible and the best chance of finding relevant and mobile-friendly pages.
John Mueller did say that if you were already mobile-friendly then you would not be affected by this update.
How can I see if my site is mobile-friendly?
There are a number of tools out there that can show whether a given page is mobile-friendly or not. One of the best is Google's mobile-friendly test. This shows, for each individual page, whether you are mobile-friendly or not, as you can see below:
Remember that this is for each individual page, not the site as a whole, so just because one page is friendly, doesn’t mean another will be.
The good news is that even if your site isn’t mobile-friendly Google offers a lot of helpful tips on how to optimise your site, which you can see under ‘additional resources on the page’.
In terms of recovery, Mobilegeddon is not too difficult to bounce back from, especially with Google offering a vast array of information on what to do. As long as the webmaster is willing to put in the work to make the site mobile-friendly, there is no reason they could not re-rank.
2014 ALGORITHM UPDATES:
Pigeon update – July 24th 2014
What was it?
A Google search algorithm update whose aim was to improve local search results by rewarding local businesses with a good organic presence with better visibility in the SERPs. This offered a boost to small brick-and-mortar businesses that did well in a local area.
How did they do it?
Google placed more importance on certain ranking signals, for example, location and distance ranking parameters, meaning that more localised results were provided to users.
The impact of Pigeon
Google's SERPs began to align more closely with Google Maps, meaning you would see similar results whether looking at Google Maps or the SERPs:
As we can see, the results between Google's search engine and Maps were harmonised.
Pigeon also helped push directories such as Yelp back up the rankings after the site's previous issues with Google; Yelp had felt it was being pushed out in favour of Google's own service. This update amended those wrongs for the better!
Changes to the map pack.
Pre-Pigeon, Google would show 7-10 businesses in the map pack, whereas afterwards it would only show 3 results, with the option to show more.
Of course, this enormously benefited the businesses found in the top 3, making good local SEO even more important than before: with fewer spots available, there is less likelihood of being seen if you aren’t at the top. But it also meant that customers received clearer, more precise and more localised results.
The aim of pigeon
Google wanted local search to mirror organic search results, and the way it affected businesses depended on their previous position. Google wanted to offer users only the most useful and relevant information, which is why it favoured businesses within a certain radius of the searcher that were relevant to the query being entered.
But this could come at the cost of those sites ever actually receiving traffic, as the necessary information can all be found in the map pack. So whilst business may increase, visits to the company’s actual website may not.
2013 ALGORITHM UPDATES:
Hummingbird update – September 26th 2013
A year before Hummingbird, Google introduced the Knowledge Graph, which aimed to look at the intent behind a user’s search query, giving rise to the phrase ‘things not strings’, i.e. not looking at the search query as a simple string of keywords, but rather at the thing it refers to.
For example, type in ‘Indian food’ and you would have seen results for typical Indian meals, recipes and so on. What the Hummingbird update tried to do was understand the intent behind that search: someone searching for Indian food may actually be interested in local Indian restaurants.
What was the Hummingbird update?
It aimed to revolutionise the way Google’s algorithm looks at search queries. Rather than treating a search query as a set of keywords, it would now attempt to understand the semantics behind it, essentially humanising Google.
Hummingbird aimed to give users the ability to speak in a conversational manner with Google, rather than having to use specialised language so that Google can understand the user’s query. It also aimed to simplify searches when the user may not know much on the subject and therefore their search query may be slightly vague or incorrect.
These changes are especially important when we consider the rise of voice search, where users will use a conversational tone and Google will have to interpret the query into a relevant and useful answer for the user.
The results of Hummingbird
The most prevalent impact of the Hummingbird update was its improvement of local search results. This new conversational, semantic way of understanding a user’s search query led to an increase in local traffic: if Google understood your search to have a local intent, it would show you local results. This further weakened the old tactic of spamming pages with local keywords, building on the Venice update of 2012.
The Hummingbird update was dissimilar to the Panda and Penguin updates in that it didn’t ‘destroy’ people’s rankings. Instead, it was more of a core algorithm update, similar to predecessors such as the Caffeine and Freshness updates.
Hummingbird’s aim was to create a more humanised interaction with Google, giving users a more natural, reliable experience.
2012 ALGORITHM UPDATES:
Payday Loan update – June 11th 2012
This became known as the Payday Loan update as it focused on spam-prone queries in industries such as payday loans, casinos, pharmaceuticals, finance, insurance, mortgages and porn.
What did this update do?
It cracked down on spam-prone queries. That is to say, it focused on queries such as ‘payday loan’, ‘viagra’, ‘casino’ and so on; whilst a query in itself cannot be ‘spammy’, the sites behind it can be, and these were the types of queries that tended to bring up spam results. Many in those industries used spam techniques such as keyword stuffing, link manipulation and thin content. Any sites that fell into these categories would have seen their rankings decrease thanks to this update.
Recovery from the update
At the risk of sounding like a broken record, the recovery for this update was much the same as what we saw for Penguin and Panda.
Rewrite better, longer content, perform link audits whilst still reaching out for high quality, relevant links. Simply put, all the things that a site should do from the start, but up until that point in time, they could avoid doing.
Noteworthy updates since Payday Loan.
Since 2016, Google no longer allows payday loan sites or other risky loan sites to use AdWords. This is in stark contrast to Google’s previous ethos of allowing anyone to benefit from AdWords. Is this Google’s way of taking an ethical stance and distancing itself from morally questionable sites?
Exact match domain update – September 28th 2012
Yet again the clue is in the title with this one. The targets of this update were sites that used exact-match domains (EMDs). However, not all EMDs were affected: it was predominantly aimed at low-quality sites that used EMDs as an easy SEO technique to rank in the SERPs without putting in the work to create a quality site.
Why was this update needed?
Simply put, it was to increase the reliability of search results on Google. Many SEO agencies would stick up an exact-match domain site and rank almost straight away, without having to actually offer anything to the user, something which Google evidently does not appreciate.
It also sought to further address issues that Panda hadn’t resolved in terms of low-quality content, aiming to encourage webmasters and SEO professionals to actually produce quality content.
Who was affected?
They targeted spam sites that had thin content and did not really offer anything of value, other than the fact they had an EMD title.
Research also seemed to show that non-dot-com sites were hit harder than dot-com counterparts.
EMD sites that produced great content relevant to their domain name tended to fare well in the update.
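As a toy illustration of what ‘exact match’ means here (and nothing more: Google never published its EMD classifier), a domain can be compared against a query by stripping separators and the TLD:

```python
def is_exact_match_domain(domain, query):
    """Toy heuristic: does the domain name exactly match the search query?

    Purely illustrative -- this is not Google's actual EMD detection logic.
    """
    # Drop any "www." prefix, keep only the part before the first dot,
    # then remove hyphen separators so "buy-cheap-shoes" == "buycheapshoes".
    host = domain.lower().removeprefix("www.")
    name = host.split(".")[0].replace("-", "")
    return name == query.lower().replace(" ", "")

print(is_exact_match_domain("buy-cheap-shoes.com", "buy cheap shoes"))  # True
print(is_exact_match_domain("shoeexperts.co.uk", "buy cheap shoes"))    # False
```

The update’s point was precisely that matching like this should earn nothing on its own; only the content behind the domain counts.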
Impact of EMD update
Overall it made the results on Google more reliable, better quality and more trustworthy. Webmasters used to complain that without exact-match domains they were at a direct disadvantage, even if they offered better quality content; this update sought to rectify that.
And whilst it may have made the jobs of SEO agencies more difficult, it levelled the playing field and made SEO professionals work harder in all aspects of SEO.
Recovery from EMD
Much like with Panda, getting rid of thin, low-quality content, or better still, rewriting it, was and is the best solution, while ensuring that quality, in-depth content continues to be published on a regular basis.
Also looking at backlink profiles to remove any spammy links that may flag up, whilst gaining authoritative links at the same time.
Yet again it’s clear that this update was about improving the quality of results found on the first page of Google, ensuring a better experience for the user.
Penguin update – April 24th 2012
One of the best-known Google updates to date, it aimed to get rid of link spam and manipulative link-building practices. Originally known as the Webspam algorithm, it didn’t come to be known as Penguin until a year later, though the reasoning behind the name choice is still unconfirmed.
Why was the Penguin update necessary?
Yet again the aim of the Penguin update was to reduce low-quality sites and spam techniques and therefore gain trust in the eyes of its users.
Black-hat linking was becoming a prominent issue, and after Google had released its Panda update a year earlier, tackling low quality and thin content, the Penguin update was now seen as a necessary extension of this, tackling the other aspects of Black-hat SEO techniques.
Penguin aimed to reward sites that had natural, authoritative links pointing to them, whilst demoting sites that used low-quality link schemes such as the purchase or acquisition of low quality or unrelated links from sites.
It’s also worth noting that the Penguin update solely focused on inbound links and did not take into account outbound links at all.
The impact of Penguin
When it was launched in 2012 it was noted to have affected over 3% of all search results and the second update in 2013 affected a further 2.3% of results.
There were numerous refreshes to the original Penguin update in the following years;
March and October 2012.
Refreshes were made so that sites that had been previously affected had the chance to recover, provided they had taken steps to get rid of any spammy linking techniques they had previously used.
May 2013 – Google Penguin 2.0 was released
This was the first update to the original Penguin that actually changed how the algorithm worked. It now looked further into a site than just the homepage and top category pages for evidence of spammy links.
Two refreshes happened in October allowing sites previously affected to recover.
September 23rd 2016 (Penguin 4.0).
The final Penguin update incorporated Penguin into Google’s core algorithm. This meant that Google could now re-evaluate affected sites in real time, without having to wait for the next update.
What happened after its release?
Sites that had previously been partaking in manipulative link-building schemes saw their organic search traffic drop. These drops weren’t necessarily site-wide and could be contained to certain keywords that had been over-optimised with exact-match anchor text links.
Overall the aim was to create more natural link building, where relevant and authoritative sites link to other niche relevant sites.
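One rough way to sanity-check your own profile for the kind of over-optimisation Penguin targeted is to look at the share of exact-match anchors. Here is a sketch with an arbitrary 30% threshold of our own choosing, not a figure Google has ever confirmed:

```python
from collections import Counter

def anchor_text_report(backlinks, target_keyword, threshold=0.3):
    """Flag a backlink profile whose exact-match anchor share looks unnatural.

    `backlinks` is a list of anchor-text strings; the 30% default threshold
    is an illustrative stand-in, not a published Google number.
    """
    counts = Counter(anchor.strip().lower() for anchor in backlinks)
    total = sum(counts.values())
    exact = counts.get(target_keyword.lower(), 0)
    share = exact / total if total else 0.0
    return {"exact_match_share": round(share, 2),
            "looks_unnatural": share > threshold}

# Six of ten anchors are the exact money keyword -- a classic red flag.
profile = ["cheap flights"] * 6 + ["example.com", "click here", "our site", "Example"]
print(anchor_text_report(profile, "cheap flights"))
```

A natural profile tends to be dominated by branded and generic anchors (“example.com”, “click here”), with exact-match phrases as a small minority.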
Recovery from Google Penguin.
The disavow tool will be your friend, but only after exhausting your other options first.
If you find yourself penalised by Google for spammy linking techniques, this is the process you should go through (it is long and arduous, which is why we offer it as a service and can do the hard work for you):
- Use software such as Ahrefs or Google Search Console to get a list of all your site’s backlinks.
- Download the backlinks into an Excel file and immediately filter out any sites you can see are irrelevant or spammy. Highlight these in one colour, such as red.
- Look through the rest of the sites on the list; any with a high DA can automatically be highlighted in green.
- This leaves the remaining sites to look through manually, checking whether they are relevant or use any spam techniques. Again, highlight in red any you want to get rid of.
- For the sites to be removed, the first step is to find email addresses on their pages, or use an email address finder such as hunter.io, and start manually outreaching, asking them to remove the links to your site.
- If they don’t respond after a week, send another email. If they do respond and take the link down, highlight those sites in yellow.
- If they still haven’t responded, add the site to the list of sites to be put in the disavow file.
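The colour-coding triage above can be sketched in a few lines. The DA threshold and the spam markers here are illustrative placeholders for what is really a manual judgment call:

```python
def triage_backlinks(backlinks, da_threshold=40,
                     spam_markers=("casino", "viagra", "loan")):
    """Sort backlinks into the red/green/manual-review buckets described above.

    Each backlink is a dict like {"domain": ..., "da": ...}. Both the DA
    threshold and the spam markers are arbitrary illustrations.
    """
    buckets = {"red": [], "green": [], "review": []}
    for link in backlinks:
        domain = link["domain"].lower()
        if any(marker in domain for marker in spam_markers):
            buckets["red"].append(domain)        # obviously spammy: remove
        elif link["da"] >= da_threshold:
            buckets["green"].append(domain)      # high authority: keep
        else:
            buckets["review"].append(domain)     # needs a manual look
    return buckets

links = [
    {"domain": "bestcasino-links.biz", "da": 12},
    {"domain": "bbc.co.uk", "da": 93},
    {"domain": "small-niche-blog.com", "da": 25},
]
print(triage_backlinks(links))
```

Everything ending up in “red” then goes through the outreach emails, and only the non-responders make it into the disavow file.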
How do disavow files work?
If a site continues to refuse your request to take down a link, then a disavow file is the last option. You submit this to Google in Google Search Console, with a list of the sites you wish Google to ignore when looking at who links to your site. It’s key to remember that as you find more links to disavow, you cannot just upload the new ones: you need to include all the previously disavowed sites as well, because each newly uploaded disavow file overrides the old one.
Google will send a confirmation when it receives the disavow file, though changes aren’t instant.
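Because each upload replaces the last, a small script that always merges the previous entries with the new domains helps avoid accidentally un-disavowing old links. Here is a sketch, using the `domain:` line format that disavow files accept:

```python
def build_disavow_file(previous_entries, new_domains):
    """Merge old and new entries into a fresh disavow file.

    Each upload replaces the previous file entirely, so the old entries
    must always be included again.
    """
    # dict.fromkeys keeps insertion order and drops duplicates.
    merged = dict.fromkeys(previous_entries)
    for domain in new_domains:
        merged[f"domain:{domain}"] = None
    lines = ["# Regenerated disavow file - includes all previous entries"]
    lines.extend(merged)
    return "\n".join(lines) + "\n"

previous = ["domain:spamsite.biz", "http://bad.example.com/page.html"]
print(build_disavow_file(previous, ["linkfarm.net", "spamsite.biz"]))
```

Note that a `domain:` line disavows every link from that site, while a bare URL disavows only that one page; lines starting with `#` are treated as comments.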
But of course, if you are getting rid of links you should also be adding more high authority and/or relevant backlinks to replace the old ones.
Auditing a whole site can be an incredibly long process, and if you have a whole business to run alongside it, it can be too time-consuming, which is why we offer our backlink auditing service. We look into every backlink to evaluate them (even if they’re in the thousands) and we can then offer manual outreach for high-value backlinks so that your site does not drop down the rankings from the loss of so many backlinks.
Venice update – February 27th 2012
A local search based update, Venice was another step in Google’s desire to fulfil user requirements, which oftentimes involve local search.
“We launched a new system to find results from a user’s city more reliably. Now we’re better able to detect when both queries and documents are local to the user.” (Google blog, 2012).
What was the impact of Google Venice?
The main change was that Google stopped showing localised results only in the Google ‘places’ feature and started showing them in the actual SERPs. This is an example of Google in 2011:
Google also used to identify local searches by the queries entered into the search engine, for example ‘electricians near me’ or ‘electricians in London’. After the Venice update, Google started to learn that other search queries may also have local intent, even if a location was not entered in the actual search term.
They could gather IP information of the user or look at the location they had set in order to show relevant local results.
The Venice update allowed smaller businesses to compete with bigger companies on localised search queries where before they may not have been seen.
By using microsites and pages for specific areas, as well as (location + keyword) pages, it was easy for sites to climb the rankings without necessarily even having a real-life business in that area.
The downsides of Venice
Whilst it increased the importance of local SEO, it also opened the door to many spammy techniques, such as stuffing pages with local town names and using doorway pages.
Some of these methods can still be seen in 2017.
Google seems to pay less attention to local search terms, meaning sites with location stuffing and no real content can slip through the cracks and end up on page 1.
A great example of this was a site flagged up to John Mueller: it showed up on page 1 for a localised search term (location + keyword) despite being a ‘Lorem Ipsum’ site.
This highlighted that even to this day Google doesn’t spend much time analysing niche, low volume search results.
However, Google has taken some steps since then with the Pigeon update.
Google page layout update – January 19th 2012
What was the update about?
It was Google’s desire to provide a high-quality experience for its users which led to this update, in which sites with too many above-the-fold advertisements were penalised.
Essentially, if you had a site where the user had to scroll down past ads just to see the content, the likelihood is you would have been affected. The update itself saw many small refreshes, allowing penalised sites to do the necessary work to remove the excess above-the-fold ads and then be recrawled and re-ranked by Google. This continued until 2016, when John Mueller announced that from then on any changes sites made would be picked up instantly by Google, meaning there was no need to wait for the next update to have the site crawled again.
Who was impacted?
Less than 1% of all sites were impacted by this update, but that 1% would certainly have felt the drop. Matt Cutts specified that the update did not affect sites that placed above-the-fold ads to a normal degree.
Many were frustrated to see a drop in rankings, but in reality this update was probably necessary to improve the user experience; most people don’t want to trawl through ads just to find the content.
Whilst annoying at the time for those affected it did improve the whole experience overall.
How to recover from the Page Layout update?
Google was clear that this update would affect all screen resolutions, whether that be phone, tablet, laptop or screen. The above the fold advertisements had been particularly annoying for users on phones, who would often have to scroll numerous times just to reach the content below.
Therefore, the first step for recovery would be to look at the site through a screen resolution tester, to see how the ad placement looks on each device.
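Google never disclosed how the algorithm measured ad density, but the idea can be illustrated with a toy calculation of how much of the first screen is covered by ads at a given resolution:

```python
def above_the_fold_ad_ratio(viewport_height, ad_boxes):
    """Toy estimate of how much of the first screen is taken up by ads.

    `ad_boxes` are (top, height) pixel pairs for each ad element; this is
    an illustration only -- Google's actual page-layout measurement was
    never made public.
    """
    ad_pixels = 0
    for top, height in ad_boxes:
        # Count only the portion of each ad visible above the fold.
        visible = max(0, min(top + height, viewport_height) - max(top, 0))
        ad_pixels += visible
    return ad_pixels / viewport_height

# A 640px-tall phone viewport with a 250px banner and a 300px ad block:
ratio = above_the_fold_ad_ratio(640, [(0, 250), (300, 300)])
print(round(ratio, 2))  # 0.86
```

Running the same boxes against a taller desktop viewport gives a much lower ratio, which is why checking each device resolution separately matters.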
The good news is that after 2016, Google’s move to automating this update meant that sites that had been previously pinged no longer had to wait for the next refresh to recover.
Was it a good idea?
Overall yes, whilst it may not have seemed to be for those affected, having a better balance between content and advertisements, means people are more likely to stay on a page longer and come back again.
It was also seen as the first step towards mobile indexing, as ads were a particularly big issue for users browsing through their phone.
2011 ALGORITHM UPDATES:
Freshness update – November 3rd 2011
The clue was certainly in the name with this update. Built to develop on the caffeine update of 2010, this algorithm ranking update aimed to provide fresher, more recent search results.
And with it thought to affect up to 35% of total searches it proved to be an update that interested the SEO community greatly.
Why was the Freshness update needed?
With the high levels of new, unique queries being entered every day, it was an unprecedented time for the internet as a whole. Even looking at 2017, more data was created in that one year than in the previous 5,000 years!
And in a 2012 article, it was found that 16-20% of the search queries entered every day had never been seen before by Google. This means Google has to work constantly to surface the most up-to-date information.
What did the update do?
What the Freshness update aimed to do, therefore, was focus on the ‘newness’ of content, especially in areas such as news, current affairs, politics and celebrity news: essentially, fields where the user is most likely to want up-to-date information, and where the information can change constantly.
For example, take Donald Trump during a leadership contest, where you want to keep up with the most recent goings-on. If you typed his name into Google you would want to see the most recent information on him, as his opinions, speeches and announcements can change on a daily basis, and something said two days ago could now be irrelevant.
What was the impact of the Freshness update?
It’s pretty clear that the winners of this update were any news/media outlet, video portals and big brands who have the time and capacity to constantly produce new ‘fresh’ information and content.
Overall, 6-10% of queries were affected by the update, but up to 35% saw some sort of change. Rather surprisingly, most sites implicated tended to gain rather than lose from the update, a rare phenomenon in the Google update sphere.
In general, the changes were well received, with most conceding that the update was necessary and logical.
How to recover from the Freshness update?
Sites that had time-sensitive content that hadn’t been updated recently may have been pushed down the rankings by sites posting fresher content on a given subject.
The way Google figures out how often a topic may change and develop, and therefore its need for ‘freshness’, is with the QDF (query deserves freshness) model, which focuses on topic ‘hotness’. For example, if news sites and blogs are constantly updating or writing new articles on a given topic, Google begins to understand that this topic has a continuous need to be refreshed. It also takes into account the billions of searches typed into Google each day: the more searches, the better the indicator of human interest in a given topic. And of course, all of this was made possible by the Caffeine update, which allows pages to be indexed almost instantly.
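As a purely illustrative sketch (Google’s real QDF model is not public), a freshness signal could combine the rate of recent publishing on a topic with its search volume, with both caps below being arbitrary numbers of our own:

```python
from datetime import datetime, timedelta

def qdf_score(article_timestamps, search_count, now=None, window_days=7):
    """Toy 'query deserves freshness' signal.

    Blends how many articles were published on a topic recently with how
    often the topic is searched. A loose illustration of the QDF idea,
    not Google's actual model; the 50-article and 100k-search caps are
    invented for the example.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    recent_articles = sum(1 for t in article_timestamps if t >= cutoff)
    # Normalise both signals into the 0..1 range.
    article_signal = min(recent_articles / 50, 1.0)
    search_signal = min(search_count / 100_000, 1.0)
    return round(0.5 * article_signal + 0.5 * search_signal, 2)

now = datetime(2020, 1, 15)
# A 'hot' topic: 60 articles published over the last five days.
hot = [now - timedelta(hours=h) for h in range(0, 120, 2)]
print(qdf_score(hot, 100_000, now=now))  # 1.0
```

A topic scoring near 1.0 would be one where Google keeps reshuffling results towards the newest pages; an evergreen topic would sit near 0 and stay stable.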
But if a site is affected by the Freshness update, they can:
- Garner interest on social media channels for the site’s content as social signals indicate freshness.
- Look at sites in a similar niche, if they are constantly updating their content, it may be necessary to reconsider the frequency of posts in order to remain competitive. Especially as the demand for new content is increasing constantly.
- Look at all the different channels for getting content out there, from social media to videos to infographics and so on, find a way to be seen on as many platforms as possible.
- Produce evergreen content that can stand the test of time. Usually, this involves in-depth articles on a given topic, and going back and editing the article when and if information changes.
Overall, the means of recovering from, or working with, the Freshness update are the cornerstones of a good site: providing up-to-date, quality content. This is probably why the update was so well received.
Panda Update – February 23rd 2011
Panda was one of the widest-scale updates ever to come out of Google. It was developed to combat thin content, content farming, and content of little to no value in general. It continued to be updated until 2016, when it was brought into Google’s core algorithm.
Why did the Panda update come about?
One of the trends noted by Google in 2010 was the creation and establishment of content farms: sites churning out over 7,000 thin, low-quality articles per day, on niche content targeted at search engines. Examples included sites such as about.com and answers.com.
They would bring in millions through ad revenue every year by getting as many eyes onto their pages as possible. In 2010, one of the main culprits, Demand Media, had its initial public offering with a 1.5-billion-dollar valuation; that is how truly lucrative content farming was. It was also argued that the Caffeine update had allowed great influxes of thin content to be indexed almost instantaneously, and Google was under immense pressure to take control of the issue. In early 2011, Business Insider was claiming Google had lost control of its algorithm.
The launch of Panda
Google announced the launch of its update on February 23rd 2011. Their purpose as they said in their blog post was to:
“Reduce rankings for low-quality sites—sites which are low-value add for users, copy content from other websites or sites that are just not very useful. At the same time, it will provide better rankings for high-quality sites—sites with original content and information such as research, in-depth reports, thoughtful analysis and so on.” (Google Blog, 2011).
They claimed that the change would noticeably affect 11.8% of sites.
How did Google create the Panda algorithm update?
In order to develop their algorithm, they sent out test documents to a number of human quality raters. They then compared the human results against the various ranking signals in order to create this new algorithm. These were the 23 questions Google asked the human quality raters:
1. Would you trust the information presented in this article?
2. Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?
3. Does the site have duplicate, overlapping, or redundant articles on the same or similar topics with slightly different keyword variations?
4. Would you be comfortable giving your credit card information to this site?
5. Does this article have spelling, stylistic, or factual errors?
6. Are the topics driven by genuine interests of readers of the site, or does the site generate content by attempting to guess what might rank well in search engines?
7. Does the article provide original content or information, original reporting, original research, or original analysis?
8. Does the page provide substantial value when compared to other pages in search results?
9. How much quality control is done on content?
10. Does the article describe both sides of a story?
11. Is the site a recognized authority on its topic?
12. Is the content mass-produced by or outsourced to a large number of creators, or spread across a large network of sites, so that individual pages or sites don’t get as much attention or care?
13. Was the article edited well, or does it appear sloppy or hastily produced?
14. For a health-related query, would you trust information from this site?
15. Would you recognize this site as an authoritative source when mentioned by name?
16. Does this article provide a complete or comprehensive description of the topic?
17. Does this article contain insightful analysis or interesting information that is beyond obvious?
18. Is this the sort of page you’d want to bookmark, share with a friend, or recommend?
19. Does this article have an excessive amount of ads that distract from or interfere with the main content?
20. Would you expect to see this article in a printed magazine, encyclopaedia or book?
21. Are the articles short, unsubstantial, or otherwise lacking in helpful specifics?
22. Are the pages produced with great care and attention to detail vs. less attention to detail?
23. Would users complain when they see pages from this site?
Google also asked the raters to consider that when a high-school student writes an essay, they may do a number of things to speed up the process:
1. Buying papers online or getting someone else to write for them.
2. Making things up.
3. Writing quickly, with no drafts or editing.
4. Filling the report with large pictures or other distracting content.
5. Copying the entire report from an encyclopaedia or paraphrasing content by changing words or sentence structure here and there.
6. Using commonly known facts, for example, “Argentina is a country. People live in Argentina. Argentina has borders.”
7. Using a lot of words to communicate only basic ideas or facts, for example, “Pandas eat bamboo. Pandas eat a lot of bamboo. Bamboo is the best food for a Panda bear.” (Search Quality Evaluator Guidelines).
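The human-rating step that fed the algorithm can be pictured as something as simple as averaging a rater’s yes/no answers into a quality score. This is our own illustration of the idea, not how Google actually weighted the 23 questions:

```python
def panda_style_score(answers):
    """Average a rater's answers to quality questions into a 0..1 score.

    `answers` maps a question label to True (quality signal present) or
    False. Negatively phrased questions (e.g. 'excessive ads?') should be
    passed already inverted. Purely illustrative of the human-rating step.
    """
    if not answers:
        raise ValueError("no answers supplied")
    return sum(answers.values()) / len(answers)

# A rater's verdict on one page, using a handful of the 23 questions:
ratings = {
    "trust_information": True,
    "written_by_expert": True,
    "original_content": True,
    "spelling_ok": False,
    "would_bookmark": False,
}
print(panda_style_score(ratings))  # 0.6
```

Google’s engineers then looked for measurable ranking signals that correlated with scores like these, so the algorithm could approximate the human judgments at scale.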
How did sites recover from Panda?
A simple answer and yet more tricky to implement; improve the quality of content.
This idea has now been tried and tested many times and it remains true.
- Remove thin, content light pages (think your typical 500-word articles)
- Make sure the grammar and spelling in all future content is impeccable.
- Rewrite bigger, better content (this is better than removing any previous pages).
- Group together any pages with thin content in a relevant niche into one page, of course making sure it all flows!
Fallacies about Panda
It’s also worth noting that Google’s Gary Illyes later clarified that the complete removal of pages is not a good idea; it is better to add higher quality content to the page.
Here are a few other myth busters;
- Panda was not created to get rid of duplicate content (confirmed by Google).
- It also does not target user-generated content, such as comments, article contributions, as long as the site itself is high quality then Panda would not affect these areas.
Updates to Panda
Panda update 1.0 – February 23rd 2011 (1)
The original; put an end to spam content techniques. Thought to affect 12% of all queries.
Panda update 2.0 – April 11th 2011 (2)
Added additional signals, such as sites blocked by users.
Panda updates 2.1 to 2.3 – May 9th 2011 to July 23rd 2011 (3 to 5)
Panda update 2.4 – August 12th 2011 (6)
The Panda update was put out on the international stage, to all English and non-English speaking countries apart from China, Korea and Japan.
Panda update 2.5 – September 28th 2011 (7)
Panda update 3.0 – October 19th 2011 (8)
New signals added to update the algorithm, as well as how it affects the sites.
Panda update 3.1 to 3.6 – November 18th 2011 to April 27th 2012 (9-14)
Various data refreshes.
Panda update 3.7 – June 8th 2012 (15)
Data refresh with a more pronounced effect.
Panda update 3.8 and 3.9 – June 25th 2012 to July 18th 2012 (16-17)
Panda updates 3.9.1 to 3.9.2 – August 20th 2012 to September 18th 2012 (18-19)
Panda updates 20 to 24 – September 27th 2012 to January 22nd 2013.
Panda update and Matt Cutts statement – March 14th 2013 (25)
Cutts claimed the update that occurred on this date would be the last before it was incorporated into Google’s core algorithm. This turned out not to be the case.
Another Matt Cutts statement – June 11th 2013
He clarified that Panda would not be absorbed into Google’s core algorithm as of yet.
Site recovery – July 18th 2013
Correction for over-penalising some sites.
Panda update 4.0 – May 19th 2014 (26)
Not just a data refresh but an update to the Panda algorithm. Thought to affect 7.5% of queries.
Panda update 4.1 – September 23rd 2014 (27)
Another update, affecting 3 to 5% of queries.
Panda update 4.2 – July 27th 2015 (28)
A preannounced data refresh that would take ‘months to roll out’. This was the final update before Panda was incorporated into the algorithm.
Panda incorporated into core algorithm – January 11th 2016.
The Panda update was no longer its own filter applied afterwards, but rather a part of Google’s core algorithm. As Jennifer Slegg put it: “In other words, it is what many of us call ‘baked in’ the Google core algo, not a spam filter applied after the core did its work.”
Attribution update – January 28th 2011
This update focused on sites with high amounts of duplicate content. It was thought to affect about 2% of sites, and was a clear foreshadowing of what was to come with the Panda update!
What was the aim?
Google wanted to ensure that users got original content, and drive down spam content, by looking at sites that had high amounts of duplicate content.
As Matt Cutts said in his announcement;
“The net effect is that searchers are more likely to see the sites that wrote the original content rather than a site that scraped or copied the original site’s content”.
Clearly this was a very small precursor to what we saw in the next month with the Panda update!
2010 ALGORITHM UPDATES:
Instant Preview Update – November 2010
In November, a magnifying glass appeared next to results in the SERPs. This allowed users to see a quick preview of the page they would be clicking through to!
So why did they offer this?
Google claimed on their Webmasters blog that Instant preview improved satisfaction with results by 5%, as they provided a new way for users to quickly judge whether a landing page would be relevant to their query.
As they said, you could pinpoint relevant content without even having to click on it! Google would often highlight the information relevant to your search in orange, as you can see below!
How was it received?
Well, after 3 years, in 2013, Google told Barry Schwartz that they had discontinued Instant Previews because the feature wasn’t used much!
“As we’ve streamlined the results page, we’ve had to remove certain features, such as Instant Previews. Instant previews saw very low usage by our users, and we’ve decided to focus on streamlining the page to benefit more users.”
So, we guess that was the end of that! Does anyone else remember ever using the preview function? If you do drop us a comment down below with whether you actually liked the feature or not 🙂
May Day update – April 28th 2010
What was it?
The May Day update was a ranking change, aimed at getting higher quality sites to rank for long-tail queries. This update was not a change in the indexing or crawling. Some sites would simply rank lower if their quality wasn’t up to scratch.
Who did it impact?
The update tended to target e-commerce sites with a lot of item pages that:
- didn’t have many links to them;
- did not have much content of value;
- had descriptions simply copied and pasted from the manufacturers’ sites;
- were several clicks away from the homepage.
On the contrary, the update was also seen to benefit those with high quality and relevant content (noticing a pattern at all?).
What did Google have to say on the update?
Matt Cutts posted a video on YouTube on the 30th of May confirming that the May Day update was an algorithmic change affecting rankings for long-tail keywords. He reiterated that if a site was impacted, the first step would be to look over the site’s quality, and then, if the site owner still thinks the site is relevant, to see where they could add great quality content in order to get a boost up the rankings.
2009 ALGORITHM UPDATES:
Caffeine update – August 10th 2009
This update was so enormous that it took months and months to roll out, and wasn’t available to the general public until June 2010. In the meantime, they allowed site developers and SEO agencies to test it out for bugs, glitches and so on.
What was the Caffeine update?
It was a complete overhaul of Google's web indexing system. The old system sorted content into categories of perceived necessary freshness, and that classification affected how often Google's spider would crawl a site for updates.
This meant most sites would only be re-indexed every couple of weeks, so new content was often missing from Google's index for extended periods of time.
The Caffeine update gave Google's spider the ability to crawl, collect data and add it to Google's index within seconds, meaning newer, fresher information was available on Google.
Why did the update come about?
Simply due to the expansion and change of the internet since the first indexing system was put in place.
When the index was created in 1998 there were 2.4 million sites on the internet; compare that to 2009, when there were 100 times as many, at 239 million.
The old index simply couldn't cope with the volume, nor with the different forms of content, such as videos, maps and images, that were now available.
Who was impacted?
As it was not an algorithmic change, there wasn't a particularly negative impact on sites. However, sites that had previously benefited from being in the 'fresh' category of Google's indexing system, and could therefore rest on their laurels, now found themselves competing with other sites on who could get their content out quickest.
But really it was putting everyone on equal footing in terms of indexing.
The Caffeine update paved the way for the major updates we see today. There is no way that the pre-Caffeine indexing system could have dealt with the 1.8 billion sites available today, or the variety of query inputs we now have, such as voice search!
Vince update – January 18th 2009
Known as the update that let big brands win!
Released just over three years after Big Daddy, in 2009, this update was simply named after the engineer behind it.
It was a relatively simple but wide-reaching change. Aimed at competitive keywords, it favoured big brand domains over smaller lesser-known sites.
Why did the Vince update come about?
Google's CEO at the time, Eric Schmidt, had been unusually candid in recent interviews, claiming that:
“(the internet) was becoming a cesspool, where false information thrives”
“brands are the solution, not the problem”, “brands are how you sort out the cesspool” (from an AdAge article on Schmidt).
Those quotes make the intentions behind the update pretty clear. Building on the CEO's original points, Matt Cutts stated in a video on March 4th that the update wasn't just about favouring big brands, but rather about looking at the authority, trust and relevance a site has.
Simply put, the bigger brands were more likely to be trustworthy than small, lesser-known sites.
Who was impacted?
It seemed like another attack on affiliate marketers, who dominated search queries such as insurance, but also commercial search queries such as clothing, jewellery and so on.
Was this an early form of Google building trust?
When we look back now, it seems that this update was one of the first that aimed to build trust and value in Google's search results. As we know, Google's main aim is always to please users so that they keep using its search engine, and big brands were simply less of a risk: they have more to lose by using spammy techniques (let's not mention JC Penney yet) or lying about what they offer.
They also tended to have more far-reaching and authoritative links from major media and news sites, only increasing their reliability.
Yet again this update seemed to match its predecessors in trying to rid Google of the over-abundance of affiliate links (like Florida) and reduce spam (think Jagger). It paved the way for creating sites that enhance the user experience, earning strong links from media outlets, and focusing less on trying to rank number 1 on Google to the detriment of the site's quality itself.
2008 ALGORITHM UPDATES:
Google Suggest Update – August 2008
After being tested for years and years (four, to be precise), the Suggest feature was finally rolled out on Google in August 2008.
Of course, now Google Suggest seems like a normal part of our Google experience, but back then it was a new idea that Google wanted to ensure that they got right before rolling it out.
As they said to Search Engine Land at the time;
“Quality is very important to us, and since so many people visit the Google.com homepage, we wanted to make sure to evaluate and refine our algorithms to provide a good experience using Google Suggest.”
So how did it work?
Google looked in aggregate at past searches containing the word you were typing, and then returned the most popular suggestions, showing the number of results next to each suggested search term!
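As a rough sketch of that mechanism (our toy illustration, not Google's actual code), suggestions can be generated by aggregating a query log into popularity counts and returning the most popular entries containing the typed text:

```python
from collections import Counter

def build_counts(query_log):
    # Aggregate past searches into popularity counts.
    return Counter(query_log)

def suggest(counts, typed, limit=5):
    # Return the most popular past queries containing the typed text,
    # paired with their counts (echoing the counts Google displayed).
    typed = typed.lower()
    matches = [(q, n) for q, n in counts.items() if typed in q.lower()]
    return sorted(matches, key=lambda qn: -qn[1])[:limit]

counts = build_counts([
    "dog food", "dog food reviews", "dog training", "dog food", "cat food",
])
print(suggest(counts, "dog"))  # most popular "dog" queries first
```

The query log and function names here are invented for illustration; the real system obviously worked at a vastly larger scale with far more signals.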
This wasn’t actually anything new at the time! Yahoo already offered a more developed version called Yahoo Search Assist!
2005 ALGORITHM UPDATES:
Big Daddy update – December 15th 2005
Another update that aimed to improve the quality of the search results found on Google.
It received its name from an informal meeting held at PubCon between Matt Cutts (head of webspam at Google) and other SEO professionals. When Cutts pressed for name suggestions for the update, a man attending the discussion offered ‘Big Daddy’, and the name stuck.
Did it target you?
Matt Cutts confirmed a few of its aims: going after untrustworthy linking techniques, such as excessive reciprocal linking and spammy link neighbourhoods, and better detection of paid linking schemes.
There was little to no bad reaction to this new update, unlike its two predecessors, Florida and Jagger.
Because Google was so vague on what was part of the Big Daddy update and what wasn’t, it’s still hard to draw exact conclusions. Many believe its aim was to improve the handling of canonical URLs and 301/302 redirects, and with no earth-shattering changes produced by it, people didn’t tend to dig deeper.
Jagger update – September 1st 2005
Jagger was a series of updates rolled out at the beginning of September 2005 which affected sites with duplicate content across multiple domains. It was the first major update since Florida in 2003.
Who did it target?
Any site that had duplicate content across multiple domains, or internal duplication. For example, if your site resolved both with and without a trailing slash, or also had an /index.html version of each page.
They also changed their outlook on inbound links, taking into account the content of the page a link originated from and its anchor text, as well as tracking when links were created, as most SEOs would make sure links went up by the end of the month.
They also cracked down on sites that used cloaking, a spammer technique that deceives Google’s spider into seeing different content from what the user sees, the aim being to climb the rankings by tricking Google.
How was the update received?
Many believed that the update favoured older sites rather than treating DA and content as the larger ranking factors, i.e. Google began to heavily favour sites based on domain age rather than content quality. Especially as, that year, Google had filed a patent stating that a domain’s future expiry date could be a ranking factor: those with legitimate intentions for a site will buy the domain for a few years in advance, while those with less favourable intentions may do the inverse.
Due to the seasonal positioning of the update (towards the end of the year) many also accused them of trying to force site owners to spend on AdWords to make up for lost revenue from the festive season.
How did sites recover from Jagger?
As Jagger was a multi-pronged update, there were numerous factors to take into account in order to recover.
- Firstly, removing any duplication issues, such as the slash/non-slash and /index.html versions of pages.
- Looking at backlinks and removing any unrelated, low-quality ones, as well as getting rid of over-the-top internal linking.
- Ensuring meta titles were concise and relevant to the content on the page.
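The first recovery step, collapsing slash/non-slash and /index.html duplicates onto a single URL, can be sketched like this (purely our illustration; on a live site you would implement it with 301 redirects or canonical tags rather than in application code):

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalise(url):
    # Collapse the trailing-slash and /index.html duplicates Jagger punished
    # onto one canonical form.
    scheme, netloc, path, query, fragment = urlsplit(url)
    if path.endswith("/index.html"):
        path = path[: -len("index.html")]  # /blog/index.html -> /blog/
    if not path.endswith("/"):
        path += "/"                        # /blog -> /blog/
    return urlunsplit((scheme.lower(), netloc.lower(), path, query, fragment))

# All three duplicate forms now map to one URL:
print(canonicalise("https://Example.com/blog"))
print(canonicalise("https://example.com/blog/"))
print(canonicalise("https://example.com/blog/index.html"))
```

Whether a site prefers the trailing-slash or non-slash form doesn't matter; what matters is that exactly one form is chosen and every duplicate resolves to it.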
This update laid the foundations for many of the updates we have seen up to the modern day, bringing into question a site’s quality, not only of its content but also of its backlinks, something we would see again six or seven years later in the Panda and Penguin updates.
2004 ALGORITHM UPDATES:
Brandy Update – February 1st 2004
Less than a month on from the Austin update, Google introduced more changes! It seemed to be a slight tweak to Austin, as we saw more authoritative sites in the results! But it also brought in some fresh ideas, such as the LSI that we know well!
So, what was the update about?
At the time, Brin, one of the founders of Google, actually announced that they had been making changes over the past two weeks.
- Latent Semantic Indexing (LSI): As we know, LSI is about looking at the context of a piece of content. So, for example, if I wrote about the best dog food in 2019, I might also include a range of related keywords in the content, such as ‘dog bowls’, ‘raw food’, ‘biscuits’ etc! You get the idea 🙂 This was designed to improve the relevancy of results, discourage keyword stuffing, and allow Google to better understand the context of content so that it could show better results!
- Links and anchor text: following on from the Austin update the month previous, Google had begun to look less at the number of links, but rather the quality and nature of the link, as well as the anchor text! It also became increasingly important to get links to the relevant page, rather than having all the links pointing to the homepage.
- Link neighbourhoods: following on from the last point, who you got links from was becoming increasingly important. Links should be from relevant sites with high PR, as these are seen to be in your ‘neighbourhood’.
- Downgrading the importance of easily manipulated on-site SEO: for example, the title, headers, CSS and bold or italic tags, techniques that can be used to ‘trick’ Google into ranking sites where they shouldn’t be. By focusing more on LSI and links, they believed it would be harder for sites to manipulate their rankings!
- Google increased its index size: a report at the time by the BBC spoke about how Yahoo was trying to win back some of the search space it had lost. This announcement by Google was probably put out there to show they were still the top dogs 😉 . Because of this increase in index size, it was purported that they re-added many of the sites they had dropped in the Florida update.
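To make the LSI point concrete, here is a toy sketch (our illustration, nothing like Google's real system): counting how often terms appear together in the same documents gives the raw co-occurrence signal that LSI-style analysis builds on to judge topical relatedness.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(documents):
    # Count how often each pair of terms appears in the same document.
    pairs = Counter()
    for doc in documents:
        terms = sorted(set(doc.lower().split()))
        pairs.update(combinations(terms, 2))
    return pairs

docs = [
    "dog food bowls",
    "raw dog food biscuits",
    "dog food reviews",
]
pairs = cooccurrence(docs)
# "dog" and "food" co-occur in every document, so they look topically related:
print(pairs[("dog", "food")])
```

Real LSI goes further, applying singular value decomposition to a term-document matrix, but co-occurrence is the intuition: terms that keep turning up together are probably about the same topic.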
How could people recover from the Brandy update?
As we’re writing this years down the line, we thought we’d recap some of the suggestions that were put forth at the time, so you have a real idea of where SEO was at, at that point in time:
Firstly, you could use LSI to your advantage, by creating longer content and trying to incorporate as many LSI keywords related to your main topic as possible. Also trying not to keyword stuff!
It was recommended to also link out to other sites in your niche so Google could classify your link neighbourhood, and it was even suggested that asking for reciprocal links would be beneficial (don’t try this in 2019).
This was also the time people began building mini-sites to help boost their main site. This sounds like the start of PBNs to us, as articles were warning users to use separate IP addresses, domains etc. to make it hard for Google to trace them back to your main site!
Overall, the Brandy update was pretty far-reaching! Google began to bring in some good tactics for fighting spam, but clearly it wasn’t going far enough at that point, as reciprocal linking and PBNs were still being encouraged as ways to manipulate the system 🙂
Austin Update – January 2004
Following on from the Florida update, which obliterated many sites off the face of Google, this update seemed to have a similar effect, with many reporting similar results.
So, what was it about, and who did it target?
Just like its predecessor, this update seemed to target sites using spam practices, of which there were many at the time! For example, Free For All link farms (also known as FFAs): sites that allowed essentially anyone to post a link on their pages in order to get a backlink. It also targeted invisible text (that old trick of keyword-stuffing irrelevant words to rank for a wide range of keywords) and overly stuffed meta tags!
Many thought it was also linked to the Hilltop algorithm used by Google, which was designed to identify authoritative web pages by choosing ‘expert pages’, from which Google could then identify the quality sites those pages linked to!
This is how it’s described in full:
“Our approach is based on the same assumptions as the other connectivity algorithms, namely that the number and quality of the sources referring to a page are a good measure of the page’s quality. The key difference consists in the fact that we are only considering “expert” sources – pages that have been created with the specific purpose of directing people towards resources.”
Essentially, it was another link-based algorithm that would look at (and value) links from expert pages in that niche. Rather than judging all links from the whole web firstly.
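That idea can be sketched in a few lines (a toy model of the published Hilltop concept, not Google's implementation; the page names are invented): only links originating from a designated set of "expert" pages count towards a target page's score.

```python
def hilltop_scores(links, expert_pages):
    # Score target pages by counting only links whose source is an
    # "expert" page; links from the rest of the web are ignored.
    scores = {}
    for source, target in links:
        if source in expert_pages:
            scores[target] = scores.get(target, 0) + 1
    return scores

links = [
    ("expert-a", "shop1"), ("expert-b", "shop1"),
    ("expert-a", "shop2"), ("random-blog", "shop3"),
]
# shop3's only link comes from a non-expert page, so it gets no score at all.
print(hilltop_scores(links, {"expert-a", "expert-b"}))
```

The key design choice, filtering sources before counting rather than weighting the whole link graph, is exactly the difference from PageRank that the quote above describes.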
Sound a bit like the PageRank patent update of April 2018?!
What was the effect?
Well, it meant that getting high-quality links was more important than ever, if the Hilltop algorithm and Austin update were indeed interlinked. It also meant you were likely to be penalised if you didn’t spend some time cleaning up your backlink profile, getting rid of dodgy FFA links and other spam techniques such as invisible text!
2003 ALGORITHM UPDATES:
Florida update – November 16th 2003
This was the first major update that Google ever released and it certainly set pulses racing as it was the first time webmasters saw their results fall overnight.
What was the Florida update?
Google implemented a filter applied to commercially oriented searches, meaning certain keywords, such as ‘jewellery’ and ‘watches’, were affected. If the sites ranking for those keywords had a low PageRank score (a statistic that was still publicly visible at that point), along with a network of keyword-rich links all pointing to a homepage, they were removed from the rankings.
Who did it target?
In general, it hit affiliate sites that partook in spammy techniques the hardest. Tests found that the Florida update removed between 50% and 98% of sites’ traffic, averaging out at around 72%: that’s insane when we think of anything like that happening today!
Unfortunately, it also meant that a lot of smaller commercial sites that may have been reputable, but had not yet built up a high page ranking were knocked out of the rankings completely by Google.
Was the update successful in making Google more trustworthy?
It was successful in so far that it did remove a lot of affiliate junk from the search engine results, meaning more targeted, reliable results were offered, especially in competitive areas.
However, the unintended consequence was that in areas with little competition, nearly all the relevant search results were erased, leaving completely unrelated results in their place.
The positive that can be taken is that it encouraged webmasters to focus on improving the quality of their own sites rather than that of the affiliate sites.