Tuesday 7 June 2016

How To Optimize Onpage SEO On A Blog

Optimizing onpage SEO on a blog is actually easy if you have the knowledge. Offpage optimization is different: even if you already have the knowledge, the execution is still difficult.


What Is SEO Onpage Optimization?

SEO (Search Engine Optimization) onpage optimization is everything you do to the components of a web page. The main purpose of this optimization is to improve the onpage SEO factors so that search engines pick up the right keyword emphasis from a page.
Why must this optimization be done? So that search engines are not confused when they arrive on your blog pages.

Attributes In SEO Onpage Optimization

These are the attributes commonly used in SEO onpage optimization:
1. Title tag
2. Meta description
3. URL
4. Page title <h1>
5. Post title <h2>
6. Sub title in article <h3>
7. Bold / italic (<b> or <i>)
8. Image alt attribute (every image should have one).

If a word carries none of the above attributes, that word is not important. If a word carries one to three of them, it must be important. If a word appears in all of them, it is surely the most important word on the page.

Before we begin, go to the web page whose onpage SEO value you want to inspect and press CTRL+U to open the source code of that page.
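As a rough overview before we go attribute by attribute, here is a minimal sketch of a page source that uses all eight attributes; the keyword "onpage SEO optimization", the file names and the text are placeholders, not taken from any real page:

<!DOCTYPE html>
<html>
<head>
  <!-- 1. Title tag -->
  <title>Onpage SEO Optimization - Example Blog</title>
  <!-- 2. Meta description -->
  <meta name="description" content="Onpage SEO optimization explained, from the title tag to image alt attributes."/>
  <!-- 3. The URL of the page itself should also contain the keyword,
       e.g. http://example.com/onpage-seo-optimization.html -->
</head>
<body>
  <!-- 4. Page title -->
  <h1>Onpage SEO Optimization</h1>
  <!-- 5. Post title -->
  <h2>Onpage SEO Optimization</h2>
  <!-- 7. Bold / italic on the first appearance of the keyword -->
  <p>This post explains <b>onpage SEO optimization</b> step by step.</p>
  <!-- 6. Sub title in the article -->
  <h3>Onpage SEO Optimization Checklist</h3>
  <!-- 8. Image with an alt attribute -->
  <img src="onpage-seo.png" alt="onpage SEO optimization diagram"/>
</body>
</html>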

The Highest Onpage SEO Value In <head>: The Title Tag


The title tag belongs only in the <head> section, and the code is...

<title>Title tag</title>

The title tag is the most important onpage SEO attribute. The best way to fill it is to use your keywords and keep it no longer than 70 characters. The format of the title tag should be:
  • Article Title only, or
  • Article Title - Blog Name.
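For example (placeholder article and blog names, both well under 70 characters):

<title>How To Optimize Onpage SEO On A Blog</title>
<title>How To Optimize Onpage SEO On A Blog - Example Blog</title>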

Meta Description

The meta description should follow several rules:
1. Maximum length of 150 characters.
2. Do not repeat a word more than twice.
3. Place the targeted keywords near the beginning (see the sketch below).
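A sketch that follows these rules, with "onpage SEO" as the targeted keyword placed at the start and the whole description under 150 characters:

<meta name="description" content="Onpage SEO guide: how to optimize the title tag, meta description, headings and image alt attributes on a blog."/>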

If you are still confused about the title tag and meta description, read the article How To Make Meta Description, Title Tag, And Heading Tag Different Every Post.

URL

A good URL should contain the keyword. For example, /2016/06/onpage-seo-optimization.html is better than /2016/06/post1234.html.

The Highest Onpage Value In <body> Is The Page Title <h1>

The page title should contain the keyword you target, because no matter what, the <h1> should hold the most important words on the page. This is the basis of the Dynamic Heading technique, in which the blog is set up so that article pages display the article title as the page title.

The <h1> carries a lot of weight in onpage SEO for a web page, so if you don't know how to make a dynamic heading, please read the article Dynamic Heading.
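As a rough sketch of the idea on a Blogger Layout template (the exact widget includables and data tags vary by theme, so treat this as an illustration rather than exact code): keep the blog name as the <h1> on the homepage only, and let the article title take the <h1> on item pages.

<!-- In the Header widget -->
<b:if cond='data:blog.pageType == &quot;item&quot;'>
  <p class='title'><data:title/></p>
<b:else/>
  <h1 class='title'><data:title/></h1>
</b:if>

<!-- In the Blog posts widget, inside the post loop -->
<b:if cond='data:blog.pageType == &quot;item&quot;'>
  <h1 class='post-title'><data:post.title/></h1>
<b:else/>
  <h2 class='post-title'><data:post.title/></h2>
</b:if>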

Post Title <h2>

The post title is given the <h2> tag. Isn't the onpage SEO value excessive if the post title is the same as the page title? Of course not, because the page title is an <h1> and the post title is an <h2>; even though their content is the same, each heading appears only once.

Sub Title In Post <h3>

It can be regarded as complementary, but in many cases this addition is very good for strengthening onpage SEO.

Bold / Italic

Bold and italic are also important in SEO onpage optimization, but you don't need to style every occurrence of the keyword; just make the first appearance bold or italic. Overdoing it can also disturb the reader of the page.

Alt Tag

Since Google can't recognize the content of an image, the best way to tell Google what an image contains is to use the alt attribute. This counts as one of the onpage SEO techniques, because the primary purpose of a web page is to provide the most relevant information to the reader, and sometimes that information comes in the form of pictures.
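For example, with a placeholder file name and description:

<img src="onpage-seo-checklist.png" alt="onpage SEO optimization checklist"/>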

SEO Onpage Optimization Complementary

If you have done all of the SEO onpage optimization above, there are a few additional things to take into account:
1. Make sure your content is unique.
2. Also make sure there is a link to the label (category) page.
3. There should be a link to the homepage of the blog.
4. Have related posts, based on labels, that can be read by search engines (a markup sketch follows below).
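A minimal sketch of the markup behind points 2 to 4, using placeholder Blogger-style URLs and post titles; the related posts are rendered as plain links so search engines can crawl them:

<a href="http://yourblogname.blogspot.com/">Home</a>
<a href="http://yourblogname.blogspot.com/search/label/SEO">SEO</a>

<div class="related-posts">
  <h3>Related Posts</h3>
  <a href="http://yourblogname.blogspot.com/2016/06/onpage-seo-optimization.html">How To Optimize Onpage SEO On A Blog</a>
  <a href="http://yourblogname.blogspot.com/2016/06/dynamic-heading.html">Dynamic Heading</a>
</div>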

Hopefully this material can assist you in building a website or blog that has good onpage SEO.

Blogger Sitemap For More Than 3000 Posts

In my previous post, I mentioned a problem if you have more than 3,000 posts: sitemap.xml only contains 150 x 20 = 3,000 posts. So if you have more than 3,000 posts, how do you submit them?
You must add ?page=21 after sitemap.xml.
The URLs then become:
http://yourblogname.blogspot.com/sitemap.xml?page=21
http://yourblogname.blogspot.com/sitemap.xml?page=22
http://yourblogname.blogspot.com/sitemap.xml?page=23
etc...
So if you have 10,000 posts you must keep adding pages up to
http://yourblogname.blogspot.com/sitemap.xml?page=67
(10,000 posts / 150 posts per page is about 66.7, rounded up to 67 pages).
Yes, this is tedious work, but it is the result of the change from 500 to 150 posts per sitemap page. I don't know why Google did it.

If you like this post, please share it.
Thank you.

Saturday 4 June 2016

How To Fix Your Website When It Gets Hacked?

It's not uncommon for sites - even large ones with lots of protection - to get hacked. Security is a major problem these days. And if your site gets hacked, it can get damaged in a number of ways. You could lose all your data, or your site could lose its rankings due to malicious activity. So while you can take periodic backups, you cannot prevent someone from hacking into your site. The best and most practical thing to do in such an event is to recover your site as fast as possible so that the effect of the attack is neutralized or minimized.


 
Here are some tips shared by Google for getting your website back on track after it has been hacked.

Clean up malicious scripts

Hackers can target your site for any number of motives. From taking down your website and deleting its content to simply adding backlinks discreetly, there's a lot that can be done. If you notice suspicious content appearing on your website, delete those unnecessary pages immediately. However, don't just stop there.

Hackers will often insert malicious scripts into your HTML and PHP files. These could automatically be creating rogue backlinks or even new pages. Make sure you check your website's source code for any malicious PHP or JavaScript that could be creating such content; a sketch of what injected code can look like is shown below.
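Purely as an illustration (these are made-up snippets, not taken from a real attack), injected code often looks like hidden spam links or an obfuscated script dropped into an otherwise normal template file:

<!-- Hidden spam links stuffed into a template -->
<div style="display:none">
  <a href="http://example.com/cheap-pills">cheap pills</a>
  <a href="http://example.com/online-casino">online casino</a>
</div>

<!-- Obfuscated script injected near the end of the page -->
<script>eval(atob("...base64-encoded payload..."));</script>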

Maintain your CMS

Websites often get hacked due to vulnerabilities in a CMS that get patched with updates. If you're running an older version, your site is more susceptible to attack. Make sure you keep your CMS updated, and use a strong password for login. If possible, enable two-step verification to secure the login process.

www vs. non-www

www and non-www URLs are not the same. http://www.example.com is not the same as http://example.com - the former refers to the sub-domain 'www', whereas the latter is the root of your site. When checking for malicious content, verify the non-www version of your site, as hackers often try to hide content in folders that may be overlooked by the webmaster.

Other useful security tips

Avoid using FTP when transferring files to your servers. FTP does not encrypt any traffic, including passwords. Instead, use SFTP, which will encrypt everything, including your password, as a protection against eavesdroppers examining network traffic.
Check the permissions on sensitive files like .htaccess. Your hosting provider may be able to assist you if you need help. The .htaccess file can be used to improve and protect your site, but it can also be used for malicious hacks if they are able to gain access to it.
Be vigilant and look for new and unfamiliar users in your administrative panel and any other place where there may be users that can modify your site.
Got any questions? Feel free to start a new thread in our discussion forum. You can read the post from Google, along with a couple of case studies, here. Good luck (:

How RankBrain Changes Entity Search

Columnist Kristine Schachinger provides a handy primer on entity search, explaining how it works and how Google is using its RankBrain machine learning system to make it better.

Earlier this week, news broke about Google’s RankBrain, a machine learning system that, along with other algorithm factors, helps to determine what the best results will be for a specific query set.
Specifically, RankBrain appears to be related to query processing and refinement, using pattern recognition to take complex and/or ambiguous search queries and connect them to specific topics.
This allows Google to serve better search results to users, especially in the case of the hundreds of millions of search queries per day that the search engine has never seen before.
Not to be taken lightly, Google has said that RankBrain is among the most important of the hundreds of ranking signals the algorithm takes into account.
RankBrain is one of the “hundreds” of signals that go into an algorithm that determines what results appear on a Google search page and where they are ranked, Corrado said. In the few months it has been deployed, RankBrain has become the third-most important signal contributing to the result of a search query, he said.
(Note: RankBrain is more likely a “query processor” than a true “ranking factor.” It is currently unclear how exactly RankBrain functions as a ranking signal, since those are typically tied to content in some way.)
This is not the only major change to search in recent memory, however. In the past few years, Google has made quite a few important changes to how search works, from algorithm updates to search results page layout. Google has grown and changed into a much different animal than it was pre-Penguin and pre-Panda.
These changes don’t stop at search, either. The company has changed how it is structured. With the new and separate “Alphabet” umbrella, Google is no longer one organism, or even the main one.
Even communication from Google to SEOs and Webmasters has largely gone the way of the dodo. Matt Cutts is no longer the “Google go-to,” and reliable information has become difficult to obtain. So many changes in such a short time. It seems that Google is pushing forward.
Yet, RankBrain is much different from previous changes. RankBrain is an effort to refine the query results of Google’s Knowledge Graph-based entity search. While entity search is not new, the addition of a fully rolled-out machine learning algorithm to these results is only about three months old.
So what is entity search? How does this work with RankBrain? Where is Google going?
To understand the context, we need to go back a few years.

Hummingbird

The launch of the Hummingbird algorithm was a radical change. It was the overhaul of the entire way Google processed organic queries. Overnight, search went from finding “strings” (i.e., strings of letters in a search query) to finding “things” (i.e., entities).
Where did Hummingbird come from? The new Hummingbird algorithm was born out of Google’s efforts to incorporate semantic search into its search engine.
This was supposed to be Google’s foray into not only machine learning, but the understanding and processing of natural language (or NLP). No more need for those pesky keywords — Google would just understand what you meant by what you typed in the search box.
Semantic search seeks to improve search accuracy by understanding searcher intent and the contextual meaning of terms as they appear in the searchable dataspace, whether on the Web or within a closed system, to generate more relevant results. Semantic search systems consider various points including context of search, location, intent, variation of words, synonyms, generalized and specialized queries, concept matching and natural language queries to provide relevant search results. Major web search engines like Google and Bing incorporate some elements of semantic search.
Yet we’re two years on, and anyone who uses Google knows the dream of semantic search has not been realized. It’s not that Google meets none of the criteria, but Google falls far short of the full definition.
For instance, it does use databases to define and associate entities. However, a semantic engine would understand how context affects words and then be able to assess and interpret meaning.
Google does not have this understanding. In fact, according to some, Google is simply navigational search — and navigational search is not considered by definition to be semantic in nature.
So while Google can understand known entities and relationships via data definitions, distance and machine learning, it cannot yet understand natural (human) language. It also cannot easily interpret attribute association without additional clarification when those relationships in Google’s repository are weakly correlated or nonexistent. This clarification is often a result of additional user input.
Of course, Google can learn many of these definitions and relationships over time if enough people search for a set of terms. This is where machine learning (RankBrain) comes into the mix. Instead of the user refining query sets, the machine makes a best guess based on the user’s perceived intent.
However, even with RankBrain, Google is not able to interpret meaning as a human would, and that is the Natural Language portion of the semantic definition.
So by definition, Google is NOT a semantic search engine. Then what is it?

The Move From “Strings” to “Things”

[W]e’ve been working on an intelligent model — in geek-speak, a “graph” — that understands real-world entities and their relationships to one another: things, not strings.
Google Official Blog
As mentioned, Google is now very good at surfacing specific data. Need a weather report? Traffic conditions? Restaurant review? Google can provide this information without the need for you to even visit a website, displaying it right on the top of the search results page. Such placements are often based on the Knowledge Graph and are a result of Google’s move from “strings” to “things.”
The move from “strings” to “things” has been great for data-based searches, especially when it places those bits of data in the Knowledge Graph. These bits of data are the ones that typically answer the who, what, where, when, why, and how questions of Google’s self-defined “Micro-Moments.” Google can give users information they may not have even known they wanted at the moment they want it.
However, this push towards entities is not without a downside. While Google has excelled at surfacing straightforward, data-based information, what it hasn’t been doing as well anymore is returning highly relevant answers for complex query sets.
Here, I use “complex queries” to refer simply to queries that do not easily map to an entity, a piece of known data and/or a data attribute — thereby making such queries difficult for Google to “understand.”
As a result, when you search for a set of complex terms, there is a good chance you will get only a few relevant results and not necessarily highly relevant ones. The result is much more a kitchen sink of possibilities than a set of direct answers, but why?

Complex Queries And Their Effect On Search

RankBrain uses artificial intelligence to embed vast amounts of written language into mathematical entities — called vectors — that the computer can understand. If RankBrain sees a word or phrase it isn’t familiar with, the machine can make a guess as to what words or phrases might have a similar meaning and filter the result accordingly, making it more effective at handling never-before-seen search queries.
Bloomberg Business
Want to see complex queries in action? Go type a search into Google as you normally would. Now check the results. If you used an uncommon or unrelated set of terms, you will see Google throws up a kitchen sink of results for the unknown or unmapped items. Why is this?
Google is searching against items known to Google and using machine learning (RankBrain) to create/understand/infer relationships when they are not easily derived. Basically, when the entity or relationship is not known, Google is not able to infer context or meaning very well — so it guesses.
Even when the entity is known, Google's ability to determine relevance between the searched items decreases when that relevance is not already established. Remember the searches where Google showed you the words it did not use in the search? It works like that; we just don't see those removed search terms any more.
But don’t take my word for it.
We can see this in action if you type your query again — but as you type, look in the drop-down box and see what results appear. This time, instead of the query you originally searched for, pick one of the drop-down terms that most closely resembles your intent.
Notice how much more accurate the results are when you use Google’s words? Why? Google cannot understand language without knowing how the word is defined, and it cannot understand the relationship if not enough people have told it (or it does not previously know) the attributes are correlated.
This is how entities work in search, in simplified terms.
Again, though, just what are entities?
Generally speaking, nouns — or Persons/Places/Ideas/Things — are what we call entities. Entities are known to Google, and their meaning is defined in the databases that Google references.
As we know, Google has become really excellent at telling you all about the weather, the movie, the restaurant and what the score of last night’s game happened to be. It can give you definitions and related terms and even act like a digital encyclopedia. It is great at pulling back data points based around entity understanding.
Therein lies the rub. The things Google returns well are known and have known, mapped or inferred relationships. However, if an item is not easily mapped, or the items are not mapped to each other, Google has difficulty understanding the query. As mentioned previously, Google basically guesses what you meant.
Google now wants to transform words that appear on a page into entities that mean something and have related attributes. It’s what the human brain does naturally, but for computers, it’s known as Artificial Intelligence.
It’s a challenging task, but the work has already begun. Google is “building a huge, in-house understanding of what an entity is and a repository of what entities are in the world and what should you know about those entities,” said [Google software engineer Amit] Singhal.

So, How Does This Work?

To give an example, “Iced Tea,” “Lemons” and “Glass” are all entities (things), and these entities have a known relationship. This means that when you search for these items — [Iced Tea, Lemons, Glass] — Google can easily pull back many highly relevant results. Google “knows” what you want. The user intent is very clear.
  • What if, however, I change the query to… Iced Tea, Rooibos, Glass. Google still mostly understands this search, but it is not as clear an understanding.
    Why? Rooibos is not commonly used for iced tea, even though it is a tea.
  • Now, what if we change this query to… Iced Tea, Goji, Glass. Now, Google is starting to throw in the kitchen sink. Some items are dead on. Some items are only relevant to goji tea, not iced tea.
    Google is confused.
  • Now, if I make a final change to… Iced Tea, Dissolved Sugar, Glass. Google loses almost any understanding of what this query set means. Although these are the ingredients in the recipe for sweet tea, you will see (amidst a few sweet tea recipes) some chemistry-related pages.
    Why? Google does not know how to accurately map the relationship.
  • But what if I look at the drop-down for other terms that mean the same to me as a human when Google can no longer determine these entities and their relationship? What if I search the drop-down suggested result?
    Glass of Sugary Iced Tea. The only meaningful words changed were "sugar" to "sugary," and the word "dissolved" was dropped. Yet, this leads us to a perfect set of Sweet Tea results.

But why?

What Google can do is understand that the entity Iced Tea is, in fact, a thing known as Iced Tea. It can tell that a Glass is indeed a Glass.
However, in the last example, it does not know what to do with the modifier Dissolved in relation to Iced Tea, Sugar and Glass.
Since this query could refer to the sugar in Iced Tea or (in Google’s “mind”) a sugar solution used in a lab, it gives you results that have Iced Tea. It then gives you results that do not have Iced Tea in them but do have Dissolved Sugar. Then, you have some results with both items, but they’re not clearly related to making Iced Tea.
What we see are pages that are most likely the result of RankBrain trying to decipher intent. It tries to determine the relationship but has to return a kitchen sink of probable results because it is not sure of your intent.
So what we have now is a set of query terms that Google must assess against known “things” (entities). Then, the relationship between these things is analyzed against known relationships, at which time it hopes to have a clear understanding of your intent.
When it has a poor understanding of this intent, however, it may use RankBrain to list the probable result set for your query. Simply put, when Google cannot match intent to a result, it uses a machine to help refine that query into probabilities.
So where is Google going?

Google’s Future

While Google has been experimenting with RankBrain, they have lost market share — not a lot, but still, their US numbers are down. In fact, Google has lost approximately three percent of share since Hummingbird launched, so it seems these results were not received as more relevant or improved (and in some cases, you could say they are worse).
Google might have to decide whether it is an answer engine or a search engine, or maybe it will separate these and do both.
Unable to produce a semantic engine, Google built one based on facts. RankBrain has now been added to help refine search results, because entity search requires not only understanding what the nouns in a search mean, but also how they are related.
Over time, RankBrain will get better. It will learn new entities and the likely relationships between them. It will present better results than it does today. However, they are running against a ticking clock known as user share.
Only time will tell, but that time is limited.


Search Update Impact On SEO & Content Strategies: Staying Ahead With A Focus On Quality

Columnist Jim Yu explores how Google's numerous algorithm updates over the years have shaped search engine optimization strategies. Can this information provide a clue for what to expect in the future?

Since Google was first launched in 1998, the company has been continually refining its search algorithm to better match users with online content.
Over the years, many algorithm updates have targeted spammy and low-quality content in an effort to surface this content less frequently in search results. Other algorithm updates have been aimed at improving Google’s “understanding” of search queries and page content to better align search results with user intent.
The bottom line is that focusing on quality content and the user experience really is the best way to ensure your search engine optimization (SEO) and content marketing campaigns stay proactive rather than reactive when updates roll out.
Many Google updates have impacted numerous reputable sites. Search marketers have had to learn how to better optimize their pages with each update to avoid losing rankings. Considering that 67.60 percent of clicks go to the top five slots on SERPs, a drop of just a few positions because of an algorithm update can have a massive impact on traffic, revenue and conversions.
Over the coming weeks and months, as recent updates set in and impending updates come to pass, it will be interesting to see how SEO and content strategies evolve in response. In the meantime, here’s my overview of Google’s major algorithm updates (past, present and future) and their impact on the digital marketing landscape.

Panda

The Panda update was first launched in February 2011, though it has been updated several times since then. This update is designed to target sites with low-quality content and prevent them from ranking well in search engine results pages.
Sites that have pages of spammy content, too many ads or excessive duplicate content, for example, often experience Panda penalties.
It was recently announced that Panda was added to Google’s core ranking algorithm, which has caused considerable buzz in the industry.
While there are still some questions about what it means, there are some things we’re fairly certain about. Panda updates are expected to run more regularly, for example, which will be very helpful for brands who have seen their websites hit by Panda penalties.
However, contrary to early rumors, the update will not be run in real time.
When it comes to content production, since the initial Panda release, websites have needed to really focus on providing high-quality information. Websites that have pages of low-quality content, such as thin material with little insight, should improve the existing pages, rather than just deleting them.
Keep in mind that “quality” isn’t measured in content length, so you won’t improve your low-quality pages simply by adding more text. Content can be short or long — what matters is that it provides the information the user seeks. The quality of the content on a website matters more than the quantity.

Penguin

The Penguin update was first released about a year after the Panda update, in April 2012. The two are often grouped together when discussing Google’s big push to raise the quality of content that appears in search engine results.
This update focused largely on targeting spammy links. Google looks at backlinks as a signal of a website’s authority and reputation, taking a site or page’s backlink profile into consideration when determining rankings.
Back when its core algorithm was less sophisticated, people figured out that they could effectively game search engine rankings simply by obtaining significant numbers of (often spammy and irrelevant) backlinks.
Penguin combatted this manipulative technique by targeting pages that depended upon poor-quality links, such as link farms, to artificially raise their rankings. Websites with spammy backlink profiles have been forced to remove or disavow bad links in order to avoid ranking penalties.
Quality links still have something of value to offer websites, although Google emphasizes that sites should focus on developing a quality backlink profile organically. This means creating informative pieces that people will want to source with a backlink.
To attract attention to your piece, you can leverage the search, social and content trifecta. By creating high-quality pieces and then distributing them on social media, you start to attract attention to your work.
This can increase your readership and (in theory) help you acquire more backlinks. You can also use techniques such as posting guest posts on other reputable blogs to leverage your content and build a strong backlink profile.

Hummingbird

The Hummingbird update followed in the summer of 2013. This update was designed to improve Google’s semantic search capabilities. It was becoming increasingly common for people to use Google in a conversational way, to type their queries as though they were asking a friend.
This update was designed to help Google respond by understanding intent and context.
With this update, the development of content had to shift slightly again. With the emphasis on intent, Google was no longer simply playing a matching game where it connects the keywords in the query with the keywords in the content.
Content now needed to go beyond just the keyword. It needed to demonstrate an understanding of what users are interested in and what they would like to learn.
While keywords still are an important part of communicating with the search engine about the topic of the content, the way they were used shifted. Long-tail keywords became more important, and intent became crucial.
Content developers needed to direct their focus toward understanding why customers might be typing particular words into the search engine and producing content that addressed their needs.

Mobile Update

The year 2015 saw several major updates that impacted content development. The first, Google's mobile-friendly update, occurred in April. This update was unique because Google actually warned website owners in advance that it was coming.
With this update, Google recognized that mobile was beginning to dominate much of search and online customer behavior — in fact, just a couple months after the mobile-friendly update was announced, Google noted that mobile searches had officially surpassed desktop. The mobile-friendly update forced sites to become mobile-friendly or risk losing visibility to sites that were.
With this update, Google wanted sites to take into account what mobile users wanted to do online and how these needs were being addressed.
This meant that SEOs and content marketers had to start considering design factors such as:
  • Responsive design or a dedicated mobile page (a basic viewport example follows this list).
  • Having site navigation front and center and easy for customers to use with their fingers.
  • Avoiding frustrations caused by issues such as buttons that are too close together.
  • Having all forms as efficient and as easy as possible to fill out on a smartphone screen.
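As one small, concrete example of the first point above, responsive pages normally declare a viewport so the layout scales to the device width; this is a standard snippet, shown here only as a sketch:

<meta name="viewport" content="width=device-width, initial-scale=1"/>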
This mobile update also brought to the forefront the importance of brands optimizing for mobile, even going beyond what was required by Google to avoid a penalty.
For example, customers on mobile are often very action-oriented. They want to be able to call you or find your address. They want to view the information on your screen easily, without excessive scrolling. While long-form content is commonly read on mobile devices, making it easier for people to get back to the top is very beneficial.
Mobile users also tend to be very local-oriented. Content developed for mobile devices should take local SEO into account to maximize the mobile opportunities that present themselves.

Quality Update

Not long after the mobile update went live, people began reporting evidence of another Google update, which has since been nicknamed the Quality Update. It happened so quietly that even Google did not acknowledge the change at first.
During this update, sites that focused on the user experience and distributing high-quality content were rewarded, while sites that had many ads and certain types of user-generated content were more likely to be penalized. This was even true for established sites like HubPages.
Interestingly, however, not all user-generated content was hit on all sites. Some sites, like Quora, actually received a boost from the update; it is suspected that this is because Quora is very careful about the quality of the responses and content posted on its pages.
The key to avoiding a penalty with this update seemed to be avoiding thin content or other material that did not place the needs of the user first.
Sites also need to make sure that their pages are working well, as error messages place a site at risk for a penalty from this quality update. Google knows how frustrating it is to try to find an answer to a question and instead get treated to an overly promotional article or a 404.

RankBrain

RankBrain was announced in the fall of 2015, and it was also a unique change to the Google algorithm. With this update, the search engine ventured into the world of AI (artificial intelligence) and machine learning.
This system was designed to learn and predict user behaviors, which helps Google interpret and respond to the hundreds of millions of completely unique, never-before-seen queries that it encounters each day.
It is also assumed that RankBrain helps Google to interpret content and intent in some way. Although Google has given little information about how their new AI works, they have said that it has become the third most important ranking signal. For site owners, this has placed an even greater emphasis on creating content that matches the user intent.
Since RankBrain has gone live, some marketers have spoken about the importance of making sure that the technical side of SEO, such as schema markup, is all up to date. It is likely that as search engines become more dependent upon AI, these little details will become significant.
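For instance, structured data is one common piece of that technical layer. A minimal, hypothetical schema.org Article snippet in JSON-LD (all values below are placeholders) looks roughly like this:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example Article Title",
  "author": { "@type": "Person", "name": "Example Author" },
  "datePublished": "2016-06-04",
  "image": "http://example.com/example-image.jpg"
}
</script>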

The Buzz Over The Last Week: Panda & The Core Algorithm

Last week, some marketers were caught off guard by a new update that seemed to impact rankings for numerous sites. Although rumors initially circulated that this update might be the anticipated Penguin update or something to do with Panda, Google put those rumors to rest and officially confirmed that this was a core algorithm update not linked to other established updates.
Based upon the patterns established over the past few years, it is most likely that this adjustment, like the others, focused on better understanding user intent and identifying high-quality content.

Google: Hummingbird


What Is Google Hummingbird?

“Hummingbird” is the name of the new search platform that Google has been using since September 2013. The name comes from the algorithm being “precise and fast,” and it is designed to focus better on the meaning behind the words. Read our Google Hummingbird FAQ here.
Hummingbird is paying more attention to each word in a query, ensuring that the whole query — the whole sentence or conversation or meaning — is taken into account, rather than particular words. The goal is that pages matching the meaning do better, rather than pages matching just a few words.
Google Hummingbird is designed to apply the meaning technology to billions of pages from across the web, in addition to Knowledge Graph facts, which may bring back better results.