Managing your website or apps with Search Console
Day to day
Relax: Search Console will email you if any unusual events occur with your properties. Unusual events include indications that your website has been hacked, or problems Google encountered when crawling or indexing your site. We will also email you if we detect that your site violates any of Google’s search quality guidelines.
Note: It can take some time after verifying your site to start seeing data in Search Console reports, so add your site and verify it soon. We’ll let you know when your data becomes available.
Every month or so, take a look at your Search Console dashboard; it’s the simplest way to get a quick health check on your site:
Make sure that you aren’t experiencing an increase in errors for your site.
Check that you don’t have any unusual dips in your click counts. Note that a weekly rhythm of weekend dips, or dips or spikes over holidays, is normal.
When your content changes
Check Search Console whenever you make important site changes to monitor your site’s behavior in Google Search.
Adding new content to your site:
- Test that Google can access your pages using the Fetch as Google tool.
- Tell Google which pages to crawl by updating your sitemap.
- Tell Google which pages not to crawl using robots.txt or noindex tags.
- A few weeks after you post content, confirm that the number of indexed pages in your site is rising and that you don’t have any blocked resources that might impair Google’s crawling of your pages.
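As an illustration of the sitemap and crawl-control steps above (the example.com domain and all paths are hypothetical), a minimal sitemap entry and robots.txt might look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want Google to crawl -->
  <url>
    <loc>http://www.example.com/new-article</loc>
    <lastmod>2016-05-01</lastmod>
  </url>
</urlset>
```

```txt
# robots.txt — tells crawlers which paths not to fetch (hypothetical paths)
User-agent: *
Disallow: /private/
Sitemap: http://www.example.com/sitemap.xml
```

To keep a crawlable page out of the index instead, add `<meta name="robots" content="noindex">` to its `<head>`. Note that a page blocked by robots.txt is never fetched, so Google can’t see a noindex tag on it; use one mechanism or the other for a given page.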
Adding new properties:
- A new mobile site: We recommend using a single site that adapts to users on any device, but if you decide to have a separate site for mobile users, be sure to add it to Search Console and add tags to connect it to your existing site.
- A new app: Get your app content into Google Search.
- New international content: Be sure to target the correct country for your site and add hreflang link tags to your pages.
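To illustrate the connecting tags mentioned above (all URLs are hypothetical), both the desktop/mobile annotations and hreflang tags go in each page’s `<head>`:

```html
<!-- On the desktop page: point to the separate mobile URL -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="http://m.example.com/page.html">

<!-- On the mobile page: point back to the desktop version -->
<link rel="canonical" href="http://www.example.com/page.html">

<!-- hreflang: tell Google which language/region each variant targets -->
<link rel="alternate" hreflang="en-us" href="http://www.example.com/page.html">
<link rel="alternate" hreflang="de" href="http://www.example.com/de/page.html">
```

Each language or device variant should annotate all of its alternates (including itself, in the hreflang case), so the tags form a consistent, bidirectional set.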
If you change your site’s domain name:
Use the Change of Address tool to point Google Search to your new location.
Removing a page from search results:
- Use the URL removal tool and take other appropriate steps to block crawling and/or indexing.
How Google Search works

When you sit down at your computer and do a Google search, you’re almost instantly presented with a list of results from all over the web. How does Google find web pages matching your query and determine the order of search results?

In the simplest terms, you could think of searching the web as looking in a very large book with an impressive index telling you exactly where everything is located. When you perform a Google search, our programs check our index to determine the most relevant search results to be returned (“served”) to you.
The three key processes in delivering search results to you are:
- Crawling: Does Google know about your site? Can we find it?
- Indexing: Can Google index your site?
- Serving: Does the site have good and useful content that is relevant to the user’s search?
Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.
We use a huge set of computers to fetch (or “crawl”) billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.
Google’s crawl process begins with a list of web page URLs, generated from previous crawl processes, and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites it detects links on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
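The crawl loop described above can be sketched in miniature. This is not Googlebot’s actual implementation; it is a toy breadth-first crawler over a hypothetical in-memory link graph that stands in for real page fetches:

```python
from collections import deque

# Hypothetical in-memory "web": page URL -> list of outgoing links.
# Stands in for fetching and parsing real HTML.
WEB = {
    "http://example.com/": ["http://example.com/a", "http://example.com/b"],
    "http://example.com/a": ["http://example.com/b"],
    "http://example.com/b": ["http://example.com/"],
}

def crawl(seed_urls):
    """Breadth-first crawl: start from seed URLs (e.g. sitemap entries),
    'fetch' each page, and add newly discovered links to the frontier."""
    frontier = deque(seed_urls)
    seen = set(seed_urls)
    index = {}
    while frontier:
        url = frontier.popleft()
        links = WEB.get(url, [])  # a real crawler would fetch and parse the page here
        index[url] = links
        for link in links:
            if link not in seen:  # only queue pages we haven't scheduled yet
                seen.add(link)
                frontier.append(link)
    return index

pages = crawl(["http://example.com/"])  # discovers all three pages from one seed
```

A real crawler also layers on politeness (crawl-rate limits per site), robots.txt checks, and revisit scheduling, which is where the “which sites, how often, how many pages” decisions mentioned above come in.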
Google doesn’t accept payment to crawl a site more frequently, and we keep the search side of our business separate from our revenue-generating AdWords service.
Googlebot processes each of the pages it crawls in order to compile a massive index of all the words it sees and their location on each page. In addition, we process information included in key content tags and attributes, such as title tags and alt attributes. Googlebot can process many, but not all, content types. For example, we cannot process the content of some rich media files or dynamic pages.
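For example, the key tags and attributes mentioned above look like this in a page’s HTML (the content is illustrative):

```html
<head>
  <!-- The title tag is indexed and often shown as the result's headline -->
  <title>Fresh Sourdough Recipes</title>
</head>
<body>
  <!-- alt text lets Google index what an image depicts -->
  <img src="loaf.jpg" alt="A sliced sourdough loaf on a cutting board">
</body>
```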
When a user enters a query, our machines search the index for matching pages and return the results we believe are the most relevant to the user. Relevancy is determined by over 200 factors, one of which is the PageRank for a given page. PageRank is a measure of a page’s importance based on the incoming links from other pages. In simple terms, each link to a page on your site from another site adds to your site’s PageRank. Not all links are equal: Google works hard to improve the user experience by identifying spam links and other practices that negatively impact search results. The best types of links are those that are given based on the quality of your content.
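The “links as votes” idea can be made precise. The formula below is the original published PageRank definition from Brin and Page’s paper; as noted above, Google’s live ranking combines it with hundreds of other factors:

```latex
PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right)
```

Here $T_1, \ldots, T_n$ are the pages linking to $A$, $C(T_i)$ is the number of outbound links on page $T_i$, and $d$ is a damping factor (typically around 0.85). Each inbound link contributes a share of the linking page’s own importance, so a link from an important page counts for more than a link from an obscure one.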
In order for your site to rank well in search results pages, it’s important to make sure that Google can crawl and index your site correctly. Our Webmaster Guidelines outline some best practices that can help you avoid common pitfalls and improve your site’s ranking.
Google’s “Did you mean” and Autocomplete features are designed to help users save time by displaying related terms, common misspellings, and popular queries. Like our google.com search results, the keywords used by these features are automatically generated by our web crawlers and search algorithms. We display these predictions only when we think they might save the user time. If a site ranks well for a keyword, it’s because we’ve algorithmically determined that its content is more relevant to the user’s query.