How to use country domains and international domains the right way

Google Search returns the most relevant and useful sites in response to a user query. For that reason, the results Google shows to a user in China can vary from the results returned to a user in Japan.


If your site has a generic top-level domain, such as .com or .org, and targets users in a particular geographic location, you can provide Google with information to help it determine how your site appears in our search results. This improves Google Search results for geographic queries, and it won’t impact your appearance in search results unless a user limits the scope of the search to a certain country.

Google treats the following as gTLDs.

Generic Top Level Domains (gTLDs)


Regional top-level domains

Although these domains are associated with a geographical region, they are generally treated as generic top-level domains (much like .com or .org).


Generic Country Code Top Level Domains

Google treats some ccTLDs (such as .tv, .me, etc.) as gTLDs, as we’ve found that users and webmasters frequently see these as more generic than country-targeted. Here is a list of those ccTLDs (note that this list may change over time).


If your site has a country-code top-level domain that is not treated as generic (such as .ie), it is already associated with a geographic region (in this example, Ireland). In this case, you won’t be able to specify a different geographic location.


How Search Engines Rank Web Pages

Search engines have developed many sophisticated techniques for weighting and valuing pages on the Web, but they all come down to two basic questions:

  • What does your Web page say?
    The actual text content of your Web page and HTML code. What content does your site convey to the user?
  • Who is linking to you?
    What sort of other Web pages are linking to yours? Do they have the same topic or a related topic?




When you look at a Web page, you see the page displayed on your computer screen. You can read the text, look at the images, and figure out what that page is about.

Search engines don’t see Web pages the same way a person does. In fact, search engines cannot actually see at all, at least not visually. Instead, they read the HTML code of the Web page, and the actual text that it contains.

All the search engines can read is text. They also can look at the HTML code (which is also text) of the site to try and get some clues about what that text means or which text is most important.

Search engines can sometimes use the HTML code to get some clues about other elements on the page, such as images and animation. For example, search engines can look at an image tag and read the alt text attribute, if the page author supplied it, to get an idea of what the image is.

<img src="cowpicture.jpg" alt="Picture of a cow">
However, this is not a replacement for actual text content.


Web links from other sites are also important clues that search engines use to figure out what your page is about, or how important your page is for a particular search query. In a search engine’s view, a link from one page to another is basically a “vote” for that page.

If you have a page about cows, and a local farmer’s Web page links to your page from their website for more information on the topic of cows, that is an extra vote for your page.

More links = more votes.

Not all votes are equal votes, however. Most important is how relevant the link is. For example, a page about cheap domain registration or cheap domain hosting doesn’t have much to do with dairy products or cows, so a link from that page to your website about cows does not count for very much at all, if anything.

Some Web page owners put a lot of time and effort into chasing down links from other Web page authors, swapping links or trying to get listed on directories or have articles posted to sites like Digg or Reddit. This can be helpful for your site, but you have to remember to focus on your own page content first. If your Web page doesn’t have much value to other site authors, they are unlikely to link to it.

Google Crawling, Indexing, and Serving Results

Below are the three key processes Google uses to deliver search results. Understanding them will help webmasters see how Google crawls, indexes, and serves results.

Crawling
Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.

We use a huge set of computers to fetch (or “crawl”) billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.

Google’s crawl process begins with a list of web page URLs, generated from previous crawl processes, and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites it detects links on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
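The crawl loop described above can be sketched in a few lines. This is a minimal illustration, not Googlebot itself: a hard-coded, hypothetical link graph stands in for real HTTP fetches, and the seed list plays the role of URLs from previous crawls and Sitemaps.

```python
from collections import deque

# Hypothetical link graph standing in for real fetched pages:
# each URL maps to the links discovered on that page.
LINK_GRAPH = {
    "https://example.com/": ["https://example.com/a", "https://example.com/b"],
    "https://example.com/a": ["https://example.com/b"],
    "https://example.com/b": ["https://example.com/", "https://example.com/c"],
    "https://example.com/c": [],
}

def crawl(seed_urls):
    """Breadth-first crawl: start from a seed list, 'fetch' each page,
    and queue any links on it that have not been seen before."""
    frontier = deque(seed_urls)
    seen = set(seed_urls)
    crawled = []
    while frontier:
        url = frontier.popleft()
        crawled.append(url)
        for link in LINK_GRAPH.get(url, []):  # links detected on the page
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return crawled

print(crawl(["https://example.com/"]))
```

A real crawler adds politeness delays, robots.txt checks, and per-site scheduling on top of this basic frontier loop.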

Google doesn’t accept payment to crawl a site more frequently, and we keep the search side of our business separate from our revenue-generating AdWords service.

Indexing
Googlebot processes each of the pages it crawls in order to compile a massive index of all the words it sees and their location on each page. In addition, we process information included in key content tags and attributes, such as Title tags and ALT attributes. Googlebot can process many, but not all, content types. For example, we cannot process the content of some rich media files or dynamic pages.
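The index of "all the words and their location on each page" is essentially an inverted index. Here is a toy sketch of that idea, using made-up page text; real indexers also store tag and attribute signals, which are omitted here.

```python
import re
from collections import defaultdict

def build_index(pages):
    """Map each word to the (page, word-position) pairs where it occurs,
    roughly what the indexing step described above produces."""
    index = defaultdict(list)
    for url, text in pages.items():
        for pos, word in enumerate(re.findall(r"[a-z]+", text.lower())):
            index[word].append((url, pos))
    return index

# Hypothetical page contents for illustration.
pages = {
    "/cows": "Cows give milk",
    "/farm": "The farm keeps cows",
}
index = build_index(pages)
print(index["cows"])  # every page and position where "cows" appears
```

Looking up a query word in such an index is what lets a search engine find matching pages without re-reading the Web.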

Serving results
When a user enters a query, our machines search the index for matching pages and return the results we believe are the most relevant to the user. Relevancy is determined by over 200 factors, one of which is the PageRank for a given page. PageRank is the measure of the importance of a page based on the incoming links from other pages. In simple terms, each link to a page on your site from another site adds to your site’s PageRank. Not all links are equal: Google works hard to improve the user experience by identifying spam links and other practices that negatively impact search results. The best types of links are those that are given based on the quality of your content.
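The "each link is a vote, but not all votes are equal" idea behind PageRank can be shown with a short power-iteration sketch. This is a simplified model under assumed inputs (a tiny hypothetical link graph), not Google's actual ranking, which combines over 200 factors.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy power-iteration PageRank: each page spreads its score evenly
    across its outgoing links, so incoming links raise a page's score."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: share its rank with everyone
                for p in pages:
                    new[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# "a" and "b" both link to "c", so "c" ends up with the highest score.
links = {"a": ["c"], "b": ["c"], "c": ["a"]}
scores = pagerank(links)
print(max(scores, key=scores.get))  # → 'c'
```

Note how "b", which no one links to, keeps only the small baseline score: that is the "vote" mechanism the paragraph above describes.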

In order for your site to rank well in search results pages, it’s important to make sure that Google can crawl and index your site correctly. Our Webmaster Guidelines outline some best practices that can help you avoid common pitfalls and improve your site’s ranking.

Google’s Related Searches, Spelling Suggestions, and Google Suggest features are designed to help users save time by displaying related terms, common misspellings, and popular queries. Like our search results, the keywords used by these features are automatically generated by our web crawlers and search algorithms. We display these suggestions only when we think they might save the user time. If a site ranks well for a keyword, it’s because we’ve algorithmically determined that its content is more relevant to the user’s query.


Hidden Text And Links – Google Webmaster Guidelines

Read Google’s guidelines about “Hiding text or links” and apply them to your site’s SEO.

Hiding text or links in your content can cause your site to be perceived as untrustworthy since it presents information to search engines differently than to visitors. Text (such as excessive keywords) can be hidden in several ways, including:

  • Using white text on a white background
  • Including text behind an image
  • Using CSS to hide text
  • Setting the font size to 0

Hidden links are links that are intended to be crawled by Googlebot, but are unreadable to humans because:

  • The link consists of hidden text (for example, the text color and background color are identical).
  • CSS has been used to make tiny hyperlinks, as little as one pixel high.
  • The link is hidden in a small character – for example, a hyphen in the middle of a paragraph.

If your site is perceived to contain hidden text and links that are deceptive in intent, your site may be removed from the Google index, and will not appear in search results pages. When evaluating your site to see if it includes hidden text or links, look for anything that’s not easily viewable by visitors of your site. Is any text, or are any links, there solely for search engines rather than for visitors?

If you’re using text to try to describe something search engines can’t access – for example, JavaScript, images, or Flash files – remember that many human visitors using screen readers, mobile browsers, browsers without plug-ins, and slow connections will not be able to view that content either. Using descriptive text for these items will improve the accessibility of your site. You can test accessibility by turning off JavaScript, Flash, and images in your browser, or by using a text-only browser such as Lynx. Some tips on making your site accessible include:

  • Images: Use the alt attribute to provide descriptive text. In addition, we recommend using a human-readable caption and descriptive text around the image.
  • JavaScript: Place the same content from the JavaScript in a noscript tag. If you use this method, ensure the contents are exactly the same as what is contained in the JavaScript, and that this content is shown to visitors who do not have JavaScript enabled in their browser.
  • Videos: Include descriptive text about the video in HTML. You might also consider providing transcripts.

If you do find hidden text or links on your site, either remove them or, if they are relevant for your site’s visitors, make them easily viewable. If your site has been removed from our search results, review our webmaster guidelines for more information. Once you’ve made your changes and are confident that your site no longer violates our guidelines, submit your site for reconsideration.

Sourced from Google Webmaster Guidelines