
Crawling

Crawling is the process by which Google discovers new and updated pages to be added to the Google index. A huge set of computers is used to fetch (or “crawl”) billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Crawling issues arise when elements of your page prevent it from being properly crawled and indexed by a search engine.
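
One common source of crawling issues is the robots.txt file, which tells crawlers such as Googlebot which URLs they are allowed to fetch. As a minimal sketch, the Python standard library's urllib.robotparser can check whether a given URL is open to Googlebot; the example.com URLs below are placeholders standing in for your own site.

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the site's robots.txt (placeholder URL).
    parser = RobotFileParser()
    parser.set_url("https://example.com/robots.txt")
    parser.read()

    # Check whether Googlebot is permitted to crawl a specific page.
    url = "https://example.com/some-page"
    if parser.can_fetch("Googlebot", url):
        print(f"Googlebot may crawl {url}")
    else:
        print(f"robots.txt blocks Googlebot from {url}")

A check like this is a quick first diagnostic when a page is missing from the index: if robots.txt disallows the URL, Googlebot will never fetch it, regardless of how the page itself is built.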