Web Crawler

“A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or—especially in the FOAF community—Web scutters.
This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).”
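To make the spidering process described above concrete, here is a minimal Python sketch of the basic crawl loop: fetch a page, hand it off for processing, extract its links, and enqueue them for later visits. The seed URL, page limit, and politeness delay are illustrative placeholders, not details of any particular crawler.

import time
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=10, delay=1.0):
    frontier = deque([seed])   # URLs waiting to be fetched
    visited = set()            # URLs already processed
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            continue           # skip unreachable or non-HTTP URLs
        # A real crawler would pass the page to an indexer here;
        # this sketch only records the visit.
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            frontier.append(urljoin(url, link))  # resolve relative links
        time.sleep(delay)      # politeness: do not hammer the host
    return visited

if __name__ == "__main__":
    print(crawl("https://example.com/"))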
 
Our company has developed a web crawler. It operates in a distributed fashion: it runs in parallel on several computers, but it is controlled from a central node. The crawler can be configured to glean predetermined types of information from the Internet.
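As an illustration of this distributed arrangement (a sketch only, not our crawler's actual internals), the following Python example has a central coordinator that owns the work queue and hands URLs to parallel worker processes; the fetch step is stubbed out, and the worker count is an arbitrary assumption. Keeping the frontier on the coordinator is what makes central control possible: workers never decide on their own what to fetch next.

import multiprocessing as mp

def worker(task_queue, result_queue):
    """Processes URLs handed out by the coordinator (fetch stubbed here)."""
    while True:
        url = task_queue.get()
        if url is None:                # sentinel: coordinator says stop
            break
        # A real worker would download and parse the page here.
        result_queue.put((url, "fetched " + url))

def coordinator(seeds, num_workers=4):
    task_queue = mp.Queue()
    result_queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(task_queue, result_queue))
             for _ in range(num_workers)]
    for p in procs:
        p.start()
    for url in seeds:                  # central control: only the
        task_queue.put(url)            # coordinator assigns work
    for _ in seeds:
        print(result_queue.get())      # gather results centrally
    for _ in procs:
        task_queue.put(None)           # tell each worker to exit
    for p in procs:
        p.join()

if __name__ == "__main__":
    coordinator(["https://example.com/a", "https://example.com/b"])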
 
Copyright (c) 2001-2011 Program Produkt
+36 1 338 1739