
Web Spider Traps


When authors do not want their site to be copied or indexed by search engines, they can use:

  1. A meta tag such as <meta name="robots" content="noindex,nofollow"> (honoured by well-behaved bots only).
  2. A robots.txt file indicating the parts of the site that must not be explored (well-behaved bots only).
  3. .htaccess rules banning known or detected robots (works against any webbot).
  4. A Java applet, some HTML, or a script written in PHP, JavaScript or any other language (works against any webbot).
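For method 3, a minimal .htaccess sketch, assuming Apache's mod_setenvif module is available; "BadBot" is a placeholder, not a real robot name:

```apache
# .htaccess sketch: deny requests whose User Agent contains "BadBot"
SetEnvIfNoCase User-Agent "BadBot" bad_bot
Order Allow,Deny
Allow from all
Deny from env=bad_bot
```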
These methods are detailed in English in the HTTrack documentation (abuse FAQ for webmasters) and at www.webmasterworld.com (search for "spider traps" / "Blocking Badly Behaved Bots", or have a look at www.webmasterworld.com/forum24/, www.webmasterworld.com/forum88/ or http://www.webmasterworld.com/forum92/).

Trap?

All these traps are likely to prevent search engines from indexing the pages, make browsing more difficult, and discourage users.

Fighting "spam harvesters", "email grabbers", "email collectors" and "spambots" is easy to understand and fairly easy to do. But not all spiders are used for bad purposes, so why block them all, even if they consume bandwidth and sometimes overload or block some sites?
Captures can be made for good reasons and by good people: this site tries to help those who mirror sites for their students, or those who cannot afford to stay online...

Mirror?

Often, after some time, protections are removed: visitors whose browsers lack the plugins (Flash, Java JRE 6.0) or do not interpret JavaScript are lost readers or lost customers.

If you think a site is interesting enough to be mirrored, ask the author for a copy that you can browse offline.
Indeed, if you activate the option "no robots.txt rules", you may get your IP address banned from the site, or you may copy hundreds of pages of no interest (error pages, images, documentation, etc.).

In all cases, locate the useful folders and set reasonable limits on bandwidth and on connections per second (Options - Limits - Max transfer rate and Options - Limits - Max connections / second).
Examples 12 and 17 of website mirrors may help you.

Identify a robot

You can read about how the different robots identify themselves here:

Robots and this site

List of the robots visiting this site (they index the site, test links to the site, run surveys, or check clients' names, plagiarism, spam...):

"1Noonbot search engine" - "50.nu" - "80legs crawler" - "ABACHOBot search engine" - "abcfr_robot search engine" - "Accoona-AI-Agent search engine" - "AcoonBot search engine" - "ActiveBookmark" - "Advanced URL Catalog bookmark manager" - "Advista search engine" - "aiHitBot" - "aipbot search engine" - "alef" - "Aleksika search engine" - "amagit.com search engine" - "Amazonbot crawler" - "Amfibibot search engine" - "Anonymous / Skywalker" - "AnswerBus search engine" - "AntBot search engine" - "antibot crawler" - "appie 1.1 (www.walhello.com) search engine" - "Apple-PubSub RSS monitoring" - "archive.org_bot crawler" - "Argus bookmark managing crawler" - "Art-Online.com 0.9(Beta) crawler" - "Ask Jeeves crawler" - "Asterias crawler" - "atraxbot" - "Baiduspider search engine" - "Bazbot search engine" - "BecomeBot search engine" - "Big Fish log spam" - "Biglotron search engine" - "bingbot crawler" - "binlar" - "bitlybot" - "bixolabs  Data Mining" - "BlackMask.Net search engine" - "BlogCorpusCrawler" - "Bloglines RSS monitoring" - "Bluebot crawler" - "BnF" - "bogospider" - "boitho.com-robot search engine" - "Bookdog bookmark manager" - "bot/1.0" - "botmobi search engine" - "BruinBot crawler" - "Butterfly search engine" - "BuzzRankingBot crawler" - "C4PC" - "CacheBot" - "Caliperbot" - "CamontSpider crawler" - "capek crawler" - "Casper Bot Search zombie" - "CatchBot crawler" - "CazoodleBot crawler" - "CCBot crawler" - "ccubee search engine" - "CentiverseBot search engine" - "cfetch" - "Chanceo log spam" - "Charlotte search engine" - "Cherchonsbot search engine" - "Cityreview" - "CMS Crawler" - "Combine crawler" - "comBot search engine" - "cometsystems crawler" - "Content Crawler crawler" - "ContextAd Bot" - "Convera  RetrievalWare" - "CorenSearchBot" - "Corpora from the web crawler" - "Cosmix crawler" - "CosmixCrawler search engine" - "Covario crawler" - "Crawl Annu" - "Crawllybot search engine" - "csci_b659  Data Mining" - "CSS/HTML/XTHML  Validator" - "CSSCheck" - 
"cybercity.dk IE 5.5 Compatible Browser" - "CydralSpider search engine" - "darxi spam / email grabbing" - "DataForSEO Link Bot" - "DataFountains/DMOZ Downloader" - "DAUM Web Robot search engine" - "dcbspider search engine" - "DealGates" - "Declumbot" - "deepak-USC/ISI  spider" - "del.icio.us-thumbnails" - "del.icio.us  bookmark manager link checker" - "DepSpid crawler" - "Diamond search engine" - "Diffbot" - "Directcrawler" - "discobot crawler" - "DLE_Spider spam" - "DMOZ Experiment" - "DNSGroup crawler" - "Domains Project crawler" - "DotBot crawler" - "DTAAgent search engine" - "Dumbot search engine" - "e-SocietyRobot crawler" - "eApolloBot search engine" - "EasyDL/3.04" - "EdisterBot crawler" - "ejupiter.com search engine" - "ellerdale search engine" - "EnaBot crawler" - "envolk search engine" - "ePochta_Extractor spam / email grabbing" - "ETS  translation bot" - "europarchive" - "Exabot crawler" - "Exabot-Thumbnails" - "exactseek-crawler-2.63" - "Exalead NG" - "exooba crawler" - "Ezooms" - "facebookexternalhit" - "Factbot search engine" - "Falconsbot search engine" - "FAST crawler" - "FAST Enterprise Crawler" - "FAST FirstPage retriever" - "fast-search-engine" - "FAST-WebCrawler" - "FAST MetaWeb Crawler" - "FavOrg Link checker" - "favorstarbot Advertising" - "FeedBot" - "FeedBurner" - "FeedFetcher-Google" - "Fetch API Request" - "Filangy bookmark managing crawler" - "Findexa crawler" - "findfiles.net search engine" - "findlinks" - "flatlandbot" - "fleck" - "Flight Deck" - "FlightDeckReports" - "Fluffy (searchhippo) search engine" - "flyindex search engine" - "Focal crawler" - "FollowSite" - "Friend search engine" - "FurlBot search engine" - "Gaisbot/3.0 search engine" - "Galbot crawler" - "Generalbot" - "genevabot search engine" - "geniebot search engine" - "GeoBot" - "Gigabot crawler" - "Gigamega.bot search engine" - "GingerCrawler" - "Girafabot" - "Gnomit crawler" - "GOFORITBOT search engine" - "gold crawler" - "Google Desktop RSS monitoring" - 
"Google-Site-Verification" - "Google-Sitemaps" - "Googlebot crawler" - "Googlebot-Image" - "Googlebot-Mobile" - "Google Web Preview" - "grub search engine" - "grub crawler" - "grub.org" - "gsa-crawler" - "GT::WWW/1." - "gURLChecker Link checker" - "GurujiBot search engine" - "GUSbot" - "GVC-SPIDER" - "Hailoobot search engine" - "Haste" - "hclsreport crawler" - "Helix crawler" - "HenriLeRobotMirago crawler" - "Heritrix crawler" - "hoge" - "Holmes search engine" - "HooWWWer crawler" - "htdig" - "HuaweiSymantecSpider crawler" - "ia_archiver crawler" - "ICC-Crawler crawler" - "ichiro search engine" - "icsbot-0.1" - "IlTrovatore search engine" - "imbot" - "INA dlweb crawler" - "IndoCrew zombie" - "Indy Library  Internet Direct Library for Borland - often spambot" - "InelaBot crawler" - "inet library" - "inktomi Slurp crawler" - "InsiteRobot" - "integromedb.org crawler" - "InternetSeer Connectivity checker" - "Interseek" - "IntranooBot" - "IP*Works Link checker" - "IRLbot crawler" - "iSearch search engine" - "istarthere search engine" - "IXE Crawler" - "Jakarta Commons" - "Jetbot/1.0 crawler" - "JungleKeyBot search engine" - "Jyxobot search engine" - "KaloogaBot search engine" - "Killou.com search engine" - "KiwiStatus search engine" - "kmccrew Bot Search zombie" - "Knowledge.com search engine" - "knowmore" - "KomodiaBot" - "Lachesis" - "larbin crawler" - "ldspider" - "leak" - "lemurwebcrawler" - "librabot search engine" - "libwww-perl" - "LinguaBot search engine" - "linkaGoGo crawler" - "LinkChecker" - "Link Commander bookmark manager" - "linkdex.com" - "Linkman Link checker" - "Links SQL" - "Link Valet Online Link checker" - "LiteFinder search engine" - "livemark.jp Link checker" - "lmspider crawler" - "Look.com search engine" - "Loopy.fr search engine" - "Loserbot" - "Lsearch/sondeur" - "lwp-request" - "lwp-trivial" - "LWP::Simple" - "MagpieRSS" - "Mail.Ru" - "MaMa CaSpEr zombie" - "MaMa CyBer zombie" - "MapoftheInternet search engine" - "Marvin search engine" - 
"Me.dium OneRiot crawler" - "Mediapartners-Google" - "Megaglobe search engine" - "Megite  news aggregator" - "MetaGeneratorCrawler" - "Metaspinner search engine" - "MileNSbot search engine" - "Mirago (HenriLeRobot) crawler" - "MJ12bot crawler" - "MLBot" - "MnogoSearch/3.2.11" - "MojeekBot search engine" - "Monrobot crawler" - "MOSBookmarks Link checker" - "mozDex crawler" - "Mozilla/4.0 (compatible; MSIE 6.0)" - "Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.0;)" - "Mp3Bot search engine" - "MQbot crawler" - "ms research robot" - "MSIE 4.5 log spam" - "MSIE 6.0 (compatible; MSIE 6.0;... log spam" - "MSIE 7.01 log spam" - "MSMOBOT crawler" - "msnbot crawler" - "MSNPTC  MSN search robot" - "MSR-ISRCCrawler" - "MSRBOT crawler" - "MultiCrawler search engine" - "mxbot" - "MyFamilyBot crawler" - "Nambu" - "NaverBot search engine" - "NaverRobot search engine" - "Nelian Pty Ltd" - "Netcraft survey" - "netEstate crawler" - "NetID Bot Advertising" - "NetResearchServer search engine" - "NetSprint search engine" - "NetWhatCrawler search engine" - "newsg8 RSS monitoring" - "NEWT ActiveX spam / email grabbing" - "NG-Search search engine" - "NG/1.0" - "NG/2.0 crawler" - "NGBot crawler" - "nicebot" - "Nigma search engine" - "NimbleCrawler search engine" - "NjuiceBot" - "Norbert the Spider search engine" - "NoteworthyBot" - "NPBot  NameProtect crawler" - "nrsbot search engine" - "NuSearch Spider search engine" - "Nutch crawler" - "Nutch (Princeton) crawler" - "ObjectsSearch search engine" - "oBot crawler" - "octopodus search engine" - "Octora crawler" - "ODP::/0.01 Link checker" - "ODP entries" - "ODP links test" - "OmniExplorer_Bot search engine" - "onalytica" - "onCHECK" - "OnetSzukaj search engine" - "OOZBOT search engine" - "Openbot search engine" - "OpenindexSpider" - "OpenISearch search engine" - "OpenTaggerBot  social bookmarks" - "OpenX Spider Advertising" - "OrangeBot-Mobile search engine" - "OutfoxBot" - "ozelot" - "page-store" - "Pagebull search engine" - "pagepeeker" - 
"page_verifier" - "Paleoweb crawler" - "PanopeaBot/1.0 (UCLA CS Dpt.)" - "panopta.com Connectivity checker" - "Pathtraq search engine" - "PEERbot search engine" - "PeerFactor crawler" - "petalbot crawler" - "Pete-Spider crawler" - "pflab" - "PHP/4." - "PHP version tracker web stats" - "PicSpider" - "PipeLine  spider" - "Pita crawler" - "plaNETWORK Bot Search zombie" - "Plukkie search engine" - "PollettSearch crawler" - "polybot crawler" - "Pompos - dir.com crawler" - "Popdexter crawler" - "PostFavorites" - "PostRank" - "Powermarks Link checker" - "PrivacyFinder search engine" - "PROBE! search engine" - "Program Shareware" - "psbot crawler" - "Python-urllib" - "QEAVis" - "QihooBot search engine" - "Qualidator.com Bot" - "quickobot crawler" - "RAMPyBot search engine" - "RankurBot" - "Rapid-Finder search engine" - "Reaper/2.06 search engine" - "RedBot crawler" - "RedCarpet" - "RixBot search engine" - "robotgenius  malware detection?" - "Robozilla/1.0" - "RSSMicro search engine" - "RTGI  Data Mining" - "RufusBot" - "sagool search engine" - "savvybot search engine" - "SBIder crawler" - "schibstedsokbot search engine" - "Scooter search engine" - "ScoutJet search engine" - "Scrubby search engine" - "search.updated.com search engine" - "Search17Bot search engine" - "SearchByUsa search engine" - "SearchIt.Bot search engine" - "SearchWebLinks" - "Seekbot crawler" - "Semager search engine" - "SemrushBot" - "Sensis search engine" - "SEOENGBot" - "SEOprofiler bot crawler" - "SETOOZBOT search engine" - "SeznamBot" - "ShablastBot search engine" - "Shelob" - "sherlock search engine" - "Shim-Crawler" - "ShrinkTheWeb crawler" - "ShunixBot crawler" - "silk search engine" - "Sindup RSS monitoring" - "SISTRIX crawler" - "SiteBot log spam" - "SiteIntel.net Bot" - "Skywalker / Anonymous" - "sledink Bot Search zombie" - "Slurpy Verifier" - "snap.com search engine" - "Snapbot search engine" - "SnapPreviewBot" - "socbay search engine" - "sogou spider" - "sohu-search search engine" - "sohu 
agent search engine" - "Solomono search engine" - "Sosospider search engine" - "SpeedySpider search engine" - "SpiderLing crawler" - "Spinn3r" - "sproose crawler" - "SpurlBot bookmark managing crawler" - "sSearch Crawler" - "statbot" - "StatusCheckBot Link checker" - "Steeler crawler" - "SuperBot search engine" - "Susie  bookmark manager link checker" - "sygol search engine" - "SynapticWalker spam / email grabbing" - "SynooBot search engine" - "Syntryx ANT Chassis crawler" - "Szukacz/1.5 search engine" - "T-H-U-N-D-E-R-S-T-O-N-E" - "TargetYourNews Link checker" - "Teemer" - "Teoma search engine" - "TerraSpider" - "test" - "TFC" - "Theophrastus" - "Thriceler search engine" - "Thumbnail.CZ robot" - "thumbshots-de-bot" - "TinEye crawler" - "TranSGeniKBot" - "trexmod" - "Tubenowbot Link checker" - "TurnitinBot crawler" - "TutorGigBot crawler" - "Tutorial Crawler" - "TweetmemeBot" - "TwengaBot crawler" - "Twiceler crawler" - "Twisted PageGetter" - "Twitterbot" - "Twitturl" - "TygoBot search engine" - "uberbot crawler" - "UnChaosBot search engine" - "Unicorn  Validator" - "updated search engine" - "Update Profile Bot search engine" - "Updownerbot" - "UptimeAuditor Connectivity checker" - "UptimeBot" - "URLBase bookmark manager" - "Valizbot crawler" - "VDSX.nl search engine" - "VelenPublicWebCrawler" - "versus crawler" - "Visbot search engine" - "VoilaBot crawler" - "Voluniabot" - "Vortex crawler" - "voyager search engine" - "VSE/1.0 crawler" - "W3C-checklink" - "WASALive search engine" - "WebAlta crawler" - "WebarooBot crawler" - "WebCorp search engine" - "webcrawl search engine" - "WebFilter" - "WebIndexer search engine" - "WebRACE/1.1" - "Webscan" - "WebsiteWorth log spam" - "wikiwix search engine" - "Willow Internet Crawler" - "Windows-Live-Social-Object-Extractor-Engine" - "WinkBot search engine" - "Winsey search engine" - "WIRE" - "WongBot" - "woriobot search engine" - "WorQmada Link checker" - "Wotbox search engine" - "wume_crawler" - 
"www.almaden.ibm.com/cs/crawler" - "www.IsMySiteUp.Net" - "www.pisoc.com search engine" - "Xenu Link checker" - "Xerka  Data Mining" - "xirq search engine" - "XmarksFetch bookmark manager  search engine" - "yacybot search engine" - "Yahoo! Slurp crawler" - "Yahoo! Mindset" - "Yahoo-MMCrawler" - "Yahoo-Test crawler" - "YahooSeeker search engine" - "YahooVideoSearch search engine" - "Yandex search engine" - "Yanga search engine" - "yellowJacket Link checker" - "YesupBot" - "Yeti search engine" - "Yooda" - "yoono search engine" - "YottaCars search engine" - "YottaShopping search engine" - "YoudaoBot search engine" - "YRSpider" - "ZeBot search engine" - "zerxbot search engine" - "Zeus search engine" - "Zion crawler" - "ZipppBot search engine" - "ZyBorg/1.0 search engine" - "IP 103.105.167.*** crawler" - "IP 104.128.16.*** crawler" - "IP 104.128.17.*** crawler" - "IP 104.128.18.*** crawler" - "IP 104.128.19.*** crawler" - "IP 104.128.20.*** crawler" - "IP 104.128.21.*** crawler" - "IP 104.128.22.*** crawler" - "IP 104.237.245.*** crawler" - "IP 107.174.242.*** crawler" - "IP 108.161.133.*** crawler" - "IP 109.194.243.*** crawler" - "IP 109.219.117.*** crawler" - "IP 13.67.210.*** crawler" - "IP 13.67.214.*** crawler" - "IP 137.184.114.*** crawler" - "IP 137.184.229.*** crawler" - "IP 137.184.37.*** crawler" - "IP 144.168.157.*** crawler" - "IP 146.70.52.*** crawler" - "IP 162.221.200.*** crawler" - "IP 167.172.62.*** crawler" - "IP 17.121.115.*** crawler" - "IP 173.255.191.8 Link checker" - "IP 176.67.86.64 Link checker" - "IP 178.159.37.*** crawler" - "IP 185.147.213.25 Link checker" - "IP 185.2.28.*** crawler" - "IP 185.37.57.*** crawler" - "IP 185.37.59.*** crawler" - "IP 192.169.139.*** crawler" - "IP 192.200.16.*** crawler" - "IP 192.200.17.*** crawler" - "IP 192.200.18.*** crawler" - "IP 195.246.120.*** crawler" - "IP 197.49.42.*** crawler" - "IP 20.84.225.*** crawler" - "IP 208.80.194.*** crawler" - "IP 209.97.187.*** crawler" - "IP 216.131.116.202 Link checker" 
- "IP 216.131.72.174 Link checker" - "IP 216.131.75.107 Link checker" - "IP 216.131.75.96 Link checker" - "IP 216.131.80.53 Link checker" - "IP 216.208.214.*** crawler" - "IP 23.95.18.*** crawler" - "IP 3.21.168.*** crawler" - "IP 37.20.128.*** crawler" - "IP 40.83.55.*** crawler" - "IP 45.5.65.*** crawler" - "IP 46.161.11.*** crawler" - "IP 5.188.211.*** crawler" - "IP 5.188.48.*** crawler" - "IP 5.188.84.250" - "IP 5.189.239.*** crawler" - "IP 5.20.33.*** crawler" - "IP 54.39.29.*** crawler" - "IP 62.210.215.1** RSS monitoring" - "IP 62.210.215.117 RSS monitoring" - "IP 64.124.8.*** crawler" - "IP 64.145.79.214 Link checker" - "IP 64.145.94.88 Link checker" - "IP 68.234.44.*** crawler" - "IP 77.88.5.*** crawler" - "IP 80.135.157.*** crawler" - "IP 92.124.29.*** crawler" - "IP 95.65.81.*** crawler" - , "www.dir.com"

You can see their last visits or find their identity (1472 User Agent strings) or download a list.

Some robots regularly request robots.txt, but link checkers (checking inbound links from other sites or search engines), validation tools and log spammers do not read robots.txt.

Among those exploring the site

Did not follow robots.txt rules:

  • Advista AdBot, alef/0.0, AhrefsBot, Alexa, Asterias, BIGLOTRON(Beta 2), bingbot, boitho.com, Content Crawler, DataForSEO Link Bot, DTAAgent, fast-search-engine, Fetch API Request, Gigamega.bot, grub (looksmart & other users), Helix, ia_archiver (Alexa), IRLbot, INA dlweb, Jyxobot, libwww-perl, LiteFinder, Lsearch/sondeur, LWP (simple & trivial), MegaIndex, msnbot/2.0b, MSR-ISRCCrawler, NetResearchServer, NOOS, OmniExplorer_Bot, Pompos (www.dir.com), Program Shareware, Seekport, shunix (libwww-perl/5.803), TygoBot, wbdbot, WebCrawler, Yahoo! Slurp/3.0, ZyBorg

- recently:

  • bingbot, DataForSEO Link Bot, Domains Project, MegaIndex, Seekport

Did not limit bandwidth usage:

  • appie, Ask Jeeves, Exalead ou NG/1.0, Fetch API Request, msnbot/0.1, msnbot/0.11, NaverRobot, Pompos (www.dir.com), Program Shareware, shunix (Xun), TygoBot, WebCrawler

- recently:

  • Cityreview, e-SocietyRobot, INA dlweb, LWP (simple & trivial), NG/2 (Exalead), OmniExplorer_Bot, Seekbot

Followed robots.txt rules except for exe, pdf, tar and zip files:
- recently:

  • larbin, Sensis.com.au, sygol, ZyBorg


Older visits:

Explore home page only

  • Anonymous
  • Bazbot
  • Big Fish
  • BuzzRankingBot
  • CentiverseBot
  • Cherchonsbot
  • CMS Crawler
  • comBot
  • ContextAd Bot
  • Cosmix
  • Crawl Annu
  • Crawllybot
  • cybercity.dk
  • DataFountains/DMOZ Downloader
  • Declumbot
  • del.icio.us-thumbnails
  • DMOZ Experiment
  • DNSGroup
  • DomainTaggingbot
  • DuckDuckGo
  • ejupiter.com
  • elefent
  • emefgebot
  • envolk
  • exooba
  • Expanse
  • favorstarbot
  • flatlandbot
  • Flight Deck
  • Fluffy
  • flyindex
  • FollowSite
  • Gaisbot/3.0
  • Galbot
  • GeoBot
  • Gnomit
  • GOFORITBOT
  • google+
  • grub crawler
  • GT::WWW/1.02
  • GVC-SPIDER
  • Holmes
  • HooWWWer
  • HouxouCrawler
  • ICC-Crawler
  • Indy Library
  • InelaBot
  • InsiteRobot
  • InternetSeer
  • IP*Works
  • IP 67.15.68.85
  • IP 67.108.232.229
  • IP 193.109.173.79
  • IP 207.44.188.104
  • iSearch
  • JikeSpider
  • JungleKeyBot
  • KaloogaBot
  • KiwiStatus Update Profile
  • Knowledge.com
  • KomodiaBot
  • linkaGoGo
  • LinkPimpin
  • Links SQL
  • Look.com
  • Loopy.fr
  • Loserbot
  • MapoftheInternet
  • Marvin
  • MetaGenerator
  • Metaspinner
  • Monrobot
  • Monsidobot
  • mozDex
  • MQBOT
  • MSIE 4.5; Windows 98;
  • MSIE 6.0 (compatible; MSIE 6.0;
  • MSIE 7.01
  • MSNPTC
  • MultiCrawler
  • NCBot
  • Netcraft
  • netEstate
  • NetID Bot
  • NetResearchServer
  • NetSprint
  • NetSystemsResearch
  • NetWhatCrawler
  • NimbleCrawler
  • nrsbot
  • ObjectsSearch
  • octopodus
  • ODP::/0.01
  • ODP links test
  • onCHECK
  • OnetSzukaj
  • OpenX Spider
  • PEERbot
  • PHP/4.2.2
  • PHP version tracker
  • PicSpider
  • PipeLiner
  • polybot
  • PrivacyFinder
  • PROBE!
  • RAMPyBot
  • REBOL View
  • Robotzilla
  • savvybot
  • Scrubby
  • search.updated.com
  • SearchByUsa
  • SearchIt.Bot
  • SemanticScholar
  • silk
  • Skywalker
  • Slurpy Verifier
  • snap.com
  • snipsearch
  • sogou spider
  • sohu-search
  • SurdotlyBot
  • SynooBot
  • Syntryx ANT
  • T-H-U-N-D-E-R-S-T-O-N-E
  • Teoma
  • test
  • Thumbnail.CZ robot
  • thumbshots-de-bot
  • trexmod
  • updated
  • UUNET
  • VDSX.nl
  • WebAlta
  • webcrawl
  • webpros
  • WebRACE
  • WebsiteWorth
  • wectarbot
  • wikiwix
  • Willow Internet Crawler
  • WinkBot
  • Winsey
  • WIRE
  • WorQmada
  • www.IsMySiteUp.Net
  • xirq
  • yacybot
  • Yahoo-MMCrawler
  • Yooda
  • YottaCars
  • YottaShopping
  • YoudaoBot
  • ZeBot
  • zerxbot
  • ZipppBot

Explore other pages too

  • 1Noonbot
  • 80legs
  • 360Spider
  • ABACHOBot
  • abcfr_robot
  • Accoona-AI-Agent
  • AcoonBot
  • ActiveBookmark
  • ADmantX
  • AdsBot-Google
  • Advista AdBot
  • aiHitBot
  • aipbot
  • alef
  • Aleksika
  • Alexa
  • amagit
  • Amazonbot
  • Amfibibot
  • AnswerBus
  • AntBot
  • antibot
  • appie
  • Apple-PubSub
  • Applebot
  • AraBot
  • archive.org_bot
  • Argus
  • Ask Jeeves
  • Asterias
  • atraxbot
  • BacklinkCrawler
  • Baiduspider
  • Barkrowler / BUbiNG
  • BecomeBot
  • Biglotron
  • Bing
  • binlar
  • bitlybot
  • BitNinja
  • bixolabs
  • BlogCorpusCrawler
  • Blogdimension
  • Bloglines (RSS)
  • Bluebot
  • bogospider
  • boitho
  • Bookdog
  • bot/1.0
  • BruinBot
  • Butterfly
  • C4PC
  • CacheBot
  • Caliperbot
  • capek
  • CatchBot
  • CazoodleBot
  • CCBot
  • ccubee
  • cfetch
  • Chanceo
  • Cincraw
  • Cityreview
  • Claritybot
  • Combine
  • cometsystems
  • CompSpyBot
  • Content Crawler
  • ConveraCrawler
  • CorenSearchBot
  • COrpora from the Web
  • Covario
  • Cox Communications
  • CRAZYWEBCRAWLER
  • csci_b659/0.13
  • CydralSpider
  • Cyveillance
  • darxi
  • DataForSEO Link Bot
  • Dazoobot
  • DealGates
  • deepak-USC/ISI
  • del.icio.us
  • DepSpid
  • Deskyobot
  • Diamond
  • Diffbot
  • discobot
  • Discovery Engine
  • Domains Project
  • DotBot
  • DTAAgent
  • Dumbot
  • e-SocietyRobot
  • eApolloBot
  • EasyDL
  • EdisterBot
  • ellerdale
  • EnaBot
  • ePochta_Extractor
  • ETS
  • Exabot
  • Exabot-Images
  • Exabot-Thumbnails
  • facebookexternalhit
  • Factbot
  • Falconsbot
  • FAST-search-engine
  • FAST-WebCrawler
  • FAST Enterprise Crawler
  • FAST MetaWeb Crawler
  • FavOrg
  • FeedBurner
  • FeedFetcher-Google (RSS)
  • Fetch API Request
  • Filangy
  • Findexa
  • findfiles.net
  • findlinks
  • fleck
  • Focal
  • Friend or Winsey
  • FurlBot
  • Gaisbot
  • Generalbot
  • genevabot
  • geniebot
  • Gigabot/1.0
  • Gigamega.bot
  • GingerCrawler
  • Girafabot
  • gold crawler
  • Google-Site-Verification
  • Google-Sitemaps
  • Googlebot
  • Googlebot-Image
  • Googlebot-Mobile
  • Google Desktop
  • Google Favicon
  • GrapeshotCrawler
  • grub
  • grub.org
  • gsa-crawler
  • gURLChecker
  • GurujiBot
  • GUSbot
  • Hailoobot
  • hclsreport
  • Headline
  • Helix
  • HenriLeRobotMirago
  • Heritrix
  • hoge
  • htdig
  • ia_archiver
  • ichiro
  • IGBot
  • Iltrovatore-Setaccio
  • INA dlweb
  • inet library
  • interseek
  • IntranooBot
  • IP 63.247.72.42
  • IP 89.122.57.185
  • IP 217.74.99.100
  • IRLbot
  • istarthere
  • Jakarta Commons-HttpClient
  • Jetbot
  • Jyxobot
  • KiwiStatus
  • knowmore
  • larbin
  • ldspider
  • leak
  • lemurwebcrawler
  • librabot
  • libwww-perl
  • LinguaBot
  • Link Commander
  • linkdex.com
  • Linkman
  • Linkpad
  • Link Valet Online
  • LiteFinder
  • livemark.jp
  • lmspider
  • Lsearch/sondeur
  • LWP (simple & trivial)
  • Mail.Ru
  • Me.dium
  • Mediapartners-Google
  • Megaglobe
  • Megite
  • Metric Tools
  • MJ12bot
  • MLBot
  • MojeekBot
  • MOSBookmarks
  • Mozilla/4.0 (compatible; MSIE 6.0)
  • Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.0;)
  • Mp3Bot
  • MQbot
  • MSMOBOT
  • msnbot
  • MSR-ISRCCrawler
  • MSRBOT
  • mxbot
  • MyFamilyBot
  • Nambu
  • NaverBot
  • NaverRobot
  • neeva
  • Nelian Pty Ltd
  • netsweeper
  • newsg8 (RSS)
  • NEWT ActiveX
  • NG-Search
  • NG/2.0
  • NGBot
  • nicebot
  • Nigma
  • NjuiceBot
  • NOOS
  • Norbert the Spider
  • NoteworthyBot
  • NPBot
  • NuSearch Spider
  • Nutch
  • oBot
  • OmniExplorer
  • onalytica
  • OpenindexSpider
  • OpenISearch
  • OpenTaggerBot
  • OrangeBot-Mobile
  • OutfoxBot
  • ozelot
  • page-store
  • Pagebull
  • page_verifier
  • Paleoweb
  • panopta.com
  • Pathtraq
  • PeerFactor crawler
  • petalbot
  • Pete-Spider
  • pflab
  • Pinboard Dead Link Checker
  • PollettSearch
  • PostFavorites
  • PostRank
  • Powermarks
  • Program Shareware
  • proximic
  • psbot
  • Python-urllib
  • QEAVis
  • QihooBot
  • Qualidator.com Bot
  • quickobot
  • Qwantify
  • RankurBot
  • Rapid-Finder
  • RedBot
  • RixBot
  • Rogerbot
  • RSSMicro
  • RTGI
  • RufusBot
  • Sagool
  • SBIder
  • schibstedsokbot
  • ScoutJet
  • Screaming Frog
  • ScSpider
  • SearchWebLinks
  • Seekbot
  • Semager
  • semetrical
  • SemrushBot
  • Sensis
  • SEOENGBot
  • SEOkicks
  • SEOprofiler bot
  • SETOOZBOT
  • SeznamBot
  • ShablastBot
  • Shelob
  • sherlock
  • Shim-Crawler
  • ShrinkTheWeb
  • ShunixBot
  • SiMilarTech
  • SISTRIX
  • SiteBot
  • Snapbot
  • SnapPreviewBot
  • socbay
  • sogou spider
  • sohu agent
  • Solomono
  • SpeedySpider
  • SpiderLing
  • sproose
  • SpurlBot
  • startmebot
  • statbot
  • StatusCheckBot
  • Steeler
  • SuperBot
  • Susie
  • sygol
  • Synapse
  • SynapticWalker
  • Szukacz
  • TargetYourNews
  • Teemer
  • TerraSpider
  • TFC
  • Theophrastus
  • Thriceler
  • TinEye
  • Toplistbot
  • Tubenowbot
  • TurnitinBot
  • TutorGigBot
  • Tutorial Crawler
  • TweetmemeBot
  • TwengaBot
  • Twiceler
  • Twisted PageGetter
  • Twitterbot
  • Twitturl
  • TygoBot
  • uberbot
  • UnChaosBot
  • Unicorn
  • UptimeAuditor
  • URLBase
  • Valizbot
  • VelenPublicWebCrawler
  • versus crawler
  • Visbot
  • VoilaBot
  • Voluniabot
  • Vortex
  • voyager
  • WASALive
  • wbdbot
  • WebarooBot
  • WebCorp
  • WebFilter
  • WebMeUp
  • WebNL
  • WebSense
  • Winsey or Friend
  • WongBot
  • woriobot
  • Wotbox
  • wume_crawler
  • www.almaden...
  • www.pisoc.com
  • Xenu
  • Xerka
  • XmarksFetch
  • XoviBot
  • Yahoo! Mindset
  • Yahoo! Slurp
  • Yahoo-Test
  • YahooSeeker
  • YahooVideoSearch
  • Yandex
  • Yanga
  • yellowJacket
  • YesupBot
  • Yeti
  • yoono
  • YRSpider
  • Zion
  • ZyBorg
Sometimes with strange requests:
  • curl
  • Pompos
  • shunix (Xun)
  • DataCha0s
  • libwww-perl
  • LWP (simple & trivial)
  • Mozilla/3.0 (compatible; Indy Library)
  • Mozilla/5.0

Detecting a robot

Using its User Agent

Here is a PHP script (the one used by the site stats) that lets you know whether a robot or a search engine is requesting a page:

A script using the User Agent is now online here.
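The core idea can be sketched as follows. This is a guess at the approach, not the actual online script; the function name is mine and the pattern only samples the User Agent list above:

```php
<?php
/* Sketch only: the function name and the pattern are illustrative,
   not this site's actual stats script. */
function is_robot_ua($ua)
{
    /* a few substrings taken from the User Agent list above */
    $pattern = "/googlebot|bingbot|baiduspider|yandex|msnbot"
             . "|crawler|spider|slurp|archiver|httrack|wget/i";
    return preg_match($pattern, (string)$ua) === 1;
}

$UA = getenv("HTTP_USER_AGENT") ?: "";
if (is_robot_ua($UA)) {
    /* count it in the stats, or hand it to the blocking routine */
}
?>
```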

It is more difficult to spot robots that do not identify themselves:

Using its host

A good example is the www.dir.com (search engine) robot, which uses many IP addresses (from 212.27.33.164 to 212.27.33.173 in May 2003, 212.27.41.18 in November 2003). Its activity could be seen on the server-log page, but it is now filtered by the following PHP routine.

if (!$robot)
{
  $robot = strchr(gethostbyaddr($no_ip), ".dir.com");
}
/* if it is the www.dir.com robot, $robot is set to ".dir.com" */

Using its IP address

A robot requesting pages from a few IP addresses can be spotted likewise:

if (!$robot)
{
  $robot = strchr($no_ip, "208.53.138.");
}
/*
if the IP address is in the range 208.53.138.0 - 208.53.138.255,
$robot is set to "208.53.138."
*/

In any case, it will be necessary to maintain a list of the User Agents, hosts and IP addresses noticed behaving strangely.
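Such a hand-maintained list can be kept in plain arrays. A sketch, where the entries are only examples taken from this page and the function name is mine:

```php
<?php
/* Sketch: hand-maintained blacklists; the entries are examples only */
$bad_agents = array("Fetch API Request", "NEWT ActiveX", "DataCha0s");
$bad_hosts  = array(".dir.com", "lehigh");
$bad_ips    = array("208.53.138.", "63.247.72.");   /* prefixes */

function in_blacklist($value, $list)
{
    foreach ($list as $needle) {
        /* case-insensitive substring match */
        if (stripos((string)$value, $needle) !== false) return true;
    }
    return false;
}

/* usage:
   if (in_blacklist(getenv("HTTP_USER_AGENT"), $bad_agents)) { ... } */
?>
```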

Using the request method

It seems that, at the present time (June 2005), only robots and download utilities use a HEAD request (followed by a GET if the page exists or has been modified). Thus $_SERVER["REQUEST_METHOD"] makes it possible to identify a robot hiding behind a browser User Agent. (Read the RSS feed for tests in progress.)

/* this test must come first */
if ($_SERVER["REQUEST_METHOD"] == "HEAD") {$robot = "robot";}
/* if HEAD is used, $robot will not be empty */

All these methods seem to be rather accurate.


Blocking a robot with PHP

When some Apache modules are not available and access to .htaccess files is restricted (my case), or when we want to cut down the size of the .htaccess file and let the server do only what is useful, PHP allows us to redirect or block a robot.

If we want to stop a robot (here Fetch API Request), we just have to begin every page (before any output to the browser) with the following script, so that the webbot is redirected to the page bye.html (or any other page), or sent a 403 Access Denied status.

<?php
$UA = getenv("HTTP_USER_AGENT");
if (stristr($UA, "Fetch API Request") !== false)
{
header("Location: http://mydomain/bye.html");
die(); /* this line can be replaced by the HTML redirection */
}
?>

Since this page is not linked from anywhere, the spidering stops immediately.
The same can be done with an IP address by using getenv("REMOTE_ADDR");.
More sophisticated techniques are listed above.

About two thirds of the robots will follow the redirection if the domain name does not change, almost none if it does.
An HTML redirection will be necessary if we want to redirect all of them, or to let them know where the new page is:

<?php
echo '<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
 "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>Redirection</title>
<meta http-equiv="Refresh" content="0;URL=http://mydomain/bye.html">
</head>
<body>
<p>
Redirection: <a href="http://mydomain/bye.html">http://mydomain/bye.html</a>
</p>
</body>
</html>';
die();
?>

Allowing some robots and blocking others

A function, included and called at the beginning of each page, lets us manage robots.

/*start*/
function redirect_robots()
{
$requested_page=$_SERVER["REQUEST_URI"];
if (preg_match("/([enptux\d]|\b)(ftp|https?|php)(:\/\/|%3A%2F%2F)/i",$requested_page))
   {die();} /*blocks the majority of zombies*/

When we are unlucky enough to be visited by zombies, or when we are using a CMS, the best option is to block all such requests.

if ($_SERVER["REQUEST_METHOD"]=="HEAD") return;

Why block this type of request? The harm is already done: link checkers pointing at our site (Xenu, Powermarks, Link Commander, HTTrack, IRLbot...) and search engines (Speedy Spider, sygol...) will get a positive answer; if they come back with a GET or POST request, their case will be handled later.
At this point, we can store the IP address in a MySQL table to block any return of the utility or webbot.
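A sketch of that storage, separate from the function above; the table name `banned_ips` and the PDO connection are assumptions, not this site's actual schema:

```php
<?php
/* Sketch only: table name and schema are illustrative assumptions */
function ban_ip(PDO $db, $ip)
{
    $stmt = $db->prepare(
        "INSERT INTO banned_ips (ip, first_seen) VALUES (?, ?)");
    $stmt->execute(array($ip, date("Y-m-d H:i:s")));
}

function is_banned(PDO $db, $ip)
{
    $stmt = $db->prepare("SELECT 1 FROM banned_ips WHERE ip = ?");
    $stmt->execute(array($ip));
    return $stmt->fetchColumn() !== false;
}

/* usage: if (is_banned($db, getenv("REMOTE_ADDR"))) die(); */
?>
```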

$UA=getenv("HTTP_USER_AGENT");
if (preg_match("/Googlebot|Yahoo|VoilaBot|Ask Jeeves|SpeedySpider/i",$UA)) return;

No problem for the robots we accept: those that identify themselves and are named in the regular expression above. The host can be checked to see whether it matches the User Agent.
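One way to check the host is the classic reverse-then-forward DNS lookup; the helper names below are mine, while gethostbyaddr and gethostbynamel are standard PHP:

```php
<?php
/* Sketch of a reverse + forward DNS check; function names are mine */
function name_has_suffix($host, $suffix)
{
    return substr($host, -strlen($suffix)) === $suffix;
}

function host_matches_ua($ip, $suffix)  /* e.g. ".googlebot.com" */
{
    $host = gethostbyaddr($ip);
    if ($host === false || !name_has_suffix($host, $suffix)) return false;
    /* forward-confirm: the name must resolve back to the same IP,
       otherwise the reverse record itself may be forged */
    $ips = gethostbynamel($host);
    return $ips !== false && in_array($ip, $ips);
}
?>
```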

/*
Including bot in the expression will block aipbot, antibot, boitho, OmniExplorer...
As for this site, up to 408 robots!
*/
if (preg_match("/[^e]crawler|spider|bot|custo |web(cow|moni|capture)|wysigot|httrack|wget|xenu/i",$UA))
{
header("Location:http://mydomain/bye.html");die();
/*another option is to send a 403 Access Denied status message
handled by Apache .htaccess
header("Status: 403 Forbidden");die();*/
}

Even if I am not convinced of the need to block those that do not overdo it, all those matched by the regular expression will be redirected.
Warning: many utilities, such as Wysigot, leave their name in the User Agent even when they are not active.

$no_ip=getenv("REMOTE_ADDR");
$host=gethostbyaddr($no_ip);
if (preg_match("/(becquerel|66-132|64-225)\.noos\.(net|fr)/i",$host) && (strchr($UA,"MSIE 4.01"))
 {
 header("Location:http://mydomain/bye.html");die();
 }
if (preg_match("/exabot|lehigh/i",$host))
 {
 header("Location:http://mydomain/bye.html");die();
 }

We can test the host to ban a few badly-behaved robots, or to block page reads coming from a search engine. Is it really useful?

//$no_ip=getenv("REMOTE_ADDR");
if (preg_match("/63\.247\.72\.42|208\.53\.138\.1/",$no_ip))die();

We can ban an IP address or a group of IP addresses, or fetch the addresses to ban from a MySQL database...

return;
}
/*end*/

Now, those who are still here can browse.
We can optimize the code and add a few rules on the referrer or on the number of pages requested (stored with MySQL)... The code will be easy to update or modify, but how many errors will creep in?


A few ideas...

Since indexing activity should not be blocked (even if no one can stop a web-spider user from declaring a robot identifier), this site checks whether a human being is viewing a page with two bot traps on the French home page (and only one on the English home page):
They consist of links without any text, so that no one can see them.
- The first is in an allowed folder. Any access to the file lets me update the list above.
- The second is in a folder marked as forbidden to robots in the robots.txt file (Disallow: /interdit/). Even though indexing robots do not always respect the rules, a hit on this page almost certainly comes from a web copier.
As the site is rarely copied, and even though few users follow robots.txt rules, these two traps do not trigger any action.
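In the page source, such trap links might look like this; the file names are purely illustrative:

```html
<!-- sketch of the two trap links; the paths are illustrative only -->
<a href="/allowed/trap.html"></a>    <!-- no link text: invisible to humans, logged when hit -->
<a href="/interdit/trap.html"></a>   <!-- folder disallowed in robots.txt -->
```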
If some people find the site interesting enough to mirror, they can archive it. But I could stop them: with a script from the sites mentioned above, with the methods following the detection script, with an anti-mirroring PHP script, by limiting the number of pages per session or per IP address (robots usually follow the same route), or by slowing them down: counting the number of pages each visitor or robot requests per second and allowing less than one page per second is a problem for web spiders and for people who do not read.
Using the IP address for this works only if the visitor's provider assigns a unique IP address, which is not the case with AOL and many big companies.
Changing provider is one option: some providers filter web spiders themselves (just as www.free.fr sometimes does!).
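The per-IP throttle described above can be sketched in PHP. The directory, the time window and the limit here are arbitrary choices, not this site's actual values:

```php
<?php
/* Sketch of a per-IP throttle: directory, window and limit are
   arbitrary choices, not this site's actual configuration. */
function too_fast($ip, $max_per_minute = 30, $dir = "/tmp/hits")
{
    if (!is_dir($dir)) { mkdir($dir, 0700, true); }
    $file = $dir . "/" . md5($ip);
    $now  = time();
    $hits = is_file($file) ? file($file, FILE_IGNORE_NEW_LINES) : array();
    /* keep only the requests of the last 60 seconds */
    $hits = array_filter($hits, function ($t) use ($now) {
        return $now - (int)$t < 60;
    });
    $hits[] = $now;
    file_put_contents($file, implode("\n", $hits));
    return count($hits) > $max_per_minute;
}

/* usage: if (too_fast(getenv("REMOTE_ADDR"))) die(); */
?>
```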

Therefore, preventing or stopping website mirroring is difficult or risky.

If you prefer offline browsing, you can download the static part of the site (compressed archives: exe ~597k or bz2 ~631k, December 2005 / use the site map).
