#
# robots.txt for https://www.lagis-hessen.de/
#
# Please note: There are a lot of pages on this site, and there are
# some misbehaved spiders out there that go _way_ too fast. If you're
# irresponsible, your access to the site may be blocked.
#

Sitemap: https://lagis-hessen.de/sitemap.xml

User-agent: *
Disallow: /lagis1/
Disallow: /img/ga/zoomify/
Disallow: /img/hkw/zoomify/
Disallow: /img/statl/zoomify/
Disallow: /img/hkw/dzi/
Disallow: /de/mapmaker/
Disallow: /en/mapmaker/
# Actual requests go through index.php
Disallow: /index.php/de/mapmaker/
Disallow: /index.php/en/mapmaker/
# Temporarily remove all search results from indexing
Disallow: /de/subjects/gsearch
Disallow: /en/subjects/gsearch
Disallow: /de/subjects/xsearch
Disallow: /en/subjects/xsearch
Disallow: /de/subjects/xsmap
Disallow: /en/subjects/xsmap
# Xsrec might be indexed already and ranked
#   87048 GET /de/subjects/xsrec
#  248055 GET /en/subjects/xsrec

# Google ignores Crawl-delay, see https://developers.google.com/search/docs/crawling-indexing/robots/robots_txt?hl=de
# Crawl-delay: 10

# 2025-02-25 start
User-agent: ImagesiftBot
Disallow: /
# 2025-02-25 end

# 2025-02-18 start
User-agent: BLEXBot
Disallow: /

User-agent: AwarioBot
Disallow: /

User-agent: TurnitinBot
Disallow: /

User-agent: seokicks.de
Disallow: /
# 2025-02-18 end

# 2025-01-21 start
User-agent: meta-externalagent
Disallow: /
# 2025-01-21 end

# 2024-05-21 start
User-agent: ClaudeBot
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: OAI-SearchBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: PetalBot
Disallow: /

User-agent: DataForSeoBot
Disallow: /

User-agent: SemrushBot
Disallow: /

User-agent: Amazonbot
Disallow: /

User-agent: DotBot
Disallow: /

User-agent: Twitterbot
Disallow: /

User-agent: SeznamBot
Disallow: /

User-agent: Applebot
Disallow: /

User-agent: Pinterestbot
Disallow: /

User-agent: MojeekBot
Disallow: /
# 2024-05-21 end

User-agent: AhrefsBot
Disallow: /

User-agent: Bytespider
Disallow: /

# http://mj12bot.com/
User-agent: MJ12bot
Disallow: /

# advertising-related bots:
User-agent: Mediapartners-Google*
Disallow: /

# Wikipedia work bots:
User-agent: IsraBot
Disallow: /

# German Wikipedia Broken Weblinks Bot (2014/04/30)
User-agent: German Wikipedia Broken Weblinks Bot; contact: gifti@tools.wmflabs.org
Disallow: /

User-agent: Orthogaffe
Disallow: /

# Crawlers that are kind enough to obey, but which we'd rather not have
# unless they're feeding search engines.
User-agent: UbiCrawler
Disallow: /

User-agent: DOC
Disallow: /

User-agent: Zao
Disallow: /

# Some bots are known to be trouble, particularly those designed to copy
# entire sites. Please obey robots.txt.
User-agent: sitecheck.internetseer.com
Disallow: /

User-agent: Zealbot
Disallow: /

User-agent: MSIECrawler
Disallow: /

User-agent: SiteSnagger
Disallow: /

User-agent: WebStripper
Disallow: /

User-agent: WebCopier
Disallow: /

User-agent: Fetch
Disallow: /

User-agent: Offline Explorer
Disallow: /

User-agent: Teleport
Disallow: /

User-agent: TeleportPro
Disallow: /

User-agent: WebZIP
Disallow: /

User-agent: linko
Disallow: /

User-agent: HTTrack
Disallow: /

User-agent: Microsoft.URL.Control
Disallow: /

User-agent: Xenu
Disallow: /

User-agent: larbin
Disallow: /

User-agent: libwww
Disallow: /

User-agent: ZyBORG
Disallow: /

User-agent: Download Ninja
Disallow: /

#
# Sorry, wget in its recursive mode is a frequent problem.
# Please read the man page and use it properly; there is a
# --wait option you can use to set the delay between hits,
# for instance.
#
User-agent: wget
Disallow: /

#
# The 'grub' distributed client has been *very* poorly behaved.
#
User-agent: grub-client
Disallow: /

#
# Doesn't follow robots.txt anyway, but...
#
User-agent: k2spider
Disallow: /

#
# Hits many times per second, not acceptable
# http://www.nameprotect.com/botinfo.html
User-agent: NPBot
Disallow: /

# A capture bot, downloads gazillions of pages with no public benefit
# http://www.webreaper.net/
User-agent: WebReaper
Disallow: /