15 bots, AI crawler blocking, built-in path tester.
User-agent: *
Allow: /
Disallow: /admin
Disallow: /api
Disallow: /dashboard

Sitemap: https://example.com/sitemap.xml
A robots.txt file tells search engine crawlers what they can and can't fetch from your site. This generator builds a valid robots.txt from a form, with 15 common bots as presets (Googlebot, Bingbot, GPTBot, ClaudeBot for AI training, etc.) and 7 common disallow patterns. Then it tests your rules: paste a path, see whether Googlebot would fetch it, with the matched rule explained.
Most online generators give you a form and a download. We add a tester that implements robots.txt's longest-match-wins semantics per Google's official spec, so you can verify your rules before deploying.
Most generators only emit a file. We let you verify it does what you intended before pushing to prod.
GPTBot, ClaudeBot, and Google-Extended are first-class presets, which matters in 2026 when AI training scraping is widespread.
SEOptimer locks everything beyond its default preset behind a $29/mo plan. We ship the full feature set free.
Your domain map and disallow paths are competitive intel. We never see them.
Yes. Unlimited use, no signup, no daily cap. Generation and validation run entirely in your browser. SEOptimer's free generator caps you at one preset and shows ads; we ship 15 bots and 7 disallow presets free.
robots.txt is a text file at the root of your domain that tells search engine crawlers which paths they should and shouldn't fetch. It doesn't enforce security (anyone can ignore it) but well-behaved bots like Googlebot respect it.
15 common bots: all crawlers (*), Googlebot (and its sub-bots: Image, News), Bingbot, Yahoo Slurp, DuckDuckBot, Baiduspider, YandexBot, GPTBot (OpenAI), ClaudeBot (Anthropic), Google-Extended (AI training), CCBot (Common Crawl), AhrefsBot, SemrushBot. Block AI crawlers if you don't want your content used for training.
It implements robots.txt's longest-match-wins rule per Google's official spec. If you have 'Allow: /admin/public' and 'Disallow: /admin', then /admin/public/file is allowed (longer pattern wins). The tester walks the same logic.
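For intuition, here is a minimal sketch of that rule in TypeScript. It is illustrative code, not the shipped implementation: it ignores the spec's * wildcards and $ anchors, and the Rule type and isAllowed function are names invented for this example.

// Minimal longest-match-wins sketch, ignoring * and $ from the full spec.
type Rule = { type: "allow" | "disallow"; path: string };

function isAllowed(path: string, rules: Rule[]): boolean {
  let best: Rule | null = null;
  for (const rule of rules) {
    if (!path.startsWith(rule.path)) continue; // rule must prefix-match the path
    if (best === null || rule.path.length > best.path.length) {
      best = rule; // longer pattern wins
    } else if (rule.path.length === best.path.length && rule.type === "allow") {
      best = rule; // on a tie, the least restrictive rule (Allow) wins
    }
  }
  return best === null || best.type === "allow"; // no matching rule means allowed
}

// /admin/public/file matches both rules; Allow wins because its pattern is longer.
const rules: Rule[] = [
  { type: "disallow", path: "/admin" },
  { type: "allow", path: "/admin/public" },
];
console.log(isAllowed("/admin/public/file", rules)); // true
console.log(isAllowed("/admin/secret", rules));      // false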
Disallow blocks paths; Allow explicitly permits them (used to whitelist sub-paths inside a disallowed parent). Most files only need Disallow rules. Allow is for the 'block /admin, but allow /admin/public-page' pattern.
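In robots.txt terms, that pattern looks like this:

User-agent: *
Disallow: /admin
Allow: /admin/public-page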
Probably not. Googlebot ignores Crawl-delay entirely (Google manages its own crawl rate). Bingbot, Yandex, Baidu accept it. Set 10s only if your server is being hammered. >30s slows indexing on bots that honor it.
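If you do set it, scope it to a bot that honors the directive, for example:

User-agent: Bingbot
Crawl-delay: 10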
Yes. Adding 'Sitemap: https://your-domain.com/sitemap.xml' helps search engines discover all your URLs. The URL must be absolute (full https://...). You can list multiple sitemaps if you split by section.
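For example, with multiple sitemaps split by section (the filenames here are placeholders):

Sitemap: https://your-domain.com/sitemap-pages.xml
Sitemap: https://your-domain.com/sitemap-blog.xml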
Yes. Add User-agent rules for GPTBot, ClaudeBot, Google-Extended, CCBot with Disallow: / to opt your content out of AI training corpora. Note that not all AI scrapers respect robots.txt.
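The full opt-out block looks like this:

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /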
At the root of your domain: https://your-domain.com/robots.txt (not in a subfolder). Crawlers only check the root. If your site is on www.example.com, robots.txt goes at https://www.example.com/robots.txt.
No. The generator and tester run in your browser. Your domain, paths, and rules stay on your laptop. The downloaded robots.txt is yours alone; we don't see what you configure.
15 bots, 7 disallow presets, built-in tester. Free and unlimited.
Open the robots.txt generator

The Robots.txt Generator page is built, reviewed, and maintained by the Molixa team. We use the tool we ship and update the docs when the behavior changes.