diff --git a/src/content/docs/bots/additional-configurations/managed-robots-txt.mdx b/src/content/docs/bots/additional-configurations/managed-robots-txt.mdx
index 2adf03c61773eff..9970eb9e2d909b4 100644
--- a/src/content/docs/bots/additional-configurations/managed-robots-txt.mdx
+++ b/src/content/docs/bots/additional-configurations/managed-robots-txt.mdx
@@ -10,8 +10,6 @@ import { Render, Tabs, TabItem, Steps } from "~/components";
 
 Protect your website or application from AI crawlers by implementing a `robots.txt` file on your domain to direct AI bot operators on what content they can and cannot scrape for AI model training.
 
-Cloudflare's managed `robots.txt` explicitly disallows known bots engaged in scraping for AI purposes.
-
 AI bots are expected to follow the `robots.txt` directives.
 
 :::note
@@ -39,6 +37,18 @@ Sitemap: https://www.crawlstop.com/sitemap.xml
 
 With the managed `robots.txt` enabled, Cloudflare will prepend our managed content before your original content, resulting in what you can view at https://crawlstop.com/robots.txt.
 
+**Robots.txt example**
+
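
For context on the prepending behavior the diff describes, here is a minimal sketch of what a combined `robots.txt` might look like once the managed content is placed ahead of the site's original directives. The user agents shown (`GPTBot`, `CCBot`) are illustrative AI crawlers, not a statement of Cloudflare's actual managed list; the sitemap line reuses the example URL from the diff.

```txt
# Managed directives prepended by Cloudflare (illustrative user agents only)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Original robots.txt content from the origin follows
User-agent: *
Allow: /

Sitemap: https://www.crawlstop.com/sitemap.xml
```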