diff --git a/docs/content/1.getting-started/3.troubleshooting.md b/docs/content/1.getting-started/3.troubleshooting.md
index 825962a4..a1a4835a 100644
--- a/docs/content/1.getting-started/3.troubleshooting.md
+++ b/docs/content/1.getting-started/3.troubleshooting.md
@@ -11,13 +11,43 @@ navigation:
 
 The best tool for debugging is the Nuxt DevTools integration with Nuxt Robots.
 
-This will show you the current robot rules and your robots.txt file.
+**How to Access:**
+
+1. Install [Nuxt DevTools](https://devtools.nuxt.com/) if it isn't already installed (it's enabled by default in new Nuxt projects)
+2. Run your dev server with `npm run dev` or `pnpm dev`
+3. Open your site in the browser
+4. Look for the floating Nuxt icon at the bottom of the page
+5. Click the icon (or press `Shift + Alt + D`) to open Nuxt DevTools
+6. Navigate to the "Robots" tab in the left sidebar
+
+**What You'll See:**
+
+The DevTools panel shows you:
+- Current robot rules applied to your site
+- The generated `robots.txt` file content
+- Active configuration from your `nuxt.config.ts`
+- Route-specific rules and their sources
+
+This makes it easy to verify that your robots configuration is working as expected without having to visit `/robots.txt` directly.
 
 ### Debug Config
 
-You can enable the [debug](/docs/robots/api/config#debug) option which will give you more granular output.
+You can enable the [debug](/docs/robots/api/config#debug) option, which will give you more granular output in your server console.
+
+```ts [nuxt.config.ts]
+export default defineNuxtConfig({
+  robots: {
+    debug: true
+  }
+})
+```
+
+This is enabled by default in development mode and will log detailed information about:
+- Which rules are being applied
+- How the `robots.txt` file is being generated
+- Any parsing or configuration issues
 
-This is enabled by default in development mode.
+The debug output appears in the terminal where you're running the dev server, not in the browser console.
 
 ## Submitting an Issue
 
diff --git a/docs/content/2.guides/1.disable-indexing.md b/docs/content/2.guides/1.disable-indexing.md
index bbae4d91..b574d0b0 100644
--- a/docs/content/2.guides/1.disable-indexing.md
+++ b/docs/content/2.guides/1.disable-indexing.md
@@ -1,6 +1,8 @@
 ---
 title: Disabling Site Indexing
 description: Learn how to disable indexing for different environments and conditions to avoid crawling issues.
+navigation:
+  title: "Disabling Site Indexing"
 ---
 
 ## Introduction
@@ -55,3 +57,16 @@ A robots meta tag should also be generated that looks like:
 ```
 
 For full confidence you can inspect the URL within Google Search Console to see if it's being indexed.
+
+## Troubleshooting
+
+If indexing is not being disabled as expected:
+
+1. **Check your environment variable** - Make sure `NUXT_SITE_ENV` is set correctly in your `.env` file or deployment environment
+2. **Verify the configuration** - Check that `site.indexable` is set to `false` in your `nuxt.config.ts` (see the sketch after this list)
+3. **Clear your cache** - Cached responses can show stale data; try clearing your browser cache and rebuilding your app
+4. **Check the robots.txt file** - Visit `/robots.txt` on your site to see the actual output
+5. **Inspect the meta tags** - View the page source and look for the robots meta tag in the `<head>` section
+6. **Use Nuxt DevTools** - Open the Robots tab in Nuxt DevTools to see the active configuration (see the [Troubleshooting](/docs/robots/getting-started/troubleshooting) guide)
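+
+For reference, a minimal sketch of what that configuration can look like, assuming you disable indexing through the shared `site` config (adapt it to your setup):
+
+```ts [nuxt.config.ts]
+export default defineNuxtConfig({
+  site: {
+    // Mark the site as non-indexable; the generated robots.txt and
+    // robots meta tag should then both block indexing.
+    indexable: false
+  }
+})
+```
+
+If you rely on `NUXT_SITE_ENV` instead, set it to a non-production value (for example `staging`) in the environment where indexing should stay disabled.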
+
+If you're still having issues after trying these steps, please create an issue on the [GitHub repository](https://github.com/nuxt-modules/robots) with details about your setup.
diff --git a/docs/content/2.guides/1.robots-txt.md b/docs/content/2.guides/1.robots-txt.md
index 369be002..a5843259 100644
--- a/docs/content/2.guides/1.robots-txt.md
+++ b/docs/content/2.guides/1.robots-txt.md
@@ -17,7 +17,24 @@ If you need programmatic control, you can configure the module using [nuxt.confi
 
 ## Creating a `robots.txt` file
 
-You can place your file in any location; the easiest is to use: `/public/_robots.txt`.
+You can place your file in any location. The easiest and recommended location is `/public/_robots.txt`.
+
+**Note:** The file is named `_robots.txt` (with an underscore prefix) to prevent conflicts with the auto-generated `robots.txt`. The module will automatically merge it with the generated rules.
+
+**Quick Start:**
+
+1. Create a file named `_robots.txt` in your `public` folder
+2. Add your `robots.txt` rules to this file
+3. The module will automatically detect it and merge it with the generated `robots.txt`
+
+**Example `public/_robots.txt`:**
+
+```txt [public/_robots.txt]
+User-agent: *
+Allow: /
+
+Sitemap: https://example.com/sitemap.xml
+```
 
 Additionally, the following paths are supported by default:
 
@@ -96,4 +113,11 @@ Both directives are parsed identically and output as `Content-Usage` in the gene
 
 To ensure other modules can integrate with your generated robots file, you must not have a `robots.txt` file in your `public` folder.
 
-If you do, it will be moved to `/public/_robots.txt` and merged with the generated file.
+**Important:** Always use `_robots.txt` (with an underscore) instead of `robots.txt` in your `public` folder.
+
+If you accidentally create a `public/robots.txt` file, the module will automatically:
+1. Move it to `/public/_robots.txt`
+2. Merge it with the generated file
+3. Log a warning in your console
+
+This ensures your custom rules are preserved while allowing the module to function correctly.
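+
+If you'd rather not keep a file in `public` at all, the same rules can usually be expressed directly in your Nuxt config. A rough sketch of the example file above, with option names assumed from the module's config reference (verify them against your installed version):
+
+```ts [nuxt.config.ts]
+export default defineNuxtConfig({
+  robots: {
+    // Assumed option names: `groups` and `sitemap` mirror the
+    // `public/_robots.txt` example shown earlier in this guide.
+    groups: [
+      {
+        userAgent: ['*'],
+        allow: ['/']
+      }
+    ],
+    sitemap: ['https://example.com/sitemap.xml']
+  }
+})
+```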