
Conversation

@GabrielDrapor GabrielDrapor (Contributor) commented Aug 20, 2025

PR Type

Enhancement


Description

  • Enhanced manifest generation script with improved validation

  • Added schema-compliant installation entry validation

  • Updated workflow name from testing to production

  • Improved code formatting and error handling


Diagram Walkthrough

flowchart LR
  A["Repository URL"] --> B["Generate Initial Manifest"]
  B --> C["Validate Installation Entries"]
  C --> D["Schema Compliance Check"]
  D --> E["Save Manifest JSON"]
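The flow above can be sketched in Python. Everything here (the `fetch` callable, the simplified validator, `fake_fetch`) is a hypothetical stand-in for illustration, not the actual script's API:

```python
import json

def validate_installation_entry(install_type, entry):
    # Simplified stand-in for the script's validator: the "type" key must
    # match the installation key, and a non-empty command must be present.
    return entry.get("type") == install_type and bool(entry.get("command"))

def generate_manifest(repo_url, fetch):
    """Fetch metadata, filter installation entries, return the manifest."""
    try:
        manifest = fetch(repo_url)  # stands in for the POST to the registry API
    except (KeyError, IndexError) as e:
        print(f"Unexpected API response format: {e}")
        return None
    installations = manifest.get("installations", {})
    cleaned = {t: e for t, e in installations.items() if validate_installation_entry(t, e)}
    if cleaned:  # fall back to the original entries if nothing validates
        manifest["installations"] = cleaned
    return manifest

def fake_fetch(url):
    # Hypothetical canned API response used only for this demo
    return {
        "name": "demo",
        "installations": {
            "npm": {"type": "npm", "command": "npx", "args": ["-y", "demo"]},
            "bad": {"type": "npm", "command": "npx", "args": []},
        },
    }

result = generate_manifest("https://github.com/owner/repo", fake_fetch)
print(json.dumps(result["installations"], indent=2))
```

The `"bad"` entry is dropped because its `type` does not match its installation key, mirroring the filtering step in the diagram.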

File Walkthrough

Relevant files
Enhancement
get_manifest.py
Enhanced manifest validation with schema compliance           

scripts/get_manifest.py

  • Added validate_installation_entry() function for schema validation
  • Enhanced validate_installations() with detailed schema examples
  • Improved error handling and validation logic
  • Updated code formatting to use double quotes consistently
+149/-69
Configuration changes
generate-manifest.yml
Updated workflow name for production                                         

.github/workflows/generate-manifest.yml

  • Updated workflow name from "(TESTING) Generate MCP Manifest" to
    "Generate MCP Manifest"
+1/-1     

Summary by CodeRabbit

  • Bug Fixes

    • Improved manifest generation resilience to unexpected API responses.
    • Validates and filters installation entries to ensure correct command formats per install type, preventing invalid instructions from appearing.
  • Chores

    • Renamed the CI workflow for generating the MCP manifest for clarity.

@coderabbitai coderabbitai bot commented Aug 20, 2025

Walkthrough

Renamed a GitHub Actions workflow. Enhanced scripts/get_manifest.py by adding a validator for installation entries, integrating it into installation filtering, and broadening error handling in generate_manifest for API responses. Minor formatting adjustments were applied without changing overall execution semantics.

Changes

Cohort / File(s) Summary
Workflow rename
.github/workflows/generate-manifest.yml
Renamed workflow from “(TESTING) Generate MCP Manifest” to “Generate MCP Manifest”; no functional changes to triggers, jobs, or steps.
Manifest validation and resilience
scripts/get_manifest.py
Added validate_installation_entry(); updated validate_installations() to filter API installations using the validator; expanded generate_manifest() exception handling (KeyError/IndexError); minor formatting/whitespace tweaks.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant U as Caller
    participant GM as generate_manifest()
    participant API as Registry API
    participant VI as validate_installations()
    participant VE as validate_installation_entry()

    U->>GM: generate_manifest(repo_url)
    GM->>API: POST fetch manifest metadata
    API-->>GM: Response (installations, etc.)
    alt Response OK
        GM->>VI: validate_installations(manifest, repo_url)
        loop For each installation entry
            VI->>VE: validate_installation_entry(type, entry)
            VE-->>VI: valid? (true/false)
        end
        VI-->>GM: cleaned installations (or original if none valid)
        GM-->>U: Manifest (validated)
    else KeyError/IndexError/HTTP errors
        GM-->>U: Handle error (fallback/log)
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested labels

Review effort 2/5

Poem

I thump my paws, a tidy feat,
Install lines trimmed, now crisp and neat.
The manifest hums, withstands a bump,
When APIs wobble, we don’t jump.
A workflow name now clean and bright—
Carrot-coded, set to byte. 🥕🐇


@qodo-merge-pro
Contributor

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Possible Issue

The JSON extraction regex in extract_json_from_content only matches fenced blocks labeled exactly json with a bare \n before the closing fence, which may miss valid variations (e.g., CRLF line endings, a fence without a language tag, or extra spaces). This could cause failure to parse otherwise valid responses.

json_match = re.search(r"```json\n(.*?)\n```", content, re.DOTALL)
if json_match:
    try:
        return json.loads(json_match.group(1))
    except json.JSONDecodeError as e:
        print(f"Error parsing JSON: {e}")
        return None
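A quick demonstration of this failure mode, using a relaxed pattern along the lines suggested (the exact pattern a fix would use may differ):

```python
import json
import re

# A typical LLM reply that uses CRLF line endings around the fenced block
content = 'Here you go:\r\n```json\r\n{"name": "demo"}\r\n```\r\n'

# Strict pattern from the script: requires a bare "\n" on both sides of the fence
strict = re.search(r"```json\n(.*?)\n```", content, re.DOTALL)

# Relaxed pattern: optional language tag, optional whitespace, LF or CRLF
relaxed = re.search(r"```(?:json)?\s*\r?\n(.*?)\r?\n```", content, re.DOTALL | re.IGNORECASE)

print(strict)                        # None: CRLF defeats the strict pattern
print(json.loads(relaxed.group(1)))  # the payload parses with the relaxed one
```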
Schema Assumption

validate_installation_entry enforces strict npm args to start with ["-y", ...] and requires args[0] == "run" for docker. These hard-coded rules may reject valid registry entries (e.g., npx without -y, docker run options order, additional flags). Consider more flexible validation aligned to the official schema rather than specific patterns.

def validate_installation_entry(install_type: str, entry: dict) -> bool:
    """Validate a single installation entry against MCP registry schema."""
    # Required fields for all installation types
    required_fields = {"type", "command", "args"}

    # Check all required fields exist
    if not all(field in entry for field in required_fields):
        return False

    # Type must match install_type
    if entry.get("type") != install_type:
        return False

    # Type-specific validation based on MCP registry patterns
    if install_type == "npm":
        # npm type should use npx command with -y flag
        if entry.get("command") != "npx":
            return False
        args = entry.get("args", [])
        if not args or args[0] != "-y":
            return False
    elif install_type == "uvx":
        # uvx type should use uvx command
        if entry.get("command") != "uvx":
            return False
    elif install_type == "docker":
        # docker type should use docker command
        if entry.get("command") != "docker":
            return False
        args = entry.get("args", [])
        if not args or args[0] != "run":
            return False

    return True
Robustness

Accessing data["choices"][0]["message"]["content"] assumes OpenAI-compatible schema. If the backend returns tool calls or different formats, this will KeyError. A safer extraction with .get checks and type validation would prevent crashes.

data = response.json()
content = data["choices"][0]["message"]["content"]

return extract_json_from_content(content)
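A defensive extraction helper along those lines (a sketch, not the script's actual code) would return None on any structural surprise, including an empty choices list:

```python
def extract_content(data):
    """Pull choices[0].message.content out of an OpenAI-style response dict,
    returning None on any structural surprise instead of raising."""
    choices = data.get("choices")
    if not isinstance(choices, list) or not choices:
        return None
    first = choices[0]
    message = first.get("message") if isinstance(first, dict) else None
    content = message.get("content") if isinstance(message, dict) else None
    return content if isinstance(content, str) else None

print(extract_content({"choices": [{"message": {"content": "hi"}}]}))  # hi
print(extract_content({"choices": []}))                                # None
print(extract_content({}))                                             # None
```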

@qodo-merge-pro
Contributor

CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: codex

Failed stage: Run Codex [❌]

Failure summary:

The workflow exited with code 1 due to insufficient permissions for the GitHub App/bot qodo-merge-pro[bot] on the repository.

  • The script fetched the bot's permission via gh api "/repos/${GITHUB_REPOSITORY}/collaborators/qodo-merge-pro[bot]/permission" and parsed .permission.
  • It then checked: if PERMISSION is not admin and not write, exit 1.
  • That condition was met, so the script ran exit 1, causing the job to fail.

Relevant error logs:
1:  ##[group]Runner Image Provisioner
2:  Hosted Compute Agent
...

116:  ##[endgroup]
117:  ##[group]Run set -euo pipefail
118:  set -euo pipefail
119:
120:  PERMISSION=$(gh api \
121:    "/repos/${GITHUB_REPOSITORY}/collaborators/qodo-merge-pro[bot]/permission" \
122:    | jq -r '.permission')
123:
124:  if [[ "$PERMISSION" != "admin" && "$PERMISSION" != "write" ]]; then
125:    exit 1
126:  fi
127:  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
128:  env:
129:  GH_TOKEN: ***
130:  ##[endgroup]
131:  ##[error]Process completed with exit code 1.
132:  Post job cleanup.

@qodo-merge-pro
Contributor

PR Code Suggestions ✨

Explore these optional code suggestions:

Category | Suggestion | Impact
General
Relax strict npx -y requirement

Avoid universally enforcing the "-y" flag for all npx installs, as many README
instructions omit it. Permit args starting with "-y" optionally, while still
ensuring a valid package argument is present.

scripts/get_manifest.py [104-110]

 if install_type == "npm":
-    # npm type should use npx command with -y flag
     if entry.get("command") != "npx":
         return False
     args = entry.get("args", [])
-    if not args or args[0] != "-y":
+    if not isinstance(args, list) or len(args) == 0:
+        return False
+    # Optional "-y" as first arg, package should be next or first if no "-y"
+    pkg_index = 1 if args and args[0] == "-y" else 0
+    if pkg_index >= len(args) or not isinstance(args[pkg_index], str) or not args[pkg_index]:
         return False
Suggestion importance[1-10]: 8


Why: This is a valuable suggestion that correctly identifies that the new validation logic is overly strict by requiring the -y flag for npx, which could lead to incorrectly rejecting valid installation methods. The proposed change makes the validation more robust and practical.

Medium
Improve validation error handling

Narrow the catch to expected request/JSON errors and keep the original broad
fallback, so network issues don't mask programming errors during validation. Log
the exception type to aid debugging.

scripts/get_manifest.py [248-250]

+except (requests.RequestException, json.JSONDecodeError) as e:
+    print(f"Validation request/parse error: {type(e).__name__}: {e}")
+    return manifest
 except Exception as e:
-    print(f"Error validating installations: {e}")
+    print(f"Unexpected error during validation: {type(e).__name__}: {e}")
     return manifest
Suggestion importance[1-10]: 6


Why: The suggestion correctly proposes replacing a broad except Exception with more specific exception handling for network and parsing errors, which improves code clarity and aids debugging.

Low
Possible issue
Harden API response parsing

Guard against missing or differently structured API fields to avoid
KeyError/IndexError crashes. Use safe dict access with defaults and validate
that content is a string before passing to the JSON extractor.

scripts/get_manifest.py [78]

-content = data["choices"][0]["message"]["content"]
+content = (
+    data.get("choices", [{}])[0]
+    .get("message", {})
+    .get("content")
+)
+if not isinstance(content, str):
+    print("Unexpected API response: missing 'content' string")
+    return None
Suggestion importance[1-10]: 4


Why: The suggestion to use safe dictionary access is valid, but the existing code already handles KeyError and IndexError in a try...except block, and the proposed improved_code has a bug that would cause an IndexError for an empty choices list.

Low

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (4)
.github/workflows/generate-manifest.yml (2)

38-41: Install jsonschema to enable schema validation in CI

Add jsonschema to dependencies so we can validate generated manifests against the registry schema in the same job.

Apply this diff:

       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip
-          pip install requests
+          pip install requests jsonschema

43-49: Add a schema validation step after manifest generation

Catching schema issues early in CI avoids opening PRs with invalid manifests.

Apply this diff to add a validation step:

       - name: Generate manifest
         env:
           ANYON_API_KEY: ${{ secrets.ANYON_API_KEY }}
         run: |
           REPO_URL="${{ github.event.inputs.repo_url || github.event.client_payload.repo_url }}"
           python scripts/get_manifest.py "$REPO_URL"
 
+      - name: Validate generated manifests against schema
+        run: |
+          python scripts/validate_manifest.py

scripts/get_manifest.py (2)

21-28: Make JSON code-block extraction more robust (CRLF, optional language, trailing whitespace)

The current regex requires exact newlines before and after the block. Loosen it to handle CRLF and optional language tags; improves resilience to typical LLM outputs.

Apply this diff:

-    json_match = re.search(r"```json\n(.*?)\n```", content, re.DOTALL)
+    # Accept ```json fences (case-insensitive), optional whitespace, and both LF/CRLF line endings
+    codeblock_pattern = r"```(?:json)?\s*\r?\n(.*?)\r?\n```"
+    json_match = re.search(codeblock_pattern, content, re.DOTALL | re.IGNORECASE)

231-243: Log why entries were rejected to aid debugging

When dropping entries, include minimal reason (missing field, wrong command, etc.). This will reduce churn when API outputs are slightly off.

For example:

-                else:
-                    print(f"⚠ Removing invalid {install_type} installation entry")
+                else:
+                    print(f"⚠ Removing invalid {install_type} installation entry (failed schema validation)")

Optionally extend validate_installation_entry to return (bool, reason) for richer messages.
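That optional (bool, reason) extension could look like the sketch below. Only the shared-field and npm checks are shown; the names mirror the script but the body is illustrative, not its actual code:

```python
def validate_installation_entry(install_type, entry):
    """Return (ok, reason) instead of a bare bool so callers can log
    exactly why an entry was dropped."""
    for field in ("type", "command", "args"):
        if field not in entry:
            return False, f"missing required field {field!r}"
    if entry["type"] != install_type:
        return False, f"type {entry['type']!r} does not match {install_type!r}"
    if install_type == "npm" and entry["command"] != "npx":
        return False, "npm entries must use the npx command"
    return True, ""

valid, reason = validate_installation_entry(
    "npm", {"type": "npm", "command": "node", "args": []}
)
if not valid:
    print(f"⚠ Removing invalid npm installation entry: {reason}")
```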

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between e19dfff and 2481a58.

📒 Files selected for processing (2)
  • .github/workflows/generate-manifest.yml (1 hunks)
  • scripts/get_manifest.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

Always format Python code with ruff.

Files:

  • scripts/get_manifest.py
🧬 Code Graph Analysis (1)
scripts/get_manifest.py (3)
scripts/utils.py (1)
  • validate_arguments_in_installation (61-153)
scripts/validate_manifest.py (2)
  • main (64-97)
  • validate_manifest (32-48)
src/mcpm/commands/install.py (1)
  • install (123-448)
🔇 Additional comments (5)
.github/workflows/generate-manifest.yml (1)

1-1: Workflow rename to production name looks good

Clearer name. No functional changes introduced.

scripts/get_manifest.py (4)

40-50: Repo name parsing handles HTTPS/SSH forms — LGTM

Covers https://github.com/owner/repo(.git) and git@github.com:owner/repo(.git) correctly.


67-70: Verify the Chat Completions endpoint accepts “content” as an array of parts

You’re sending content as [{"type":"text","text": "..."}] to a chat.completions-compatible endpoint, but later you expect a plain string in choices[0].message.content. Some OpenAI-compatible endpoints only accept content as a string in chat.completions. If the API rejects this, revert to a plain string.

If needed, switch to string content with this change:

-                "content": [{"type": "text", "text": f"help me generate manifest json for this repo: {repo_url}"}],
+                "content": f"help me generate manifest json for this repo: {repo_url}",

265-267: Explicit UTF-8 encoding on write — LGTM

Prevents mojibake for non-ASCII manifests.


276-304: Remember to run ruff format per repo guidelines

Per coding guidelines, Python code should be formatted with ruff. No blockers spotted, but please run ruff format to keep CI happy.

Comment on lines 74 to +81
response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()

data = response.json()
content = data["choices"][0]["message"]["content"]

return extract_json_from_content(content)


🛠️ Refactor suggestion

Add request timeout and handle JSON decoding errors

Without a timeout, the action can hang indefinitely. Also guard against invalid JSON bodies.

Apply this diff:

-        response = requests.post(url, headers=headers, json=payload)
+        response = requests.post(url, headers=headers, json=payload, timeout=30)
         response.raise_for_status()
 
-        data = response.json()
+        data = response.json()
         content = data["choices"][0]["message"]["content"]

And expand error handling:

-    except requests.RequestException as e:
+    except requests.RequestException as e:
         print(f"API request failed: {e}")
         return None
-    except (KeyError, IndexError) as e:
+    except (KeyError, IndexError, ValueError, json.JSONDecodeError) as e:
         print(f"Unexpected API response format: {e}")
         return None

Comment on lines +90 to +124
def validate_installation_entry(install_type: str, entry: dict) -> bool:
"""Validate a single installation entry against MCP registry schema."""
# Required fields for all installation types
required_fields = {"type", "command", "args"}

# Check all required fields exist
if not all(field in entry for field in required_fields):
return False

# Type must match install_type
if entry.get("type") != install_type:
return False

# Type-specific validation based on MCP registry patterns
if install_type == "npm":
# npm type should use npx command with -y flag
if entry.get("command") != "npx":
return False
args = entry.get("args", [])
if not args or args[0] != "-y":
return False
elif install_type == "uvx":
# uvx type should use uvx command
if entry.get("command") != "uvx":
return False
elif install_type == "docker":
# docker type should use docker command
if entry.get("command") != "docker":
return False
args = entry.get("args", [])
if not args or args[0] != "run":
return False

return True


⚠️ Potential issue

Validator incorrectly rejects HTTP installs and over-constrains types; add type-specific checks

Currently requires command/args for all types and only recognizes npm/uvx/docker. This will drop valid HTTP entries (which use url/headers) and can admit incomplete entries (e.g., npm with only ["-y"]). Make checks per type and ensure minimum args are present.

Apply this diff:

-def validate_installation_entry(install_type: str, entry: dict) -> bool:
-    """Validate a single installation entry against MCP registry schema."""
-    # Required fields for all installation types
-    required_fields = {"type", "command", "args"}
-
-    # Check all required fields exist
-    if not all(field in entry for field in required_fields):
-        return False
-
-    # Type must match install_type
-    if entry.get("type") != install_type:
-        return False
-
-    # Type-specific validation based on MCP registry patterns
-    if install_type == "npm":
-        # npm type should use npx command with -y flag
-        if entry.get("command") != "npx":
-            return False
-        args = entry.get("args", [])
-        if not args or args[0] != "-y":
-            return False
-    elif install_type == "uvx":
-        # uvx type should use uvx command
-        if entry.get("command") != "uvx":
-            return False
-    elif install_type == "docker":
-        # docker type should use docker command
-        if entry.get("command") != "docker":
-            return False
-        args = entry.get("args", [])
-        if not args or args[0] != "run":
-            return False
-
-    return True
+def validate_installation_entry(install_type: str, entry: dict) -> bool:
+    """Validate a single installation entry against MCP registry schema."""
+    if not isinstance(entry, dict):
+        return False
+
+    # Entry type must match key
+    if entry.get("type") != install_type:
+        return False
+
+    # Type-specific validation
+    if install_type == "npm":
+        # Must be: command=npx, args=["-y", "<package>", ...]
+        if entry.get("command") != "npx":
+            return False
+        args = entry.get("args") or []
+        if len(args) < 2 or args[0] != "-y" or not isinstance(args[1], str) or not args[1]:
+            return False
+        return True
+
+    if install_type == "uvx":
+        # Must be: command=uvx, args=[...at least one token...]
+        if entry.get("command") != "uvx":
+            return False
+        args = entry.get("args") or []
+        if len(args) < 1:
+            return False
+        return True
+
+    if install_type == "docker":
+        # Must be: command=docker, args=["run", ...]
+        if entry.get("command") != "docker":
+            return False
+        args = entry.get("args") or []
+        if not args or args[0] != "run" or len(args) < 2:
+            # require at least an image or a flag after "run"
+            return False
+        return True
+
+    if install_type == "http":
+        # HTTP uses URL/headers instead of command/args
+        url = entry.get("url")
+        if not isinstance(url, str) or not url.strip():
+            return False
+        headers = entry.get("headers")
+        if headers is not None and not isinstance(headers, dict):
+            return False
+        return True
+
+    # Unknown types: reject
+    return False
🤖 Prompt for AI Agents
In scripts/get_manifest.py around lines 90 to 124, the validator currently
enforces "command" and "args" for every install entry and only recognizes
npm/uvx/docker, which wrongly rejects HTTP installs and under/over-constrains
other types; change validation to branch by install_type and enforce
type-specific required fields: for "http" require "url" (and optional "headers")
and do not demand command/args; for "npm" require command == "npx" and args
length >= 2 with first arg "-y" and at least one package arg after the flags;
for "uvx" require command == "uvx" and no args constraint beyond presence if
needed; for "docker" require command == "docker" and args length >= 1 with first
arg "run"; also return False for unknown install_type instead of attempting
generic checks. Ensure presence checks use explicit required sets per type
rather than global required_fields.

Comment on lines 221 to 229
print("Validating installations against README...")
response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()

data = response.json()
content = data["choices"][0]["message"]["content"]

validated_data = extract_json_from_content(content)
if validated_data and "installations" in validated_data:

🛠️ Refactor suggestion

Add timeout and JSON error handling to validation request

Same rationale as generate_manifest — avoid indefinite hangs and handle invalid JSON.

Apply this diff:

-        response = requests.post(url, headers=headers, json=payload)
+        response = requests.post(url, headers=headers, json=payload, timeout=60)
         response.raise_for_status()
 
-        data = response.json()
+        data = response.json()
         content = data["choices"][0]["message"]["content"]

And broaden exception handling here too:

-    except Exception as e:
-        print(f"Error validating installations: {e}")
+    except (requests.RequestException, ValueError, json.JSONDecodeError, KeyError, IndexError) as e:
+        print(f"Error validating installations: {e}")
         return manifest
🤖 Prompt for AI Agents
In scripts/get_manifest.py around lines 221 to 229, the POST request that
validates installations should include a timeout and robust error handling: add
a timeout argument to requests.post to avoid indefinite hangs, wrap the response
parsing in a try/except that catches requests.RequestException and JSON decoding
errors (ValueError/JSONDecodeError) and handle them gracefully (log/raise a
clear error), and broaden the existing exception handling around
extract_json_from_content so invalid or missing JSON in content is caught and
handled rather than letting the script crash.

@GabrielDrapor GabrielDrapor merged commit cabec4c into main Aug 20, 2025
8 of 9 checks passed
@GabrielDrapor GabrielDrapor deleted the Jiarui/smart-registry-workflow branch August 20, 2025 07:01
@mcpm-semantic-release

🎉 This PR is included in version 2.7.1 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀

