Conversation
- Replace Flask with FastAPI + uvicorn[standard] in __init__.py, __main__.py, requirements.txt, pyproject.toml, setup.py
- Introduce TranslateEngineWrapper for clean dependency injection; remove global TX/DETECT state
- start_translate_server() now returns an (app, engine) tuple instead of blocking; port/host args moved to the uvicorn.run() call site
- CORSMiddleware always applied with allow_origins=["*"], unconditionally
- Fix build_tests.yml: replace the broken bespoke inline workflow with the gh-automations reusable build-tests.yml@dev
- Fix license_tests.yml, publish_stable.yml, release_workflow.yml: migrate from TigreGotico/gh-automations@master to OpenVoiceOS/gh-automations@dev, add a bot-safety guard, add a workflow_dispatch trigger
- Add lint.yml and pip_audit.yml workflows using gh-automations@dev
- Add pyproject.toml (dynamic version from version.py)
- Add/update docs/index.md, QUICK_FACTS.md, FAQ.md, AUDIT.md, SUGGESTIONS.md, MAINTENANCE_REPORT.md

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds LibreTranslate, DeepL, Google Translate v2, Azure Translator v3, and Amazon Translate-compatible endpoints so any 3rd-party app can use OVOS translation plugins as a drop-in replacement.

- New routers/: libretranslate, deepl, google_translate, azure_translator, amazon_translate — all via the make_X_router(engine) factory pattern
- All schemas use Pydantic Field() constraints with Literal types
- Lang code normalisation: DeepL inbound EN-US→en-us, outbound→EN-US
- Azure Translator note: the /translate, /detect, /languages paths conflict with LibreTranslate — must be registered in an isolated app or added after the libretranslate router (first-registered wins)
- pyproject.toml: optional [lang-names] extra for langcodes
- test/unittests/test_compat_routers.py: 22 tests, all passing

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Each compat router now mounts under a prefix that identifies the API vendor:

- LibreTranslate: /libretranslate/...
- DeepL: /deepl/v2/...
- Google Translate v2: /google/language/...
- Azure Translator v3: /azure/...
- Amazon Translate: /amazon/translate/...

Benefits: all routers coexist in one app with no path conflicts (previously LibreTranslate and Azure Translator conflicted on /translate, /detect, /languages), and Swagger groups endpoints clearly by vendor.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
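The prefix scheme above reduces to simple path joining; in the real app these prefixes would be passed to `app.include_router(...)`, but the conflict-avoidance argument can be shown with plain strings (the helper name is illustrative):

```python
# Vendor prefixes from the commit message; joining a route onto a prefix is
# all that "mounting under a prefix" means at the path level.
PREFIXES = {
    "libretranslate": "/libretranslate",
    "deepl": "/deepl/v2",
    "google": "/google/language",
    "azure": "/azure",
    "amazon": "/amazon/translate",
}


def mounted_path(vendor, route):
    # e.g. ("deepl", "/translate") -> "/deepl/v2/translate"
    return PREFIXES[vendor].rstrip("/") + "/" + route.lstrip("/")
```

Since every vendor prefix is distinct, two routers can both define `/translate` without colliding once mounted.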
📝 Walkthrough

The pull request performs a major architectural migration of ovos-translate-server from Flask to FastAPI, consolidates CI workflows into reusable external workflows, introduces multiple API-compatible routers for different translation providers, and adds comprehensive project documentation.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant FastAPI as FastAPI App
    participant Wrapper as TranslateEngineWrapper
    participant TxPlugin as Translation Plugin
    participant DetectPlugin as Detection Plugin
    Client->>FastAPI: GET /translate/{src}/{tgt}/{text}
    FastAPI->>Wrapper: Lookup router & call translate
    Wrapper->>TxPlugin: Invoke translate(text, src, tgt)
    TxPlugin-->>Wrapper: Return translated_text
    Wrapper-->>FastAPI: Return translation result
    FastAPI-->>Client: 200 with JSON response
    Client->>FastAPI: GET /detect/{text}
    FastAPI->>Wrapper: Call detect endpoint
    alt detect_plugin available
        Wrapper->>DetectPlugin: Invoke detect(text)
        DetectPlugin-->>Wrapper: Return language
    else fallback to tx plugin
        Wrapper->>TxPlugin: Invoke tx.detect(text)
        TxPlugin-->>Wrapper: Return language
    end
    Wrapper-->>FastAPI: Return detected language
    FastAPI-->>Client: 200 with detection data
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
The automated pipeline has reached its destination. 🏁 I've aggregated the results of the automated checks for this PR below.

🔍 Lint
The automated results are now available for your perusal. 📂
❌ ruff: issues found — see job log

🔒 Security (pip-audit)
Scanning for any potential SQL injection points. 💉
✅ No known vulnerabilities found (75 packages scanned).

⚖️ License Check
Auditing the legal lineage of this contribution. 📜
✅ No license violations found (56 packages). License distribution: 13× MIT, 13× MIT License, 6× Apache Software License, 6× BSD-3-Clause, 5× Apache-2.0, 2× ISC License (ISCL), 2× PSF-2.0, 2× Python Software Foundation License, +7 more

Full breakdown — 56 packages
Policy: Apache 2.0 (universal donor). StrongCopyleft / NetworkCopyleft / WeakCopyleft / Other / Error categories fail. MPL allowed.

🔨 Build Tests
Checking the blueprint against the actual construction. 📐
❌ 3.10: build failed — see job log
Actionable comments posted: 16
🧹 Nitpick comments (8)
QUICK_FACTS.md (1)
9-9: Avoid hard-coding the version in docs. Line 9 will drift on the next release. Prefer generating this value from ovos_translate_server/version.py (or templating during docs build).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@QUICK_FACTS.md` at line 9: the file currently hard-codes the version string; instead, read the package version from ovos_translate_server/version.py (e.g. the __version__ symbol) or inject it at docs build time via a template variable; update QUICK_FACTS.md to use a placeholder (or Jinja/Sphinx substitution) and modify the docs build step to import ovos_translate_server.version.__version__ and render that value into the placeholder so the docs always reflect the package version.

ovos_translate_server/routers/amazon_translate.py (1)
87-95: Move import langcodes outside the loop. The langcodes import is executed on every iteration. Move it before the loop or to module level for better performance, especially with large language lists.

♻️ Proposed fix

```diff
 def list_languages(
     authorization: Optional[str] = Header(default=None),
 ) -> AmazonListLanguagesResponse:
     ...
     languages = []
+    try:
+        import langcodes
+    except ImportError:
+        langcodes = None
     for code in engine.langs:
-        try:
-            import langcodes
-            name = langcodes.Language.get(code).display_name()
-        except Exception:
+        if langcodes is not None:
+            try:
+                name = langcodes.Language.get(code).display_name()
+            except Exception:
+                name = code
+        else:
             name = code
         languages.append(AmazonLanguage(LanguageCode=code, LanguageName=name))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ovos_translate_server/routers/amazon_translate.py` around lines 87-95: the loop in which AmazonLanguage objects are created repeatedly imports langcodes, causing unnecessary overhead; move the import of langcodes out of the loop (preferably to module level or at the top of the function that builds the list) so that engine.langs is iterated without re-importing; update the block where languages are constructed (the code that appends AmazonLanguage(LanguageCode=code, LanguageName=name) and returns AmazonListLanguagesResponse) to use the single imported langcodes instance.

ovos_translate_server/routers/azure_translator.py (2)
158-169: Move import langcodes outside the loop. Same issue as in amazon_translate.py - the import is executed on every iteration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ovos_translate_server/routers/azure_translator.py` around lines 158 - 169, The loop in the Azure languages builder repeatedly imports langcodes on each iteration which is inefficient; move the import langcodes statement out of the for loop and perform it once before iterating over engine.langs, then use langcodes.Language.get(code) inside the loop to populate AzureLanguageInfo entries and return AzureLanguagesResponse(translation=translation); keep the existing try/except around the Language.get/display_name calls to preserve fallback behavior.
94-110: Redundant or None assignment. Line 98's source = from_lang or None is redundant since from_lang is already Optional[str] with default=None. If from_lang were falsy (empty string), you'd want to treat it as None, but the Query already enforces min_length=1, so an empty string won't pass validation.

♻️ Simplified assignment

```diff
- source = from_lang or None
+ source = from_lang
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ovos_translate_server/routers/azure_translator.py` around lines 94-110: the assignment source = from_lang or None is redundant; replace it by using from_lang directly (e.g., remove the source variable or set source = from_lang) and pass that into engine.tx.translate (translate(item.Text, target=tgt, source=from_lang)) so you don't coerce a valid Optional[str] to None unnecessarily; keep the existing detected language handling (_detect_lang, AzureTranslation, AzureTranslateItem) unchanged.

test/unittests/test_compat_routers.py (1)
18-46: Mutable class attribute defaults in test fakes. available_languages (line 19) and langs (line 41) use mutable list defaults. While harmless in this test context since they're never mutated, annotating with ClassVar or using a property would silence the Ruff RUF012 warning and prevent accidental mutation.

♻️ Optional fix using ClassVar

```diff
-from typing import Dict, List, Optional
+from typing import ClassVar, Dict, List, Optional

 class FakeTx:
-    available_languages: List[str] = ["en", "de", "fr", "es"]
+    available_languages: ClassVar[List[str]] = ["en", "de", "fr", "es"]
     ...

 class FakeEngine:
-    plugin_name: str = "fake-translate"
-    langs: List[str] = ["en", "de", "fr", "es"]
+    plugin_name: ClassVar[str] = "fake-translate"
+    langs: ClassVar[List[str]] = ["en", "de", "fr", "es"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unittests/test_compat_routers.py` around lines 18-46: the test fakes use mutable list defaults which trigger RUF012; change the class-level mutable lists to non-mutable class-vars or instance attributes; specifically annotate FakeTx.available_languages and FakeEngine.langs as ClassVar[List[str]] (or move them into __init__ as self.available_languages/self.langs) so they are not mutable class defaults and the linter warning is silenced while preserving current behavior.

ovos_translate_server/routers/google_translate.py (1)
109-114: Redundant try-except block. _detect_one(text) (lines 74-86) already catches all exceptions internally and returns a default GoogleDetection(language="und", confidence=0.0). The outer try-except here is unnecessary.

♻️ Simplified code

```diff
 detected_src = None
 if not request.source:
-    try:
-        det = _detect_one(text)
-        detected_src = det.language
-    except Exception:
-        pass
+    det = _detect_one(text)
+    detected_src = det.language
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ovos_translate_server/routers/google_translate.py` around lines 109-114: the outer try-except around _detect_one(text) is redundant because _detect_one already handles exceptions and returns a default GoogleDetection; remove the try/except and directly call det = _detect_one(text) and set detected_src = det.language when request.source is falsy (preserve the existing conditional). Update the block containing request.source, _detect_one, and detected_src accordingly (references: _detect_one and detected_src).

ovos_translate_server/routers/libretranslate.py (2)
100-108: Move import langcodes outside the loop. Same issue as other routers - import inside loop is inefficient.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ovos_translate_server/routers/libretranslate.py` around lines 100 - 108, The import of langcodes is done inside the loop (in the block that iterates over engine.langs) which is inefficient; move the import statement out of the loop (e.g., at the top of the function that builds result or module top) so langcodes is imported once, then use langcodes.Language.get(code).display_name() inside the loop to construct LibreLanguage(code=code, name=name) for each code; update the function that contains engine.langs and the LibreLanguage construction to reference the pre-imported langcodes.
71-91: Missing error handling in detect endpoint. Unlike other routers (DeepL, Azure, Google), which wrap detection in try-except with fallback, this endpoint will propagate exceptions directly to the client if detect_probs fails. Consider adding a try-except block for consistency and resilience.

♻️ Proposed fix with error handling

```diff
 @router.post("/detect", response_model=List[LibreDetectEntry])
 def detect(request: LibreDetectRequest) -> List[LibreDetectEntry]:
     ...
-    if engine.detect is not None:
-        probs = engine.detect.detect_probs(request.q)
-    else:
-        probs = engine.tx.detect_probs(request.q)
+    try:
+        if engine.detect is not None:
+            probs = engine.detect.detect_probs(request.q)
+        else:
+            probs = engine.tx.detect_probs(request.q)
+    except Exception:
+        return []
     results = [
         LibreDetectEntry(language=lang, confidence=float(conf))
         for lang, conf in probs.items()
     ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ovos_translate_server/routers/libretranslate.py` around lines 71-91: the detect endpoint currently calls engine.detect.detect_probs or engine.tx.detect_probs without error handling; wrap the detection calls inside a try-except in the detect function so exceptions from detect_probs are caught, log or handle the exception, and attempt the fallback detection (e.g., call engine.tx.detect_probs if the primary call fails) before returning the LibreDetectEntry list; ensure you reference the existing symbols (detect function, engine.detect, engine.tx, detect_probs, LibreDetectEntry) when adding the try-except and fallback so behavior matches other routers' resilience.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/license_tests.yml:
- Around line 13-15: The reusable workflow invocation currently only passes
system_deps: 'swig' and omits the project-specific license policy; update the
license_tests.yml call to forward the same exclusions/overrides used by
test/license_tests.py by adding the reusable-workflow inputs for
exclude_packages and exclude_licenses and the custom overrides for kthread→MIT,
pyxdg→GPL-2.0, ptyprocess→ISC and psutil→BSD3 (and any strict validation flag
used in test/license_tests.py) so the external workflow enforces the project
policy rather than its defaults; make sure to reference the existing system_deps
input and the package names pyxdg, kthread, ptyprocess, psutil when adding these
inputs.
In @.github/workflows/pip_audit.yml:
- Around line 11-12: Replace the mutable reusable-workflow reference that uses
"@dev" with an immutable commit SHA: locate the line containing "uses:
OpenVoiceOS/gh-automations/.github/workflows/pip-audit.yml@dev" and update the
suffix to a specific commit SHA (e.g., "@<commit-sha>"); do the same for the
other affected workflow files that reference OpenVoiceOS/gh-automations (search
for "uses: OpenVoiceOS/gh-automations/.github/workflows/...@dev") so all
reusable workflow links are pinned to explicit commit SHAs.
In @.github/workflows/publish_stable.yml:
- Around line 10-16: The workflow is importing a reusable workflow via the
moving ref "uses:
OpenVoiceOS/gh-automations/.github/workflows/publish-stable.yml@dev", which
allows behavior to change unexpectedly; replace the branch ref (`@dev`) with an
immutable full commit SHA for that reusable workflow reference so
secrets/publishing behavior is pinned (keep the rest of the inputs—branch,
version_file, publish_pypi, sync_dev—unchanged).
In @.github/workflows/release_workflow.yml:
- Around line 12-22: The workflow currently references the reusable publishing
workflow via a mutable branch ref
("OpenVoiceOS/gh-automations/.github/workflows/publish-alpha.yml@dev"), which
risks changing behavior and exposing inherited secrets; update that "uses" value
to pin to an immutable full commit SHA of the target repo (replace the "@dev"
ref with "@<full-commit-sha>") and do the same for the equivalent reference in
publish_stable.yml (the "publish-stable.yml@dev" entry), ensuring both
publish_alpha and publish_stable workflows point to exact commit SHAs to lock
release behavior and minimize blast radius of inherited secrets.
In `@AUDIT.md`:
- Line 13: Update the AUDIT.md entry that currently claims the test/ directory
is empty: replace the `[MAJOR] tests: No unit tests found` line with a note
acknowledging the added router compatibility tests
(test/unittests/test_compat_routers.py) and reframe the finding to identify the
remaining gap—lack of plugin-loading and configuration coverage (e.g.,
mock-based unit tests for ovos_translate_server plugin loading and config
handling). Ensure the new bullet references the existing test file
(test/unittests/test_compat_routers.py) and clearly states that targeted
plugin-loading/config tests are needed.
In `@docs/index.md`:
- Around line 171-177: TranslateEngineWrapper currently instantiates plugin
classes with config={} which drops per-plugin settings; update the instantiation
in TranslateEngineWrapper (ovos_translate_server.__init__.py) to load and pass
the proper config for each plugin (e.g. cfg = Configuration().get("language",
{}); PluginClass(config=cfg.get(plugin_name, {})) or equivalent) instead of an
empty dict so credential-based backends (DeepL, Azure, Amazon) receive their API
keys/endpoints.
- Around line 96-99: The fenced code examples that currently start with the raw
lines like `GET /detect/o meu nome é Casimiro` (and the other similar
request/response examples in the same docs section) lack a language tag and
trigger MD040; update each fenced block in docs/index.md that contains those
HTTP request/response examples (e.g., the blocks around the GET /detect examples
and the other pairs noted in the comment) to include a language identifier such
as ```http or ```console (or ```text) so the markdownlint warning is resolved
and the examples render with proper syntax highlighting.
- Around line 41-49: Update the docs to reflect the real default and fallback:
change the `--detect-engine` default from `ovos-lang-detector-classics-plugin`
to `None` (or "not set") and document that when unset the server handlers call
engine.tx.detect() / engine.tx.detect_probs() as the fallback behavior;
reference the flag name `--detect-engine`, the module
`ovos_translate_server.__main__.py`, and the handler behavior
`engine.tx.detect()` / `detect_probs()` so readers know the actual runtime
behavior.
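The config-loading fix asked for in the first docs/index.md item above boils down to a nested lookup. A minimal sketch, assuming the `"language"` → plugin-name nesting the comment describes (the helper name is hypothetical):

```python
# Pull a plugin's own section out of the "language" config block instead of
# passing config={} to every plugin class.
def plugin_config(global_cfg, plugin_name):
    return global_cfg.get("language", {}).get(plugin_name, {})
```

In the real code this would be called with the ovos-config `Configuration()` mapping, e.g. `PluginClass(config=plugin_config(Configuration(), plugin_name))`, so credential-based backends receive their API keys.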
In `@FAQ.md`:
- Around line 11-22: Update the repo-root command examples in FAQ.md so they are
copy-pasteable from the project root: replace the development install example
shown in the pip snippet (currently referencing "pip install -e
ovos-translate-server/") with the repo-root form "pip install -e ." and update
the test command (currently "pytest ovos-translate-server/test/...") to "pytest
test/..." (leave any task-runner prefix like "uv" in place if intended). Locate
the two snippets in the FAQ.md file (the pip install block and the pytest block)
and change only the path arguments to the repo-root equivalents.
In `@MAINTENANCE_REPORT.md`:
- Around line 18-21: Update the AI Transparency Report entry so it accurately
reflects that unit tests were added: mention the new test file
test_compat_routers.py instead of saying "no tests added", and replace the
blanket statement with a brief note of the remaining coverage gap (e.g., plugin
mocking/integration tests still required). Edit the section under "Actions
Taken"/"Oversight" to state "Unit tests added: test_compat_routers.py;
remaining: plugin mocking/integration tests and expanded coverage" and ensure
the Oversight line requests targeted human review for those missing tests.
In `@ovos_translate_server/__init__.py`:
- Around line 90-96: The CORS setup uses app.add_middleware with CORSMiddleware
allowing allow_origins=["*"] while allow_credentials=True, which is insecure and
incompatible with credentialed responses; update the middleware configuration in
the app.add_middleware call to either disable credentials (set
allow_credentials=False) if you don't need cookies/authenticated requests, or
replace the wildcard origin with an explicit list of trusted origins (e.g.,
["http://localhost:3000", "https://your-app.com"]) while keeping
allow_credentials=True so credentialed requests only succeed for those specific
origins.
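The two valid CORS configurations described above can be captured in a small decision helper. This is an illustration of the policy, not the actual fix (which edits the `app.add_middleware` call directly):

```python
# Wildcard origins and credentials must not be combined: either serve any
# origin without credentials, or serve credentialed requests to an explicit
# allow-list of trusted origins.
def cors_settings(trusted_origins=None):
    if trusted_origins:
        return {"allow_origins": list(trusted_origins), "allow_credentials": True}
    return {"allow_origins": ["*"], "allow_credentials": False}
```

The returned dict mirrors the keyword arguments that Starlette's CORSMiddleware accepts, so it could be splatted into the middleware call.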
In `@ovos_translate_server/__main__.py`:
- Line 35: Change the CLI default host from "0.0.0.0" to "127.0.0.1" in the
argument parser so the server binds to localhost by default; locate the
parser.add_argument call (the "--host" argument) in
ovos_translate_server/__main__.py and update its default value to "127.0.0.1",
leaving the help text and behavior otherwise unchanged so deployments can still
pass --host 0.0.0.0 explicitly.
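The suggested default-host change is a one-line argparse edit; a sketch of the relevant parser setup (the port value here is a placeholder, not the project's real default):

```python
import argparse

# Bind to localhost by default; operators opt in to external exposure.
parser = argparse.ArgumentParser()
parser.add_argument("--host", default="127.0.0.1",
                    help="interface to bind; pass 0.0.0.0 to expose externally")
parser.add_argument("--port", type=int, default=8080)

args = parser.parse_args([])  # no CLI args -> defaults apply
```

Passing `--host 0.0.0.0` explicitly still works for containerized or LAN deployments.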
In `@pyproject.toml`:
- Around line 2-10: The pyproject.toml declares build-system.requires =
["setuptools>=61.0", "wheel"] while using the simple license field license =
"Apache-2.0", which requires setuptools >= 77.0.0; fix by either raising the
setuptools floor to "setuptools>=77.0.0" in the build-system.requires entry or
keep the current setuptools version and change the license to the table form
license = { text = "Apache-2.0" } so metadata validation succeeds (edit the
build-system.requires or the license field in pyproject.toml accordingly).
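Either resolution from the comment above is a small pyproject.toml change; both fragments below are sketches of the two options:

```toml
# Option 1: raise the setuptools floor so the simple license string is valid
[build-system]
requires = ["setuptools>=77.0.0", "wheel"]

# Option 2: keep the current setuptools floor and use the table form instead
# [project]
# license = { text = "Apache-2.0" }
```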
In `@QUICK_FACTS.md`:
- Around line 24-25: The "### Scripts" heading in QUICK_FACTS.md causes a
heading level jump (MD001); change its level to increment by one relative to the
previous heading (e.g., replace "### Scripts" with "## Scripts" or adjust
surrounding headings so the sequence is consistent) so the heading hierarchy is
contiguous.
In `@requirements.txt`:
- Around line 1-3: The build fails because setup.py calls
required("requirements.txt") to populate install_requires (remove that call) and
rely on pyproject.toml for dependencies instead; edit setup.py to remove the
install_requires=required("requirements.txt") (or the required(...) helper
usage) so setup.py no longer reads requirements.txt, and optionally add a
MANIFEST.in with "include requirements.txt" only if you keep reading the file;
ensure pyproject.toml remains the single source of declared dependencies
(fastapi, uvicorn[standard], ovos-plugin-manager).
In `@SUGGESTIONS.md`:
- Line 23: Fix the typo in the suggestion title: replace "Add `GET /translate`
POST variant for long texts" with a correct title such as "Add `POST /translate`
endpoint for long texts" (or "Add `POST /translate`") so the heading matches the
described change; update the markdown heading text in SUGGESTIONS.md
accordingly.
---
Nitpick comments:
In `@ovos_translate_server/routers/amazon_translate.py`:
- Around line 87-95: The loop in which AmazonLanguage objects are created
repeatedly imports langcodes, causing unnecessary overhead; move the import of
langcodes out of the loop (preferably to module level or at the top of the
function that builds the list) so that engine.langs is iterated without
re-importing; update the block where languages are constructed (the code that
appends AmazonLanguage(LanguageCode=code, LanguageName=name) and returns
AmazonListLanguagesResponse) to use the single imported langcodes instance.
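The "import once, fall back to the bare code" pattern recommended across these routers can be shown as a runnable sketch. langcodes is an optional extra, so the import is guarded; when it is missing, every code falls back to itself (the helper name is illustrative):

```python
# Import langcodes a single time at module level; per-code failures and a
# missing library both fall back to returning the raw language code.
try:
    import langcodes  # optional [lang-names] extra
except ImportError:
    langcodes = None


def display_names(codes):
    names = []
    for code in codes:
        name = code  # fallback: the bare language code
        if langcodes is not None:
            try:
                name = langcodes.Language.get(code).display_name()
            except Exception:
                pass
        names.append(name)
    return names
```

This keeps the hot loop free of import machinery while preserving the existing fallback behavior.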
In `@ovos_translate_server/routers/azure_translator.py`:
- Around line 158-169: The loop in the Azure languages builder repeatedly
imports langcodes on each iteration which is inefficient; move the import
langcodes statement out of the for loop and perform it once before iterating
over engine.langs, then use langcodes.Language.get(code) inside the loop to
populate AzureLanguageInfo entries and return
AzureLanguagesResponse(translation=translation); keep the existing try/except
around the Language.get/display_name calls to preserve fallback behavior.
- Around line 94-110: The assignment source = from_lang or None is redundant;
replace it by using from_lang directly (e.g., remove the source variable or set
source = from_lang) and pass that into engine.tx.translate (translate(item.Text,
target=tgt, source=from_lang)) so you don't coerce a valid Optional[str] to None
unnecessarily; keep the existing detected language handling (_detect_lang,
AzureTranslation, AzureTranslateItem) unchanged.
In `@ovos_translate_server/routers/google_translate.py`:
- Around line 109-114: The outer try-except around _detect_one(text) is
redundant because _detect_one already handles exceptions and returns a default
GoogleDetection; remove the try/except and directly call det = _detect_one(text)
and set detected_src = det.language when request.source is falsy (preserve the
existing conditional). Update the block containing request.source, _detect_one,
and detected_src accordingly (references: _detect_one and detected_src).
In `@ovos_translate_server/routers/libretranslate.py`:
- Around line 100-108: The import of langcodes is done inside the loop (in the
block that iterates over engine.langs) which is inefficient; move the import
statement out of the loop (e.g., at the top of the function that builds result
or module top) so langcodes is imported once, then use
langcodes.Language.get(code).display_name() inside the loop to construct
LibreLanguage(code=code, name=name) for each code; update the function that
contains engine.langs and the LibreLanguage construction to reference the
pre-imported langcodes.
- Around line 71-91: The detect endpoint currently calls
engine.detect.detect_probs or engine.tx.detect_probs without error handling;
wrap the detection calls inside a try-except in the detect function so
exceptions from detect_probs are caught, log or handle the exception, and
attempt the fallback detection (e.g., call engine.tx.detect_probs if the primary
call fails) before returning the LibreDetectEntry list; ensure you reference the
existing symbols (detect function, engine.detect, engine.tx, detect_probs,
LibreDetectEntry) when adding the try-except and fallback so behavior matches
other routers' resilience.
In `@QUICK_FACTS.md`:
- Line 9: The README currently hard-codes the version string in QUICK_FACTS.md;
instead, read the package version from ovos_translate_server/version.py (e.g.
the __version__ symbol) or inject it at docs build time via a template variable;
update QUICK_FACTS.md to use a placeholder (or Jinja/Sphinx substitution) and
modify the docs build step to import ovos_translate_server.version.__version__
and render that value into the placeholder so the docs always reflect the
package version.
In `@test/unittests/test_compat_routers.py`:
- Around line 18-46: The test fakes use mutable list defaults which trigger
RUF012: change the class-level mutable lists to non-mutable class-vars or
instance attributes; specifically annotate FakeTx.available_languages and
FakeEngine.langs as ClassVar[List[str]] (or move them into __init__ as
self.available_languages/self.langs) so they are not mutable class defaults and
the linter warning is silenced while preserving current behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fa19630e-27ed-4880-bb63-f269f2005868
⛔ Files ignored due to path filters (8)
- ovos_translate_server/routers/__pycache__/__init__.cpython-311.pyc is excluded by !**/*.pyc
- ovos_translate_server/routers/__pycache__/amazon_translate.cpython-311.pyc is excluded by !**/*.pyc
- ovos_translate_server/routers/__pycache__/azure_translator.cpython-311.pyc is excluded by !**/*.pyc
- ovos_translate_server/routers/__pycache__/deepl.cpython-311.pyc is excluded by !**/*.pyc
- ovos_translate_server/routers/__pycache__/google_translate.cpython-311.pyc is excluded by !**/*.pyc
- ovos_translate_server/routers/__pycache__/libretranslate.cpython-311.pyc is excluded by !**/*.pyc
- test/unittests/__pycache__/__init__.cpython-311.pyc is excluded by !**/*.pyc
- test/unittests/__pycache__/test_compat_routers.cpython-311-pytest-9.0.2.pyc is excluded by !**/*.pyc
📒 Files selected for processing (26)
- .github/workflows/build_tests.yml
- .github/workflows/license_tests.yml
- .github/workflows/lint.yml
- .github/workflows/pip_audit.yml
- .github/workflows/publish_stable.yml
- .github/workflows/release_workflow.yml
- AUDIT.md
- FAQ.md
- MAINTENANCE_REPORT.md
- QUICK_FACTS.md
- SUGGESTIONS.md
- docs/index.md
- ovos_translate_server/__init__.py
- ovos_translate_server/__main__.py
- ovos_translate_server/routers/__init__.py
- ovos_translate_server/routers/amazon_translate.py
- ovos_translate_server/routers/azure_translator.py
- ovos_translate_server/routers/deepl.py
- ovos_translate_server/routers/google_translate.py
- ovos_translate_server/routers/libretranslate.py
- ovos_translate_server/version.py
- pyproject.toml
- requirements.txt
- setup.py
- test/unittests/__init__.py
- test/unittests/test_compat_routers.py
```yaml
uses: OpenVoiceOS/gh-automations/.github/workflows/license-check.yml@dev
with:
  system_deps: 'swig'
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Fetch reusable workflow content and inspect what it actually runs
gh api repos/OpenVoiceOS/gh-automations/contents/.github/workflows/license-check.yml?ref=dev --jq '.content' \
| base64 -d > /tmp/license-check.yml
rg -n -C2 "license|licheck|pytest|test/license_tests.py|override|whitelist|allow_" /tmp/license-check.yml || true

Repository: OpenVoiceOS/ovos-translate-server
Length of output: 129
🏁 Script executed:
# First, check what files exist related to licensing in the repo
find . -type f -name "*license*" | head -20

Repository: OpenVoiceOS/ovos-translate-server
Length of output: 138
🏁 Script executed:
# Check the workflows directory structure
ls -la .github/workflows/

Repository: OpenVoiceOS/ovos-translate-server
Length of output: 643
🏁 Script executed:
# Examine the current license_tests.yml file to see full context
cat -n .github/workflows/license_tests.yml

Repository: OpenVoiceOS/ovos-translate-server
Length of output: 446
🏁 Script executed:
# Check if test/license_tests.py exists and examine it
if [ -f test/license_tests.py ]; then
wc -l test/license_tests.py
echo "---"
cat -n test/license_tests.py
fi

Repository: OpenVoiceOS/ovos-translate-server
Length of output: 2064
🏁 Script executed:
# Check git history for changes to license_tests.yml
git log --oneline -10 .github/workflows/license_tests.yml 2>/dev/null || echo "Git history unavailable"

Repository: OpenVoiceOS/ovos-translate-server
Length of output: 143
🏁 Script executed:
# Try to fetch external workflow via public GitHub URL
curl -s "https://raw.githubusercontent.com/OpenVoiceOS/gh-automations/dev/.github/workflows/license-check.yml" | head -100

Repository: OpenVoiceOS/ovos-translate-server
Length of output: 4022
🏁 Script executed:
# Check if test/license_tests.py is referenced anywhere in workflows
grep -r "test/license_tests.py" .github/workflows/ || echo "Not found in workflows"

Repository: OpenVoiceOS/ovos-translate-server
Length of output: 99
🏁 Script executed:
# Search all workflow files for pytest or license test execution
grep -r "pytest.*license" .github/workflows/ || echo "Not found"

Repository: OpenVoiceOS/ovos-translate-server
Length of output: 86
🏁 Script executed:
# Check if there's any configuration or script that passes the overrides
grep -r "license_overrides\|allow_nonfree\|allow_viral" . --include="*.yml" --include="*.yaml" --include="*.py" 2>/dev/null | head -20Repository: OpenVoiceOS/ovos-translate-server
Length of output: 474
🏁 Script executed:
# Verify that old test is truly not being called by searching broader
grep -r "license_tests\|LicenseChecker" .github/ || echo "Not found in .github"Repository: OpenVoiceOS/ovos-translate-server
Length of output: 129
🏁 Script executed:
# Check if there's a build_tests or other workflow that might call the old test
cat -n .github/workflows/build_tests.yml
Compliance regression confirmed: project-specific license policy not enforced by external workflow.
The project's custom license overrides and strict validation flags (defined in test/license_tests.py) are not passed to the external workflow. Specifically:
- The 4 custom overrides (kthread→MIT, pyxdg→GPL-2.0, ptyprocess→ISC, psutil→BSD3) are missing
- pyxdg (GPL-2.0), classified as StrongCopyleft, will fail under the new workflow's default policy, which blocks GPL
- The old test remains in the repository but is not executed by any workflow and represents orphaned code
The external workflow accepts exclude_packages and exclude_licenses regex parameters, but the license_tests.yml passes only system_deps: 'swig'. To restore compliance, pass the necessary exclusions or overrides to the reusable workflow.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/license_tests.yml around lines 13 - 15, The reusable
workflow invocation currently only passes system_deps: 'swig' and omits the
project-specific license policy; update the license_tests.yml call to forward
the same exclusions/overrides used by test/license_tests.py by adding the
reusable-workflow inputs for exclude_packages and exclude_licenses and the
custom overrides for kthread→MIT, pyxdg→GPL-2.0, ptyprocess→ISC and psutil→BSD3
(and any strict validation flag used in test/license_tests.py) so the external
workflow enforces the project policy rather than its defaults; make sure to
reference the existing system_deps input and the package names pyxdg, kthread,
ptyprocess, psutil when adding these inputs.
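A hedged sketch of the fix the prompt describes, assuming the reusable workflow's `exclude_packages` and `exclude_licenses` inputs (both named in the finding above) accept pipe-separated patterns; the exact input semantics should be verified against gh-automations' license-check.yml before committing:

```yaml
# Sketch only: forwards the project policy from test/license_tests.py
# to the reusable workflow instead of relying on its defaults.
jobs:
  license_tests:
    uses: OpenVoiceOS/gh-automations/.github/workflows/license-check.yml@dev
    with:
      system_deps: 'swig'
      # Packages with known-acceptable licenses per the old overrides:
      # kthread (MIT), pyxdg (GPL-2.0), ptyprocess (ISC), psutil (BSD3)
      exclude_packages: 'kthread|pyxdg|ptyprocess|psutil'
```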
.github/workflows/pip_audit.yml

uses: OpenVoiceOS/gh-automations/.github/workflows/pip-audit.yml@dev
secrets: inherit
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Resolve current SHA behind dev branch before pinning
gh api repos/OpenVoiceOS/gh-automations/commits/dev --jq '.sha'
🏁 Script executed:
# First, examine the pip_audit.yml file to confirm the `@dev` usage
cat -n .github/workflows/pip_audit.yml
🏁 Script executed:
# Search for other workflow files with similar patterns
find .github/workflows -name "*.yml" -o -name "*.yaml" | head -20
🏁 Script executed:
# Search for other uses of `@dev` or other mutable branch references in workflow files
rg "uses:.*@(dev|main|master)" .github/workflows/ --type yamlRepository: OpenVoiceOS/ovos-translate-server
Length of output: 729
Pin all reusable workflow references to immutable commit SHAs.
The pattern of using @dev for reusable workflows appears across 6 workflow files. This mutable reference allows upstream changes in OpenVoiceOS/gh-automations to alter CI behavior without a PR in this repo. Pin each of these to a commit SHA instead:
- .github/workflows/pip_audit.yml (line 11)
- .github/workflows/release_workflow.yml
- .github/workflows/lint.yml
- .github/workflows/publish_stable.yml
- .github/workflows/license_tests.yml
- .github/workflows/build_tests.yml
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/pip_audit.yml around lines 11 - 12, Replace the mutable
reusable-workflow reference that uses "@dev" with an immutable commit SHA:
locate the line containing "uses:
OpenVoiceOS/gh-automations/.github/workflows/pip-audit.yml@dev" and update the
suffix to a specific commit SHA (e.g., "@<commit-sha>"); do the same for the
other affected workflow files that reference OpenVoiceOS/gh-automations (search
for "uses: OpenVoiceOS/gh-automations/.github/workflows/...@dev") so all
reusable workflow links are pinned to explicit commit SHAs.
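The rewrite the prompt asks for can be scripted rather than done by hand. A minimal sketch, assuming the helper name `pin_gh_automations_refs` (hypothetical, not part of the repo); the SHA itself would come from the `gh api repos/OpenVoiceOS/gh-automations/commits/dev --jq '.sha'` call shown earlier:

```python
import re

def pin_gh_automations_refs(workflow_text: str, sha: str) -> str:
    """Rewrite mutable gh-automations branch refs (@dev/@main/@master) to a pinned SHA.

    Only touches OpenVoiceOS/gh-automations reusable-workflow refs;
    other `uses:` lines (e.g. actions/checkout@v2) are left alone.
    """
    pattern = r"(OpenVoiceOS/gh-automations/\.github/workflows/[\w.-]+\.ya?ml)@(?:dev|main|master)\b"
    return re.sub(pattern, rf"\1@{sha}", workflow_text)
```

Running this over each file in `.github/workflows/` and committing the result pins every reusable workflow in one pass.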
uses: OpenVoiceOS/gh-automations/.github/workflows/publish-stable.yml@dev
secrets: inherit
with:
  branch: 'master'
  version_file: 'ovos_translate_server/version.py'
  setup_py: 'setup.py'
  publish_pypi: true
  sync_dev: true
Pin this reusable release workflow to an immutable ref.
This job publishes artifacts and inherits secrets, but the implementation comes from the moving @dev branch in another repo. A later force-push there can change stable-release behavior here without any diff in this repository. Use a full commit SHA for the reusable workflow ref.
Quick check: this should return no matches once the workflow refs are pinned.
#!/bin/bash
set -euo pipefail
rg -n 'uses:\s+\S+@(dev|main|master)\b' .github/workflows

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/publish_stable.yml around lines 10 - 16, The workflow is
importing a reusable workflow via the moving ref "uses:
OpenVoiceOS/gh-automations/.github/workflows/publish-stable.yml@dev", which
allows behavior to change unexpectedly; replace the branch ref (`@dev`) with an
immutable full commit SHA for that reusable workflow reference so
secrets/publishing behavior is pinned (keep the rest of the inputs—branch,
version_file, publish_pypi, sync_dev—unchanged).
uses: OpenVoiceOS/gh-automations/.github/workflows/publish-alpha.yml@dev
secrets: inherit
with:
  branch: 'dev'
  version_file: 'ovos_translate_server/version.py'
  setup_py: 'setup.py'
  update_changelog: true
  publish_prerelease: true
  propose_release: true
  changelog_max_issues: 100
  publish_pypi: true
  notify_matrix: true

(removed jobs:)

notify:
  if: github.event.pull_request.merged == true
  needs: publish_alpha
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - name: Send message to Matrix bots channel
      id: matrix-chat-message
      uses: fadenb/matrix-chat-message@v0.0.6
      with:
        homeserver: 'matrix.org'
        token: ${{ secrets.MATRIX_TOKEN }}
        channel: '!WjxEKjjINpyBRPFgxl:krbel.duckdns.org'
        message: |
          new ${{ github.event.repository.name }} PR merged! https://github.com/${{ github.repository }}/pull/${{ github.event.number }}

publish_pypi:
  needs: publish_alpha
  if: success()  # Ensure this job only runs if the previous job succeeds
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
      with:
        ref: dev
        fetch-depth: 0  # otherwise, there would be errors pushing refs to the destination repository.
    - name: Setup Python
      uses: actions/setup-python@v1
      with:
        python-version: "3.14"
    - name: Install Build Tools
      run: |
        python -m pip install build wheel
    - name: version
      run: echo "::set-output name=version::$(python setup.py --version)"
      id: version
    - name: Build Distribution Packages
      run: |
        python setup.py sdist bdist_wheel
    - name: Publish to PyPI
      uses: pypa/gh-action-pypi-publish@master
      with:
        password: ${{secrets.PYPI_TOKEN}}

propose_release:
  needs: publish_alpha
  if: success()  # Ensure this job only runs if the previous job succeeds
  runs-on: ubuntu-latest
  steps:
    - name: Checkout dev branch
      uses: actions/checkout@v3
      with:
        ref: dev
    - name: Setup Python
      uses: actions/setup-python@v2
      with:
        python-version: '3.14'
    - name: Get version from setup.py
      id: get_version
      run: |
        VERSION=$(python setup.py --version)
        echo "VERSION=$VERSION" >> $GITHUB_ENV
    - name: Create and push new branch
      run: |
        git checkout -b release-${{ env.VERSION }}
        git push origin release-${{ env.VERSION }}
    - name: Open Pull Request from dev to master
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      run: |
        # Variables
        BRANCH_NAME="release-${{ env.VERSION }}"
        BASE_BRANCH="master"
        HEAD_BRANCH="release-${{ env.VERSION }}"
        PR_TITLE="Release ${{ env.VERSION }}"
        PR_BODY="Human review requested!"

        # Create a PR using GitHub API
        curl -X POST \
          -H "Accept: application/vnd.github+json" \
          -H "Authorization: token $GITHUB_TOKEN" \
          -d "{\"title\":\"$PR_TITLE\",\"body\":\"$PR_BODY\",\"head\":\"$HEAD_BRANCH\",\"base\":\"$BASE_BRANCH\"}" \
          https://api.github.com/repos/${{ github.repository }}/pulls
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
rg -n 'uses:\s+\S+@(dev|main|master)\b' .github/workflows
Pin this publishing workflow to an immutable ref.
This workflow publishes prereleases to PyPI with secrets: inherit, but sources the reusable workflow from @dev, a mutable branch. This risks uncontrolled changes to release semantics and widens the blast radius of inherited secrets. Pin the workflow reference to a full commit SHA instead.
Note: The same issue affects .github/workflows/publish_stable.yml:10, which also references publish-stable.yml@dev — stable release workflows should be pinned as well.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/release_workflow.yml around lines 12 - 22, The workflow
currently references the reusable publishing workflow via a mutable branch ref
("OpenVoiceOS/gh-automations/.github/workflows/publish-alpha.yml@dev"), which
risks changing behavior and exposing inherited secrets; update that "uses" value
to pin to an immutable full commit SHA of the target repo (replace the "@dev"
ref with "@<full-commit-sha>") and do the same for the equivalent reference in
publish_stable.yml (the "publish-stable.yml@dev" entry), ensuring both
publish_alpha and publish_stable workflows point to exact commit SHAs to lock
release behavior and minimize blast radius of inherited secrets.
AUDIT.md (Outdated)
- [x] docs/index.md

## Technical Debt & Issues
- `[MAJOR]` **tests**: No unit tests found — `test/` directory exists but is empty (`ovos_translate_server/__init__.py` requires live plugins to test meaningfully; mock-based unit tests needed)
Update the audit finding on tests.
This PR already adds router compatibility tests in test/unittests/test_compat_routers.py, so the report should not say the test/ directory is empty or that no unit tests exist. Reframe this as the narrower gap that still remains, e.g. plugin-loading/config coverage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@AUDIT.md` at line 13, Update the AUDIT.md entry that currently claims the
test/ directory is empty: replace the `[MAJOR] tests: No unit tests found` line
with a note acknowledging the added router compatibility tests
(test/unittests/test_compat_routers.py) and reframe the finding to identify the
remaining gap—lack of plugin-loading and configuration coverage (e.g.,
mock-based unit tests for ovos_translate_server plugin loading and config
handling). Ensure the new bullet references the existing test file
(test/unittests/test_compat_routers.py) and clearly states that targeted
plugin-loading/config tests are needed.
    default=None,
    help="OPM language-detection plugin entry-point name (optional)",
)
parser.add_argument("--host", default="0.0.0.0", help="Host to bind (default: 0.0.0.0)")
Default the CLI to localhost instead of 0.0.0.0.
The server exposes unauthenticated endpoints and the app enables wildcard CORS, so binding to all interfaces by default makes a fresh install remotely reachable unless the operator opts out. Use 127.0.0.1 as the default, and let container/systemd deployments pass --host 0.0.0.0 explicitly.
Safer default
- parser.add_argument("--host", default="0.0.0.0", help="Host to bind (default: 0.0.0.0)")
+ parser.add_argument("--host", default="127.0.0.1", help="Host to bind (default: 127.0.0.1)")

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- parser.add_argument("--host", default="0.0.0.0", help="Host to bind (default: 0.0.0.0)")
+ parser.add_argument("--host", default="127.0.0.1", help="Host to bind (default: 127.0.0.1)")
🧰 Tools
🪛 Ruff (0.15.6)
[error] 35-35: Possible binding to all interfaces
(S104)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ovos_translate_server/__main__.py` at line 35, Change the CLI default host
from "0.0.0.0" to "127.0.0.1" in the argument parser so the server binds to
localhost by default; locate the parser.add_argument call (the "--host"
argument) in ovos_translate_server/__main__.py and update its default value to
"127.0.0.1", leaving the help text and behavior otherwise unchanged so
deployments can still pass --host 0.0.0.0 explicitly.
| requires = ["setuptools>=61.0", "wheel"] | ||
| build-backend = "setuptools.build_meta" | ||
|
|
||
| [project] | ||
| name = "ovos-translate-server" | ||
| dynamic = ["version"] | ||
| description = "FastAPI server to host OpenVoiceOS translate plugins as a service" | ||
| readme = "README.md" | ||
| license = "Apache-2.0" |
🧩 Analysis chain
🌐 Web query:
According to setuptools packaging documentation, which setuptools version first supports SPDX expressions like license = "Apache-2.0" in pyproject.toml, and is that syntax valid when build-system.requires only declares setuptools>=61.0?
💡 Result:
According to setuptools documentation, version 77.0.0 first supports SPDX license expressions like license = "Apache-2.0" in pyproject.toml. This syntax is not valid with build-system.requires declaring only setuptools>=61.0, as versions before 77.0.0 expect the deprecated table format license = {text = "Apache-2.0"} and may raise validation errors for string values.
Citations:
- 1: https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html
- 2: [Docs] Need migration guide for license expression (PEP 639) about TOML-table-based project.license deprecation if v77 unavailable pypa/setuptools#4903
- 3: https://packaging.python.org/en/latest/guides/writing-pyproject-toml/
- 4: https://setuptools.pypa.io/en/stable/history.html
- 5: https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html?highlight=declaring+configurations+not+standardized+by
- 6: Trouble specifying license in pyproject.toml osvenskan/posix_ipc#68
- 7: Invalid pyproject.toml config after configuring license pypa/setuptools#4993
Raise setuptools minimum or use table-form license.
license = "Apache-2.0" requires setuptools 77.0.0+, but build-system.requires declares only setuptools>=61.0. Builders meeting that floor will fail during metadata validation. Either bump the setuptools minimum to 77.0.0 or use the table format license = { text = "Apache-2.0" } instead.
Backward-compatible fix
-[project]
-license = "Apache-2.0"
+[project]
+license = { text = "Apache-2.0" }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pyproject.toml` around lines 2 - 10, The pyproject.toml declares
build-system.requires = ["setuptools>=61.0", "wheel"] while using the simple
license field license = "Apache-2.0", which requires setuptools >= 77.0.0; fix
by either raising the setuptools floor to "setuptools>=77.0.0" in the
build-system.requires entry or keep the current setuptools version and change
the license to the table form license = { text = "Apache-2.0" } so metadata
validation succeeds (edit the build-system.requires or the license field in
pyproject.toml accordingly).
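The backward-compatible option from the finding, as a config sketch: keep the existing `setuptools>=61.0` floor and use the table form, which pre-77 setuptools accepts:

```toml
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "ovos-translate-server"
dynamic = ["version"]
# Table form works on setuptools 61–76; the bare string
# license = "Apache-2.0" requires setuptools >= 77.0.0 (PEP 639).
license = { text = "Apache-2.0" }
```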
### Scripts
- `ovos-translate-server`: `ovos_translate_server.__main__:main`
Fix heading level to satisfy markdownlint MD001.
The docs lint warning indicates a heading increment problem around this section. Adjust this heading hierarchy so it only increments by one level.
📝 Possible fix
-### Scripts
+## Scripts

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
## Scripts
- `ovos-translate-server`: `ovos_translate_server.__main__:main`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@QUICK_FACTS.md` around lines 24 - 25, The "### Scripts" heading in
QUICK_FACTS.md causes a heading level jump (MD001); change its level to
increment by one relative to the previous heading (e.g., replace "### Scripts"
with "## Scripts" or adjust surrounding headings so the sequence is consistent)
so the heading hierarchy is contiguous.
fastapi
uvicorn[standard]
ovos-plugin-manager
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify whether requirements.txt is included in source distribution inputs
fd -a 'MANIFEST.in|setup.py|pyproject.toml|requirements.txt'
# Inspect packaging config references
rg -n "requirements.txt|install_requires|dynamic|dependencies|include requirements.txt" MANIFEST.in setup.py pyproject.toml 2>/dev/null || trueRepository: OpenVoiceOS/ovos-translate-server
Length of output: 405
🏁 Script executed:
# Check if MANIFEST.in exists
cat -n MANIFEST.in 2>/dev/null || echo "MANIFEST.in does not exist"
🏁 Script executed:
# Inspect setup.py to understand how requirements.txt is used
cat -n setup.py | head -70
🏁 Script executed:
# Check pyproject.toml for build configuration
cat -n pyproject.toml
🏁 Script executed:
# Check for any CI/build configuration that might show the error
fd -e 'yml|yaml' | head -20
Add MANIFEST.in to include requirements.txt in source distributions, and consolidate dependency management.
The build failure occurs because setup.py (line 52) reads requirements.txt at build time, but sdist does not include it without explicit declaration. Adding include requirements.txt to MANIFEST.in will unblock the build.
However, there's a deeper issue: dependencies are now defined in both pyproject.toml (lines 22–26) and requirements.txt, with setup.py redundantly reading the latter. The modern approach is to remove line 52 from setup.py and rely entirely on pyproject.toml as the dependency source, since it already lists fastapi, uvicorn[standard], and ovos-plugin-manager. This eliminates the file-reading dependency and aligns with PEP 517 packaging standards.
Immediate fix:
+++ MANIFEST.in
+include requirements.txt

Recommended refactor:
Remove install_requires=required("requirements.txt") from setup.py line 52, as pyproject.toml already declares all dependencies correctly.
🧰 Tools
🪛 GitHub Actions: Run Build Tests
[error] 1-1: FileNotFoundError: No such file or directory: '/tmp/build-via-sdist-bnw_w1zv/ovos_translate_server-0.0.3a2/requirements.txt' during get_requires_for_build_wheel (build step: python -m build)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@requirements.txt` around lines 1 - 3, The build fails because setup.py calls
required("requirements.txt") to populate install_requires (remove that call) and
rely on pyproject.toml for dependencies instead; edit setup.py to remove the
install_requires=required("requirements.txt") (or the required(...) helper
usage) so setup.py no longer reads requirements.txt, and optionally add a
MANIFEST.in with "include requirements.txt" only if you keep reading the file;
ensure pyproject.toml remains the single source of declared dependencies
(fastapi, uvicorn[standard], ovos-plugin-manager).
SUGGESTIONS.md (Outdated)

**Estimated Impact**: Low effort; removes duplication and future confusion.

### 3. Add `GET /translate` POST variant for long texts
Fix typo in suggestion title.
The title says "Add GET /translate POST variant" but describes adding a POST endpoint. Should be "Add POST /translate" or "Add POST /translate endpoint".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@SUGGESTIONS.md` at line 23, Fix the typo in the suggestion title: replace
"Add `GET /translate` POST variant for long texts" with a correct title such as
"Add `POST /translate` endpoint for long texts" (or "Add `POST /translate`") so
the heading matches the described change; update the markdown heading text in
SUGGESTIONS.md accordingly.
Adds docs/api-compatibility.md (5 vendors, 12 endpoints, curl examples), docs/language-codes.md (per-vendor normalisation rules with source citations), docs/detection.md (plugin priority and compat router behaviour). Updates FAQ.md (20 Q&As), QUICK_FACTS.md, AUDIT.md, SUGGESTIONS.md, MAINTENANCE_REPORT.md.

AI-Generated Change:
- Model: claude-sonnet-4-6
- Intent: document all five vendor-compatible API layers exhaustively
- Impact: new docs/api-compatibility.md, language-codes.md, detection.md; FAQ expanded to 20 entries
- Verified via: uv run pytest test/ -v (29 passed)
New tests: LibreTranslate detect with api_key ignored, languages returns all fake langs; DeepL missing target_lang rejected, uppercase outbound normalisation; Google detect with list input; Azure explicit source lang, multiple text items; Amazon empty Text rejected.

AI-Generated Change:
- Model: claude-sonnet-4-6
- Intent: improve edge-case coverage for all five vendor compat routers
- Impact: 21 → 29 tests
- Verified via: uv run pytest test/ -v (29 passed)
Summary by CodeRabbit
Release Notes