fix(backend): re-use tls contexts where possible #7327
Walkthrough: Adds TLS verification plumbing across HTTP adapters and execution paths.

CodSpeed Performance Report: Merging #7327 will not alter performance.
@ogenstad is that good use of the http adapter? 😄
Well, good find with the performance issue :) It was perhaps not how I envisioned it being used, but I don't know if I have a strong objection to it.

The real reason why I wanted this to be cached was actually during application startup:

```python
@model_validator(mode="after")
def set_tls_context(self) -> Self:
    try:
        # Validate that the context can be created; we want to raise this error during
        # application start instead of running into issues later when we first try to
        # use the TLS context.
        self.get_tls_context()
    except ssl.SSLError as exc:
        raise ValueError(f"Unable to load CA bundle from {self.tls_ca_bundle}: {exc}") from exc
    return self
```

I wanted the above to produce an error if you tried to load a custom CA bundle that didn't exist, as it's easier to spot during startup than to have it happen some time later when you want to do something over HTTP. But were you using a CA bundle, or was it just any TLS call that benefited from not having to recreate the context?

For the SDK: I also did not have that in mind when I added the ability to provide your custom transform, but I like that it works :) I'm wondering though if it might cause any confusion with regards to TLS settings in the SDK if users have environment variables for the SDK that would set this to a different value. I.e. the TLS options in the SDK would be ignored, which can be confusing?
Indeed, I only need the cached SSLContext, so I tried to reuse the current http adapter as much as possible. It can probably live somewhere else as you mentioned, but I don't know if it's worth the effort.

The thing is that SSL contexts are ALWAYS created even if you do not use TLS at all... This is the case for both the SDK client and the Prefect client. The only way to disable creating an SSL context for the SDK client is to disable TLS verification. For the Prefect client there's no way to avoid the context creation even when TLS is not used anywhere... (thus the upstream PR I opened).
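The cost being described is easy to observe: building a fresh default context reloads the system CA store each time, while reusing one is essentially free. A rough, environment-dependent measurement:

```python
import ssl
import timeit

# Each call builds a brand-new context and reloads the system CA store.
t_fresh = timeit.timeit(ssl.create_default_context, number=20)

# Reusing one pre-built context is just a name lookup.
shared_ctx = ssl.create_default_context()
t_cached = timeit.timeit(lambda: shared_ctx, number=20)

print(f"20 fresh contexts: {t_fresh:.4f}s, 20 cached lookups: {t_cached:.6f}s")
```

Exact numbers vary by machine and CA store size, but the fresh-creation path is consistently orders of magnitude slower, which is what motivates caching the context once per process.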
Glad you mention the part I'm not convinced of... I believe this can be a breaking change. I tried to move the
Force-pushed from 9f2501f to ff57c6b (compare)
I think an inline comment explaining why it's there would help, and mention that if we're doing a refactor we could separate the SSLContext part out to some other shared location.
Based on this I'd say that we should implement a fix for this directly in the SDK so that we always create an SSL context that is cached within the SDK config. That way anyone could benefit from it, not just internally when used within Infrahub; i.e. when someone runs a generator outside of Infrahub or just uses the SDK we'd have this in place already. Then what we can do is to cache the first SSLContext somewhere reached by... How does that sound?
Sounds good, I think we can do both: create a cached SSL context within the SDK (for the SDK end-users) and allow passing an external SSLContext to the SDK client (for the task workers).
Force-pushed from c0a4b5b to 64a9e93 (compare)
Pushed opsmill/infrahub-sdk-python#581 for the SDK change. I also added an inline comment for the HttpAdapter usage.
Actionable comments posted: 2

🧹 Nitpick comments (2)

backend/infrahub/services/adapters/http/__init__.py (1)

12-14: Make this an abstract API and document semantics

Declare the class as ABC and mark verify_tls as @abstractmethod; add a short docstring clarifying return behavior (False vs SSLContext). Improves clarity and mypy/ruff posture.
As per coding guidelines, apply within/around this method:

```diff
+from abc import ABC, abstractmethod
@@
-class InfrahubHTTP:
+class InfrahubHTTP(ABC):
@@
-    def verify_tls(self, verify: bool | None = None) -> bool | ssl.SSLContext:
-        raise NotImplementedError()
+    @abstractmethod
+    def verify_tls(self, verify: bool | None = None) -> bool | ssl.SSLContext:
+        """Return TLS verification setting.
+
+        Args:
+            verify: If explicitly False, disable verification; otherwise use configured SSLContext.
+
+        Returns:
+            False to disable verification, or a cached SSLContext to enable verification.
+        """
+        ...
```

Note: Consider adding an optional verify parameter to get(...) for parity with post(...).
backend/infrahub/services/adapters/workflow/worker.py (1)

23-28: Good: class-level adapter to cache TLS context

This effectively reuses the SSLContext. Consider annotating the attribute for mypy clarity:

```diff
-    _http_adapter = HttpxAdapter()
+    _http_adapter: HttpxAdapter = HttpxAdapter()
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- backend/infrahub/services/adapters/http/__init__.py (1 hunks)
- backend/infrahub/services/adapters/workflow/worker.py (3 hunks)
- backend/infrahub/workers/infrahub_async.py (3 hunks)
- backend/infrahub/workflows/utils.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
backend/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Run backend tests with pytest or via invoke tasks

Files:
- backend/infrahub/services/adapters/http/__init__.py
- backend/infrahub/workflows/utils.py
- backend/infrahub/workers/infrahub_async.py
- backend/infrahub/services/adapters/workflow/worker.py
**/*.py
📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)
**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful
Use ruff and mypy to validate and lint Python files
**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with async def
Use await for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking
**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with async def
Await asynchronous calls with await
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy

Files:
- backend/infrahub/services/adapters/http/__init__.py
- backend/infrahub/workflows/utils.py
- backend/infrahub/workers/infrahub_async.py
- backend/infrahub/services/adapters/workflow/worker.py
🧬 Code graph analysis (3)
backend/infrahub/workflows/utils.py (2)
- backend/infrahub/workers/dependencies.py (1): get_http (126-127)
- backend/infrahub/services/adapters/http/__init__.py (1): verify_tls (12-13)

backend/infrahub/workers/infrahub_async.py (3)
- backend/tests/conftest.py (1): prefect (382-397)
- backend/infrahub/workers/dependencies.py (1): get_http (126-127)
- backend/infrahub/services/adapters/http/__init__.py (1): verify_tls (12-13)

backend/infrahub/services/adapters/workflow/worker.py (2)
- backend/infrahub/services/adapters/http/httpx.py (1): HttpxAdapter (17-105)
- backend/infrahub/services/adapters/http/__init__.py (1): verify_tls (12-13)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: backend-tests-unit
🔇 Additional comments (4)

backend/infrahub/services/adapters/http/__init__.py (1)

6-7: Type-only import is fine

Using TYPE_CHECKING for ssl keeps runtime clean and works with postponed annotations.

backend/infrahub/workers/infrahub_async.py (2)

11-11: OK to import AsyncClientContext

Import looks correct and aligns with new usage.

31-31: Import of get_http may be problematic outside DI

You import get_http but later call it outside a DI context; see comment below with a safer approach.

backend/infrahub/services/adapters/workflow/worker.py (1)

89-91: LGTM; TLS-verified client context around deployment

Wrapping run_deployment in AsyncClientContext with cached SSLContext meets the PR goal.

Please confirm this path executes before any other Prefect client usage so that all requests for the deployment inherit the intended TLS settings.
```python
async with AsyncClientContext(httpx_settings={"verify": get_http().verify_tls()}) as ctx:
    ctx._httpx_settings = None  # Hack to make all child task/flow runs use the same client
    await run_flow_async(flow=flow_func, flow_run=flow_run, parameters=params, return_type="state")
```
🧩 Analysis chain

Avoid DI-dependent get_http() and harden around the private attribute

- get_http() relies on FastAPI Depends; outside DI it may return a Depends object, causing an AttributeError on .verify_tls(). Use a class-level HttpxAdapter to fetch the cached SSLContext.
- Mutating ctx._httpx_settings touches a private attribute; guard with hasattr to avoid breakage across Prefect versions.

Apply within this block:

```diff
-        async with AsyncClientContext(httpx_settings={"verify": get_http().verify_tls()}) as ctx:
-            ctx._httpx_settings = None  # Hack to make all child task/flow runs use the same client
+        async with AsyncClientContext(httpx_settings={"verify": self._http_adapter.verify_tls()}) as ctx:
+            if hasattr(ctx, "_httpx_settings"):
+                ctx._httpx_settings = None  # Maintain shared client only if attribute exists
             await run_flow_async(flow=flow_func, flow_run=flow_run, parameters=params, return_type="state")
```

Add these supporting changes outside the snippet:

```python
# imports
from infrahub.services.adapters.http.httpx import HttpxAdapter

# in class InfrahubWorkerAsync
_http_adapter = HttpxAdapter()  # reuse cached SSLContext across runs
```

If you prefer to keep get_http(), ensure it returns a concrete InfrahubHTTP in this runtime; otherwise this will fail.
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
# Show get_http definition and where it's used without DI
rg -n -C2 'def get_http\(' backend/infrahub/workers/dependencies.py || true
rg -n -C3 '\bget_http\s*\(\s*\)' -g '!**/tests/**' backend | sed -n '1,200p'
```

Length of output: 3418

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
head -n 200 backend/infrahub/workers/infrahub_async.py
rg -n 'HttpxAdapter' backend/infrahub/workers/infrahub_async.py || true
rg -n '_http_adapter' backend/infrahub/workers/infrahub_async.py || true
```

Length of output: 8203

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
head -n 100 backend/infrahub/services/adapters/workflow/worker.py
rg -n "HttpxAdapter" backend/infrahub/services/adapters/workflow/worker.py
```

Length of output: 3706
Replace DI-dependent get_http() with a class-level HttpxAdapter and guard the private attribute

get_http() uses FastAPI Depends and will return a Depends object outside DI, breaking .verify_tls(). Use a dedicated HttpxAdapter instance and only mutate the private attribute if it exists:

```diff
--- a/backend/infrahub/workers/infrahub_async.py
+++ b/backend/infrahub/workers/infrahub_async.py
@@ async def run(self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: TaskStatus | None = None) -> BaseWorkerResult:
-        async with AsyncClientContext(httpx_settings={"verify": get_http().verify_tls()}) as ctx:
-            ctx._httpx_settings = None  # Hack to make all child task/flow runs use the same client
+        async with AsyncClientContext(httpx_settings={"verify": self._http_adapter.verify_tls()}) as ctx:
+            if hasattr(ctx, "_httpx_settings"):
+                ctx._httpx_settings = None  # Maintain shared client only if supported
             await run_flow_async(flow=flow_func, flow_run=flow_run, parameters=params, return_type="state")
```

Add at the top of the file:

```python
from infrahub.services.adapters.http.httpx import HttpxAdapter
```

And inside class InfrahubWorkerAsync:

```python
_http_adapter = HttpxAdapter()
```

🤖 Prompt for AI Agents
In backend/infrahub/workers/infrahub_async.py around lines 159-162, replace the
DI-dependent get_http() call (which returns a Depends outside DI) with a
class-level HttpxAdapter instance and guard mutation of the private attribute:
add "from infrahub.services.adapters.http.httpx import HttpxAdapter" at the top
of the file, declare a class attribute _http_adapter = HttpxAdapter() inside
InfrahubWorkerAsync, use _http_adapter.verify_tls() when creating
AsyncClientContext, and only set ctx._httpx_settings = None if hasattr(ctx,
"_httpx_settings") to avoid unguarded mutation.
```python
from infrahub.core.constants import GLOBAL_BRANCH_NAME
from infrahub.core.registry import registry
from infrahub.tasks.registry import refresh_branches
from infrahub.workers.dependencies import get_http
```
🧩 Analysis chain

Avoid using FastAPI Depends-based get_http() here; also use an async context for get_client

- get_http() returns a FastAPI Depends wrapper outside DI, so get_http().verify_tls() may raise at runtime. Use a module-level HttpxAdapter instead to obtain a cached SSLContext.
- Wrap get_client in an async context to avoid leaking HTTP clients.

As per coding guidelines, apply this diff here:

```diff
-    from infrahub.workers.dependencies import get_http
+    from infrahub.services.adapters.http.httpx import HttpxAdapter
@@
-    client = get_client(httpx_settings={"verify": get_http().verify_tls()}, sync_client=False)
+    async with get_client(
+        httpx_settings={"verify": _HTTP_ADAPTER.verify_tls()}, sync_client=False
+    ) as client:
```

And add this at module scope:

```python
# at top-level
_HTTP_ADAPTER = HttpxAdapter()  # caches SSLContext internally
```

The call site itself (await client.update_flow_run(current_flow_run_id, tags=list(new_tags))) stays unchanged inside the new async with block.

Verification script to confirm DI usage of get_http:
Also applies to: 30-30
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "Definition of get_http:"
rg -n -C2 'def get_http\(' backend/infrahub/workers/dependencies.py || true
echo
echo "Imports of Depends in dependencies module:"
rg -n -C2 '\bDepends\(' backend/infrahub/workers/dependencies.py || true
echo
echo "Call sites of get_http() outside FastAPI routes:"
rg -n -C3 '\bget_http\s*\(\s*\)' -g '!**/tests/**' backend | sed -n '1,200p'
```

Length of output: 4978

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "Inspect HttpxAdapter definition:"
rg -n "class HttpxAdapter" -C3 backend/infrahub/services/adapters/http/httpx.py || true
echo
echo "Inspect verify_tls in HttpxAdapter:"
rg -n "def verify_tls" -C3 backend/infrahub/services/adapters/http/httpx.py || true
echo
echo "Context around imports and usage in utils.py:"
rg -n -C5 "get_http" backend/infrahub/workflows/utils.py || true
```

Length of output: 1387
Replace get_http with a module-level HttpxAdapter and wrap the client in an async context

- At the top of backend/infrahub/workflows/utils.py, remove:

```python
from infrahub.workers.dependencies import get_http
```

and add:

```python
from infrahub.services.adapters.http.httpx import HttpxAdapter

_HTTP_ADAPTER = HttpxAdapter()  # caches SSLContext internally
```

- Change the client instantiation:

```diff
-    client = get_client(httpx_settings={"verify": get_http().verify_tls()}, sync_client=False)
+    async with get_client(
+        httpx_settings={"verify": _HTTP_ADAPTER.verify_tls()}, sync_client=False
+    ) as client:
```
🤖 Prompt for AI Agents
In backend/infrahub/workflows/utils.py around line 12, remove the import of
get_http and instead import HttpxAdapter from
infrahub.services.adapters.http.httpx and create a module-level adapter instance
named _HTTP_ADAPTER = HttpxAdapter() (it caches SSLContext internally); then
update any client instantiation to obtain the client from the adapter and use it
within an async context manager (e.g., use the adapter's async client
acquisition so you do "async with _HTTP_ADAPTER.client() as client" where the
client is needed) to ensure proper async lifecycle and reuse of the adapter.
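The "wrap get_client in an async context" advice is about guaranteed cleanup. A toy stand-in (not the real Prefect `get_client`) shows why the `async with` form cannot leak the client:

```python
import asyncio
from contextlib import asynccontextmanager


@asynccontextmanager
async def get_client(**httpx_settings):
    """Toy stand-in for a Prefect/httpx client factory (illustrative only)."""
    client = {"settings": httpx_settings, "closed": False}
    try:
        yield client
    finally:
        # Always runs, even if the body raises: no leaked connections.
        client["closed"] = True


async def main():
    async with get_client(verify=False) as client:
        in_context = client["closed"]
    return in_context, client["closed"]


print(asyncio.run(main()))  # → (False, True)
```

Assigning `client = get_client(...)` without entering the context would skip the `finally` block entirely, which is exactly the resource leak the review is flagging.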
Force-pushed from 64a9e93 to 820eb29 (compare)
Actionable comments posted: 0

♻️ Duplicate comments (1)

backend/infrahub/workers/infrahub_async.py (1)

159-161: Do not call DI-bound get_http() here; use a class-level HttpxAdapter and guard the private attribute

get_http() returns a Depends outside FastAPI DI, so .verify_tls() will fail at runtime. Also, mutating ctx._httpx_settings unguarded risks breakage across Prefect versions.

Apply within this block:

```diff
-        async with AsyncClientContext(httpx_settings={"verify": get_http().verify_tls()}) as ctx:
-            ctx._httpx_settings = None  # Hack to make all child task/flow runs use the same client
+        async with AsyncClientContext(httpx_settings={"verify": self._http_adapter.verify_tls()}) as ctx:
+            if hasattr(ctx, "_httpx_settings"):
+                ctx._httpx_settings = None  # Maintain shared client only if supported
             await run_flow_async(flow=flow_func, flow_run=flow_run, parameters=params, return_type="state")
```

Add outside this snippet:

```python
# imports (top of file)
from infrahub.services.adapters.http.httpx import HttpxAdapter

# inside class InfrahubWorkerAsync
_http_adapter = HttpxAdapter()  # reuse cached SSLContext across runs
```

Then remove the now-unused get_http import. As per coding guidelines.
🧹 Nitpick comments (3)

backend/tests/adapters/http.py (2)

9-13: Redundant TYPE_CHECKING import of httpx

httpx is already imported at module scope; keep only the TYPE_CHECKING import (and drop the top-level one) or drop this one to avoid duplication. Either is fine; prefer just TYPE_CHECKING given future annotations.

20-21: Let verify_tls honor the verify argument for test fidelity

Returning False unconditionally is okay, but supporting the verify argument makes tests mirror production behavior better. Simply pass it through when provided:

```diff
     def verify_tls(self, verify: bool | None = None) -> bool | ssl.SSLContext:
-        return False
+        return verify if verify is not None else False
```

Also consider adding a brief docstring. As per coding guidelines.
backend/infrahub/services/adapters/workflow/worker.py (1)

89-90: AsyncClientContext wrapping looks correct; consider a brief docstring note

LGTM for TLS reuse. Optionally document that verify is sourced from a cached SSLContext via the adapter, for future maintainers. As per coding guidelines.
🔇 Additional comments (1)

backend/infrahub/services/adapters/workflow/worker.py (1)

24-27: Good use of a class-level HttpxAdapter to cache TLS context

This avoids DI/cycles and reuses the SSLContext as intended.
This avoids overhead due to SSL context initialization within the Prefect client.

Signed-off-by: Fatih Acar <[email protected]>
Force-pushed from 820eb29 to f5b5c77 (compare)
Actionable comments posted: 0

♻️ Duplicate comments (1)

backend/infrahub/workers/infrahub_async.py (1)

159-161: Replace DI-dependent get_http() with a class-level HttpxAdapter.

The previous review comment on these lines remains unresolved. get_http() uses FastAPI's Depends and will fail outside the DI context (this Prefect worker is not a FastAPI handler). Additionally, mutating the private ctx._httpx_settings attribute on line 160 is fragile across Prefect versions.

Follow the pattern used in backend/infrahub/services/adapters/workflow/worker.py (lines 24-27), which uses a class-level HttpxAdapter to avoid the DI dependency issue.

Apply the same fix suggested in the previous review:

```diff
+from infrahub.services.adapters.http.httpx import HttpxAdapter
+
 class InfrahubWorkerAsync(BaseWorker):
     type: str = "infrahubasync"
+    _http_adapter = HttpxAdapter()
     ...
     async def run(self, ...):
         ...
-        async with AsyncClientContext(httpx_settings={"verify": get_http().verify_tls()}) as ctx:
-            ctx._httpx_settings = None  # Hack to make all child task/flow runs use the same client
+        async with AsyncClientContext(httpx_settings={"verify": self._http_adapter.verify_tls()}) as ctx:
+            if hasattr(ctx, "_httpx_settings"):
+                ctx._httpx_settings = None  # Maintain shared client only if attribute exists
         await run_flow_async(...)
```
🧹 Nitpick comments (2)

backend/tests/adapters/http.py (1)

9-12: Remove the redundant httpx import from the TYPE_CHECKING block.

httpx is already imported at runtime on line 5, so the TYPE_CHECKING import on line 12 is redundant. The TYPE_CHECKING block should only contain imports needed exclusively for type annotations.

Apply this diff:

```diff
 if TYPE_CHECKING:
     import ssl
-
-    import httpx
```

backend/infrahub/services/adapters/http/__init__.py (1)

12-13: Add a docstring to the abstract method.

The new verify_tls method lacks documentation. Per coding guidelines, all methods should have Google-style docstrings with a brief description, Args, and Returns sections. Example docstring:

```python
def verify_tls(self, verify: bool | None = None) -> bool | ssl.SSLContext:
    """Return the TLS verification configuration for HTTP requests.

    Args:
        verify: Optional override to disable TLS verification when False.
            When None or True, returns the cached SSLContext.

    Returns:
        False when TLS verification is disabled, otherwise an SSLContext
        instance configured with the appropriate CA bundle.

    Raises:
        NotImplementedError: This method must be implemented by subclasses.
    """
    raise NotImplementedError()
```
🔇 Additional comments (2)

backend/infrahub/services/adapters/workflow/worker.py (2)

24-27: Good approach to avoid DI dependency issues.

Using a class-level HttpxAdapter instance avoids the circular dependency and DI issues. The inline comments clearly explain the rationale and future refactoring path. This pattern should be consistently applied elsewhere (e.g., backend/infrahub/workers/infrahub_async.py lines 159-161).

89-90: The review comment's premise is incorrect; no change needed here.

The review comment asks to verify that the AsyncClientContext "properly shares across multiple run_deployment calls," but the code structure contradicts this:

1. Single call per context: the submit_workflow method calls run_deployment only once within each AsyncClientContext, not multiple times.
2. Fresh context per invocation: a new AsyncClientContext is instantiated each time submit_workflow is called, so contexts are not shared across method invocations.
3. HttpxAdapter caching is separate: the @cached_property on tls_context caches the SSL context at the HttpxAdapter instance level, not at the AsyncClientContext level. The TLS settings are correctly passed to the client via httpx_settings={"verify": self._http_adapter.verify_tls()}.
4. Compare with the existing pattern: the codebase already contains a known pattern in infrahub_async.py (lines 159-160) that includes the hack ctx._httpx_settings = None with an explicit comment about reusing the httpx client across multiple child runs. The submit_workflow method does NOT use this pattern.

Action: the code correctly applies TLS settings per deployment submission. If the intent is to share a single httpx client across multiple submit_workflow calls, that would require a different architectural pattern (e.g., reusing the same AsyncClientContext instance globally or using the _httpx_settings hack). The current implementation is appropriate for single-call execution and doesn't require changes.

Likely an incorrect or invalid review comment.
LGTM. Did we also need a follow-up PR to allow the use of the Infrahub TLS context when we initialize a new SDK client?

I don't think this is supported in the Prefect version we have in stable, but I think you also said that it would just be silently ignored until we have the correct version.
```diff
     db_change: bool = False,
 ) -> None:
-    client = get_client(sync_client=False)
+    client = get_client(httpx_settings={"verify": get_http().verify_tls()}, sync_client=False)
```
It's a bit confusing in the codebase that we use get_client() where we sometimes want an InfrahubClient and in other locations we want a Prefect client. Did you just update this call as a hotspot where you needed it the most? I'm wondering if we should have a helper function to get the Prefect client, so we can update these settings in one location. I think we're good to go for this PR, but we should consider the approach for all the other calls.
Did you just update this call as a hotspot where you needed this the most?

Exactly, add_tags() is called within almost all workflows. I agree it's confusing to have get_client for both InfrahubClient and PrefectClient...

Yes, the follow-up PR is here: #7396 (waiting for an SDK update). Yes, the changes here will be ignored in stable until Prefect is upgraded (which is the case in develop/release-1.5).
Note: this depends on PrefectHQ/prefect#19106 for the Prefect client changes to be effective.

We have seen that creating SSL/TLS contexts is resource-intensive (CPU and memory).

EDIT: the SDK client change is now handled directly by opsmill/infrahub-sdk-python#581
Summary by CodeRabbit

- Bug Fixes
- Refactor
- Chores