
LCORE-569: Fix installing cpu version of pytorch, drop LD_LIBRARY_PATH for gpu image #34

Merged
matysek merged 2 commits into lightspeed-core:main from matysek:lcore-569
Aug 19, 2025

Conversation

@matysek
Contributor

@matysek matysek commented Aug 19, 2025

Description

LCORE-569: Fix installing cpu version of pytorch, drop LD_LIBRARY_PATH for gpu image

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • Chores
    • GPU container now relies on default CUDA library resolution; explicit compatibility path override removed.
    • Dependency source mapping updated to use a unified alias for the CPU PyTorch wheel; no user-facing functional changes expected.

@coderabbitai

coderabbitai bot commented Aug 19, 2025

Walkthrough

Removed CUDA compat path from LD_LIBRARY_PATH in Containerfile-gpu and switched the uv source alias in pyproject.toml from "torchvision" to "torch" pointing to the same pytorch-cpu index.
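
A sketch of the Containerfile-gpu side of the change, assuming the conventional CUDA 12 compat layout (the exact surrounding instructions are not visible in this PR page):

```dockerfile
# Before (removed by this PR): the compat directory was forced to the front
# of the dynamic loader search path.
# ENV LD_LIBRARY_PATH="/usr/local/cuda-12/compat:${LD_LIBRARY_PATH}"

# After: no override. CUDA libraries resolve through the image's default
# ldconfig configuration rather than the compat shim.
```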

Changes

  • GPU container env vars (Containerfile-gpu): Removed the LD_LIBRARY_PATH prefix that injected /usr/local/cuda-12/compat; no other steps changed.
  • UV source alias config (pyproject.toml): Renamed the uv source alias from torchvision to torch while keeping the same pytorch-cpu index, URL, and explicit flag.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes


Suggested reviewers

  • tisnik

Poem

A rabbit nudges paths at night,
Moves a line out of build sight.
Aliases hopped from one to two,
Quiet changes, tidy view.
🐇✨



📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 2a0a3d2 and aed3093.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (1)
  • Containerfile-gpu (0 hunks)
💤 Files with no reviewable changes (1)
  • Containerfile-gpu
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build-and-push-dev
  • GitHub Check: mypy
  • GitHub Check: Pylinter


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
pyproject.toml (2)

61-61: Future-proof: map torchvision/torchaudio to the same CPU index (if added later)

If you later introduce torchvision or torchaudio, consider mapping them to the same CPU index to avoid accidental CUDA variants or resolution surprises.

Apply this change to expand the sources block:

 [tool.uv.sources]
-torch = [{ index = "pytorch-cpu" }]
+torch = [{ index = "pytorch-cpu" }]
+torchvision = [{ index = "pytorch-cpu" }]
+torchaudio = [{ index = "pytorch-cpu" }]

35-50: Minor consistency nit: unify huggingface package name style

You have both “huggingface_hub” (deps) and “huggingface-hub” (dev). They resolve to the same package, but mixing styles can confuse tooling and readers. Consider normalizing to the canonical PyPI name (“huggingface-hub”) across groups in a follow-up.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 754e6f9 and 2a0a3d2.

📒 Files selected for processing (2)
  • Containerfile-gpu (0 hunks)
  • pyproject.toml (1 hunks)
💤 Files with no reviewable changes (1)
  • Containerfile-gpu
⏰ Context from checks skipped due to timeout of 90000ms. (3)
  • GitHub Check: Pylinter
  • GitHub Check: build-and-push-dev
  • GitHub Check: mypy
🔇 Additional comments (2)
pyproject.toml (2)

61-61: Correct direction: mapping torch to the CPU index is the right fix

Pointing torch to the pytorch-cpu index aligns with the PR goal and should force CPU wheels when resolving with uv (given the explicit=true index). Nice.


61-61: torch==2.7.1 CPU wheel for Python 3.12 (cp312) is available — no change required

I verified the PyTorch CPU index and found Linux cp312 manylinux wheels for torch 2.7.1 (e.g. torch-2.7.1+cpu-cp312-cp312-manylinux_2_28_x86_64.whl).

  • File: pyproject.toml (around line 61)
    • Current entry: torch = [{ index = "pytorch-cpu" }]

Snippet:
torch = [{ index = "pytorch-cpu" }]
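
The wheel filename cited above encodes its compatibility tags per the wheel filename convention (PEP 427); a quick sketch of pulling them apart, using the filename verbatim from the comment:

```python
# Split the wheel filename into its PEP 427 components:
# distribution-version-pythontag-abitag-platformtag.whl
name = "torch-2.7.1+cpu-cp312-cp312-manylinux_2_28_x86_64.whl"
dist, version, py_tag, abi_tag, plat_tag = name[:-len(".whl")].split("-")

# The "+cpu" local version segment marks the CPU-only build,
# and cp312 confirms a CPython 3.12 wheel.
assert "+cpu" in version and py_tag == "cp312"
print(dist, version, plat_tag)
```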

Collaborator

@tisnik tisnik left a comment


Seems ok (dunno about those resolution markers for Darwin, but whatever ;)

@matysek matysek merged commit 53179ec into lightspeed-core:main Aug 19, 2025
13 checks passed
@matysek matysek deleted the lcore-569 branch August 19, 2025 14:52

2 participants