
Conversation

@sahiee-dev

Summary

Fix broken GPT-OSS vLLM installation instructions by replacing the expired PyTorch nightly dependency with a stable CUDA 12.8 wheel.

Motivation

Fixes #2330
The Quick Setup section in articles/gpt-oss/run-vllm.md referenced a PyTorch nightly build (torch==2.9.0.dev*) that is no longer available from PyTorch's nightly index. This caused dependency resolution failures when users followed the installation instructions:
No solution found when resolving dependencies: Because there is no version of torch==2.9.0.dev20250804+cu128...
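
For context, a failure of this shape can be reproduced by pinning a nightly dev build that has since been pruned from PyTorch's nightly index. This is an illustrative sketch, not the literal command from the doc; the version pin is taken from the error message above:

```bash
# Illustrative only: pinning a pruned nightly dev build fails at resolution time.
uv pip install "torch==2.9.0.dev20250804+cu128" \
  --index-url https://download.pytorch.org/whl/nightly/cu128
# -> No solution found when resolving dependencies:
#    Because there is no version of torch==2.9.0.dev20250804+cu128...
```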

This change:

  • Removes the nightly PyTorch dependency
  • Installs stable PyTorch 2.9.0 (CUDA 12.8) via a direct wheel URL (see the sketch after the verification list)
  • Documents Python, CUDA, driver, and GPU requirements upfront
  • Adds a rationale note explaining why stable PyTorch is used

Verification:
  • PyTorch wheel URL confirmed available (HTTP 200, 900MB)
  • vLLM GPT-OSS wheel index confirmed accessible
  • Installation flow validated by community reports in #2330 (vLLM installation instructions outdated - PyTorch nightly build no longer available)
  • Full E2E verification requires CUDA hardware (not available on macOS)
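
For reference, here is a minimal sketch of the updated install flow. The PR itself pins a direct wheel URL; the index URLs and version pins below are illustrative, and the vLLM GPT-OSS wheel index URL is an assumption rather than a value confirmed by this change:

```bash
# Sketch: install stable PyTorch 2.9.0 built against CUDA 12.8 from the stable
# cu128 index (the PR pins a direct wheel URL instead of an index).
uv pip install "torch==2.9.0" --index-url https://download.pytorch.org/whl/cu128

# Then install the GPT-OSS-enabled vLLM build from its wheel index
# (index URL shown here is an assumed value).
uv pip install vllm --extra-index-url https://wheels.vllm.ai/gpt-oss/
```

A quick availability check such as curl -I against the wheel URL returning HTTP 200 corresponds to the first verification bullet above.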

For new content

Not applicable — this PR fixes existing documentation only. No new content is being added.


@chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

@sahiee-dev force-pushed the fix/gpt-oss-vllm-install-docs branch from 1efb44a to 7751c39 on January 3, 2026 at 05:57
