Commit 375b9f1: Merge pull request #2 from MooreThreads/xd/cases

Update doc to add integrated projects

2 parents: 8551140 + 38d9454

1 file changed: README.md (+146, -0 lines)

This is a fundamental limitation because `device.type` is a C-level property that cannot be patched from Python. Downstream projects that check `device.type == "cuda"` need to be patched to use `torchada.is_gpu_device()` or check for both types: `device.type in ("cuda", "musa")`.
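
As a minimal illustration of the patched check, `is_gpu_device_type` below is a hypothetical stand-in for `torchada.is_gpu_device()` that operates on the device-type string alone, so it runs without `torch` installed:

```python
# Hypothetical stand-in for torchada.is_gpu_device(), working on the
# device-type string so the example is self-contained.
GPU_DEVICE_TYPES = ("cuda", "musa")

def is_gpu_device_type(device_type: str) -> bool:
    return device_type in GPU_DEVICE_TYPES

print(is_gpu_device_type("musa"))  # True: caught by the patched check
print("musa" == "cuda")            # False: missed by a raw equality check
```

A raw `device.type == "cuda"` comparison silently excludes MUSA tensors; the membership check covers both backends.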

## Real-World Integrations

torchada has been successfully integrated into several popular PyTorch-based projects. Below are examples demonstrating the typical integration patterns.

### Integrated Projects

| Project | Category | PR | Status |
|---------|----------|----|--------|
| [ComfyUI](https://github.com/comfyanonymous/ComfyUI) | Image/Video Generation | [#11618](https://github.com/comfyanonymous/ComfyUI/pull/11618) | Open |
| [LightLLM](https://github.com/ModelTC/LightLLM) | LLM Inference | [#1162](https://github.com/ModelTC/LightLLM/pull/1162) | Open |
| [Xinference](https://github.com/xorbitsai/inference) | Model Serving | [#4425](https://github.com/xorbitsai/inference/pull/4425) | ✅ Merged |
| [LightX2V](https://github.com/ModelTC/LightX2V) | Image/Video Generation | [#678](https://github.com/ModelTC/LightX2V/pull/678) | ✅ Merged |

### Integration Patterns

#### Pattern 1: Early Import with Platform Detection

The most common pattern is to import `torchada` early in the application lifecycle:

```python
# In your_app/device_utils.py: platform detection helper
def is_musa():
    import torch
    return hasattr(torch.version, "musa") and torch.version.musa is not None

# In __init__.py or the main entry point: import torchada early
from your_app.device_utils import is_musa

if is_musa():
    import torchada  # noqa: F401
```

This pattern is used by **LightLLM** and **LightX2V**.

#### Pattern 2: Add to Dependencies

Add `torchada` to your project's dependencies:

```toml
# pyproject.toml
dependencies = [
    "torchada>=0.1.11",
]
```

Or in `requirements.txt`:

```text
torchada>=0.1.11
```

#### Pattern 3: Device Availability Check

Create a device availability function that checks for MUSA:

```python
import torch

def is_musa_available() -> bool:
    try:
        import torch_musa  # noqa: F401
        import torchada  # noqa: F401
        return torch.musa.is_available()
    except ImportError:
        return False

def get_available_device():
    if torch.cuda.is_available():
        return "cuda"
    elif is_musa_available():
        return "musa"
    return "cpu"
```

This pattern is used by **Xinference**.

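
The fallback chain in `get_available_device()` can be exercised in isolation by stubbing out the availability probes. This is only a sketch; `pick_device` is a hypothetical helper, not Xinference's API:

```python
# Hypothetical sketch of the cuda -> musa -> cpu fallback chain, with the
# availability probes passed in as booleans so it runs anywhere.
def pick_device(cuda_ok: bool, musa_ok: bool) -> str:
    if cuda_ok:
        return "cuda"
    if musa_ok:
        return "musa"
    return "cpu"

print(pick_device(True, True))    # cuda (CUDA wins when both are present)
print(pick_device(False, True))   # musa
print(pick_device(False, False))  # cpu
```
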
#### Pattern 4: Platform-Specific Feature Flags

Enable features based on platform capabilities:

```python
import torch
import torchada  # noqa: F401

musa_available = hasattr(torch, "musa") and torch.musa.is_available()

def is_musa():
    return musa_available

# Enable NVIDIA-like optimizations on MUSA
# (is_nvidia() and get_total_memory() are application-provided helpers)
if is_nvidia() or is_musa():
    ENABLE_PYTORCH_ATTENTION = True
    NUM_STREAMS = 2  # Async weight offloading
    MAX_PINNED_MEMORY = get_total_memory(torch.device("cpu")) * 0.9
```

This pattern is used by **ComfyUI**.

#### Pattern 5: Platform Device Classes

For projects with a device abstraction layer:

```python
from your_platform.base.nvidia import CudaDevice
from your_platform.registry import PLATFORM_DEVICE_REGISTER

@PLATFORM_DEVICE_REGISTER("musa")
class MusaDevice(CudaDevice):
    name = "cuda"  # Use CUDA APIs (redirected by torchada)

    @staticmethod
    def is_available() -> bool:
        try:
            import torch
            import torchada  # noqa: F401
            return hasattr(torch, "musa") and torch.musa.is_available()
        except ImportError:
            return False
```

This pattern is used by **LightX2V**.

### Common Integration Steps

1. **Add dependency**: Add `torchada>=0.1.11` to your project dependencies

2. **Import early**: Import `torchada` before using any `torch.cuda` APIs

   ```python
   import torchada  # Apply patches
   import torch
   ```

3. **Add platform detection**: Create an `is_musa()` function for platform-specific code

   ```python
   def is_musa():
       return hasattr(torch.version, "musa") and torch.version.musa is not None
   ```

4. **Update feature flags**: Include MUSA in capability checks

   ```python
   if is_nvidia() or is_musa():
       ...  # Enable GPU-specific features
   ```

5. **Handle device type checks**: Use `torchada.is_gpu_device()` or check both types

   ```python
   # Instead of: device.type == "cuda"
   # Use: device.type in ("cuda", "musa")
   # Or: torchada.is_gpu_device(device)
   ```

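
Steps 2 and 3 can be combined into one small bootstrap sketch. The module layout and names are illustrative assumptions, and the sketch degrades gracefully when `torch` or `torchada` is absent:

```python
# Hypothetical bootstrap module: import torchada early if present, then
# expose an is_musa() helper. Layout is illustrative, not prescribed.
try:
    import torchada  # noqa: F401  # apply patches before any torch.cuda use
except ImportError:
    torchada = None  # running on a platform without torchada

def is_musa() -> bool:
    try:
        import torch
    except ImportError:
        return False
    return hasattr(torch.version, "musa") and torch.version.musa is not None
```
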

## Architecture

torchada uses a decorator-based patch registration system:
