**`.github/workflows/pr_tests_gpu.yml`** (44 additions, 0 deletions)
```diff
@@ -28,7 +28,51 @@ env:
   PIPELINE_USAGE_CUTOFF: 1000000000 # set high cutoff so that only always-test pipelines run
 
 jobs:
+  check_code_quality:
+    runs-on: ubuntu-22.04
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: "3.8"
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install .[quality]
+      - name: Check quality
+        run: make quality
+      - name: Check if failure
+        if: ${{ failure() }}
+        run: |
+          echo "Quality check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and run 'make style && make quality'" >> $GITHUB_STEP_SUMMARY
+
+  check_repository_consistency:
+    needs: check_code_quality
+    runs-on: ubuntu-22.04
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: "3.8"
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install .[quality]
+      - name: Check repo consistency
+        run: |
+          python utils/check_copies.py
+          python utils/check_dummies.py
+          python utils/check_support_list.py
+          make deps_table_check_updated
+      - name: Check if failure
+        if: ${{ failure() }}
+        run: |
+          echo "Repo consistency check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and run 'make fix-copies'" >> $GITHUB_STEP_SUMMARY
```
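Both jobs share the same failure-reporting pattern: on failure, a hint is appended to `$GITHUB_STEP_SUMMARY`, a file path the Actions runner provides to each step, whose contents are rendered on the run's summary page. A minimal sketch of that pattern, simulated outside Actions with a temp file so it runs anywhere (the message text here is illustrative, not the exact workflow output):

```shell
# On a real Actions runner, $GITHUB_STEP_SUMMARY already points at a file.
# Outside Actions we create one ourselves so the sketch is runnable.
GITHUB_STEP_SUMMARY="${GITHUB_STEP_SUMMARY:-$(mktemp)}"

# Append a Markdown hint, exactly as the workflow's failure steps do.
echo "Quality check failed. Please run 'make style && make quality'" >> "$GITHUB_STEP_SUMMARY"

# In CI this file is rendered on the summary page; locally we just print it.
cat "$GITHUB_STEP_SUMMARY"
```

Appending (`>>`) rather than overwriting matters: several steps in one job can each contribute a line to the same summary.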
**`examples/research_projects/pytorch_xla/inference/flux/README.md`** (8 additions, 7 deletions)
````diff
@@ -1,8 +1,6 @@
 # Generating images using Flux and PyTorch/XLA
 
-The `flux_inference` script shows how to do image generation using Flux on TPU devices using PyTorch/XLA. It uses the pallas kernel for flash attention for faster generation.
-
-It has been tested on [Trillium](https://cloud.google.com/blog/products/compute/introducing-trillium-6th-gen-tpus) TPU versions. No other TPU types have been tested.
+The `flux_inference` script shows how to do image generation using Flux on TPU devices using PyTorch/XLA. It uses the pallas kernel for flash attention for faster generation, with custom flash block sizes for better performance on [Trillium](https://cloud.google.com/blog/products/compute/introducing-trillium-6th-gen-tpus) TPUs. No other TPU types have been tested.
 
 ## Create TPU
@@ -23,20 +21,23 @@ Verify that PyTorch and PyTorch/XLA were installed correctly:
 cd examples/research_projects/pytorch_xla/inference/flux/
 ```
 
 ## Run the inference job
 
 ### Authenticate
 
-Run the following command to authenticate your token in order to download Flux weights.
+**Gated Model**
+
+As the model is gated, before using it with diffusers you first need to go to the [FLUX.1 [dev] Hugging Face page](https://huggingface.co/black-forest-labs/FLUX.1-dev), fill in the form and accept the gate. Once you are in, you need to log in so that your system knows you've accepted the gate. Use the command below to log in:
````
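The login command itself is cut off in this excerpt. The standard `huggingface_hub` CLI login (an assumption based on the usual gated-model workflow, not confirmed by the truncated original) would look like:

```shell
# Assumed command (the diff is truncated before it): authenticate so that
# gated weights such as FLUX.1-dev can be downloaded. Run interactively and
# paste a Hugging Face access token when prompted.
huggingface-cli login

# Non-interactive alternative for scripted environments, reading the token
# from an environment variable:
# huggingface-cli login --token "$HF_TOKEN"
```

Both forms require a token from your Hugging Face account settings, so no runnable output is shown here.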