1 | | -# FlexFlow |
2 | | -       [](https://flexflow.readthedocs.io/en/latest/?badge=latest) |
| 1 | +# flexflow-train |
| 2 | +[](https://github.com/flexflow/flexflow-train/actions/workflows/clang-format-check.yml) |
| 3 | +[](https://github.com/flexflow/flexflow-train/actions/workflows/per-lib-check.yml) |
| 4 | +[](https://github.com/flexflow/flexflow-train/actions/workflows/shell-check.yml) |
| 5 | +[](https://flexflow.readthedocs.io/en/latest/?badge=latest) |
3 | 6 |
4 | | -FlexFlow is a deep learning framework that accelerates distributed DNN training by automatically searching for efficient parallelization strategies. FlexFlow provides a drop-in replacement for PyTorch and TensorFlow Keras. Running existing PyTorch and Keras programs in FlexFlow only requires [a few lines of changes to the program](https://flexflow.ai/keras). |
| 7 | +> [!WARNING] |
| 8 | +> The FlexFlow repository has been split into separate [flexflow-train](https://github.com/flexflow/flexflow-train) and [flexflow-serve](https://github.com/flexflow/flexflow-serve) repositories. |
| 9 | +> You are currently viewing [flexflow-train](https://github.com/flexflow/flexflow-train). |
| 10 | +> For anything inference/serving-related, go to [flexflow-serve](https://github.com/flexflow/flexflow-serve). |
5 | 11 |
| 12 | +FlexFlow is a deep learning framework that accelerates distributed DNN training by automatically searching for efficient parallelization strategies. |
| 13 | + |
| 14 | +<!-- |
| 15 | +FlexFlow provides a drop-in replacement for PyTorch and TensorFlow Keras. Running existing PyTorch and Keras programs in FlexFlow only requires [a few lines of changes to the program](https://flexflow.ai/keras). |
| 16 | +--> |
| 17 | + |
| 18 | +<!-- |
6 | 19 | ## Install FlexFlow |
7 | 20 | To install FlexFlow from source code, please read the [instructions](INSTALL.md). If you would like to quickly try FlexFlow, we also provide pre-built Docker packages ([flexflow-cuda](https://github.com/flexflow/FlexFlow/pkgs/container/flexflow-cuda) with a CUDA backend, [flexflow-hip_rocm](https://github.com/flexflow/FlexFlow/pkgs/container/flexflow-hip_rocm) with a HIP-ROCM backend) with all dependencies pre-installed (N.B.: currently, the CUDA pre-built containers are only fully compatible with host machines that have CUDA 11.7 installed), together with [Dockerfiles](./docker) if you wish to build the containers manually. You can also use `conda` to install the FlexFlow Python package (coming soon). |
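For the pre-built containers mentioned above, a minimal pull sketch follows. The registry path is taken from the package links in the text; the `latest` tag is an assumption and may differ for your setup:

```shell
# Pick a backend-specific pre-built image; package names come from the README,
# the ":latest" tag is an assumption.
BACKEND=cuda   # or: hip_rocm
IMAGE="ghcr.io/flexflow/flexflow-${BACKEND}:latest"
echo "$IMAGE"
# docker pull "$IMAGE"   # requires Docker; CUDA images currently expect CUDA 11.7 on the host
```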
8 | 21 |
@@ -67,10 +80,11 @@ Performance auto-tuning flags: |
67 | 80 | * `--enable-parameter-parallel`: allow FlexFlow to explore parameter parallelism for performance auto-tuning. (By default FlexFlow only considers data and model parallelism.) |
68 | 81 | * `--enable-attribute-parallel`: allow FlexFlow to explore attribute parallelism for performance auto-tuning. (By default FlexFlow only considers data and model parallelism.) |
69 | 82 | For performance tuning related flags: see [performance autotuning](https://flexflow.ai/search). |
| 83 | +--> |
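For reference, the auto-tuning flags listed above are ordinary command-line arguments. A minimal sketch, in which the launcher name and training script are hypothetical placeholders and only the two `--enable-*` flags come from the text:

```shell
# Sketch only: "flexflow_python" and "train.py" are placeholder names;
# the two auto-tuning flags are the ones documented above.
CMD="flexflow_python train.py --enable-parameter-parallel --enable-attribute-parallel"
echo "$CMD"
```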
70 | 84 |
71 | 85 | ## Contributing |
72 | 86 |
73 | | -Please let us know if you encounter any bugs or have any suggestions by [submitting an issue](https://github.com/flexflow/flexflow/issues). |
| 87 | +Please let us know if you encounter any bugs or have any suggestions by [submitting an issue](https://github.com/flexflow/flexflow-train/issues). |
74 | 88 |
75 | 89 | We welcome all contributions to FlexFlow from bug fixes to new features and extensions. |
76 | 90 |