Lastly, our [documentation](docs) provides deeper details on the concepts as well.

## Installing *f*VDB

During the project's initial development stages, it is necessary to [run the build steps](#building-fvdb-from-source) to install ƒVDB. Eventually, ƒVDB will be provided as a pre-built, installable package. We support building the latest ƒVDB version for the following library configurations:

| PyTorch     | Python      | CUDA        |
| ----------- | ----------- | ----------- |
| 2.8.0-2.9.0 | 3.10 - 3.13 | 12.8 - 13.0 |

**Notes:**

* Linux is the only platform currently supported (Ubuntu >= 22.04 recommended).
* A CUDA-capable GPU with Ampere architecture or newer (i.e. compute capability >= 8.0) is recommended to run the CUDA-accelerated operations in ƒVDB. A GPU with compute capability >= 7.0 (Volta architecture) is the minimum requirement, but some operations and data types are not supported.

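As a quick way to check your GPU against these requirements (a hedged sketch: the `compute_cap` query field requires a reasonably recent NVIDIA driver):

```shell
# Prints the GPU's compute capability, e.g. "8.6" for an RTX 30-series (Ampere) GPU.
nvidia-smi --query-gpu=compute_cap --format=csv,noheader
```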
## Building *f*VDB from Source
### Environment Management

ƒVDB is a Python library implemented as a C++ PyTorch extension. You can of course build ƒVDB in whatever environment suits you, but we provide three distinct paths to constructing reliable environments for building and running ƒVDB. These are separate options and are not intended to be used together.

`conda` tends to be more flexible, since toolchains and modules can be reconfigured dynamically to suit your larger project, but this can also be more brittle than a virtualized `docker` container. Using `conda` is generally recommended for development and testing, while `docker` is recommended for CI/CD and deployment.

---

#### **OPTION 1** Conda Environment (Recommended)

*f*VDB can be used with any Conda distribution installed on your system. Below is an installation guide using [miniforge](https://github.com/conda-forge/miniforge). You can skip steps 1-3 if you already have a Conda installation.

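A hedged sketch of steps 1-3 (installing miniforge, following its upstream instructions; verify the exact commands against the miniforge README):

```shell
# Download and run the latest Miniforge installer for this platform.
curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash "Miniforge3-$(uname)-$(uname -m).sh"
```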
---

#### **OPTION 2** Docker Container

Running a Docker container ensures that you have a consistent environment for building and running ƒVDB. Start by installing Docker and the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).

Our provided [`Dockerfile`](Dockerfile) builds an image that pre-installs the dependencies needed to build and run ƒVDB.

In the fvdb-core directory, build the Docker image:

```shell
docker build -t fvdb-devel .
```

When you are ready to build ƒVDB, start the container with the commands below and run the build within it. `TORCH_CUDA_ARCH_LIST` specifies which CUDA architectures to build for.

```shell
docker run -it --mount type=bind,src="$(pwd)",target=/workspace fvdb-devel bash
cd /workspace;
TORCH_CUDA_ARCH_LIST="7.5;8.0;8.6+PTX" \
./build.sh install verbose
```

To extract an artifact from the container, such as the built Python wheel, query the container ID using `docker ps` and copy the artifact out using `docker cp`.

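For example (a hedged sketch: the artifact path inside the container is an assumption, not something this README specifies):

```shell
# List running containers and note the CONTAINER ID of the fvdb-devel image.
docker ps
# Copy the build output from the container to the host.
docker cp <container-id>:/workspace/dist ./dist
```
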
---

#### **OPTION 3** Python Virtual Environment

Using a Python virtual environment lets you use your system-provided compiler and CUDA toolkit. This can be especially useful if you are using ƒVDB in conjunction with other Python packages, particularly ones that have been built from source. Start by installing GCC, the CUDA Toolkit, and cuDNN.

Then, create a Python virtual environment, install the requisite dependencies, and build:
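A minimal sketch of this flow, assuming a CUDA 12.8 PyTorch wheel and the `build.sh` script shown in the Docker option above (adjust both for your setup):

```shell
# Create and activate a fresh virtual environment.
python3 -m venv fvdb-venv
source fvdb-venv/bin/activate

# Install a PyTorch build matching your CUDA toolkit
# (cu128 is an example; pick the wheel index matching your CUDA version).
pip install torch --index-url https://download.pytorch.org/whl/cu128

# Build and install fVDB for the local GPU architecture only.
TORCH_CUDA_ARCH_LIST="8.6" ./build.sh install verbose
```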
Note: adjust `TORCH_CUDA_ARCH_LIST` to suit your needs. If you are building to run on a single machine only, including just the GPU architecture(s) present reduces build time.