41 changes: 41 additions & 0 deletions README.md
@@ -87,6 +87,46 @@ pip install -r requirements.txt
</code></pre>
</details>

<details>
<summary>Docker-Based Setup</summary>
You can build with Docker, which reduces discrepancies between systems and standardizes the setup into a single container.

If you are on a Debian-based system (Ubuntu, Debian, etc.), ensure you have installed:

<pre><code>sudo apt-get install -y nvidia-container-toolkit</code></pre>
If you come across any issues, follow the instructions in the <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">official NVIDIA guide</a>.
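After installing the toolkit, Docker's runtime typically still needs to be registered and the daemon restarted before GPU containers will start. A sketch of the documented steps (the CUDA image tag in the check is illustrative; any CUDA base image works):

```shell
# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Quick check: this should print your GPU(s) from inside a container
docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu24.04 nvidia-smi
```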

If you are on a Windows-based setup, follow these <a href="https://docs.docker.com/desktop/features/gpu/">instructions</a> to use your NVIDIA GPU in Docker.


Then, to build and run the Docker container:
```bash
cd docker # Execute this from the root directory of the project
./run_docker.sh -h # Display help and usage instructions
./run_docker.sh -bu # Build the Docker image and launch the container with docker-compose
```

### Additional Docker Notes

> Ensure you specify your GPU architecture under the `TORCH_CUDA_ARCH_LIST` variable in the Dockerfile. This is necessary for proper and optimized compilation of CUDA-based libraries. For example, if your GPU architecture is 8.6, update the Dockerfile as follows:

```dockerfile
ENV TORCH_CUDA_ARCH_LIST="8.6"
```

Refer to [NVIDIA's CUDA GPU support matrix](https://developer.nvidia.com/cuda-gpus) to find the correct architecture for your GPU.
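If you are unsure of your architecture, `nvidia-smi` can report it directly. The snippet below (illustrative, using simulated query output for an RTX 3090 and an RTX 2080 Ti) shows how the reported values map onto the `TORCH_CUDA_ARCH_LIST` format:

```shell
# On the host, list the compute capability of each visible GPU:
#   nvidia-smi --query-gpu=compute_cap --format=csv,noheader
# Simulated output for two GPUs:
caps="8.6
7.5"
# TORCH_CUDA_ARCH_LIST takes the same values, semicolon-separated:
arch_list=$(printf '%s\n' "$caps" | sort -u | paste -sd ';' -)
echo "TORCH_CUDA_ARCH_LIST=\"$arch_list\""
```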

#### Accessing the Project Inside the Container

After starting the Docker container, navigate to the project directory inside the container:

```bash
cd projects/on-the-fly-nvs
```
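Because the compose file bind-mounts the repository root into the container, files you create on the host are immediately visible inside it, and vice versa. For example, with a hypothetical dataset directory:

```shell
# On the host, from the repository root:
mkdir -p data/my_scene   # hypothetical dataset folder
# Inside the container, the same directory appears under the mount:
#   ~/projects/on-the-fly-nvs/data/my_scene
ls data/my_scene
```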

Enjoy using the project as if it were on your local machine!
</details>

<details>
<summary>Installing CUDA within a Conda Environment</summary>
If <code>nvcc --version</code> returns an error, you can install CUDA within your Conda environment.
@@ -106,6 +146,7 @@ conda activate &lt;env_path&gt;/onthefly_nvs
Where <code>&lt;pkg_path&gt;</code> is the desired package download location and <code>&lt;env_path&gt;/onthefly_nvs</code> is the desired environment location.
</details>


## Data Guidelines
> Please note that our method **is not a drop-in replacement for COLMAP + 3DGS, as it does not reorder images**. We require sequential capture that implies several constraints on the kind of data that can be handled. Please follow the **[Capture Guidelines](#capture-guidelines) for best results on your own data.**

79 changes: 79 additions & 0 deletions docker/Dockerfile
@@ -0,0 +1,79 @@
FROM nvidia/cuda:12.8.0-devel-ubuntu24.04

ENV DEBIAN_FRONTEND=noninteractive

# --- Base packages
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
curl \
git \
ca-certificates \
gnupg2 \
lsb-release \
locales \
python3 \
python3-pip \
python3-full \
python3-dev \
sudo \
x11-apps \
zenity \
openssh-client \
&& apt-get clean && rm -rf /var/lib/apt/lists/*

# --- Python requirements
COPY ./docker/requirements-docker.txt requirements.txt
COPY ./submodules submodules

RUN pip install --no-cache-dir --break-system-packages torch torchvision xformers --index-url https://download.pytorch.org/whl/cu128
RUN pip install --no-cache-dir --break-system-packages cupy-cuda12x

# === IMPORTANT: Set this to match your GPU Architecture ===
ENV TORCH_CUDA_ARCH_LIST="8.6"
RUN pip install --no-cache-dir --break-system-packages --ignore-installed -r requirements.txt

# --- User Setup
ARG USER_UID
ARG USER_GID
ARG USERNAME

RUN set -eux; \
# Remove user with same UID if exists
EXISTING_USER_BY_UID=$(getent passwd "${USER_UID}" | cut -d: -f1 || true); \
if [ -n "$EXISTING_USER_BY_UID" ]; then \
userdel -r "$EXISTING_USER_BY_UID" || true; \
fi; \
\
# Remove user with same name if exists
if id "${USERNAME}" >/dev/null 2>&1; then \
userdel -r "${USERNAME}" || true; \
fi; \
\
# Remove group with same GID if exists
if getent group "${USER_GID}" >/dev/null; then \
groupdel "$(getent group "${USER_GID}" | cut -d: -f1)"; \
fi; \
\
# Remove group with same name if exists
if getent group "${USERNAME}" >/dev/null; then \
groupdel "${USERNAME}"; \
fi; \
\
# Now create group and user cleanly
groupadd --gid "${USER_GID}" "${USERNAME}"; \
useradd -m -u "${USER_UID}" -g "${USER_GID}" "${USERNAME}"; \
usermod -aG sudo "${USERNAME}"; \
echo "${USERNAME} ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers

# --- Include this last to avoid cache invalidation
WORKDIR /

# Newer GCC versions no longer include <cstdint> transitively; without this
# injection, the build of submodules/diff-gaussian-rasterization fails
RUN sed -i '1i#include <stdint.h>\n#include <cstdint>' submodules/diff-gaussian-rasterization/cuda_rasterizer/rasterizer_impl.h
RUN pip install --no-cache-dir --break-system-packages submodules/diff-gaussian-rasterization

# Set environment
ENV USER=${USERNAME}
ENV HOME=/home/${USERNAME}
WORKDIR /home/${USERNAME}
47 changes: 47 additions & 0 deletions docker/docker-compose.yml
@@ -0,0 +1,47 @@
services:
otf-nvs:
container_name: otf-nvs
entrypoint: /bin/bash
build:
context: ../
dockerfile: ${DOCKERFILE}
args:
USER_UID: ${USER_UID}
USER_GID: ${USER_GID}
USERNAME: ${USERNAME}
HOSTNAME: ${HOSTNAME}
USER_PASSWORD: ${USER_PASSWORD}
image: otf-nvs
network_mode: "host"
working_dir: /home/${USERNAME}
cpus: 8
mem_limit: 16g
ipc: host
pid: host
privileged: true
user: "${USERNAME}:${USER_GID}"
hostname: ${HOSTNAME}
stdin_open: true
tty: true
runtime: nvidia
environment:
- USER_UID=${USER_UID}
- USER_GID=${USER_GID}
- USERNAME=${USERNAME}
- DISPLAY=unix${DISPLAY}
- XAUTHORITY=${XAUTHORITY}
- SSH_AUTH_SOCK=${SSH_AUTH_SOCK}
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=all
volumes:
- ../:/home/${USERNAME}/projects/on-the-fly-nvs
- /tmp/.X11-unix:/tmp/.X11-unix:rw
- ${HOME}/.Xauthority:/home/${USERNAME}/.Xauthority:rw
- ${HOME}/.ssh:/home/${USERNAME}/.ssh:ro
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [ gpu ]
7 changes: 7 additions & 0 deletions docker/requirements-docker.txt
@@ -0,0 +1,7 @@
plyfile
tqdm
opencv-python
lpips
submodules/fused-ssim
submodules/simple-knn
submodules/graphdecoviewer
62 changes: 62 additions & 0 deletions docker/run_docker.sh
@@ -0,0 +1,62 @@
#!/bin/bash

usage() {
echo "Usage: $0 [-h] [-b] [-n] [-u] [-c]"
echo " -h Show this help message"
echo " -b Build with cache"
echo " -n Build without cache"
echo " -u Start docker-compose and enter container"
echo " -c Stop and clean up"
}

if [ "$#" -lt 1 ]; then
usage
exit 1
fi

# Dockerfile used by docker compose
DOCKERFILE="docker/Dockerfile"
BUILD=false
BUILD_ARGS=""
COMPOSEUP=false

while getopts "hbnuc" opt; do
case ${opt} in
h ) usage; exit 0 ;;
b ) BUILD=true ;;
n ) BUILD=true; BUILD_ARGS="--no-cache" ;;
u ) COMPOSEUP=true ;;
c ) docker compose down --remove-orphans; exit 0 ;;
* ) usage; exit 1 ;;
esac
done

export USER_UID=$(id -u)
export USER_GID=$(id -g)
export USERNAME=$(id -un)
export USER_PASSWORD=${USERNAME}
export HOSTNAME=$(hostname)
export HOME=$HOME
export DISPLAY=$DISPLAY
export XAUTHORITY=$XAUTHORITY
export SSH_AUTH_SOCK=$SSH_AUTH_SOCK
export DOCKERFILE # Needed by docker-compose

if [ "$BUILD" = true ]; then
echo -e "Building docker image with \033[0;31m$DOCKERFILE\033[0m..."
DOCKER_BUILDKIT=1 docker compose build $BUILD_ARGS
if [ "$?" -ne 0 ]; then
echo "Docker build failed!"
exit 1
fi
fi

if [ "$COMPOSEUP" = true ]; then
echo "Starting docker container..."
docker compose up -d
if [ "$?" -ne 0 ]; then
echo "Docker compose up failed!"
exit 1
fi
docker compose exec otf-nvs bash
fi