This README provides instructions for setting up and using the Docker environment for the DigiForests project, which includes GPU support and a multi-stage build process.
Note: The host machine must support CUDA 11.8.
- Install Docker on your system.
- Install the NVIDIA Container Runtime: follow the instructions in the NVIDIA documentation.
- Verify GPU support in Docker:

  ```shell
  docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
  ```

  This should display your GPU information if everything is set up correctly.
The Dockerfile uses a 3-stage build process:
- Base CUDA and PyTorch setup
- MinkowskiEngine compilation
- DigiForests package installation
Note: The build process can take 30+ minutes, especially the MinkowskiEngine compilation.
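The three stages listed above can be sketched roughly as follows. This is an illustrative outline only, not the project's actual Dockerfile: the stage names, base-image tag, and install commands are assumptions.

```dockerfile
# Stage 1: base CUDA and PyTorch setup (tags are illustrative)
FROM nvidia/cuda:11.8.0-devel-ubuntu22.04 AS base
RUN pip install torch --index-url https://download.pytorch.org/whl/cu118

# Stage 2: MinkowskiEngine compilation (the slow step) on top of the base
FROM base AS minkowski
RUN pip install MinkowskiEngine

# Stage 3: DigiForests package installation
FROM minkowski AS devkit
COPY . /digiforests
RUN pip install /digiforests
```

Splitting the build this way lets Docker cache the expensive MinkowskiEngine stage, so later changes to the DigiForests package do not trigger a recompilation.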
- Ensure your host machine supports CUDA 11.8, as this is the base image we build upon.
- In the Dockerfile, adjust the `TORCH_CUDA_ARCH_LIST` environment variable to match your GPU's compute capability.
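On recent NVIDIA drivers you can query your GPU's compute capability with `nvidia-smi --query-gpu=compute_cap --format=csv,noheader`. The Dockerfile line would then be set accordingly; the value below is an example only, not the project's default:

```dockerfile
# Example: 8.6 corresponds to RTX 30-series (Ampere) GPUs;
# substitute your own card's compute capability.
ENV TORCH_CUDA_ARCH_LIST="8.6"
```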
Build the image:

```shell
docker build -t digiforests_devkit -f docker/Dockerfile .
```

Run the container directly:

```shell
docker run -it --rm --gpus all digiforests_devkit
```

Alternatively, run it through Docker Compose:

- Ensure your user ID and group ID are set in the environment:

  ```shell
  export UID=$(id -u)
  export GID=$(id -g)
  ```

- Run the container:

  ```shell
  docker compose -f docker/compose.yaml run devkit
  ```
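Exporting your user and group IDs lets the compose file run the container as your host user, so files created in mounted volumes are not owned by root. A minimal sketch of how such a compose file typically consumes these variables (the service layout here is illustrative; the actual `docker/compose.yaml` may differ):

```yaml
# Illustrative fragment only; see the project's docker/compose.yaml.
services:
  devkit:
    image: digiforests_devkit
    user: "${UID}:${GID}"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```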
- The `compose.yaml` file includes volume mounts for development. Adjust these as needed.
- Uncomment the data volume mount in `compose.yaml` to access external data.
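A commented-out data mount of this kind usually looks like the following. The paths are placeholders, not the ones in the project's `compose.yaml`; edit the entry that is already present there rather than copying this verbatim:

```yaml
# Illustrative fragment only; paths are placeholders.
services:
  devkit:
    volumes:
      - .:/workspace                 # development mount
      # - /path/to/data:/data        # uncomment to access external data
```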