diff --git a/README.md b/README.md index b16cd2ec8..e1f9178bc 100644 --- a/README.md +++ b/README.md @@ -8,16 +8,19 @@ The product documentation portal can be found at: https://docs.nvidia.com/datace ## Building the Container This step is optional if your only goal is to build the documentation. -As an alternative to building the container, you can run `docker pull registry.gitlab.com/nvidia/cloud-native/cnt-docs:0.4.0`. +As an alternative to building the container, you can run `docker pull ghcr.io/nvidia/cloud-native-docs:0.5.1`. Refer to to find the most recent tag. If you change the `Dockerfile`, update `CONTAINER_RELEASE_IMAGE` in the `gitlab-ci.yml` file to the new tag and build the container. Use the `Dockerfile` in the repository (under the `docker` directory) to generate the custom doc build container. +Refer to to find the most recent tag. 1. Build the container: ```bash + git clone https://github.com/NVIDIA/cloud-native-docs.git + cd cloud-native-docs docker build --pull \ --tag cnt-doc-builder \ --file docker/Dockerfile . @@ -52,8 +55,8 @@ The resulting HTML pages are located in the `_build/docs/.../latest/` directory More information about the `repo docs` command is available from . -Additionally, the Gitlab CI for this project builds the documentation on every merge into the default branch (`master`). -The documentation from the current default branch (`master`) is available at . +The GitHub CI for this project builds the documentation on every merge into the default branch (`main`). +The documentation from the current default branch (`main`) is available at . Documentation in the default branch is under development and unstable. ## Checking for Broken Links @@ -153,7 +156,7 @@ Only tags are published to docs.nvidia.com. The first three fields of the semantic version are used. For a "do over," push a tag like `gpu-operator-v23.3.1-1`. - Always tag the openshift docset and for each new gpu-operator docset release. 
+ Always tag the openshift docset for each new gpu-operator docset release. 1. Push the tag to the repository. @@ -175,7 +178,7 @@ If the commit message includes `/not-latest`, then only the documentation in the 1. Update `.github/workflows/docs-build.yaml` and increment the `env.TAG` value. -1. Update `.gitlab-ci.yml` and set the same value--prefixed by `ghcr.io...`--in the `variables.BUILDER_IMAGE` field. +1. Update `.gitlab-ci.yml` and set the same value (prefixed by `ghcr.io...`) in the `variables.BUILDER_IMAGE` field. 1. Optional: [Build the container and docs](#building-the-container) locally and confirm the update works as intended. @@ -187,12 +190,12 @@ If the commit message includes `/not-latest`, then only the documentation in the 1. After you merge the pull request, the `docs-build.yaml` action detects that the newly incremented `env.TAG` container is not in the registry, builds the container with that tag, and pushes it to the GitHub registry. - When you tag a commit to publish, GitLab CI pulls image from the `variables.BUILDER_IMAGE` value, + When you tag a commit to publish, GitHub CI pulls the image from the `variables.BUILDER_IMAGE` value, builds the documentation, and that HTML is delivered to docs.nvidia.com. ## License and Contributing This documentation repository is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0). -Contributions are welcome. Refer to the [CONTRIBUTING.md](https://gitlab.com/nvidia/cloud-native/cnt-docs/-/blob/master/CONTRIBUTING.md) document for more +Contributions are welcome. Refer to the [CONTRIBUTING.md](https://github.com/NVIDIA/cloud-native-docs/blob/main/CONTRIBUTING.md) document for more information on guidelines to follow before contributions can be accepted.
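The README steps above update the builder tag in two files that must stay in sync. A minimal sketch of the two fragments follows; the `0.5.2` value is a hypothetical next tag, and only the `env.TAG` and `variables.BUILDER_IMAGE` field names and the `ghcr.io` prefix come from the steps above:

```yaml
# .github/workflows/docs-build.yaml (fragment)
env:
  TAG: "0.5.2"        # hypothetical next tag; increment on each Dockerfile change

# .gitlab-ci.yml (fragment)
variables:
  # Same tag as env.TAG, prefixed with the ghcr.io image path.
  BUILDER_IMAGE: "ghcr.io/nvidia/cloud-native-docs:0.5.2"
```

If the two values drift apart, the publish pipeline pulls a different container than the one the GitHub action built and tested.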
diff --git a/gpu-operator/getting-started.rst b/gpu-operator/getting-started.rst index e0098a994..75972b186 100644 --- a/gpu-operator/getting-started.rst +++ b/gpu-operator/getting-started.rst @@ -435,7 +435,7 @@ If you want to use custom driver container images, such as version 465.27, then you can build a custom driver container image. Follow these steps: - Rebuild the driver container by specifying the ``$DRIVER_VERSION`` argument when building the Docker image. For - reference, the driver container Dockerfiles are available on the Git repository at https://gitlab.com/nvidia/container-images/driver. + reference, the driver container Dockerfiles are available on the Git repository at https://github.com/NVIDIA/gpu-driver-container/. - Build the container using the appropriate Dockerfile. For example: .. code-block:: console diff --git a/gpu-operator/gpu-operator-kubevirt.rst b/gpu-operator/gpu-operator-kubevirt.rst index 09824259a..4d8321773 100644 --- a/gpu-operator/gpu-operator-kubevirt.rst +++ b/gpu-operator/gpu-operator-kubevirt.rst @@ -534,8 +534,8 @@ Open a terminal and clone the driver container image repository. .. code-block:: console - $ git clone https://gitlab.com/nvidia/container-images/driver - $ cd driver + $ git clone https://github.com/NVIDIA/gpu-driver-container.git + $ cd gpu-driver-container Change to the vgpu-manager directory for your OS. We use Ubuntu 20.04 as an example. diff --git a/gpu-operator/install-gpu-operator-vgpu.rst b/gpu-operator/install-gpu-operator-vgpu.rst index bd9a3c1cb..e80cd573d 100644 --- a/gpu-operator/install-gpu-operator-vgpu.rst +++ b/gpu-operator/install-gpu-operator-vgpu.rst @@ -104,7 +104,7 @@ Perform the following steps to build and push a container image that includes th .. code-block:: console - $ git clone https://github.com/NVIDIA/gpu-driver-container + $ git clone https://github.com/NVIDIA/gpu-driver-container.git .. 
code-block:: console diff --git a/gpu-operator/precompiled-drivers.rst b/gpu-operator/precompiled-drivers.rst index 3b9afcf56..a7a880424 100644 --- a/gpu-operator/precompiled-drivers.rst +++ b/gpu-operator/precompiled-drivers.rst @@ -240,11 +240,11 @@ you can perform the following steps to build and run a container image. .. code-block:: console - $ git clone https://gitlab.com/nvidia/container-images/driver + $ git clone https://github.com/NVIDIA/gpu-driver-container.git .. code-block:: console - $ cd driver + $ cd gpu-driver-container #. Change directory to the operating system name and version under the driver directory: diff --git a/openshift/gpu-operator-with-precompiled-drivers.rst b/openshift/gpu-operator-with-precompiled-drivers.rst index 7e96c284a..de44cacf7 100644 --- a/openshift/gpu-operator-with-precompiled-drivers.rst +++ b/openshift/gpu-operator-with-precompiled-drivers.rst @@ -63,13 +63,13 @@ Perform the following steps to build a custom driver image for use with Red Hat .. code-block:: console - $ git clone https://gitlab.com/nvidia/container-images/driver + $ git clone https://github.com/NVIDIA/gpu-driver-container.git #. Change to the ``rhel8/precompiled`` directory under the cloned repository. You can build precompiled driver images for versions 8 and 9 of RHEL from this directory: .. code-block:: console - $ cd driver/rhel8/precompiled + $ cd gpu-driver-container/rhel8/precompiled #. Create a Red Hat Customer Portal Activation Key and note your Red Hat Subscription Management (RHSM) organization ID. These are to install packages during a build. 
Save the values to files such as ``$HOME/rhsm_org`` and ``$HOME/rhsm_activationkey``: diff --git a/openshift/mig-ocp.rst b/openshift/mig-ocp.rst index 3ada8433f..ef65cb36d 100644 --- a/openshift/mig-ocp.rst +++ b/openshift/mig-ocp.rst @@ -108,7 +108,7 @@ The NVIDIA GPU Operator exposes GPUs to Kubernetes as extended resources that ca Version 1.8 and greater of the NVIDIA GPU Operator supports updating the **Strategy** in the ClusterPolicy after deployment. -The `default configmap `_ defines the combination of single (homogeneous) and mixed (heterogeneous) profiles that are supported for A100-40GB, A100-80GB and A30-24GB. +The `default configmap `_ defines the combination of single (homogeneous) and mixed (heterogeneous) profiles that are supported for A100-40GB, A100-80GB and A30-24GB. The configmap allows administrators to declaratively define a set of possible MIG configurations they would like applied to all GPUs on a node. The tables below describe these configurations: @@ -301,7 +301,7 @@ Creating and applying a custom MIG configuration Follow the guidance below to create a new slicing profile. -#. Prepare a custom ``configmap`` resource file for example ``custom_configmap.yaml``. Use the `configmap `_ as guidance to help you build that custom configuration. For more documentation about the file format see `mig-parted `_. +#. Prepare a custom ``configmap`` resource file for example ``custom_configmap.yaml``. Use the `configmap `_ as guidance to help you build that custom configuration. For more documentation about the file format see `mig-parted `_. .. note:: For a list of all supported combinations and placements of profiles on A100 and A30, refer to the section on `supported profiles `_. 
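The custom MIG configuration step above can be sketched as a ConfigMap in the `mig-parted` file format referenced in the hunk. This is a minimal illustrative sketch, not the shipped default: the `custom-mig-config` name, the namespace, and the `all-1g.5gb` profile label are assumptions, and the profile counts depend on the GPU model:

```yaml
# custom_configmap.yaml -- hypothetical example of a custom MIG configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-mig-config          # illustrative name
  namespace: nvidia-gpu-operator   # adjust to the namespace where the operator runs
data:
  config.yaml: |
    version: v1
    mig-configs:
      # Homogeneous (single) profile: slice every GPU on the node the same way.
      all-1g.5gb:
        - devices: all
          mig-enabled: true
          mig-devices:
            "1g.5gb": 7            # seven 1g.5gb instances; valid for A100-40GB
```

A node is then labeled with the profile name (for example, `nvidia.com/mig.config=all-1g.5gb`) so the operator applies that slicing.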
diff --git a/openshift/openshift-virtualization.rst b/openshift/openshift-virtualization.rst index 1d97ea6b7..490a59740 100644 --- a/openshift/openshift-virtualization.rst +++ b/openshift/openshift-virtualization.rst @@ -245,7 +245,7 @@ Use the following steps to build the vGPU Manager container and push it to a pri .. code-block:: console - $ git clone https://github.com/NVIDIA/gpu-driver-container + $ git clone https://github.com/NVIDIA/gpu-driver-container.git $ cd gpu-driver-container #. Change to the ``vgpu-manager`` directory for your OS: diff --git a/partner-validated/index.rst b/partner-validated/index.rst index 24a84eae8..985d70773 100644 --- a/partner-validated/index.rst +++ b/partner-validated/index.rst @@ -74,10 +74,10 @@ You provide the following: * Document and contribute the exact software stack that you self-validated. Refer to the - `PARTNER-VALIDATED-TEMPLATE.rst file `__ + `PARTNER-VALIDATED-TEMPLATE.rst file `__ in the ``partner-validated`` directory of the documentation repository as a starting point. Open a pull request to the repository with your update. - Refer to the `CONTRIBUTING.md file `__ + Refer to the `CONTRIBUTING.md file `__ in the root directory of the documentation repository for information about contributing documentation. * Run the self-validated configuration and then share the outcome with NVIDIA by providing