Merged
21 changes: 11 additions & 10 deletions .github/workflows/build.yml
@@ -12,12 +12,10 @@ jobs:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest]
python-version: ["3.8", "3.11"]
python-version: ["3.9", "3.11", "3.12"]
include:
- os: ubuntu-latest
python-version: "3.9"
- os: ubuntu-latest
python-version: "pypy-3.8"
python-version: "pypy-3.9"
- os: macos-latest
python-version: "3.10"
steps:
@@ -26,15 +24,17 @@
with:
clean: true
- uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
- uses: actions/setup-java@v4
with:
distribution: temurin
java-version: 17
- uses: sbt/setup-sbt@v1
- name: Display dependency info
run: |
python --version
pip --version
conda --version
- name: Add SBT launcher
run: |
mkdir -p $HOME/.sbt/launchers/1.3.12
curl -L -o $HOME/.sbt/launchers/1.3.12/sbt-launch.jar https://repo1.maven.org/maven2/org/scala-sbt/sbt-launch/1.3.12/sbt-launch.jar
java --version
sbt --version
- name: Install Python dependencies
run: |
pip install ".[dev]"
@@ -61,6 +61,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
- uses: sbt/setup-sbt@v1
- name: Base Setup
uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
- name: Check Release
@@ -76,7 +77,7 @@
- uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
- uses: jupyterlab/maintainer-tools/.github/actions/check-links@v1
with:
ignore_links: "http://my-gateway-server.com:8888"
ignore_links: "http://my-gateway-server.com:8888 https://www.gnu.org/software/make/"
ignore_glob: "gateway_provisioners/app-support/README.md"

build_docs:
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -27,7 +27,7 @@ repos:
- id: check-github-workflows

- repo: https://github.com/executablebooks/mdformat
rev: 0.7.17
rev: 0.7.22
hooks:
- id: mdformat
additional_dependencies:
12 changes: 6 additions & 6 deletions CHANGELOG.md
@@ -20,11 +20,11 @@

- Update dependabot config [#130](https://github.com/jupyter-server/gateway_provisioners/pull/130) ([@blink1073](https://github.com/blink1073))
- Update Release Workflows [#129](https://github.com/jupyter-server/gateway_provisioners/pull/129) ([@blink1073](https://github.com/blink1073))
- Bump black\[jupyter\] from 23.9.1 to 23.11.0 [#119](https://github.com/jupyter-server/gateway_provisioners/pull/119) ([@dependabot](https://github.com/dependabot))
- Bump black[jupyter] from 23.9.1 to 23.11.0 [#119](https://github.com/jupyter-server/gateway_provisioners/pull/119) ([@dependabot](https://github.com/dependabot))
- Bump actions/checkout from 3 to 4 [#106](https://github.com/jupyter-server/gateway_provisioners/pull/106) ([@dependabot](https://github.com/dependabot))
- Bump black\[jupyter\] from 23.7.0 to 23.9.1 [#105](https://github.com/jupyter-server/gateway_provisioners/pull/105) ([@dependabot](https://github.com/dependabot))
- Bump black[jupyter] from 23.7.0 to 23.9.1 [#105](https://github.com/jupyter-server/gateway_provisioners/pull/105) ([@dependabot](https://github.com/dependabot))
- Adopt sp-repo-review [#104](https://github.com/jupyter-server/gateway_provisioners/pull/104) ([@blink1073](https://github.com/blink1073))
- Bump black\[jupyter\] from 23.3.0 to 23.7.0 [#99](https://github.com/jupyter-server/gateway_provisioners/pull/99) ([@dependabot](https://github.com/dependabot))
- Bump black[jupyter] from 23.3.0 to 23.7.0 [#99](https://github.com/jupyter-server/gateway_provisioners/pull/99) ([@dependabot](https://github.com/dependabot))
- Update mistune requirement from \<3.0.0 to \<4.0.0 [#94](https://github.com/jupyter-server/gateway_provisioners/pull/94) ([@dependabot](https://github.com/dependabot))
- Bump ruff from 0.0.269 to 0.0.270 [#92](https://github.com/jupyter-server/gateway_provisioners/pull/92) ([@dependabot](https://github.com/dependabot))
- Bump ruff from 0.0.267 to 0.0.269 [#91](https://github.com/jupyter-server/gateway_provisioners/pull/91) ([@dependabot](https://github.com/dependabot))
@@ -63,11 +63,11 @@

- Bump ruff from 0.0.260 to 0.0.261 [#79](https://github.com/jupyter-server/gateway_provisioners/pull/79) ([@dependabot](https://github.com/dependabot))
- Bump ruff from 0.0.259 to 0.0.260 [#77](https://github.com/jupyter-server/gateway_provisioners/pull/77) ([@dependabot](https://github.com/dependabot))
- Bump black\[jupyter\] from 23.1.0 to 23.3.0 [#76](https://github.com/jupyter-server/gateway_provisioners/pull/76) ([@dependabot](https://github.com/dependabot))
- Bump black[jupyter] from 23.1.0 to 23.3.0 [#76](https://github.com/jupyter-server/gateway_provisioners/pull/76) ([@dependabot](https://github.com/dependabot))
- Bump ruff from 0.0.257 to 0.0.259 [#75](https://github.com/jupyter-server/gateway_provisioners/pull/75) ([@dependabot](https://github.com/dependabot))
- Bump ruff from 0.0.254 to 0.0.257 [#73](https://github.com/jupyter-server/gateway_provisioners/pull/73) ([@dependabot](https://github.com/dependabot))
- Bump ruff from 0.0.252 to 0.0.254 [#69](https://github.com/jupyter-server/gateway_provisioners/pull/69) ([@dependabot](https://github.com/dependabot))
- Fix relative link formatting in documentation to be consistent [#68](https://github.com/jupyter-server/gateway_provisioners/pull/68) ([@kiersten-stokes](https://github.com/kiersten-stokes))
- Bump ruff from 0.0.249 to 0.0.252 [#65](https://github.com/jupyter-server/gateway_provisioners/pull/65) ([@dependabot](https://github.com/dependabot))
- Use releaser workflows [#64](https://github.com/jupyter-server/gateway_provisioners/pull/64) ([@blink1073](https://github.com/blink1073))
- Create hatch build env with make-related scripts [#63](https://github.com/jupyter-server/gateway_provisioners/pull/63) ([@kevin-bates](https://github.com/kevin-bates))
@@ -79,7 +79,7 @@
- Add link to SparkOperatorProvisioner class definition [#81](https://github.com/jupyter-server/gateway_provisioners/pull/81) ([@kevin-bates](https://github.com/kevin-bates))
- Fix minor issues in Developer and Contributor docs [#74](https://github.com/jupyter-server/gateway_provisioners/pull/74) ([@kiersten-stokes](https://github.com/kiersten-stokes))
- Fix grammar NITs in Operator's Guide of docs [#72](https://github.com/jupyter-server/gateway_provisioners/pull/72) ([@kiersten-stokes](https://github.com/kiersten-stokes))
- Fix relative link formatting in documentation to be consistent [#68](https://github.com/jupyter-server/gateway_provisioners/pull/68) ([@kiersten-stokes](https://github.com/kiersten-stokes))
- Fix minor errors in Users subsection of documentation [#67](https://github.com/jupyter-server/gateway_provisioners/pull/67) ([@kiersten-stokes](https://github.com/kiersten-stokes))
- Add application support information for deploying JKG and Lab [#66](https://github.com/jupyter-server/gateway_provisioners/pull/66) ([@kevin-bates](https://github.com/kevin-bates))
- Replace references to gateway-experiments with jupyter-server [#62](https://github.com/jupyter-server/gateway_provisioners/pull/62) ([@kevin-bates](https://github.com/kevin-bates))
2 changes: 1 addition & 1 deletion docs/source/contributors/devinstall.md
@@ -23,7 +23,7 @@ will not occur! Always use `make dist` to build the distribution.
### `sbt`

Our Scala launcher is built using `sbt`
([Scala Build Tool](https://www.scala-sbt.org/index.html)). Please check
[here](https://www.scala-sbt.org/1.x/docs/Setup.html) for installation instructions for your platform.

## Clone the repo
16 changes: 8 additions & 8 deletions docs/source/contributors/system-architecture.md
@@ -51,8 +51,8 @@ step from its implementation.

### Gateway Provisioner Class Hierarchy

The following block diagram depicts the current class hierarchy for the Gateway Provisioners. The blocks with an
`ABC` badge and dashed border indicate abstract base classes. Those light blue blocks come from `jupyter_client`,
while the others reside in Gateway Provisioners.

@@ -157,8 +157,8 @@

`ContainerProvisionerBase` is an abstract base class that derives from `RemoteProvisionerBase`. It implements the
inherited methods that interact with the container API, while requiring method implementations that perform the
platform-specific integration. Subclasses of `ContainerProvisionerBase` must also implement `get_initial_states()`,
`get_error_states()`, `get_container_status()`, and `terminate_container_resources()`:

```python
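# The snippet that belongs here is collapsed in this diff view, so what follows
# is an illustrative sketch only -- the four method names come from the
# paragraph above, while the signatures and docstrings are assumptions, not the
# project's actual code.
from abc import ABC, abstractmethod


class ContainerProvisionerSketch(ABC):
    """Skeleton of the methods a ContainerProvisionerBase subclass must supply."""

    @abstractmethod
    def get_initial_states(self):
        """Return the set of states indicating the container is starting."""

    @abstractmethod
    def get_error_states(self):
        """Return the set of states indicating the container has failed."""

    @abstractmethod
    def get_container_status(self, iteration):
        """Fetch the container's current status from the platform's API."""

    @abstractmethod
    def terminate_container_resources(self, restart=False):
        """Release the platform resources backing the kernel's container."""
```

Because every method above is abstract, a subclass that omits any of them cannot be instantiated; consult the `gateway_provisioners` source for the real signatures.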
@@ -204,9 +204,9 @@ manages kernels via a custom resource definition (CRD). For example, `SparkAppli
many components of a Spark-on-Kubernetes application.

`CustomResourceProvisioner` could be considered a _virtual abstract base class_ that provides the necessary method overrides of
`KubernetesProvisioner` to manage the lifecycle of CRDs. If you are going to extend `CustomResourceProvisioner`,
all that should be necessary is to override the custom-resource-related attributes (i.e., `group`, `version`, `plural`, and
`object_kind`) that define the CRD; its implementation should cover the rest. Note that `object_kind` is
an internal attribute that Gateway Provisioners uses, while the other attributes are associated with the Kubernetes CRD
object definition.

@@ -218,7 +218,7 @@ to function. In addition, the class itself doesn't define any abstract methods

#### `SparkOperatorProvisioner`

A great example of a `CustomResourceProvisioner` is `SparkOperatorProvisioner`. As described in the previous section,
its implementation consists of overriding the attributes `group` (i.e., `"sparkoperator.k8s.io"`), `version`
(i.e., `"v1beta2"`), `plural` (i.e., `"sparkapplications"`), and `object_kind` (i.e., `"SparkApplication"`).
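As a rough sketch, the overrides described above amount to little more than class attributes. The block below is illustrative only: the four attribute values are quoted from the text, while the class body and the `custom_resource_path` helper are assumptions and not part of the real API.

```python
# Illustrative sketch: the attribute values come from the prose above; the
# helper method is hypothetical and does not exist in gateway_provisioners.
class SparkOperatorProvisionerSketch:
    group = "sparkoperator.k8s.io"
    version = "v1beta2"
    plural = "sparkapplications"
    object_kind = "SparkApplication"  # internal to Gateway Provisioners

    def custom_resource_path(self, namespace):
        # Shows how the Kubernetes-facing attributes compose into an API path.
        return f"/apis/{self.group}/{self.version}/namespaces/{namespace}/{self.plural}"
```

Subclassing and swapping the four attributes is the entire extension surface described above.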

@@ -231,7 +231,7 @@

Gateway Provisioners provides an implementation of a kernel provisioner that communicates with the Docker Swarm resource
manager via the Docker API. When used, the kernels are launched as swarm services and can reside anywhere in the
managed cluster. The core of a Docker Swarm service is a container, so `DockerSwarmProvisioner` derives from
`ContainerProvisionerBase`. To leverage kernels configured in this manner, the host application can be deployed either
as a Docker Swarm _service_ or a traditional Docker container.

4 changes: 2 additions & 2 deletions docs/source/developers/custom-images.md
@@ -108,8 +108,8 @@ the appropriate directory in place. For the purposes of this discussion, we'll a
directory, `/usr/local/share/jupyter/kernels`, is externally mounted.

Depending on the environment, Kubernetes or Docker, you can use `jupyter-k8s-spec` or `jupyter-docker-spec`,
respectively. Invoke the appropriate script, adding the `--image-name` parameter to identify the name of your
custom kernel image. For example, if your custom image is named `acme/data-sci-py:2.0` and you are targeting
Kubernetes, issue:

```dockerfile
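# Hedged example: the collapsed diff hides the actual snippet, so the
# subcommand and flags below are assumptions based on the surrounding text --
# consult `jupyter k8s-spec --help` for the authoritative options.
RUN jupyter k8s-spec install --image-name acme/data-sci-py:2.0
```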
2 changes: 1 addition & 1 deletion docs/source/operators/config-file.md
@@ -21,7 +21,7 @@ c.DistributedProvisioner.remote_hosts = ["localhost"]

## Provisioner-specific Configuration Options

A complete set of configuration options available for each Gateway Provisioner follows. Where applicable, the
configurable option's default value is also provided.

### `KubernetesProvisioner`
10 changes: 5 additions & 5 deletions docs/source/operators/deploy-distributed.md
@@ -18,7 +18,7 @@ Steps required to complete deployment on a distributed cluster are:

## Prerequisites

The distributed capabilities of the `DistributedProvisioner` utilize SSH. As a result, you must ensure appropriate
The distributed capabilities of the `DistributedProvisioner` utilize SSH. As a result, you must ensure appropriate
password-less functionality is in place.

If you want to use Spark in "client mode", you'll want to ensure the `SPARK_HOME` environment variable is properly
@@ -92,9 +92,9 @@ where each provides the following function:
its display name (`display_name`) and language (`language`), as
well as its kernel provisioner's configuration (`metadata.kernel_provisioner`) - which, in this case, will reflect the
`DistributedProvisioner`.
- `logo-64x64.png` - the icon resource corresponding to this kernel specification. Icon resource files must start
with the `logo-` prefix to be included in the kernel specification.
- `scripts/launch_ipykernel.py` - the "launcher" for the IPyKernel kernel (or subclasses thereof). This file is typically
implemented in the language of the kernel and is responsible for creating the local connection information, asynchronously
starting a SparkContext (if asked), spawning a listener process to receive interrupts and shutdown requests, and starting
the IPyKernel itself.
@@ -300,7 +300,7 @@ To see all available configurables, use `--help-all`.
## Specifying a load-balancing algorithm

The `DistributedProvisioner` provides two ways to configure how kernels are distributed across
the configured set of hosts: round-robin or least-connection. This configurable option is a _host application_
setting and is not available to be overridden on a per-kernel basis.
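The two algorithms can be sketched as follows (illustrative only; the actual `DistributedProvisioner` implementation differs):

```python
# Illustrative sketches of the two load-balancing strategies named above.
from itertools import cycle


def round_robin(hosts):
    """Yield hosts in order, wrapping back to the first after the last."""
    return cycle(hosts)


def least_connection(hosts, active_kernels):
    """Pick the host currently running the fewest kernels."""
    return min(hosts, key=lambda host: active_kernels.get(host, 0))
```

With `round_robin(["host1", "host2"])`, successive kernels land on `host1`, `host2`, `host1`, and so on; `least_connection` instead consults a per-host count of active kernels.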

### Round-robin
@@ -362,7 +362,7 @@ YARN client mode kernel specifications can be considered _distributed mode kerne
happen to use `spark-submit` from different nodes in the cluster but use the
`DistributedProvisioner` to manage their lifecycle.

These kernel specifications are generated using the `--spark` command-line option as noted above. When provided,
a kernel specification similar to the following is produced:

```json
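{
  "display_name": "Spark Python (YARN Client)",
  "language": "python",
  "metadata": {
    "kernel_provisioner": {
      "provisioner_name": "distributed-provisioner",
      "config": {}
    }
  }
}
```

The stanza above is a hedged reconstruction: the collapsed diff hides the generated file, so the display name, provisioner name, and empty `config` are assumptions based on the fields described earlier, not the file's exact contents.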
16 changes: 8 additions & 8 deletions docs/source/operators/deploy-docker.md
@@ -16,10 +16,10 @@ for examples of how to configure and deploy such applications.

## Generating Kernel Specifications

Kernelspec generation for Docker and Docker Swarm deployments is performed using the `jupyter-docker-spec` command. Because
the host application will also reside within a Docker image, the commands are usually placed into a Dockerfile
that _extends_ an existing image. However, some may choose to `docker exec` into a running container, perform and test
the necessary configuration, then use `docker commit` to generate a new image. That said, the following will assume a
Dockerfile approach.

Expand Down Expand Up @@ -57,10 +57,10 @@ where each provides the following function:
its display name (`display_name`) and language (`language`), as
well as its kernel provisioner's configuration (`metadata.kernel_provisioner`) - which, in this case, will reflect the
`DockerProvisioner`.
- `logo-64x64.png` - the icon resource corresponding to this kernel specification. Icon resource files must start
with the `logo-` prefix to be included in the kernel specification.
- `scripts/launch_docker.py` - the "launcher" for the kernel image identified by the
`metadata.kernel_provisioner.config.image_name` entry. This file can be modified to include instructions for
volume mounts, etc., and is compatible with both Docker and Docker Swarm - performing the applicable instructions for
each environment.

@@ -71,7 +71,7 @@

### Generating Multiple Specifications

It's common practice to support multiple languages or use different images for kernels of the same language. For each
of those differences, a separate installation command should be provided:

```dockerfile
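# Hedged example: one install command per kernel spec. The collapsed diff hides
# the actual snippet, so the subcommand, flags, names, and images below are
# assumptions based on the surrounding text.
RUN jupyter docker-spec install --image-name acme/data-sci-py:2.0 --kernel-name data_sci_py
RUN jupyter docker-spec install --image-name acme/data-sci-r:2.0 --kernel-name data_sci_r
```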
@@ -102,9 +102,9 @@ Items worth noting:
## Other Configuration Items

There are some environment variables that can be set in the host application's environment that affect how Gateway
Provisioners operate within a Docker and Docker Swarm environment. For example, `GP_MIRROR_WORKING_DIRS` can be set
to `True`, instructing Gateway Provisioners to set the launched container's working directory to the value of
`KERNEL_WORKING_DIR`. When this environment variable is enabled, it usually implies that volume mounts are in play
such that the per-user volumes are then available to the launched container.
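The mirroring behavior just described reduces to a small conditional; a hedged sketch (not the actual implementation):

```python
# Illustrative sketch of the GP_MIRROR_WORKING_DIRS behavior described above.
def resolve_container_working_dir(env):
    """Return the working directory to apply to the container, or None."""
    if env.get("GP_MIRROR_WORKING_DIRS", "False").lower() == "true":
        return env.get("KERNEL_WORKING_DIR")
    return None
```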

Other [environment variables](config-add-env.md#additional-environment-variables) applicable to Docker/Docker Swarm