@bdrian (Contributor) commented Oct 28, 2019

This PR adds support for running BinderHub behind a proxy, at least once complementary changes are made in repo2docker. I'm not sure if this is the prettiest way to go; feedback is welcome.

See #939

@bdrian (Contributor, Author) commented Oct 28, 2019

Something broke :| https://travis-ci.org/jupyterhub/binderhub/jobs/604006721#L2830 I'll look into that tomorrow.

@bdrian force-pushed the pr/proxy-support branch 2 times, most recently from 8171749 to 23ef024 on October 29, 2019 11:15
This class can be used as a drop-in replacement for tornado's
httpclient.AsyncHTTPClient and will automatically inject the correct
proxy as specified in the environment variables.
Calls to JupyterHub are ignored for now; in the unlikely event that
somebody needs to call JupyterHub through a proxy we can still change this later.
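
(For illustration only: a minimal sketch of what such a drop-in wrapper might look like. This is not the code from this commit; the class name and the port fallback are assumptions, and tornado only honors proxy_host/proxy_port with its curl-based client.)

import os
from urllib.parse import urlparse

from tornado.httpclient import AsyncHTTPClient, HTTPRequest

# proxy_host/proxy_port are only honored by the curl-based implementation
AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")


class ProxiedAsyncHTTPClient:
    """Hypothetical wrapper: fetches through the proxy named in http(s)_proxy, if any."""

    def __init__(self):
        self._client = AsyncHTTPClient()

    async def fetch(self, request, **kwargs):
        if isinstance(request, str):
            request = HTTPRequest(request, **kwargs)
        proxy = os.environ.get("https_proxy") or os.environ.get("http_proxy")
        if proxy and request.proxy_host is None:
            parsed = urlparse(proxy)
            request.proxy_host = parsed.hostname
            request.proxy_port = parsed.port or 3128  # assumed fallback port
        return await self._client.fetch(request)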
@bdrian (Contributor, Author) commented Oct 29, 2019

The problem above is fixed and Travis should be able to build and test successfully; however, the pipeline is randomly failing for reasons unrelated to this PR (one, two).

@meeseeksmachine

This pull request has been mentioned on Jupyter Community Forum. There might be relevant details there:

https://discourse.jupyter.org/t/binder-behind-outbound-proxy/7428/2

@g-braeunlich (Contributor)

Hey guys,
I just had a look at this PR. Does the commit to change the build pod work for you?
In my case, to get it working, I had to inject a /root/.docker/config.json file containing the proxy vars into the build pod.
Just setting env vars in the build pod was not enough, as the Docker client inside the pod did not pick up the env vars.
See: https://medium.com/@bennyh/docker-and-proxy-88148a3f35f7
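
(For reference, this is the shape the Docker client expects in ~/.docker/config.json; the proxy addresses below are placeholders:)

{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1,.svc.cluster.local"
    }
  }
}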

@g-braeunlich (Contributor)

These are my current changes (on top of current master), which made it work for me:

diff --git a/binderhub/build.py b/binderhub/build.py
index ae16d61..aa8a0c6 100644
--- a/binderhub/build.py
+++ b/binderhub/build.py
@@ -7,6 +7,7 @@ import datetime
 import json
 import threading
 from urllib.parse import urlparse
+import os
 
 from kubernetes import client, watch
 from tornado.ioloop import IOLoop
@@ -256,6 +257,9 @@ class Build:
         if self.git_credentials:
             env.append(client.V1EnvVar(name='GIT_CREDENTIAL_ENV', value=self.git_credentials))
 
+        # If there are proxy_* env variables, inject a /root/.docker/config.json:
+        self.mount_docker_runtime_config(volumes, volume_mounts)
+        
         self.pod = client.V1Pod(
             metadata=client.V1ObjectMeta(
                 name=self.name,
@@ -404,6 +408,45 @@ class Build:
         """Stop watching a build"""
         self.stop_event.set()
 
+    def mount_docker_runtime_config(self, volumes, volume_mounts):
+        proxy_vars = {
+            var: os.environ.get(var, os.environ.get(var.upper()))
+            for var in ("http_proxy", "https_proxy", "no_proxy")
+        }
+
+        def camel_casify(var):
+            return var.replace("_p", "P")
+
+        # Filter None and map to camel case:
+        proxy_vars = {camel_casify(var): val for var, val in proxy_vars.items() if val}
+        if not proxy_vars:
+            return
+
+        config_map = client.V1ConfigMap(
+            api_version="v1",
+            kind="ConfigMap",
+            metadata=client.V1ObjectMeta(
+                name="https-build-docker-runtime-config",
+                labels={"component": self._component_label},
+            ),
+            data={"config.json": json.dumps({"proxies": {"default": proxy_vars}}, indent=2)},
+        )
+        try:
+            self.api.create_namespaced_config_map(self.namespace, config_map)
+        except client.rest.ApiException as e:
+            if e.status != 409: # 409 is ok: already created
+                raise
+
+        volume = client.V1Volume(
+            name="https-build-vol-docker-runtime-config",
+            config_map=client.V1ConfigMapVolumeSource(name=config_map.metadata.name),
+        )
+        volumes.append(volume)
+        volume_mounts.append(
+            client.V1VolumeMount(mount_path="/root/.docker/", name=volume.name)
+        )
+    
+
 class FakeBuild(Build):
     """
     Fake Building process to be able to work on the UI without a running Minikube.
diff --git a/helm-chart/binderhub/templates/rbac.yaml b/helm-chart/binderhub/templates/rbac.yaml
index 5716b5a..9a4dcd8 100644
--- a/helm-chart/binderhub/templates/rbac.yaml
+++ b/helm-chart/binderhub/templates/rbac.yaml
@@ -14,6 +14,9 @@ rules:
 - apiGroups: [""]
   resources: ["pods/log"]
   verbs: ["get"]
+- apiGroups: [""]
+  resources: ["configmaps"]
+  verbs: ["create"]
 ---
 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1

@g-braeunlich (Contributor)

Alternatively, the config_map could also be templated with helm (we would either have to introduce new proxy values or somehow extract the proxy env vars from the extraEnv values).
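
(A rough sketch of what such a helm-templated ConfigMap could look like; the names under .Values.proxy are hypothetical and do not exist in the chart today:)

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-build-docker-runtime-config
data:
  config.json: |
    {
      "proxies": {
        "default": {
          "httpProxy": {{ .Values.proxy.httpProxy | quote }},
          "httpsProxy": {{ .Values.proxy.httpsProxy | quote }},
          "noProxy": {{ .Values.proxy.noProxy | quote }}
        }
      }
    }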

@betatim (Member) commented Jan 13, 2021

I've not followed this PR but if you are interested in working on this @g-braeunlich: do you want to take over the commits in this PR and make a new PR where you add your changes on top? I think having someone who uses this feature and, as a result, is interested in pushing this over the finish line and able to test it would be great.

@g-braeunlich (Contributor)

@betatim Definitely interested, but I cannot promise that I will have time to also test the part outside the repo2docker build pod. We only connect to internal repos.

@manics (Member) commented Jan 13, 2021

Option 3 could be to add an entrypoint/command script in the repo2docker container that creates the required docker.json file inside the container (perhaps by checking for *_PROXY environment variables?) before executing repo2docker?
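
(A hypothetical sketch of such an entrypoint, not an existing repo2docker script; the file path and the exec target are assumptions:)

#!/usr/bin/env python3
# Write /root/.docker/config.json from *_PROXY env vars, then hand off to repo2docker.
import json
import os
import sys

proxies = {
    "httpProxy": os.environ.get("HTTP_PROXY") or os.environ.get("http_proxy"),
    "httpsProxy": os.environ.get("HTTPS_PROXY") or os.environ.get("https_proxy"),
    "noProxy": os.environ.get("NO_PROXY") or os.environ.get("no_proxy"),
}
proxies = {key: value for key, value in proxies.items() if value}

if proxies:
    os.makedirs("/root/.docker", exist_ok=True)
    with open("/root/.docker/config.json", "w") as f:
        json.dump({"proxies": {"default": proxies}}, f, indent=2)

# Replace this process with repo2docker, forwarding the container arguments.
os.execvp("repo2docker", ["repo2docker"] + sys.argv[1:])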

@g-braeunlich (Contributor)

That makes a lot of sense. I will file an MR there.
Additionally, this would require making the env vars for the build pod configurable, or passing the env vars from the binder pod to the build pod.

@manics (Member) commented Jan 13, 2021

Sounds good! I'm not 100% sure it'll work though. If it does then one advantage of doing it in the repo2docker Dockerfile is it should "just work" for anyone using that container outside of BinderHub.

@betatim (Member) commented Jan 13, 2021

I cannot promise that I will have time to also test the part outside the repo2docker build pod.

That is fine. I think as long as we are clear about what we are doing, can do, can test/not test, etc., we can keep moving this forward with the resources/time each of us has. Basically, aim for adding things that make the situation better but resist adding things where we aren't sure/can't test if they will make things better. No code is better than broken code :D

@g-braeunlich (Contributor)

Sounds good! I'm not 100% sure it'll work though. If it does then one advantage of doing it in the repo2docker Dockerfile is it should "just work" for anyone using that container outside of BinderHub.

While preparing the PR, I had the following thoughts:
What if there are users who have their own use cases and mount a custom /root/.docker/config.json?
The entrypoint would then overwrite their file.
Or, if the file is mounted read-only, it would break their workflow.

@manics (Member) commented Jan 13, 2021

Good point. Maybe create the file if a *_PROXY environment variable is set, and otherwise do nothing?

@g-braeunlich (Contributor)

My current suggestion would be:
If no *_PROXY var is set OR the file already exists -> do nothing.
If the file already exists and *_PROXY vars are set, additionally display a warning message.
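
(A small sketch of that guard logic, assuming the entrypoint approach discussed above; the path and messages are illustrative only:)

import json
import os
import sys

config_path = "/root/.docker/config.json"
env_to_key = {"http_proxy": "httpProxy", "https_proxy": "httpsProxy", "no_proxy": "noProxy"}
proxies = {
    key: os.environ.get(var) or os.environ.get(var.upper())
    for var, key in env_to_key.items()
}
proxies = {key: value for key, value in proxies.items() if value}

if not proxies:
    pass  # no *_PROXY vars set: leave everything untouched
elif os.path.exists(config_path):
    # Do not clobber a user-provided (possibly read-only) config; just warn.
    print(f"WARNING: proxy variables are set but {config_path} already exists; "
          "leaving it unchanged", file=sys.stderr)
else:
    os.makedirs(os.path.dirname(config_path), exist_ok=True)
    with open(config_path, "w") as f:
        json.dump({"proxies": {"default": proxies}}, f, indent=2)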

@g-braeunlich (Contributor)

This is the PR: jupyterhub/repo2docker#1003

@manics closed this Aug 15, 2022