Catalog & broker in a Vagrant VM
Based on Running the catalog with the template broker, which is related to wip catalog & template broker.
This guide assumes you have previously used Vagrant via something like the contributing doc.
If you haven't recently, it's time to get a clean Fedora install:
$ cd /go/src/github.com/openshift/origin
$ vagrant box remove fedora_inst
$ vagrant up
# lots happens...
$ vagrant ssh

Inside the VM, if go is out of date, time to update:
$ go version
# go version go1.6.2 linux/amd64 <-- if this is less than 1.7, and it is, gotta update
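If you want to script the check, here is a minimal sketch using `sort -V` to compare versions (the sample value mirrors the output above; on the VM you would parse it from `go version`):

```shell
# decide whether the installed go is older than 1.7
installed="go1.6.2"            # on the VM: installed=$(go version | awk '{print $3}')
ver="${installed#go}"          # strip the "go" prefix -> 1.6.2
oldest=$(printf '%s\n' "$ver" 1.7 | sort -V | head -n1)
if [ "$oldest" != "1.7" ]; then echo "update needed"; fi
```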
# download a new tarball; the URL for your platform is listed here: https://golang.org/doc/install?download=go1.8.linux-amd64.tar.gz
$ wget https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
# untar it, may need `sudo`
$ tar -C /usr/local -xzf go1.8.linux-amd64.tar.gz
# delete your old go, may need `sudo`
$ which go
$ rm /bin/go
#
# then update the $PATH in your base profile
$ vim ~/.bash_profile
# add this line:
# PATH=$PATH:/usr/local/go/bin
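If you prefer to script this step, a sketch that appends the line only once (demonstrated on a sample file; on the VM the target is ~/.bash_profile):

```shell
PROFILE=/tmp/sample_bash_profile   # on the VM: PROFILE=~/.bash_profile
rm -f "$PROFILE"                   # start clean for the demo
touch "$PROFILE"
# append only if the exact line is not already present, so re-runs stay idempotent
grep -qxF 'PATH=$PATH:/usr/local/go/bin' "$PROFILE" || echo 'PATH=$PATH:/usr/local/go/bin' >> "$PROFILE"
grep -c '/usr/local/go/bin' "$PROFILE"   # -> 1, even if you run this block twice
```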
# exit & ssh again to reload
$ exit
$ vagrant ssh
# test if the new go shows up:
$ which go
$ go version

Even with the above go in your $PATH, you'll have to fiddle with make release a bit.
$ cd /data/src/github.com/openshift/origin
# sudo won't use your .bash_profile
$ sudo GOROOT=/usr/local/go/ PATH=$PATH:/usr/local/go/bin make release
#
# Alt:
# if you want to rebuild oc only (much faster):
$ sudo GOROOT=/usr/local/go/ PATH=$PATH:/usr/local/go/bin hack/build-go.sh cmd/oc
$ docker pull openshift/origin:latest

# cluster up, creating a new config
$ mkdir ~/openshift.local.{config,volumes,etcd}
# we don't want the existing $KUBECONFIG, as we will change the config file location:
$ unset KUBECONFIG
$ oc cluster up --version=latest --host-config-dir=$HOME/openshift.local.config --host-data-dir=$HOME/openshift.local.etcd --host-pv-dir=$HOME/openshift.local.volumes --routing-suffix=10.245.2.2.nip.io
# then cluster down
$ oc cluster down
# edit the config file, updating enableTemplateServiceBroker from false to true:
$ vim ~/openshift.local.config/master/master-config.yaml
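Alternatively, a non-interactive sketch of the same edit with sed, assuming the key appears literally as `enableTemplateServiceBroker: false` (demonstrated on a sample file; on the VM, point it at ~/openshift.local.config/master/master-config.yaml):

```shell
# sample stand-in for master-config.yaml
printf 'enableTemplateServiceBroker: false\n' > /tmp/master-config-sample.yaml
# flip false -> true in place
sed -i 's/enableTemplateServiceBroker: false/enableTemplateServiceBroker: true/' /tmp/master-config-sample.yaml
grep enableTemplateServiceBroker /tmp/master-config-sample.yaml   # -> enableTemplateServiceBroker: true
```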
# then, cluster it back up
$ oc cluster up --version=latest --use-existing-config --host-config-dir=$HOME/openshift.local.config --host-data-dir=$HOME/openshift.local.etcd --host-pv-dir=$HOME/openshift.local.volumes --routing-suffix=10.245.2.2.nip.io
#
# NOTE:
# the flag --routing-suffix=10.245.2.2.nip.io is necessary, but you will later
# have to check whether the catalog picks it up. if it does not, you will have
# to edit your local /etc/hosts file to work around this (instructions below)
#
# NOTE:
# --version=latest is likely sufficient, unless you really need to pin to a particular commit, then:
# --version="$(git log -1 --pretty=%h)"

Allow system:unauthenticated access to the template broker. Yup. Bad.
# not in a real env
$ oc login -u system:admin
$ oc adm policy reconcile-cluster-roles --confirm
$ oc adm policy add-cluster-role-to-group templateservicebroker-client system:unauthenticated

$ oc login -u developer -p developer
$ oc new-project service-catalog

Use this file to deploy the service catalog, either via the web console, or with oc create -f against the raw version.
# NOTE: be sure to use a project named `service-catalog`, the api server name will be at
# apiserver-<project-name>.nip.io
$ oc process -f https://gist.githubusercontent.com/jwforres/78d8c2a939e5e69e31ddd32471ce79fd/raw/518ffcd3139671f04aad9f21342f845c992f5543/gistfile1.txt | oc create -f -

Now, double check that things are running.
- visit the web console
- if it's not, ensure your console config is set up correctly. Here is an example with some comments:
// from the usual directory; HOWEVER, you will need to tinker with your
// config.local.js. I find it easier to swap your host like this:
//var hostIP = '127.0.0.1'; // localhost
var hostIP = '10.245.2.2'; // => if Vagrant
// var hostIP = '172.30.0.0/16' // => if Docker
var hostPort = ':8443';
var fullHost = hostIP + hostPort;
window.OPENSHIFT_CONFIG = {
apis: {
hostPort: fullHost,
prefix: "/apis"
},
api: {
openshift: {
hostPort: fullHost,
prefix: "/oapi"
},
k8s: {
hostPort: fullHost,
prefix: "/api"
}
},
auth: {
oauth_authorize_uri: 'https://' + fullHost + "/oauth/authorize",
oauth_redirect_base: "https://localhost:9000/dev-console",
oauth_client_id: "openshift-web-console",
logout_uri: ""
},
loggingURL: "",
metricsURL: "https://metrics-openshift-infra." + hostIP + ".nip.io/hawkular/metrics",
// The additional servers stanza is essential.
//
additionalServers: [{
protocol: "https",
hostPort: "apiserver-service-catalog." + hostIP + ".nip.io",
// or this, if you can't get --routing-suffix=10.245.2.2.nip.io to work properly:
// if you use this host port, then you need to add a line or two to the /etc/hosts
// file on your local machine (not the Vagrant machine); instructions are below.
// hostPort: "apiserver-service-catalog." + '127.0.0.1' + ".nip.io",
prefix: "/apis"
}]
};
window.OPENSHIFT_VERSION = {
openshift: "dev-mode",
kubernetes: "dev-mode"
};

Once the console is OK, check that you can hit the catalog directly: https://apiserver-service-catalog.10.245.2.2.nip.io/.
If you can't hit the catalog properly, or if it is listed with a 127.0.0.1 IP address for the route, then edit your /etc/hosts file:
# /etc/hosts
# service catalog stuff
10.245.2.2 apiserver-service-catalog.10.245.2.2.nip.io
10.245.2.2 apiserver-service-catalog.127.0.0.1.nip.io

oc login -u system:admin
oc adm policy add-cluster-role-to-user admin system:serviceaccount:service-catalog:default

The 3 components for the broker are in YAML files in this gist.
# create a project for the broker
$ oc login -u developer -p developer
$ oc new-project ups-broker
# deploy the ups broker, 3 parts, in this order.
# the first two can be done from the CLI,
# the last part should be done from the web console.
$ oc create -f https://gist.githubusercontent.com/spadgett/80f844c380b94e0adacfc614013bc774/raw/9e88765d93f5d435c1abc9fad2cda6dea4562ed9/ups-broker-deployment.yaml -n ups-broker
# deployment "ups-broker" created
$ oc create -f https://gist.githubusercontent.com/spadgett/80f844c380b94e0adacfc614013bc774/raw/9e88765d93f5d435c1abc9fad2cda6dea4562ed9/ups-broker-service.yaml -n ups-broker

Finally, in any project in the console (the broker is cluster scoped, so it doesn't matter which), navigate to the create "from YAML" tab and paste in the following (or get it from this gist's ups-broker-broker.yaml):
# Step 3
# Run the above 2 commands first
# It is recommended to get this YAML from the actual GIST in case something is updated
# Finally, paste it into the YAML tab of the web console; do not `oc create -f` from the CLI,
# as openshift currently does not recognize the kind Broker. :/
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Broker
metadata:
  name: ups-broker
spec:
  url: http://ups-broker.ups-broker.svc.cluster.local

Then, navigate to the service-catalog project in the console. Go to the controller manager (deployment), check its pod, and look for these lines in the logs:
1 controller.go:189] Successfully converted catalog payload from Broker ups-broker to service-catalog API
1 controller.go:192] Reconciling serviceClass user-provided-service (broker ups-broker)
1 controller.go:199] Reconciled serviceClass user-provided-service (broker ups-broker)

Fork & clone the catalog to a directory on your local machine.
/origin-web-console
/origin-web-catalog
CD into the catalog and follow the quick start from the README. It is essentially this:
# in vagrant, patch the oauth client:
$ oc login -u system:admin
$ oc patch oauthclient/openshift-web-console -p '{"redirectURIs":["https://localhost:9001/"]}'
# install the dependencies with npm and bower
$ npm install
$ bower install
# build the library
$ npm run build
# run the server
$ npm run start

You will want a config.local.js similar to the one used for the openshift-web-console:
// ORIGIN-WEB-CATALOG
//
//var hostIP = '127.0.0.1'; // localhost
//var hostIP = 'localhost'; // localhost
// var hostIP = '10.0.2.2';
var hostIP = '10.245.2.2'; // => if Vagrant
// var hostIP = '172.30.0.0/16' // => if Docker
var hostPort = ':8443';
var fullHost = hostIP + hostPort;
window.DEV_SERVER_PORT = 9001;
window.OPENSHIFT_CONFIG = {
apis: {
hostPort: fullHost,
prefix: "/apis"
},
api: {
openshift: {
hostPort: fullHost,
prefix: "/oapi"
},
k8s: {
hostPort: fullHost,
prefix: "/api"
}
},
additionalServers: [{
protocol: "https",
// when in Vagrant, hostIP may not work if the --routing-suffix=10.245.2.2.nip.io
// didn't take effect on `oc cluster up`. In that case, you will prob have to use
// 127.0.0.1 and update your local machine `/etc/hosts` file
hostPort: "apiserver-service-catalog." + '127.0.0.1' + ".nip.io",
prefix: "/apis"
}],
auth: {
oauth_authorize_uri: 'https://' + fullHost + "/oauth/authorize",
oauth_redirect_base: "https://localhost:" + window.DEV_SERVER_PORT,
oauth_client_id: "openshift-web-console",
logout_uri: ""
},
loggingURL: "",
metricsURL: ""
};
window.OPENSHIFT_VERSION = {
openshift: "dev-mode",
kubernetes: "dev-mode"
};

Visit the catalog URLs to accept the certs in your browser, or you will not be able to load the console or the catalog.
# example path:
# swap the IP address portion based on what you have set for your host
https://apiserver-service-catalog.127.0.0.1.nip.io/
# while you are here, visit the metrics url as well
https://metrics-openshift-infra.10.245.2.2.nip.io/hawkular/metrics
In order to get bindings to work properly, you will have to update the permissions on the default service account in your service-catalog project. This must be done from within the project you are working in (you are granting permissions across projects).
- Navigate to `<your-current-project>/membership`
- Click the `Service Accounts` tab and select `Edit`
- Add a new row. Choose namespace `service-catalog` and account `default`, then choose `admin` from the role picker. Add.
Alternatively, you can do this from the CLI:
# system:serviceaccount:service-catalog:default becomes system:serviceaccount:<your-catalog-project>:default
# if the project has been named something other than `service-catalog`
$ oc policy add-role-to-user admin system:serviceaccount:service-catalog:default -n <your-project-name>
- If the catalog or console do not load, you may have to comment out the `additionalServers` block in your config, load the application, then visit the `hostPort` of the server block manually in your browser over `https`. Accept the cert, then uncomment the `additionalServers` block and reload the page.
- If things still do not load, check your `config.js` or `config.local.js` and make sure the `auth.oauth_redirect_base` is correct. The web console adds `/dev-console`, but the catalog does not. A copy-paste error can cause problems here.
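That copy-paste mistake is easy to spot mechanically. A sketch of the check, using sample values that mirror the two configs above:

```shell
console_redirect="https://localhost:9000/dev-console"   # web console: /dev-console is expected
catalog_redirect="https://localhost:9001"               # catalog: must NOT end in /dev-console
check="ok"
case "$catalog_redirect" in */dev-console) check="copy-paste error: drop /dev-console" ;; esac
echo "$check"   # -> ok
```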
If your catalog won't run, check the logs on the pod. If you see certificate errors, you may need to swap the image version. Edit the YAML from the dropdown, then update:
# from this:
image: 'quay.io/kubernetes-service-catalog/apiserver:canary'
# to this:
image: 'quay.io/kubernetes-service-catalog/apiserver:v0.0.3' # or a more current version that isn't broken!

If metrics do not start, your images are likely out of date and not updating on their own. Do the following:
# add the `admin` role to your current user (`developer` here)
$ oc login -u system:admin
$ oc policy add-role-to-user admin developer -n openshift-infra
# this will let you see if there are issues in the openshift-infra project, which owns metrics.
# likely you just need to pull new images, which should resolve things:
$ docker pull openshift/origin-metrics-hawkular-metrics

Currently, you will have to have an instance of the origin-web-catalog running, which should see the same list of projects as your origin-web-console. It should also see the ServiceClasses made available through the catalog. Find the user-provided-service and provision it within the desired project.
Once you have created an instance (via the origin-web-catalog) in your project, you can create a binding in your project by using the import YAML tab with something like the following:
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Binding
metadata:
  name: some-binding-7 # give it a unique name
spec:
  instanceRef:
    name: user-provided-service-mgg09 # this should match an actual instance name
  secretName: some-binding-7-secret # this names the created secret; matching metadata.name above is recommended for now
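Once the binding reconciles, the broker's credentials land in the secret named by `secretName`. Secret values come back base64-encoded; a sketch of inspecting them (the `oc` line is shown as a comment since it needs a live cluster, and the sample value here is made up):

```shell
# on the cluster:
#   oc get secret some-binding-7-secret -o jsonpath='{.data}'
# decode any value from that output like this:
echo 'c29tZS1wYXNzd29yZA==' | base64 -d   # -> some-password
```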
This gist uses this method:
OS_DEBUG=true OS_BUILD_ENV_PRESERVE=_output/local/bin hack/env OS_BUILD_PLATFORMS=darwin/amd64 hack/build-go.sh cmd/oc
Note that this builds oc in a Docker container; it is unrelated to Vagrant.
You may need to edit the YAML of the two deployments in the catalog project. They likely reference the :canary tag, which tends to be broken; as of this writing, v0.0.3 seems pretty stable. Update the YAML, and optionally pull the images ahead of time:
docker pull quay.io/kubernetes-service-catalog/apiserver:v0.0.3 # :canary tag will prob not work
docker pull quay.io/kubernetes-service-catalog/controller-manager:v0.0.3 # :canary tag will prob not work