---

copyright:

lastupdated: "2017-08-30"

---

{:new_window: target="_blank"}
{:shortdesc: .shortdesc}
{:screen: .screen}
{:pre: .pre}
{:table: .aria-labeledby="caption"}
{:codeblock: .codeblock}
{:tip: .tip}
{:download: .download}
# Overview of {{site.data.keyword.containershort_notm}}
{: #cs_ov}
{{site.data.keyword.containershort}} combines Docker and Kubernetes to deliver powerful tools, an intuitive user experience, and built-in security and isolation, so that you can automate the deployment, operation, scaling, and monitoring of containerized apps over a cluster of independent compute hosts by using the Kubernetes API. {:shortdesc}
## Kubernetes basics
{: #kubernetes_basics}
Kubernetes was developed by Google as part of the Borg project and handed off to the open source community in 2014. Kubernetes combines more than 15 years of Google research in running a containerized infrastructure with production workloads, open source contributions, and Docker container management tools to provide an isolated and secure app platform that is portable, extensible, and self-healing in the case of failures. {:shortdesc}
Learn about the basics of how Kubernetes works with a little terminology.
- Cluster
- A Kubernetes cluster consists of one or more virtual machines that are called worker nodes. Every worker node represents a compute host where you can deploy, run, and manage containerized apps. Worker nodes are managed by a Kubernetes master that centrally controls and monitors all Kubernetes resources in the cluster. When you deploy a containerized app, the Kubernetes master decides where to deploy the app, taking into account the deployment requirements and available capacity in the cluster.
- Pod
- Every containerized app that is deployed into a Kubernetes cluster is deployed, run, and managed by a pod. Pods represent the smallest deployable units in a Kubernetes cluster and are used to group containers that must be treated as a single unit. In most cases, a container is deployed to its own pod. However, an app might require a container and other helper containers to be deployed into one pod so that those containers can be addressed by using the same private IP address.
- Deployment
- A deployment is a Kubernetes resource where you specify your containers and other Kubernetes resources that are required to run your app, such as persistent storage, services, or annotations. Deployments are documented in a Kubernetes deployment script. When you run a deployment, the Kubernetes master deploys the specified containers into pods taking into account the capacity that is available on the worker nodes of the cluster. Other Kubernetes resources are created and configured as specified in the deployment script.
You can use a deployment to define update strategies for your app, including the number of pods that you want to add during a rolling update and the number of pods that can be unavailable at a time. When you perform a rolling update, the deployment checks whether the revision is working and stops the rollout when failures are detected.
- Service
- A Kubernetes service groups a set of pods and provides network connection to these pods for other services in the cluster without exposing the actual private IP address of each pod. You can use a service to make your app available within your cluster or to the public internet.
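The terms above come together in a Kubernetes deployment script. The following is a minimal sketch, not an official sample: the app name, image path, replica counts, and port are all hypothetical examples, and the API version reflects Kubernetes releases of this document's era.

```yaml
# Deployment: runs two replicas of a containerized app in pods,
# with a rolling-update strategy for revisions.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-app                  # hypothetical app name
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                  # pods added during a rolling update
      maxUnavailable: 1            # pods that can be unavailable at a time
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        # hypothetical image in a private registry
        image: registry.ng.bluemix.net/<my_namespace>/hello-app:1.0
        ports:
        - containerPort: 8080
---
# Service: groups the pods by label and exposes them on a port of each
# worker node without revealing the private IP address of each pod.
apiVersion: v1
kind: Service
metadata:
  name: hello-app-service
spec:
  type: NodePort
  selector:
    app: hello-app
  ports:
  - port: 8080
```

You might run such a deployment with a command like `kubectl apply -f hello-app.yaml`; the Kubernetes master then schedules the pods onto worker nodes with available capacity.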
To learn more about Kubernetes terminology, try the Kubernetes Basics tutorial.
## Benefits of using clusters with {{site.data.keyword.containerlong_notm}}
{: #cs_ov_benefits}
Each cluster is deployed on shared or dedicated virtual machines that provide native Kubernetes and {{site.data.keyword.IBM_notm}} added capabilities. {:shortdesc}
| Benefit |
|---|
| Single-tenant Kubernetes clusters with compute, network, and storage infrastructure isolation |
| Image security compliance with Vulnerability Advisor |
| Automatic scaling of apps |
| Continuous monitoring of the cluster health |
| Automatic recovery of unhealthy containers |
| Service discovery and service management |
| Secure exposure of services to the public |
| {{site.data.keyword.Bluemix_notm}} service integration |
{: caption="Table 1. Benefits of using clusters with {{site.data.keyword.containerlong_notm}}" caption-side="top"}
## {{site.data.keyword.Bluemix_notm}} cloud environments
{: #cs_ov_environments}
You can choose the {{site.data.keyword.Bluemix_notm}} cloud environment on which to deploy clusters and containers. {:shortdesc}
### {{site.data.keyword.Bluemix_notm}} Public {: #public_environment}
Deploy clusters into the public cloud environment (https://console.bluemix.net) and connect to any service in the {{site.data.keyword.Bluemix_notm}} catalog.
With clusters in {{site.data.keyword.Bluemix_notm}} Public, you can choose the level of hardware isolation for the worker nodes in your cluster. Use dedicated hardware so that the available physical resources are dedicated to your cluster only, or use shared hardware to allow physical resources to be shared with clusters from other {{site.data.keyword.IBM_notm}} customers. You might choose a dedicated cluster in the {{site.data.keyword.Bluemix_notm}} Public environment when you want isolation for your cluster, but you do not require such isolation for the other {{site.data.keyword.Bluemix_notm}} services that you use.
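From the CLI, the hardware isolation is selected when the cluster is created. A hedged sketch, assuming the `bx cs` plug-in of this document's era; the cluster name, location, machine type, and VLAN IDs are hypothetical examples:

```
# Create a standard cluster whose physical resources are dedicated
# to your account only (use --hardware shared for shared hardware).
bx cs cluster-create --name my-cluster --location dal10 --workers 2 \
  --machine-type u1c.2x4 --hardware dedicated \
  --public-vlan <public_vlan_id> --private-vlan <private_vlan_id>
```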
Click one of the following options to get started:
### {{site.data.keyword.Bluemix_notm}} Dedicated (Closed Beta)
{: #dedicated_environment}
Deploy clusters (Closed Beta) or single and scalable containers in a dedicated cloud environment (https://<my-dedicated-cloud-instance>.bluemix.net) and connect with the preselected {{site.data.keyword.Bluemix_notm}} services that are also running there.
Clusters with {{site.data.keyword.Bluemix_notm}} Dedicated are equivalent to clusters that are created with dedicated hardware in {{site.data.keyword.Bluemix_notm}} Public. Available physical resources are dedicated to your cluster only and are not shared with clusters from other {{site.data.keyword.IBM_notm}} customers. For both Public and Dedicated, the public API endpoint is used to create clusters. However, with {{site.data.keyword.Bluemix_notm}} Dedicated, the most significant differences are as follows.
- {{site.data.keyword.IBM_notm}} owns and manages the {{site.data.keyword.BluSoftlayer_notm}} account that the worker nodes, VLANs, and subnets are deployed into, rather than in an account that is owned by you.
- Specifications for those VLANs and subnets are determined when the Dedicated environment is created, not when the cluster is created.
You might choose to set up a {{site.data.keyword.Bluemix_notm}} Dedicated environment when you want isolation for your cluster and you also require such isolation for the other {{site.data.keyword.Bluemix_notm}} services that you use.
Click one of the following options to get started:
### Comparison of {{site.data.keyword.Bluemix_notm}} Public and {{site.data.keyword.Bluemix_notm}} Dedicated
{: #env_differences}
| Area | {{site.data.keyword.Bluemix_notm}} Public | {{site.data.keyword.Bluemix_notm}} Dedicated (Closed Beta) |
|---|---|---|
| Cluster creation | Create a lite cluster, or specify the details for a standard cluster. | Specify the details for a standard cluster. Note: The VLANs and Hardware settings are pre-defined during the creation of the {{site.data.keyword.Bluemix_notm}} environment. |
| Cluster hardware and ownership | In standard clusters, the hardware can be shared by other {{site.data.keyword.IBM_notm}} customers or dedicated to you only. The public and private VLANs are owned and managed by you in your {{site.data.keyword.BluSoftlayer_notm}} account. | In clusters on {{site.data.keyword.Bluemix_notm}} Dedicated, the hardware is always dedicated. The public and private VLANs are owned and managed by {{site.data.keyword.IBM_notm}} for you. Location is pre-defined for the {{site.data.keyword.Bluemix_notm}} environment. |
| Service binding with a cluster | Use the `bx cs cluster-service-bind` command to bind a Kubernetes secret to the cluster. | Create a JSON key file for the service credentials, and then create a Kubernetes secret from that file to bind to the cluster. |
| Load balancer and Ingress networking | The networking resources are set up automatically during the provisioning of standard clusters. | You make the networking decisions when you create your Dedicated account. |
| NodePort networking | Expose a public port on your worker node and use the public IP address of the worker node to publicly access your service in the cluster. | All public IP addresses of the worker nodes are blocked by a firewall. However, for {{site.data.keyword.Bluemix_notm}} services that are added to the cluster, the node port can be accessed via a public IP address or a private IP address. |
| Persistent storage | Use dynamic provisioning or static provisioning of volumes. | Use dynamic provisioning of volumes. |
| Image registry URL in {{site.data.keyword.registryshort_notm}} | | |
| Accessing the registry | See the options in Using private and public image registries with {{site.data.keyword.containershort_notm}}. | |
{: caption="Table 2. Feature differences between {{site.data.keyword.Bluemix_notm}} Public and {{site.data.keyword.Bluemix_notm}} Dedicated" caption-side="top"}
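The service-binding difference described above can be sketched with two commands. The cluster, namespace, service-instance, secret, and file names are hypothetical examples:

```
# Public: bind an existing service instance to a namespace in your cluster.
# This creates a Kubernetes secret that holds the service credentials.
bx cs cluster-service-bind my-cluster default my-service-instance

# Dedicated: create the Kubernetes secret yourself from a JSON key file
# that holds the service credentials, then reference it from your app.
kubectl create secret generic my-service-secret --from-file=key.json
```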
### Setting up {{site.data.keyword.containershort_notm}} on {{site.data.keyword.Bluemix_notm}} Dedicated (Closed Beta)
{: #setup_dedicated}
Administrators must add the IBM administrator ID and users of your organization to the Dedicated environment.
Before you begin, set up a {{site.data.keyword.Bluemix_notm}} Dedicated environment.
To set up your Dedicated environment to use clusters:
1. Add the provided IBM administrator ID to the environment.
   1. Select your {{site.data.keyword.Bluemix_notm}} Dedicated account.
   2. From the menu bar, click Manage > Security > Identity and Access. The Users window displays a list of users with their email addresses and status for the selected account.
   3. Click Invite users.
   4. In Email address or existing IBMid, enter the following email address: cfsdl@us.ibm.com.
   5. In the Access section, expand Identity and Access enabled services.
   6. From the Services drop-down list, select {{site.data.keyword.containershort_notm}}.
   7. From the Roles drop-down list, select Administrator.
   8. Click Invite users.
2. Create IBMids for the end users of your {{site.data.keyword.Bluemix_notm}} account.
3. Add the users from the previous step to your {{site.data.keyword.Bluemix_notm}} account.
4. Access your {{site.data.keyword.Bluemix_notm}} Dedicated account through the Public console and start creating clusters.
   1. Log in to the {{site.data.keyword.Bluemix_notm}} Public console (https://console.bluemix.net) with your IBMid.
   2. From the account menu, select your {{site.data.keyword.Bluemix_notm}} Dedicated account. The console is updated with the services and information for your {{site.data.keyword.Bluemix_notm}} Dedicated instance.
   3. From the catalog for your {{site.data.keyword.Bluemix_notm}} Dedicated instance, select Containers and click Kubernetes cluster.
For more information about creating a cluster, see Creating Kubernetes clusters from the GUI in {{site.data.keyword.Bluemix_notm}} Dedicated (Closed Beta).
## Cluster architecture
{: #cs_ov_architecture}
A Kubernetes cluster consists of one or more physical or virtual machines, also known as worker nodes, that are loosely coupled, extensible, and centrally monitored and managed by the Kubernetes master. For each customer account, the Kubernetes master is managed by IBM and is highly resilient and highly available. {:shortdesc}
Each worker node is set up with an {{site.data.keyword.IBM_notm}}-managed Docker Engine, separate compute resources, networking, and volume service, as well as built-in security features that provide isolation, resource management capabilities, and worker node security compliance. The worker node communicates with the master by using secure TLS certificates and an OpenVPN connection.
Figure 1. Kubernetes architecture and networking in the IBM Bluemix Container Service
## Docker basics
{: #cs_ov_docker}
Docker is an open source project that was released by dotCloud in 2013. Built on features of the existing Linux container technology (LXC), Docker became a software platform that you can use to build, test, deploy, and scale apps quickly. Docker packages software into standardized units, called containers, that include all of the elements that an app needs to run. {:shortdesc}
Review the following terms to learn about basic Docker concepts.
- Container
- A container is a standard way to package an app and all its dependencies so that the app can be moved between environments and run without changes. Unlike virtual machines, containers do not virtualize a device, its operating system, and the underlying hardware. Only the app code, run time, system tools, libraries, and settings are packaged inside the container. Containers run as isolated processes on the compute host where they are deployed and share the host operating system and its hardware resources. This approach makes a container more lightweight, portable, and efficient than a virtual machine.
- Image
- Every container is based on a Docker image and is considered to be an instance of an image. An image is built from a Dockerfile, which is a file that contains instructions for how to build the image, and any build artifacts, such as an app, the app's configuration, and its dependencies.
- Registry
- An image registry is a place where you store, retrieve, and share Docker images. Images that are stored in a registry can either be publicly available (public registry) or accessible by a small group of users only (private registry). {{site.data.keyword.containershort_notm}} offers public images, such as `ibmliberty`, that you can use to get started with Docker and Kubernetes to create your first containerized app in a cluster. For enterprise applications, use a private registry, like the one provided in {{site.data.keyword.Bluemix_notm}}, to protect your images from being used and changed by unauthorized users.
When you want to deploy a container from an image, you must make sure that the image is stored in either a public or private image registry.
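As a sketch of how an image is described, a minimal Dockerfile might look like the following. This is a hedged example, not an official sample: the base image, file names, and port are hypothetical.

```dockerfile
# Start from a public Node.js base image.
FROM node:6-alpine

# Copy the app and its dependency manifest into the image.
COPY package.json app.js /usr/src/app/
WORKDIR /usr/src/app
RUN npm install

# Document the port the app listens on and define the start command.
EXPOSE 8080
CMD ["node", "app.js"]
```

You might then build the image and push it to a private registry, for example with `docker build -t registry.ng.bluemix.net/<my_namespace>/hello-app:1.0 .` followed by `docker push` (the domain shown here is the Public registry; a Dedicated environment uses its own registry domain).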
### Benefits of using containers
{: #container_benefits}
- Containers are agile
- Containers simplify system administration by providing standardized environments for development and production teams. The lightweight Docker run time enables rapid scale-up and scale-down in response to changes in demand. Containers help remove the complexity of managing different operating system platforms and underlying infrastructure, and they help you deploy and run any app on any infrastructure, quickly and reliably.
- Containers are small
- You can fit more containers in the amount of space that a single virtual machine would require.
- Containers are portable
- Build an image for another container by using another image as the base. Let someone else do the bulk of the work on an image and tweak it for your use. You can also migrate app code from a staging environment to a production environment quickly. The migration process can be automated with tools such as the Delivery Pipeline or UrbanCode Deploy.
## Terms of use
{: #cs_terms}
Clients may not misuse {{site.data.keyword.containershort_notm}}. {:shortdesc}
Misuse includes:
- Any illegal activity
- Distribution or execution of malware
- Harming {{site.data.keyword.containershort_notm}} or interfering with anyone's use of {{site.data.keyword.containershort_notm}}
- Harming or interfering with anyone's use of any other service or system
- Unauthorized access to any service or system
- Unauthorized modification of any service or system
- Violation of the rights of others
See Cloud Services terms for overall terms of use.



