content/en/blog/_posts/2015-06-00-The-Distributed-System-Toolkit-Patterns.md (4 additions, 8 deletions)
@@ -12,14 +12,10 @@ In many ways the switch from VMs to containers is like the switch from monolithi
The benefits of thinking in terms of modular containers are enormous, in particular, modular containers provide the following:
-Speed application development, since containers can be re-used between teams and even larger communities
-Codify expert knowledge, since everyone collaborates on a single containerized implementation that reflects best-practices rather than a myriad of different home-grown containers with roughly the same functionality
-Enable agile teams, since the container boundary is a natural boundary and contract for team responsibilities
-Provide separation of concerns and focus on specific functionality that reduces spaghetti dependencies and un-testable components
+- Speed application development, since containers can be re-used between teams and even larger communities
+- Codify expert knowledge, since everyone collaborates on a single containerized implementation that reflects best-practices rather than a myriad of different home-grown containers with roughly the same functionality
+- Enable agile teams, since the container boundary is a natural boundary and contract for team responsibilities
+- Provide separation of concerns and focus on specific functionality that reduces spaghetti dependencies and un-testable components
Building an application from modular containers means thinking about symbiotic groups of containers that cooperate to provide a service, not one container per service. In Kubernetes, the embodiment of this modular container service is a Pod. A Pod is a group of containers that share resources like file systems, kernel namespaces and an IP address. The Pod is the atomic unit of scheduling in a Kubernetes cluster, precisely because the symbiotic nature of the containers in the Pod require that they be co-scheduled onto the same machine, and the only way to reliably achieve this is by making container groups atomic scheduling units.
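To make the Pod concept in the paragraph above concrete, here is a minimal, hypothetical manifest for a two-container Pod sharing a volume, in the spirit of the sidecar pattern the post discusses; the names, images, and paths are illustrative assumptions, not taken from the post:

```yaml
# Hypothetical sidecar-style Pod: a web server plus a content-sync helper
# sharing one emptyDir volume; both containers also share the Pod's IP address.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sync              # illustrative name
spec:
  volumes:
    - name: shared-content         # file system shared by both containers
      emptyDir: {}
  containers:
    - name: web                    # serves whatever the sidecar keeps up to date
      image: nginx                 # illustrative image
      volumeMounts:
        - name: shared-content
          mountPath: /usr/share/nginx/html
    - name: content-sync           # sidecar that refreshes the shared volume
      image: example/content-sync  # hypothetical image
      volumeMounts:
        - name: shared-content
          mountPath: /data
```

Because both containers belong to one Pod, they are always placed on the same machine together, which is exactly the co-scheduling guarantee the paragraph above describes.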
@@ -14,121 +14,71 @@ Here are the notes from today's meeting:
-Eric Paris: replacing salt with ansible (if we want)
-In contrib, there is a provisioning tool written in ansible
-The goal in the rewrite was to eliminate as much of the cloud provider stuff as possible
-The salt setup does a bunch of setup in scripts and then the environment is setup with salt
-This means that things like generating certs is done differently on GCE/AWS/Vagrant
-For ansible, everything must be done within ansible
-Background on ansible
-Does not have clients
-Provisioner ssh into the machine and runs scripts on the machine
-You define what you want your cluster to look like, run the script, and it sets up everything at once
-If you make one change in a config file, ansible re-runs everything (which isn’t always desirable)
-Uses a jinja2 template
-Create machines with minimal software, then use ansible to get that machine into a runnable state
-Sets up all of the add-ons
-Eliminates the provisioner shell scripts
-Full cluster setup currently takes about 6 minutes
-CentOS with some packages
-Redeploy to the cluster takes 25 seconds
-Questions for Eric
-Where does the provider-specific configuration go?
-The only network setup that the ansible config does is flannel; you can turn it off
-What about init vs. systemd?
-Should be able to support in the code w/o any trouble (not yet implemented)
-Discussion
-Why not push the setup work into containers or kubernetes config?
-To bootstrap a cluster drop a kubelet and a manifest
-Running a kubelet and configuring the network should be the only things required. We can cut a machine image that is preconfigured minus the data package (certs, etc)
-The ansible scripts install kubelet & docker if they aren’t already installed
-Each OS (RedHat, Debian, Ubuntu) could have a different image. We could view this as part of the build process instead of the install process.
-There needs to be solution for bare metal as well.
-In favor of the overall goal -- reducing the special configuration in the salt configuration
-Everything except the kubelet should run inside a container (eventually the kubelet should as well)
-Running in a container doesn’t cut down on the complexity that we currently have
-But it does more clearly define the interface about what the code expects
-These tools (Chef, Puppet, Ansible) conflate binary distribution with configuration
-Containers more clearly separate these problems
-The mesos deployment is not completely automated yet, but the mesos deployment is completely different: kubelets get put on top on an existing mesos cluster
-The bash scripts allow the mesos devs to see what each cloud provider is doing and re-use the relevant bits
-There was a large reverse engineering curve, but the bash is at least readable as opposed to the salt
-Openstack uses a different deployment as well
-We need a well documented list of steps (e.g. create certs) that are necessary to stand up a cluster
-This would allow us to compare across cloud providers
-We should reduce the number of steps as much as possible
-Ansible has 241 steps to launch a cluster
-1.0 Code freeze
-How are we getting out of code freeze?
-This is a topic for next week, but the preview is that we will move slowly rather than totally opening the firehose
-We want to clear the backlog as fast as possible while maintaining stability both on HEAD and on the 1.0 branch
-The backlog of almost 300 PRs but there are also various parallel feature branches that have been developed during the freeze
-Cutting a cherry pick release today (1.0.1) that fixes a few issues
+- Eric Paris: replacing salt with ansible (if we want)
+- In contrib, there is a provisioning tool written in ansible
+- The goal in the rewrite was to eliminate as much of the cloud provider stuff as possible
+- The salt setup does a bunch of setup in scripts and then the environment is setup with salt
+- This means that things like generating certs is done differently on GCE/AWS/Vagrant
+- For ansible, everything must be done within ansible
+- Background on ansible
+- Does not have clients
+- Provisioner ssh into the machine and runs scripts on the machine
+- You define what you want your cluster to look like, run the script, and it sets up everything at once
+- If you make one change in a config file, ansible re-runs everything (which isn’t always desirable)
+- Uses a jinja2 template
+- Create machines with minimal software, then use ansible to get that machine into a runnable state
+- Sets up all of the add-ons
+- Eliminates the provisioner shell scripts
+- Full cluster setup currently takes about 6 minutes
+- CentOS with some packages
+- Redeploy to the cluster takes 25 seconds
+- Questions for Eric
+- Where does the provider-specific configuration go?
+- The only network setup that the ansible config does is flannel; you can turn it off
+- What about init vs. systemd?
+- Should be able to support in the code w/o any trouble (not yet implemented)
+- Discussion
+- Why not push the setup work into containers or kubernetes config?
+- To bootstrap a cluster drop a kubelet and a manifest
+- Running a kubelet and configuring the network should be the only things required. We can cut a machine image that is preconfigured minus the data package (certs, etc)
+- The ansible scripts install kubelet & docker if they aren’t already installed
+- Each OS (RedHat, Debian, Ubuntu) could have a different image. We could view this as part of the build process instead of the install process.
+- There needs to be solution for bare metal as well.
+- In favor of the overall goal -- reducing the special configuration in the salt configuration
+- Everything except the kubelet should run inside a container (eventually the kubelet should as well)
+- Running in a container doesn’t cut down on the complexity that we currently have
+- But it does more clearly define the interface about what the code expects
+- These tools (Chef, Puppet, Ansible) conflate binary distribution with configuration
+- Containers more clearly separate these problems
+- The mesos deployment is not completely automated yet, but the mesos deployment is completely different: kubelets get put on top on an existing mesos cluster
+- The bash scripts allow the mesos devs to see what each cloud provider is doing and re-use the relevant bits
+- There was a large reverse engineering curve, but the bash is at least readable as opposed to the salt
+- Openstack uses a different deployment as well
+- We need a well documented list of steps (e.g. create certs) that are necessary to stand up a cluster
+- This would allow us to compare across cloud providers
+- We should reduce the number of steps as much as possible
+- Ansible has 241 steps to launch a cluster
+- 1.0 Code freeze
+- How are we getting out of code freeze?
+- This is a topic for next week, but the preview is that we will move slowly rather than totally opening the firehose
+- We want to clear the backlog as fast as possible while maintaining stability both on HEAD and on the 1.0 branch
+- The backlog of almost 300 PRs but there are also various parallel feature branches that have been developed during the freeze
+- Cutting a cherry pick release today (1.0.1) that fixes a few issues
- Next week we will discuss the cadence for patch releases
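For readers skimming the Ansible discussion above, a minimal sketch of the push model it describes might look like the play below; the inventory group, package names, and template file are assumptions for illustration only, not the actual contrib/ansible playbooks:

```yaml
# Illustrative play: Ansible pushes over SSH (no agent on the node), installs
# kubelet/docker only if they are missing, and renders config from a jinja2 template.
- hosts: kube-nodes                 # assumed inventory group
  become: yes
  tasks:
    - name: Ensure docker and kubelet packages are present
      yum:
        name:
          - docker
          - kubernetes-node         # package name is an assumption
        state: present
    - name: Render kubelet configuration from a jinja2 template
      template:
        src: templates/kubelet.j2   # hypothetical template
        dest: /etc/kubernetes/kubelet
      notify: restart kubelet
  handlers:
    - name: restart kubelet
      service:
        name: kubelet
        state: restarted
        enabled: yes
```

Each task declares desired state rather than an imperative script, which is why a re-run walks through every task again, as the notes point out.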
content/en/blog/_posts/2016-03-00-Elasticbox-Introduces-Elastickube-To.md (8 additions, 19 deletions)
@@ -16,17 +16,10 @@ Fundamentally, ElasticKube delivers a web console for which compliments Kubernet
ElasticKube enables organizations to accelerate adoption by developers, application operations and traditional IT operations teams and shares a mutual goal of increasing developer productivity, driving efficiency in container management and promoting the use of microservices as a modern application delivery methodology. When leveraging ElasticKube in your environment, users need to ensure the following technologies are configured appropriately to guarantee everything runs correctly:
-Configure Google Container Engine (GKE) for cluster installation and management
-Use Kubernetes to provision the infrastructure and clusters for containers
-Use your existing tools of choice to actually build your containers
-Use ElasticKube to run, deploy and manage your containers and services
+- Configure Google Container Engine (GKE) for cluster installation and management
+- Use Kubernetes to provision the infrastructure and clusters for containers
+- Use your existing tools of choice to actually build your containers
+- Use ElasticKube to run, deploy and manage your containers and services
content/en/blog/_posts/2016-03-00-Kubernetes-In-Enterprise-With-Fujitsus.md (11 additions, 22 deletions)
@@ -13,24 +13,18 @@ Today, we want to take you on a short tour explaining the background of our offe
In mid 2014 we looked at the challenges enterprises are facing in the context of digitization, where traditional enterprises experience that more and more competitors from the IT sector are pushing into the core of their markets. A big part of Fujitsu’s customers are such traditional businesses, so we considered how we could help them and came up with three basic principles:
-Decouple applications from infrastructure - Focus on where the value for the customer is: the application.
-Decompose applications - Build applications from smaller, loosely coupled parts. Enable reconfiguration of those parts depending on the needs of the business. Also encourage innovation by low-cost experiments.
-Automate everything - Fight the increasing complexity of the first two points by introducing a high degree of automation.
+- Decouple applications from infrastructure - Focus on where the value for the customer is: the application.
+- Decompose applications - Build applications from smaller, loosely coupled parts. Enable reconfiguration of those parts depending on the needs of the business. Also encourage innovation by low-cost experiments.
+- Automate everything - Fight the increasing complexity of the first two points by introducing a high degree of automation.
We found that Linux containers themselves cover the first point and touch the second. But at this time there was little support for creating distributed applications and running them managed automatically. We found Kubernetes as the missing piece.
**Not a free lunch**
The general approach of Kubernetes in managing containerized workload is convincing, but as we looked at it with the eyes of customers, we realized that it’s not a free lunch. Many customers are medium-sized companies whose core business is often bound to strict data protection regulations. The top three requirements we identified are:
-On-premise deployments (with the option for hybrid scenarios)
-Efficient operations as part of a (much) bigger IT infrastructure
-Enterprise-grade support, potentially on global scale
+- On-premise deployments (with the option for hybrid scenarios)
+- Efficient operations as part of a (much) bigger IT infrastructure
+- Enterprise-grade support, potentially on global scale
We created Cloud Load Control with these requirements in mind. It is basically a distribution of Kubernetes targeted for on-premise use, primarily focusing on operational aspects of container infrastructure. We are committed to work with the community, and contribute all relevant changes and extensions upstream to the Kubernetes project.
**On-premise deployments**
@@ -39,12 +33,9 @@ As Kubernetes core developer Tim Hockin often puts it in his[talks](https://spea
Cloud Load Control addresses these issues. It enables customers to reliably and readily provision a production grade Kubernetes clusters on their own infrastructure, with the following benefits:
-Proven setup process, lowers risk of problems while setting up the cluster
-Reduction of provisioning time to minutes
-Repeatable process, relevant especially for large, multi-tenant environments
+- Proven setup process, lowers risk of problems while setting up the cluster
+- Reduction of provisioning time to minutes
+- Repeatable process, relevant especially for large, multi-tenant environments
Cloud Load Control delivers these benefits for a range of platforms, starting from selected OpenStack distributions in the first versions of Cloud Load Control, and successively adding more platforms depending on customer demand. We are especially excited about the option to remove the virtualization layer and support Kubernetes bare-metal on Fujitsu servers in the long run. By removing a layer of complexity, the total cost to run the system would be decreased and the missing hypervisor would increase performance.
@@ -53,10 +44,8 @@ Right now we are in the process of contributing a generic provider to set up Kub
Reducing operation costs is the target of any organization providing IT infrastructure. This can be achieved by increasing the efficiency of operations and helping operators to get their job done. Considering large-scale container infrastructures, we found it is important to differentiate between two types of operations:
-Platform-oriented, relates to the overall infrastructure, often including various systems, one of which might be Kubernetes.
-Application-oriented, focusses rather on a single, or a small set of applications deployed on Kubernetes.
+- Platform-oriented, relates to the overall infrastructure, often including various systems, one of which might be Kubernetes.
+- Application-oriented, focusses rather on a single, or a small set of applications deployed on Kubernetes.
Kubernetes is already great for the application-oriented part. Cloud Load Control was created to help platform-oriented operators to efficiently manage Kubernetes as part of the overall infrastructure and make it easy to execute Kubernetes tasks relevant to them.
content/en/blog/_posts/2016-03-00-State-Of-Container-World-February-2016.md (6 additions, 9 deletions)
@@ -11,15 +11,12 @@ Hello, and welcome to the second installment of the Kubernetes state of the cont
In January, 71% of respondents were currently using containers, in February, 89% of respondents were currently using containers. The percentage of users not even considering containers also shrank from 4% in January to a surprising 0% in February. Will see if that holds consistent in March.Likewise, the usage of containers continued to march across the dev/canary/prod lifecycle. In all parts of the lifecycle, container usage increased:
-Development: 80% -\> 88%
-Test: 67% -\> 72%
-Pre production: 41% -\> 55%
-Production: 50% -\> 62%
-What is striking in this is that pre-production growth continued, even as workloads were clearly transitioned into true production. Likewise the share of people considering containers for production rose from 78% in January to 82% in February. Again we’ll see if the trend continues into March.
+- Development: 80% -\> 88%
+- Test: 67% -\> 72%
+- Pre production: 41% -\> 55%
+- Production: 50% -\> 62%
+
+What is striking in this is that pre-production growth continued, even as workloads were clearly transitioned into true production. Likewise the share of people considering containers for production rose from 78% in January to 82% in February. Again we’ll see if the trend continues into March.