
Commit a94ff06

Merge branch 'master' of github.com:kubernetes/website
2 parents: 06f1e78 + d79edc1

369 files changed: +9490 −8044 lines


OWNERS_ALIASES

Lines changed: 7 additions & 0 deletions

@@ -10,6 +10,7 @@ aliases:
 - kbarnard10
 - mrbobbytables
 - onlydole
+- sftim
 sig-docs-de-owners: # Admins for German content
 - bene2k1
 - mkorbi
@@ -175,14 +176,20 @@ aliases:
 # zhangxiaoyu-zidif
 sig-docs-pt-owners: # Admins for Portuguese content
 - femrtnz
+- jailton
 - jcjesus
 - devlware
 - jhonmike
+- rikatz
+- yagonobre
 sig-docs-pt-reviews: # PR reviews for Portugese content
 - femrtnz
+- jailton
 - jcjesus
 - devlware
 - jhonmike
+- rikatz
+- yagonobre
 sig-docs-vi-owners: # Admins for Vietnamese content
 - huynguyennovem
 - ngtuna

config.toml

Lines changed: 1 addition & 1 deletion

@@ -91,7 +91,7 @@ blog = "/:section/:year/:month/:day/:slug/"
 [outputs]
 home = [ "HTML", "RSS", "HEADERS" ]
 page = [ "HTML"]
-section = [ "HTML"]
+section = [ "HTML", "print" ]

 # Add a "text/netlify" media type for auto-generating the _headers file
 [mediaTypes]
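For context, the `print` entry added to the `[outputs]` list above names a custom Hugo output format. A minimal sketch of how such a format could be declared is below; the `[outputFormats.print]` block is an assumption for illustration only and is not part of this commit — only the `[outputs]` lines mirror the committed change.

```toml
# Hypothetical definition of a custom "print" output format for Hugo.
[outputFormats.print]
mediaType     = "text/html"
baseName      = "_print"
isHTML        = true
permalinkable = true

# The committed change: render section pages in both HTML and "print" form.
[outputs]
home    = [ "HTML", "RSS", "HEADERS" ]
page    = [ "HTML" ]
section = [ "HTML", "print" ]
```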

content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md

Lines changed: 1 addition & 1 deletion

@@ -176,7 +176,7 @@ Cluster-distributed stateful services (e.g., Cassandra) can benefit from splitti

 [Logs](/docs/concepts/cluster-administration/logging/) and [metrics](/docs/tasks/debug-application-cluster/resource-usage-monitoring/) (if collected and persistently retained) are valuable to diagnose outages, but given the variety of technologies available it will not be addressed in this blog. If Internet connectivity is available, it may be desirable to retain logs and metrics externally at a central location.

-Your production deployment should utilize an automated installation, configuration and update tool (e.g., [Ansible](https://github.com/kubernetes-incubator/kubespray), [BOSH](https://github.com/cloudfoundry-incubator/kubo-deployment), [Chef](https://github.com/chef-cookbooks/kubernetes), [Juju](/docs/getting-started-guides/ubuntu/installation/), [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), [Puppet](https://forge.puppet.com/puppetlabs/kubernetes), etc.). A manual process will have repeatability issues, be labor intensive, error prone, and difficult to scale. [Certified distributions](https://www.cncf.io/certification/software-conformance/#logos) are likely to include a facility for retaining configuration settings across updates, but if you implement your own install and config toolchain, then retention, backup and recovery of the configuration artifacts is essential. Consider keeping your deployment components and settings under a version control system such as Git.
+Your production deployment should utilize an automated installation, configuration and update tool (e.g., [Ansible](https://github.com/kubernetes-incubator/kubespray), [BOSH](https://github.com/cloudfoundry-incubator/kubo-deployment), [Chef](https://github.com/chef-cookbooks/kubernetes), [Juju](/docs/getting-started-guides/ubuntu/installation/), [kubeadm](/docs/reference/setup-tools/kubeadm/), [Puppet](https://forge.puppet.com/puppetlabs/kubernetes), etc.). A manual process will have repeatability issues, be labor intensive, error prone, and difficult to scale. [Certified distributions](https://www.cncf.io/certification/software-conformance/#logos) are likely to include a facility for retaining configuration settings across updates, but if you implement your own install and config toolchain, then retention, backup and recovery of the configuration artifacts is essential. Consider keeping your deployment components and settings under a version control system such as Git.

 ## Outage recovery


content/en/blog/_posts/2018-12-03-kubernetes-1-13-release-announcement.md

Lines changed: 1 addition & 1 deletion

@@ -17,7 +17,7 @@ Let’s dive into the key features of this release:

 ## Simplified Kubernetes Cluster Management with kubeadm in GA

-Most people who have gotten hands-on with Kubernetes have at some point been hands-on with kubeadm. It's an essential tool for managing the cluster lifecycle, from creation to configuration to upgrade; and now kubeadm is officially GA. [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) handles the bootstrapping of production clusters on existing hardware and configuring the core Kubernetes components in a best-practice-manner to providing a secure yet easy joining flow for new nodes and supporting easy upgrades. What’s notable about this GA release are the now graduated advanced features, specifically around pluggability and configurability. The scope of kubeadm is to be a toolbox for both admins and automated, higher-level system and this release is a significant step in that direction.
+Most people who have gotten hands-on with Kubernetes have at some point been hands-on with kubeadm. It's an essential tool for managing the cluster lifecycle, from creation to configuration to upgrade; and now kubeadm is officially GA. [kubeadm](/docs/reference/setup-tools/kubeadm/) handles the bootstrapping of production clusters on existing hardware and configuring the core Kubernetes components in a best-practice-manner to providing a secure yet easy joining flow for new nodes and supporting easy upgrades. What’s notable about this GA release are the now graduated advanced features, specifically around pluggability and configurability. The scope of kubeadm is to be a toolbox for both admins and automated, higher-level system and this release is a significant step in that direction.

 ## Container Storage Interface (CSI) Goes GA


content/en/blog/_posts/2020-12-02-dockershim-faq.md

Lines changed: 1 addition & 1 deletion

@@ -114,7 +114,7 @@ will have strictly better performance and less overhead. However, we encourage y
 to explore all the options from the [CNCF landscape] in case another would be an
 even better fit for your environment.

-[CNCF landscape]: https://landscape.cncf.io/category=container-runtime&format=card-mode&grouping=category
+[CNCF landscape]: https://landscape.cncf.io/card-mode?category=container-runtime&grouping=category


 ### What should I look out for when changing CRI implementations?
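The link fix above swaps the old pseudo-query URL for a path segment plus an ordinary query string. As a hedged illustration of the corrected layout (the helper function below is hypothetical, not part of any Kubernetes or CNCF tooling), the new scheme is just standard URL encoding:

```python
from urllib.parse import urlencode

def landscape_url(category: str, grouping: str = "category") -> str:
    """Hypothetical helper showing the corrected CNCF landscape URL layout:
    the view ("card-mode") is a path segment and the filters are a regular
    query string, instead of everything being packed after the host."""
    query = urlencode({"category": category, "grouping": grouping})
    return f"https://landscape.cncf.io/card-mode?{query}"
```

For example, `landscape_url("container-runtime")` reproduces the corrected link target from the diff.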
(Three binary image files added: 181 KB, 153 KB, 43.3 KB)
Lines changed: 63 additions & 0 deletions

@@ -0,0 +1,63 @@
+---
+layout: blog
+title: "The Evolution of Kubernetes Dashboard"
+date: 2021-03-09
+slug: the-evolution-of-kubernetes-dashboard
+---
+
+Authors: Marcin Maciaszczyk, Kubermatic & Sebastian Florek, Kubermatic
+
+In October 2020, the Kubernetes Dashboard officially turned five. As main project maintainers, we can barely believe that so much time has passed since our very first commits to the project. However, looking back with a bit of nostalgia, we realize that quite a lot has happened since then. Now it’s due time to celebrate “our baby” with a short recap.
+
+## How It All Began
+
+The initial idea behind the Kubernetes Dashboard project was to provide a web interface for Kubernetes. We wanted to reflect the kubectl functionality through an intuitive web UI. The main benefit from using the UI is to be able to quickly see things that do not work as expected (monitoring and troubleshooting). Also, the Kubernetes Dashboard is a great starting point for users that are new to the Kubernetes ecosystem.
+
+The very [first commit](https://github.com/kubernetes/dashboard/commit/5861187fa807ac1cc2d9b2ac786afeced065076c) to the Kubernetes Dashboard was made by Filip Grządkowski from Google on 16th October 2015 – just a few months from the initial commit to the Kubernetes repository. Our initial commits go back to November 2015 ([Sebastian committed on 16 November 2015](https://github.com/kubernetes/dashboard/commit/09e65b6bb08c49b926253de3621a73da05e400fd); [Marcin committed on 23 November 2015](https://github.com/kubernetes/dashboard/commit/1da4b1c25ef040818072c734f71333f9b4733f55)). Since that time, we’ve become regular contributors to the project. For the next two years, we worked closely with the Googlers, eventually becoming main project maintainers ourselves.
+
+{{< figure src="first-ui.png" caption="The First Version of the User Interface" >}}
+
+{{< figure src="along-the-way-ui.png" caption="Prototype of the New User Interface" >}}
+
+{{< figure src="current-ui.png" caption="The Current User Interface" >}}
+
+As you can see, the initial look and feel of the project were completely different from the current one. We have changed the design multiple times. The same has happened with the code itself.
+
+## Growing Up - The Big Migration
+
+At [the beginning of 2018](https://github.com/kubernetes/dashboard/pull/2727), we reached a point where AngularJS was getting closer to the end of its life, while the new Angular versions were published quite often. A lot of the libraries and the modules that we were using were following the trend. That forced us to spend a lot of the time rewriting the frontend part of the project to make it work with newer technologies.
+
+The migration came with many benefits like being able to refactor a lot of the code, introduce design patterns, reduce code complexity, and benefit from the new modules. However, you can imagine that the scale of the migration was huge. Luckily, there were a number of contributions from the community helping us with the resource support, new Kubernetes version support, i18n, and much more. After many long days and nights, we finally released the [first beta version](https://github.com/kubernetes/dashboard/releases/tag/v2.0.0-beta1) in July 2019, followed by the [2.0 release](https://github.com/kubernetes/dashboard/releases/tag/v2.0.0) in April 2020 — our baby had grown up.
+
+## Where Are We Standing in 2021?
+
+Due to limited resources, unfortunately, we were not able to offer extensive support for many different Kubernetes versions. So, we’ve decided to always try and support the latest Kubernetes version available at the time of the Kubernetes Dashboard release. The latest release, [Dashboard v2.2.0](https://github.com/kubernetes/dashboard/releases/tag/v2.2.0) provides support for Kubernetes v1.20.
+
+On top of that, we put in a great deal of effort into [improving resource support](https://github.com/kubernetes/dashboard/issues/5232). Meanwhile, we do offer support for most of the Kubernetes resources. Also, the Kubernetes Dashboard supports multiple languages: English, German, French, Japanese, Korean, Chinese (Traditional, Simplified, Traditional Hong Kong). Persian and Russian localizations are currently in progress. Moreover, we are working on the support for 3rd party themes and the design of the app in general. As you can see, quite a lot of things are going on.
+
+Luckily, we do have regular contributors with domain knowledge who are taking care of the project, updating the Helm charts, translations, Go modules, and more. But as always, there could be many more hands on deck. So if you are thinking about contributing to Kubernetes, keep us in mind ;)
+
+## What’s Next
+
+The Kubernetes Dashboard has been growing and prospering for more than 5 years now. It provides the community with an intuitive Web UI, thereby decreasing the complexity of Kubernetes and increasing its accessibility to new community members. We are proud of what the project has achieved so far, but this is by far not the end. These are our priorities for the future:
+
+* Keep providing support for the new Kubernetes versions
+* Keep improving the support for the existing resources
+* Keep working on auth system improvements
+* [Rewrite the API to use gRPC and shared informers](https://github.com/kubernetes/dashboard/pull/5449): This will allow us to improve the performance of the application but, most importantly, to support live updates coming from the Kubernetes project. It is one of the most requested features from the community.
+* Split the application into two containers, one with the UI and the second with the API running inside.
+
+## The Kubernetes Dashboard in Numbers
+
+* Initial commit made on October 16, 2015
+* Over 100 million pulls from Dockerhub since the v2 release
+* 8 supported languages and the next 2 in progress
+* Over 3360 closed PRs
+* Over 2260 closed issues
+* 100% coverage of the supported core Kubernetes resources
+* Over 9000 stars on GitHub
+* Over 237 000 lines of code
+
+## Join Us
+
+As mentioned earlier, we are currently looking for more people to help us further develop and grow the project. We are open to contributions in multiple areas, i.e., [issues with help wanted label](https://github.com/kubernetes/dashboard/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22). Please feel free to reach out via GitHub or the #sig-ui channel in the [Kubernetes Slack](https://slack.k8s.io/).

content/en/docs/concepts/architecture/control-plane-node-communication.md

Lines changed: 1 addition & 1 deletion

@@ -24,7 +24,7 @@ One or more forms of [authorization](/docs/reference/access-authn-authz/authoriz
 Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates.

 Pods that wish to connect to the apiserver can do so securely by leveraging a service account so that Kubernetes will automatically inject the public root certificate and a valid bearer token into the pod when it is instantiated.
-The `kubernetes` service (in all namespaces) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver.
+The `kubernetes` service (in `default` namespace) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver.

 The control plane components also communicate with the cluster apiserver over the secure port.

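The corrected paragraph describes how a pod reaches the apiserver through the `kubernetes` Service using injected credentials. A minimal sketch of the in-pod mechanics, assuming the standard service account mount paths and the `KUBERNETES_SERVICE_HOST`/`KUBERNETES_SERVICE_PORT` environment variables that Kubernetes injects into every pod (the helper function itself is hypothetical):

```python
import os

# Standard in-pod mount locations for the injected service account
# credentials; these files exist only inside a running pod.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_CERT_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

def apiserver_url(env=None):
    """Build the HTTPS endpoint of the `kubernetes` Service from the
    environment variables Kubernetes injects into every pod."""
    env = os.environ if env is None else env
    host = env["KUBERNETES_SERVICE_HOST"]
    port = env["KUBERNETES_SERVICE_PORT"]
    return f"https://{host}:{port}"
```

Inside a pod, a client would send requests to `apiserver_url()` with the bearer token from `TOKEN_PATH` in the `Authorization` header, verifying TLS against `CA_CERT_PATH`.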
