---
layout: blog
title: "10 Years of Kubernetes"
date: 2024-06-06
slug: 10-years-of-kubernetes
author: >
  [Bob Killen](https://github.com/mybobbytables) (CNCF),
  [Chris Short](https://github.com/chris-short) (AWS),
  [Frederico Muñoz](https://github.com/fsmunoz) (SAS),
  [Kaslin Fields](https://github.com/kaslin) (Google),
  [Tim Bannister](https://github.com/sftim) (The Scale Factory),
  and every contributor across the globe
---
![KCSEU 2024 group photo](kcseu2024.jpg)

Ten years ago, on June 6th, 2014, the
[first commit](https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56)
of Kubernetes was pushed to GitHub. That first commit, with 250 files and 47,501 lines of Go, bash,
and Markdown, kicked off the project we have today. Who could have predicted that 10 years later
Kubernetes would grow to become one of the largest open source projects to date, with over
[88,000 contributors](https://k8s.devstats.cncf.io/d/24/overall-project-statistics?orgId=1) from
more than [8,000 companies](https://www.cncf.io/reports/kubernetes-project-journey-report/), across
44 countries?

<img src="kcscn2019.jpg" alt="KCSCN 2019" class="left" style="max-width: 20em; margin: 1em" >

This milestone isn't just for Kubernetes but for the Cloud Native ecosystem that blossomed from
it. There are close to [200 projects](https://all.devstats.cncf.io/d/18/overall-project-statistics-table?orgId=1)
within the CNCF itself, with contributions from
[240,000+ individual contributors](https://all.devstats.cncf.io/d/18/overall-project-statistics-table?orgId=1) and
thousands more in the greater ecosystem. Kubernetes would not be where it is today without them, the
[7M+ developers](https://www.cncf.io/blog/2022/05/18/slashdata-cloud-native-continues-to-grow-with-more-than-7-million-developers-worldwide/),
and the even larger user community that have all helped shape the ecosystem.

## Kubernetes' beginnings - a converging of technologies

The ideas underlying Kubernetes started well before the first commit, or even the first prototype
([which came about in 2013](/blog/2018/07/20/the-history-of-kubernetes-the-community-behind-it/)).
In the early 2000s, Moore's Law was well in effect. Computing hardware was becoming more and more
powerful at an incredibly fast rate. Correspondingly, applications were growing more and more
complex. This combination of hardware commoditization and application complexity pointed to a need
to further abstract software from hardware, and solutions started to emerge.

Like many companies at the time, Google was scaling rapidly, and its engineers were interested in
the idea of creating a form of isolation in the Linux kernel. Google engineer Rohit Seth described
the concept in an [email in 2006](https://lwn.net/Articles/199643/):

> We use the term container to indicate a structure against which we track and charge utilization of
> system resources like memory, tasks, etc. for a Workload.

<img src="future.png" alt="The future of Linux containers" class="right" style="max-width: 20em; margin: 1em">

In March of 2013, a 5-minute lightning talk called
["The future of Linux Containers," presented by Solomon Hykes at PyCon](https://youtu.be/wW9CAH9nSLs?si=VtK_VFQHymOT7BIB),
introduced an upcoming open source tool called "Docker" for creating and using Linux
containers. Docker introduced a level of usability to Linux containers that made them accessible to
more users than ever before, and the popularity of Docker, and thus of Linux containers,
skyrocketed. With Docker making the abstraction of Linux containers accessible to all, running
applications in much more portable and repeatable ways was suddenly possible, but the question of
scale remained.

Google's Borg system for managing application orchestration at scale had adopted Linux containers as
they were developed in the mid-2000s. Since then, the company had also started working on a new
version of the system called "Omega." Engineers at Google who were familiar with the Borg and Omega
systems saw the popularity of containerization driven by Docker. They recognized not only the need
for an open source container orchestration system but its "inevitability," as described by Brendan
Burns in
[this blog post](/blog/2018/07/20/the-history-of-kubernetes-the-community-behind-it/).
That realization in the fall of 2013 inspired a small team to start working on a project that would
later become **Kubernetes**. That team included Joe Beda, Brendan Burns, Craig McLuckie, Ville
Aikas, Tim Hockin, Dawn Chen, Brian Grant, and Daniel Smith.

## A decade of Kubernetes

<img src="kubeconeu2017.jpg" alt="KubeCon EU 2017" class="left" style="max-width: 20em; margin: 1em">

Kubernetes' history begins with that historic commit on June 6th, 2014, and the subsequent
announcement of the project in a June 10th
[keynote by Google engineer Eric Brewer at DockerCon 2014](https://youtu.be/YrxnVKZeqK8?si=Q_wYBFn7dsS9H3k3)
(and its corresponding [Google blog post](https://cloudplatform.googleblog.com/2014/06/an-update-on-container-support-on-google-cloud-platform.html)).

Over the next year, a small community of
[contributors, largely from Google and Red Hat](https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&var-period_name=Before%20joining%20CNCF&var-metric=contributors),
worked hard on the project, culminating in a [version 1.0 release on July 21st, 2015](https://cloudplatform.googleblog.com/2015/07/Kubernetes-V1-Released.html).
Alongside 1.0, Google announced that Kubernetes would be donated to a newly formed branch of the
Linux Foundation called the
[Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/announcements/2015/06/21/new-cloud-native-computing-foundation-to-drive-alignment-among-container-technologies/).

Despite reaching 1.0, the Kubernetes project was still very challenging to use and
understand. Kubernetes contributor Kelsey Hightower took special note of the project's shortcomings
in ease of use, and on July 7, 2016, he pushed the
[first commit of his famed "Kubernetes the Hard Way" guide](https://github.com/kelseyhightower/kubernetes-the-hard-way/commit/9d7ace8b186f6ebd2e93e08265f3530ec2fba81c).

The project has changed enormously since its original 1.0 release, experiencing a number of big wins
such as
[Custom Resource Definitions (CRD) going GA in 1.16](/blog/2019/09/18/kubernetes-1-16-release-announcement/)
or [full dual stack support launching in 1.23](/blog/2021/12/08/dual-stack-networking-ga/), as well
as community "lessons learned" from the [removal of widely used beta APIs in 1.22](/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/)
or the deprecation of [Dockershim](/blog/2020/12/02/dockershim-faq/).

Some notable updates, milestones, and events since 1.0 include:

* December 2016 — [Kubernetes 1.5](/blog/2016/12/kubernetes-1-5-supporting-production-workloads/) introduces runtime pluggability with initial CRI support and alpha Windows node support. OpenAPI also appears for the first time, paving the way for clients to be able to discover extension APIs.
  * This release also introduced StatefulSets and PodDisruptionBudgets in Beta.
* April 2017 — [Introduction of Role-Based Access Control (RBAC)](/blog/2017/04/rbac-support-in-kubernetes/).
* June 2017 — In [Kubernetes 1.7](/blog/2017/06/kubernetes-1-7-security-hardening-stateful-application-extensibility-updates/), ThirdPartyResources or "TPRs" are replaced with CustomResourceDefinitions (CRDs).
* December 2017 — [Kubernetes 1.9](/blog/2017/12/kubernetes-19-workloads-expanded-ecosystem/) sees the Workloads API becoming GA (Generally Available). The release blog states: _"Deployment and ReplicaSet, two of the most commonly used objects in Kubernetes, are now stabilized after more than a year of real-world use and feedback."_
* December 2018 — In 1.13, the Container Storage Interface (CSI) reaches GA, the kubeadm tool for bootstrapping minimum viable clusters reaches GA, and CoreDNS becomes the default DNS server.
* September 2019 — [Custom Resource Definitions go GA](/blog/2019/09/18/kubernetes-1-16-release-announcement/) in Kubernetes 1.16.
* August 2020 — [Kubernetes 1.19](/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/) increases the support window for releases to 1 year.
* December 2020 — [Dockershim is deprecated](/blog/2020/12/18/kubernetes-1.20-pod-impersonation-short-lived-volumes-in-csi/) in 1.20.
* April 2021 — The [Kubernetes release cadence changes](/blog/2021/07/20/new-kubernetes-release-cadence/) from 4 releases per year to 3 releases per year.
* July 2021 — Widely used beta APIs are [removed](/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/) in Kubernetes 1.22.
* May 2022 — Kubernetes 1.24 sees [beta APIs become disabled by default](/blog/2022/05/03/kubernetes-1-24-release-announcement/) to reduce upgrade conflicts, and the removal of [Dockershim](/dockershim) leads to [widespread user confusion](https://www.youtube.com/watch?v=a03Hh1kd6KE) (we've since [improved our communication!](https://github.com/kubernetes/community/tree/master/communication/contributor-comms))
* December 2022 — In 1.26, a significant batch and [Job API overhaul](/blog/2022/12/29/scalable-job-tracking-ga/) paves the way for better support for AI/ML/batch workloads.

**PS:** Curious to see how far the project has come for yourself? Check out this [tutorial for spinning up a Kubernetes 1.0 cluster](https://github.com/spurin/kubernetes-v1.0-lab) created by community members Carlos Santana, Amim Moises Salum Knabben, and James Spurin.

---

Kubernetes offers more extension points than we can count. Originally designed to work with Docker
and only Docker, Kubernetes now lets you plug in any container runtime that adheres to the CRI
standard. There are other similar interfaces: CSI for storage and CNI for networking. And that's far
from all you can do. In the last decade, whole new patterns have emerged, such as using
[Custom Resource Definitions](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
(CRDs) to support third-party controllers - now a huge part of the Kubernetes ecosystem.

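To illustrate the CRD pattern, here is a minimal sketch of a custom resource definition, based on the `CronTab` example from the Kubernetes documentation; the `stable.example.com` group and all field names are illustrative, not part of any real project:

```yaml
# A minimal CustomResourceDefinition: it teaches the API server a new
# "CronTab" resource type that a third-party controller could then watch.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # The name must match <plural>.<group>.
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                replicas:
                  type: integer
```

Once such a definition is applied, `CronTab` objects can be created and listed like built-in resources, and a custom controller reconciles them - the same mechanism that powers much of today's operator ecosystem.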
The community building the project has also expanded immensely over the last decade. Using
[DevStats](https://k8s.devstats.cncf.io/d/24/overall-project-statistics?orgId=1), we can see the
incredible volume of contributions that has made Kubernetes the
[second-largest open source project in the world](https://www.cncf.io/reports/kubernetes-project-journey-report/):

* **88,474** contributors
* **15,121** code committers
* **4,228,347** contributions
* **158,530** issues
* **311,787** pull requests

## Kubernetes today

<img src="welcome.jpg" alt="KubeCon NA 2023" class="left" style="max-width: 20em; margin: 1em">

Since its early days, the project has seen enormous growth in technical capability, usage, and
contribution. The project is still actively working to improve and better serve its users.

In the upcoming 1.31 release, the project will celebrate the culmination of an important long-term
project: the removal of in-tree cloud provider code. In this
[largest migration in Kubernetes history](/blog/2024/05/20/completing-cloud-provider-migration/),
roughly 1.5 million lines of code have been removed, reducing the binary sizes of core components
by approximately 40%. In the project's early days, it was clear that extensibility would be key to
success. However, it wasn't always clear how that extensibility should be achieved. This migration
removes a variety of vendor-specific capabilities from the core Kubernetes code
base. Vendor-specific capabilities can now be better served by other pluggable extensibility
features or patterns, such as
[Custom Resource Definitions (CRDs)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
or API standards like the [Gateway API](https://gateway-api.sigs.k8s.io/).

Kubernetes also faces new challenges in serving its vast user base, and the community is adapting
accordingly. One example of this is the migration of image hosting to the new, community-owned
registry.k8s.io. The egress bandwidth and costs of providing pre-compiled binary images for user
consumption have become immense. This new registry change enables the community to continue
providing these convenient images in more cost- and performance-efficient ways. Make sure you check
out the [blog post](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/) and
update any automation you have to use registry.k8s.io!

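For a manifest that still references the legacy `k8s.gcr.io` registry, the update can be as simple as a search-and-replace; this is a sketch with a placeholder file and image, and `sed -i` as written assumes GNU sed (on macOS/BSD, use `sed -i '' ...`):

```shell
# Create a sample manifest that still pulls from the legacy registry
# (file name and image are placeholders for this sketch).
printf 'image: k8s.gcr.io/pause:3.9\n' > deployment.yaml

# Rewrite every legacy registry reference to the community-owned registry.
sed -i 's|k8s.gcr.io|registry.k8s.io|g' deployment.yaml

# The image reference now points at registry.k8s.io.
cat deployment.yaml
```

Real automation will also want to check Helm values, CI pipelines, and admission policies for hard-coded registry hosts, not just checked-in manifests.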
## The future of Kubernetes

<img src="lts.jpg" alt="" class="right" width="300px" style="max-width: 20em; margin: 1em">

A decade in, the future of Kubernetes still looks bright. The community is prioritizing changes that
both improve the user experience and enhance the sustainability of the project. The world of
application development continues to evolve, and Kubernetes is poised to change along with it.

In 2024, the advent of AI changed a once-niche workload type into one of prominent
importance. Distributed computing and workload scheduling have always gone hand-in-hand with the
resource-intensive needs of Artificial Intelligence, Machine Learning, and High Performance
Computing workloads. Contributors are paying close attention to the needs of newly developed
workloads and how Kubernetes can best serve them. The new
[Serving Working Group](https://github.com/kubernetes/community/tree/master/wg-serving) is one
example of how the community is organizing to address these workloads' needs. It's likely that the
next few years will see improvements to Kubernetes' ability to manage various types of hardware, and
its ability to manage the scheduling of large batch-style workloads which are run across hardware in
chunks.

The ecosystem around Kubernetes will continue to grow and evolve. In the future, initiatives to
maintain the sustainability of the project, like the migration of in-tree vendor code and the
registry change, will be ever more important.

The next 10 years of Kubernetes will be guided by its users and the ecosystem, but most of all, by
the people who contribute to it. The community remains open to new contributors. You can find more
information about contributing in our New Contributor Guide at
[https://k8s.dev/contributors](https://k8s.dev/contributors).

We look forward to building the future of Kubernetes with you!

{{< figure src="kcsna2023.jpg" alt="KCSNA 2023">}}