Commit 73cd20d

Merge branch 'main' into rholling-SCS-docs
Signed-off-by: Kurt Garloff <[email protected]>
2 parents 3847ced + 1a557d7 commit 73cd20d

30 files changed: +6393 −3604 lines changed

.github/workflows/build.yml

Lines changed: 5 additions & 4 deletions

```diff
@@ -22,7 +22,8 @@ jobs:
           key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
           restore-keys: |
             ${{ runner.os }}-node-
-      - name: Install dependencies
-        run: npm install
-      - name: build page
-        run: npm run build
+
+      - name: Install dependencies and build page
+        run: |
+          npm ci
+          npm run build
```
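For clarity, the merged workflow step after this change looks like the following sketch (the exact indentation under the job's `steps:` list is an assumption reconstructed from the hunk, and `npm ci` requires a committed `package-lock.json`):

```yaml
# Reconstructed from the diff above; indentation is assumed.
- name: Install dependencies and build page
  run: |
    npm ci        # clean, reproducible install strictly from package-lock.json
    npm run build
```

Unlike `npm install`, `npm ci` fails instead of silently updating the lockfile, which is usually the behavior you want in CI.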

.gitignore

Lines changed: 2 additions & 0 deletions

```diff
@@ -15,10 +15,12 @@
 /docs/04-operating-scs/components
 /docs/04-operating-scs/01-guides
 /docs/06-releases
+/docs/turnkey-solution
 /standards/*.md
 /standards/*/*.md
 /standards/*/*.mdx
 /standards/scs-*.yaml
+/user-docs/application-examples
 
 # Dependencies
 node_modules
```

.markdownlint-cli2.jsonc

Lines changed: 1 addition & 1 deletion

```diff
@@ -55,6 +55,6 @@
     "markdownlint-rule-search-replace",
     "markdownlint-rule-relative-links"
   ],
-  "ignores": ["node_modules", ".github", "docs"],
+  "ignores": ["node_modules", ".github", "docs", "standards"],
   "globs": ["**/*.{md}"]
 }
```

README.md

Lines changed: 5 additions & 0 deletions

````diff
@@ -26,3 +26,8 @@ CD in your Terminal to the root directory of the cloned repository. Install all
 npm install
 npm start
 ```
+
+## Linting problems
+
+The repository establishes commit hooks which check the files for correctness and style.
+Have a look at the [linting-guide](https://docs.scs.community/community/contribute/linting-guide/) to get detailed information.
````

community/contribute/adding-docs-guide.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -23,15 +23,15 @@ Your repository containing the documentation has to...
 
 The documentation files have to be in markdown format and...
 
-- comply [SCS licensing guidelines](https://github.com/SovereignCloudStack/docs/blob/main/community/github/dco-and-licenses.md)
+- comply with the [SCS licensing guidelines](https://github.com/SovereignCloudStack/docs/blob/main/community/license-considerations.md)
 - match our
   - [markdown file structure guideline](https://github.com/SovereignCloudStack/docs/blob/main/community/contribute/doc-files-structure-guide.md)
   - linting Rules
   - [styleguide](https://github.com/SovereignCloudStack/docs/blob/main/community/contribute/styleguide.md)
 
 ### Step 2 – Adding your repo to the docs.json
 
-File a Pull Request within the [docs-page](https://github.com/SovereignCloudStack/docs-page) repository and add your repo to the docs.package.json:
+File a Pull Request within the [docs](https://github.com/SovereignCloudStack/docs) repository and add your repo to the docs.package.json:
 
 ```json
 [
````

community/contribute/linting-guide.md

Lines changed: 7 additions & 0 deletions

````diff
@@ -21,6 +21,13 @@ The markdownlint rules are defined in the configuration file `.markdownlint-cli2
 
 Additionally we use [markdownlint-rule-search-replace](https://github.com/OnkarRuikar/markdownlint-rule-search-replace) for fixing
 
+## Local Usage for development
+
+```bash
+npm run lint:md <file>
+npm run fix:md <file>
+```
+
 ## Github Workflows
 
 There are two actions running on every Pull Request on the `main` branch.
````

docs.package.json

Lines changed: 24 additions & 6 deletions

```diff
@@ -11,12 +11,6 @@
     "target": "docs/02-iaas/components",
     "label": ""
   },
-  {
-    "repo": "SovereignCloudStack/k8s-cluster-api-provider",
-    "source": "doc",
-    "target": "docs/03-container/components",
-    "label": "k8s-cluster-api-provider"
-  },
   {
     "repo": "SovereignCloudStack/cluster-stack-provider-openstack",
     "source": "docs",
@@ -35,6 +29,18 @@
     "target": "docs/02-iaas/",
     "label": ""
   },
+  {
+    "repo": "osism/osism.github.io",
+    "source": "docs/cloud-in-a-box",
+    "target": "docs/02-iaas/deployment-examples",
+    "label": ""
+  },
+  {
+    "repo": "osism/osism.github.io",
+    "source": "docs/testbed.mdx",
+    "target": "docs/02-iaas/deployment-examples",
+    "label": ""
+  },
   {
     "repo": "SovereignCloudStack/k8s-harbor",
     "source": "docs",
@@ -129,5 +135,17 @@
     "source": ["docs/*"],
     "target": "docs/03-container/components/cluster-stacks/components",
     "label": "cluster-stack-operator"
+  },
+  {
+    "repo": "SovereignCloudStack/hardware-landscape",
+    "source": ["documentation/overview.md"],
+    "target": "docs/turnkey-solution",
+    "label": ""
+  },
+  {
+    "repo": "SovereignCloudStack/opendesk-on-scs",
+    "source": "docs/*",
+    "target": "user-docs/application-examples",
+    "label": "opendesk-on-scs"
   }
 ]
```
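Each entry in `docs.package.json` follows the shape visible in the hunks above: a source repository, one or more source paths within it, a target path in this docs tree, and a sidebar label (which may be empty). A purely hypothetical entry for illustration (the repo name and paths are placeholders, not real configuration):

```json
{
  "repo": "SovereignCloudStack/example-repo",
  "source": "docs",
  "target": "docs/02-iaas/components",
  "label": "example-repo"
}
```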
Lines changed: 161 additions & 0 deletions (new file)
# artcodix

## Preface

This document describes a possible environment setup for a pre-production or minimal production setup.
In general, hardware requirements can vary largely from environment to environment, and this guide is neither
a hardware sizing guide nor the best placement solution of services for every setup. This guide intends to
provide a starting point for a hardware-based deployment of the SCS IaaS reference implementation based on OSISM.

## Node type definitions

### Control Node

A control node runs all or most of the OpenStack services that are responsible for API services and the corresponding
runtimes. These nodes are necessary for any user to interact with the cloud and to keep the cloud in a managed state.
However, these nodes usually do **not** run user virtual machines.
Hence it is advisable to have the control nodes replicated. For a Raft quorum, three nodes are a good starting point.
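The "three nodes" figure follows directly from how Raft quorums work: a quorum needs a strict majority of voting members, so three is the smallest cluster size that survives the loss of a node. A small illustrative sketch (not part of any SCS tooling):

```python
# Illustrative: how many member failures a Raft cluster of size n tolerates.
def raft_fault_tolerance(n: int) -> int:
    quorum = n // 2 + 1      # a quorum is a strict majority
    return n - quorum        # members that may fail while a quorum remains

for n in (1, 2, 3, 5):
    print(n, raft_fault_tolerance(n))
```

Note that two nodes tolerate no failure at all, which is why replication only starts to pay off at three.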
### Compute Node (HCI/no HCI)

#### Not Hyperconverged Infrastructure (no HCI)

Non-HCI compute nodes exclusively run user virtual machines. They run no API services, no storage daemons
and no network routers, except for the network infrastructure necessary to connect virtual machines.

#### Hyperconverged Infrastructure (HCI)

HCI nodes generally run at least user virtual machines and storage daemons. It is possible to place networking services
here as well, but that is not considered good practice.

#### No HCI vs. HCI

Whether to use HCI nodes or not is in general not an easy question. For a getting-started (pre-production/smallest possible production)
environment, however, it is the most cost-efficient option. Therefore we will continue with HCI nodes (compute + storage).

### Storage Node

A dedicated storage node runs only storage daemons. This can be necessary in larger deployments to protect the storage daemons from
resource starvation through user workloads.

Not used in this setup.
### Network Node

A dedicated network node runs the routing infrastructure for user virtual machines that connects these machines with provider/external
networks. In larger deployments these can be useful to enhance scaling and improve network performance.

Not used in this setup.

## Nodes in this deployment example

As mentioned before, we are running three dedicated control nodes. To be able to fully test an OpenStack environment, it is
recommended to run three compute nodes (HCI) as well. Technically you can get a setup running with just one compute node.
See the following chapter (Use cases and validation) for more information.

### Use cases and validation

The setup described allows for the following use cases / test cases:

- Highly available control plane
- Control plane failure toleration test (Database, RabbitMQ, Ceph Mons, Routers)
- Highly available user virtual clusters (e.g. Kubernetes clusters)
- Compute host failure simulation
- Host aggregates / compute node grouping
- Host-based storage replication (instead of OSD-based)
- Fully replicated storage / storage high availability test
### Control Node

#### General requirements

The control nodes do not run any user workloads. This means they are usually not sized as big as the compute nodes.
Relevant metrics for control nodes are:

- Fast and big enough disks. At least SATA SSDs are recommended; NVMe will greatly improve the overall responsiveness.
- A rather large amount of memory to house all the caches for databases and queues.
- CPU performance should be average. A good compromise between number of cores and clock speed should be used. However, this is
  the least important requirement on the list.

#### Hardware recommendation

The following server specs are just a starting point and can vary greatly between environments.

Example:
3x Dell R630/R640/R650 1U server

- Dual 8-core 3.0 GHz Intel/AMD
- 128 GB RAM
- 2x 3.84 TB NVMe in (software) RAID 1
- 2x 10/25/40 GBit 2-port SFP+/QSFP network cards
### Compute Node (HCI)

The compute nodes in this scenario run all the user virtual workloads **and** the storage infrastructure. To make sure
we don't starve these nodes, they should be of decent size.

> This setup takes local storage tests into consideration. The SCS standards require certain flavors with very fast disk speed
> to house customer Kubernetes control planes (etcd). These speeds are usually not achievable with shared storage. If you don't
> intend to test this scenario, you can skip the NVMe disks.

#### Hardware recommendation

The following server specs are just a starting point and can vary greatly between environments. The sizing of the nodes needs to fit
the expected workloads (customer VMs).

Example:
3x Dell R730(xd)/R740(xd)/R750(xd)
or
3x Supermicro

- Dual 16-core 2.8 GHz Intel/AMD
- 512 GB RAM
- 2x 3.84 TB NVMe in (software) RAID 1 if you want to have local storage available (optional)

For hyperconverged Ceph OSDs:

- 4x 10 TB HDD -> this leads to ~30 TB of available HDD storage (optional)
- 4x 7.68 TB SSD -> this leads to ~25 TB of available SSD storage (optional)
- 2x 10/25/40 GBit 2-port SFP+/QSFP network cards
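The "~30 TB" and "~25 TB" figures can be sanity-checked with a back-of-the-envelope calculation. The assumptions here are mine, not stated above: three HCI nodes, Ceph's common size-3 replication, and roughly 80% practical fill level (Ceph warns well before OSDs run full):

```python
# Rough usable-capacity estimate for a replicated Ceph pool (assumptions:
# 3 nodes, size-3 replication, ~80% practical fill level).
def usable_tb(nodes: int, drives_per_node: int, drive_tb: float,
              replicas: int = 3, fill: float = 0.8) -> float:
    raw = nodes * drives_per_node * drive_tb
    return raw / replicas * fill

print(usable_tb(3, 4, 10))    # HDD: 32.0 TB, matching "~30 TB"
print(usable_tb(3, 4, 7.68))  # SSD: ~24.6 TB, matching "~25 TB"
```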
## Network

The network infrastructure can vary a lot from setup to setup. This guide does not intend to define the best networking solution
for every cluster, but rather gives two possible scenarios.

### Scenario A: Not recommended for production

The smallest possible setup is just a single switch connected to all the nodes physically on one interface. The switch has to be
VLAN-enabled. OpenStack recommends multiple isolated networks, but at least the following are recommended to be split:

- Out-of-band network
- Management networks
- Storage backend network
- Public / external network for virtual machines

If there is only one switch, these networks should all be defined as separate VLANs. One of the networks can run in the untagged default
VLAN 1.

### Scenario B: Minimum recommended setup for small production environments

The recommended setup uses two stacked switches connected in a LAG and at least three different physical network ports on each node.

- Physical network 1: VLANs for public / external network for virtual machines, management networks
- Physical network 2: Storage backend network
- Physical network 3: Out-of-band network

### Network adapters

The out-of-band network does not usually need a lot of bandwidth. Most modern servers come with 1 Gbit/s adapters, which are sufficient.
For small test clusters, it might also be sufficient to use 1 Gbit/s networks for the other two physical networks.
For a minimum production cluster it is recommended to use the following:

- Out-of-band network: 1 Gbit/s
- VLANs for public / external network for virtual machines, management networks: 10 / 25 Gbit/s
- Storage backend network: 10 / 25 / 40 Gbit/s

Whether you need higher throughput for your storage backend services depends on your expected storage load. The faster the network,
the faster storage data can be replicated between nodes. This usually leads to improved performance and better/faster fault tolerance.
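To make the bandwidth trade-off concrete, here is a crude, hypothetical estimate of how long re-replicating a given amount of data takes over a single link. Real Ceph recovery is throttled and spread across nodes, so treat this only as an upper-bound intuition; the 70% link efficiency is my assumption:

```python
# Naive transfer-time estimate: data_tb terabytes over one link of link_gbit.
def replication_hours(data_tb: float, link_gbit: float,
                      efficiency: float = 0.7) -> float:
    bits = data_tb * 8e12                  # TB -> bits
    rate = link_gbit * 1e9 * efficiency    # usable bits per second
    return bits / rate / 3600

print(round(replication_hours(10, 10), 1))  # ~3.2 h on 10 Gbit/s
print(round(replication_hours(10, 25), 1))  # ~1.3 h on 25 Gbit/s
```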
## How to continue

After implementing the recommended deployment example hardware, you can continue with the [deployment guide](https://docs.scs.community/docs/iaas/guides/deploy-guide/).

docs/03-container/index.md

Lines changed: 5 additions & 14 deletions

```diff
@@ -18,26 +18,17 @@ The container layer within the Sovereign Cloud Stack (SCS) offers a robust solut
 
 ### Prerequisites and Requirements
 
 - Knowledge: Familiarity with Kubernetes, container orchestration, and basic cloud infrastructure principles is pivotal.
-- Software: The core software component is the K8s Cluster API Provider, crafted to function optimally on OpenStack environments. Although designed to run on the SCS IaaS layer, with minor configuration adjustments, it can operate on any OpenStack environment.
+- Software: The core software components are the Cluster Stacks based on Cluster API, crafted to function best on OpenStack environments. Although designed to run on the SCS IaaS layer, with minor configuration adjustments, they can operate on any OpenStack environment.
 - Hardware: Virtualization-enabled hardware capable of running OpenStack is essential if hosting the IaaS layer independently. For further details, refer to the IaaS layer documentation.
 
 ### Features
 
-- Automated Cluster Management: The K8s Cluster API Provider automates the process of creating, scaling, managing and updating Kubernetes clusters, thus significantly reducing the operational overhead.
+- Automated Cluster Management: The Cluster API automates the process of creating, scaling, managing and updating Kubernetes clusters, thus significantly reducing the operational overhead.
 - Standardized Operations: Upholding SCS standards across various clusters ensures operational consistency and reliability.
-- Integration with OpenStack: The K8s Cluster API Provider is tailored to work seamlessly with SCS IaaS (OpenStack), thus offering a unified platform for managing both containers and the underlying infrastructure.
-- Container Registry Integration: The container layer has an integrated container registry, facilitating easy management and deployment of container images.
-- Certificate Managment: The kubernetes clusters can optionaly include a certbot allowing for ease of deployment of public facing services out of the box.
-- Preconfigured ingress: Certificate Management: Optional inclusion of Certbot in Kubernetes clusters facilitates straightforward deployment of publicly accessible services.
-  Preconfigured Ingress: Kubernetes clusters come with a preconfigured Nginx ingress, designed with OpenStack in mind, providing a ready-to-use ingress solution with enhancements like out-of-the-box client source IP visibility.
+- Integration with OpenStack: The Cluster Stacks are tailored to work seamlessly with SCS IaaS (OpenStack), thus offering a unified platform for managing both containers and the underlying infrastructure.
+- Container Registry Integration: The container layer has an optional container registry, facilitating easy management and deployment of container images.
+- Cluster Addons: Cluster Stacks come with a small default set of workload applications needed to make the cluster usable, such as a CNI plugin, a CSI plugin and a cloud controller manager.
 
 ### Limitations
 
-- OpenStack Dependency: The current design primarily supports OpenStack environments, which could be a limitation for other infrastructure setups.
 - Serverless/Functions as a Service Support: Lack of direct support for serverless containers and Functions as a Service (FaaS) might require additional tools or platforms.
-
-### Current state and future Outlook
-
-The container layer has matured with multiple cloud providers now offering Kubernetes as a Service using this layer to manage a multitude of clusters. It follows a half-yearly release schedule to ensure security and up-to-date Kubernetes clusters, alongside providing backports for significant features into older versions.
-
-Looking ahead, a new version based on ClusterStacks is in the pipeline, currently in its Alpha state. This upcoming release aims to be backward compatible, facilitating smooth migration from existing setups, and further extending the capabilities of the SCS container layer.
```

docs/index.mdx

Lines changed: 31 additions & 15 deletions

```diff
@@ -16,38 +16,54 @@ SCS is built, backed, and operated by an active open-source community worldwide.
 
 ## Use Cases and Deployment Examples
 
-### IaaS Layer
+### Virtualization (IaaS) Layer
 
-#### Quick Start with Cloud-In-A-Box
+The SCS IaaS Reference Implementation is based on [OSISM](https://osism.tech/).
 
-The fastest way to get in touch with SCS is to deploy a SCS cloud virtually. The Cloud-In-A-Box was built explicitly for this scenario. Check it out [here](/docs/iaas/guides/deploy-guide/examples/cloud-in-a-box)
+#### Quick Start with Cloud-in-a-Box
+
+You can do a single-node installation for learning, testing or development purposes.
+The Cloud-in-a-Box configuration was built explicitly for this scenario.
+Check it out [here](/docs/iaas/deployment-examples/cloud-in-a-box).
+It comes with a complete set of services; even Ceph is part of it
+(though of course it does not offer much redundancy on a single node).
 
 #### Reference Implementation Testbed
 
+The fastest way to get in touch with SCS is to deploy an SCS cloud virtually.
+
 This means that you set up an SCS test installation including all the infrastructure
 pieces such as database, message queueing, ceph, monitoring and logging, IAM, the
 [OpenStack](https://openstack.org/) core services, and (soon) the Container layer
-on top of an existing IaaS platform.
+on top of an existing OpenStack IaaS platform, ideally one that allows for nested
+virtualization.
 
-The SCS IaaS reference implementation is based on [OSISM](https://osism.tech/). Read on the
-[OSISM testbed docs](https://docs.osism.de/testbed/) to learn how to get the
+Read the [testbed docs](/docs/iaas/deployment-examples/testbed) to learn how to get the
 testbed running. Please read carefully through the
-[deployment](https://docs.osism.de/testbed/deployment.html) section of the
+[deployment](/docs/iaas/deployment-examples/testbed#deployment) section of the
 manual.
 
-### Container Layer
+#### Examples for real deployments
 
-#### K8s Cluster API Provider
+[artcodix](https://artcodix.com/) has [shared](/docs/iaas/deployment-examples/artcodix/)
+some details on their production setup. The SCS team itself has created [extensive
+documentation](/docs/turnkey-solution/hardware-landscape/) including details on the
+used hardware.
+
+### Container Layer
 
-You can easily deploy the container layer on top of the testbed (or a production
-SCS cloud) checking out the code from
-[k8s-cluster-api-provider](https://github.com/SovereignCloudStack/k8s-cluster-api-provider/).
+The Reference Implementation (v2) for the container (Kubernetes-as-a-Service = KaaS) layer
+is provided by [Cluster Stacks](/docs/category/cluster-stacks)
+from [syself](https://syself.com/).
 
 #### Cluster Stacks
 
-With the Cluster Stacks, in the V2 KaaS reference implementation, we provide an opinionated optimized configuration of Kubernetes clusters. Through better packaging, integrated testing, and bundled configuration, SCS-based Kubernetes clusters provide easier individualization.
-Throughout the R6 development cycle Cluster Stacks are taken from a technical preview to be [functional and available on top of the IaaS reference implementation](https://github.com/SovereignCloudStack/issues/milestone/8) as well to replace the V1 KaaS reference implementation [k8s-cluster-api-provider](https://github.com/SovereignCloudStack/k8s-cluster-api-provider/).
-The Cluster Stacks can already be tried with the [demo](https://github.com/SovereignCloudStack/cluster-stacks-demo) repository. Although this is based on the not-production-ready Docker provider, the usage is the same for every provider.
+With the Cluster Stacks, in the V2 KaaS reference implementation, we provide an opinionated, optimized configuration of Kubernetes cluster management based on [Kubernetes Cluster-API](https://cluster-api.sigs.k8s.io/).
+Through better packaging, integrated testing, and bundled configuration, SCS-based Kubernetes clusters provide easier individualization.
+Throughout the R6 development cycle, Cluster Stacks were taken from a technical preview to be [functional and available on top of the IaaS reference implementation](https://github.com/SovereignCloudStack/issues/milestone/8) as well as to replace the V1 KaaS reference implementation [k8s-cluster-api-provider](https://github.com/SovereignCloudStack/k8s-cluster-api-provider/).
+The Cluster Stacks have meanwhile fully replaced V1 as the production-grade KaaS solution in SCS; please check out the
+[Quick Start Guide](/docs/container/components/cluster-stacks/providers/openstack/quickstart).
+For demo, test and development purposes, you can also try the [demo](https://github.com/SovereignCloudStack/cluster-stacks-demo) repository, which is an implementation using the (not-for-production) Docker provider. Implementations for other infrastructure are intended; the one for Hetzner, for example, is maintained by syself itself.
 
 ### Public SCS Clouds in production
```
