Commit 1124740 ("add"), parent c28ef65
Signed-off-by: Marc Schöchlin <[email protected]>


docs/turnkey-solution/hardware-landscape.md

Lines changed: 56 additions & 0 deletions
@@ -60,3 +60,59 @@ The primary point of information and orientation is the [*readme file*](https://
which is stored at the top level of the [configuration repository](https://github.com/SovereignCloudStack/hardware-landscape).

The relevant **References** section links to the individual documentation areas.

## Specific installation and configuration details

* Processes for access management to the environment (two VPN gateways, SSH logins, SSH profiles, ...) have been implemented
* The production and lab environments have been set up, automated and documented as described above
* The complete environment is managed in a [Git repository](https://github.com/SovereignCloudStack/hardware-landscape);
  adjustments and further developments are managed via Git merge requests
* Almost all installation steps are [documented and automated](https://github.com/SovereignCloudStack/hardware-landscape/blob/main/documentation/System_Deployment.md),
  starting from a bare rack installation (the setup is extensively documented, in particular the few remaining manual steps)
* The entire customized setup of the nodes is [implemented by OSISM/Ansible](https://github.com/SovereignCloudStack/hardware-landscape/tree/main/environments/custom)
* All secrets (e.g. passwords) of the environment are stored and versioned in an encrypted Ansible Vault in
  the repository (when access is handed over, rekeying can be used to change the access credentials or the rights to them)
* Far-reaching and in-depth automation has been created that allows the environment, or parts of it,
  to be set up again with a reasonable amount of personnel effort
* The setup of the basic environment was implemented with Ansible, using the OSISM environment (the reference implementation)
* Python tooling was created that covers areas specific to the use case of the environment and provides functions that simplify the operation of the infrastructure
  * Server systems
    * Backup and restore of the hardware configuration
    * Templating of the BMC configuration
    * Automatic installation of the operating system base image via Redfish Virtual Media
    * Control of the server power state via command line (to stop and start systems for test, maintenance and energy-saving purposes)
    * Generation of base profiles for the Ansible inventory based on the hardware key data stored in the documentation
  * Switches
    * Backup and restore of the switch configuration
    * Generation of base profiles for the Ansible inventory based on the hardware key data stored in the documentation
* Network setup
  * The two management hosts act as redundant VPN gateways, SSH jump hosts, routers and uplink routers
  * The system is deployed with a layer 3 underlay concept
  * An "eBGP router on the host" is implemented for node interconnectivity
    (all nodes and all switches run FRR instances)
  * The Ceph and OpenStack nodes of the system have no direct upstream routing
    (access is configured and provided by HTTP, NTP and DNS proxies)
  * For security reasons, the system itself can only be accessed via VPN.
    The provider network of the production environment is realized with a VXLAN which is terminated on the managers for routing
    ("a virtual provider network")
* The basic node installation was realized in such a way that specific [node images](https://github.com/osism/node-image)
  are created for the respective rack, which makes operating or reconfiguring network equipment for PXE bootstrap
  unnecessary (a preliminary stage for rollout via OpenStack Ironic)
* The management of the hardware (BMC and switch management) is implemented in a dedicated VLAN
* Routing, firewalling and NAT are managed by an nftables script which adds rules in an idempotent way to the existing rules
  of the manager nodes
* The [openstack workload generator](https://github.com/SovereignCloudStack/openstack-workload-generator) is used to put test workloads
  on the system
  * Automated creation of OpenStack domains, projects, servers, networks, users, etc.
  * Launching test workloads
  * Dismantling test workloads
* An observability stack was built
  * Prometheus for metrics
  * OpenSearch for log aggregation
  * A central syslog server for the switches on the managers (recorded via the manager nodes in OpenSearch)
* Specific documentation was created for the project
  * Details of the hardware installed in the environment
  * The physical structure of the environment was documented in detail (rack installation and cabling)
  * The technical and logical structure of the environment was documented in detail
  * A FAQ for handling the open-source network operating system SONiC was created, covering topics relevant to the test environment
* As part of the development, the documentation and implementation of the OSISM reference implementation was significantly improved (essentially resulting from
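The vault rekeying mentioned above can be sketched with the standard `ansible-vault` CLI; the file paths and password-file names here are illustrative assumptions, not the repository's actual layout:

```shell
# Re-encrypt vault-protected files with a new password when access to the
# environment is handed over. Paths and file names are hypothetical.
ansible-vault rekey \
  --vault-password-file old_vault_password.txt \
  --new-vault-password-file new_vault_password.txt \
  environments/secrets.yml
```

After rekeying, the old password no longer decrypts the files, so revoking access amounts to distributing only the new password file.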
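The "eBGP router on the host" pattern described under "Network setup" typically looks like the following FRR configuration fragment; the ASN, interface names, loopback address and the use of BGP unnumbered peering are assumptions for illustration:

```
router bgp 4210000101
 ! eBGP unnumbered peering towards both leaf switches
 neighbor eth1 interface remote-as external
 neighbor eth2 interface remote-as external
 address-family ipv4 unicast
  ! announce the host's loopback /32 into the layer 3 fabric
  network 10.10.40.101/32
 exit-address-family
```

With every node and switch running such an FRR instance, host reachability is pure routed layer 3 and survives the loss of either uplink.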
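An idempotent nftables approach like the one described above usually works by declaring a dedicated table that is deleted and re-created atomically on every run, leaving all other rules on the manager node untouched; the table, chain and interface names below are illustrative assumptions:

```
#!/usr/sbin/nft -f
# Declaring the table first makes the following delete succeed even on the
# first run; re-running this file therefore always yields the same ruleset.
table inet scs_manager
delete table inet scs_manager

table inet scs_manager {
    chain forward {
        type filter hook forward priority 0; policy accept;
        ct state established,related accept
        iifname "wg0" accept          # traffic arriving via the VPN
    }
    chain postrouting {
        type nat hook postrouting priority srcnat;
        oifname "uplink0" masquerade  # NAT towards the uplink
    }
}
```

Because only the script's own table is replaced, rules installed by other components (e.g. container runtimes) on the manager nodes are never clobbered.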
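The Redfish-based power control and virtual-media installation listed under "Server systems" can be illustrated with a minimal sketch. The action paths follow the DMTF Redfish standard, but the system/manager IDs and the helper functions themselves are hypothetical, not the project's actual tooling:

```python
import json

# Standard Redfish action endpoints (the system/manager IDs "1" and the
# virtual-media slot "CD" are illustrative and vary per BMC vendor).
RESET_URL = "/redfish/v1/Systems/1/Actions/ComputerSystem.Reset"
INSERT_URL = ("/redfish/v1/Managers/1/VirtualMedia/CD/"
              "Actions/VirtualMedia.InsertMedia")

def reset_payload(reset_type: str) -> str:
    """Build the JSON body for a ComputerSystem.Reset action."""
    allowed = {"On", "GracefulShutdown", "ForceOff", "ForceRestart"}
    if reset_type not in allowed:
        raise ValueError(f"unsupported reset type: {reset_type}")
    return json.dumps({"ResetType": reset_type})

def insert_media_payload(image_url: str) -> str:
    """Build the JSON body for mounting an OS base image as virtual media."""
    return json.dumps({"Image": image_url, "Inserted": True,
                       "WriteProtected": True})
```

A client would POST these bodies to the BMC over HTTPS with basic authentication; the project's Python tooling wraps calls of this kind for start/stop and automated base-image installation.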
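Generating Ansible inventory base profiles from the documented hardware key data, as mentioned for both servers and switches, can be sketched as follows; the host names, field names and output structure are assumptions for illustration, not the project's real schema:

```python
# Hypothetical hardware key data as it might be recorded in the documentation.
HARDWARE = [
    {"name": "node-r01-01", "bmc_ip": "10.10.23.11", "role": "control"},
    {"name": "node-r01-02", "bmc_ip": "10.10.23.12", "role": "compute"},
]

def base_profiles(hardware: list) -> dict:
    """Derive per-host inventory variable stubs from hardware key data."""
    profiles = {}
    for hw in hardware:
        profiles[hw["name"]] = {
            "ansible_host": hw["name"],
            "bmc_address": hw["bmc_ip"],
            "node_role": hw["role"],
        }
    return profiles
```

The point of the pattern is that the documentation stays the single source of truth: regenerating the profiles after a hardware change keeps the inventory consistent without manual editing.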
