@@ -232,10 +232,10 @@ The following fields are also filled in when creating a Device:
 * Device Type
 * MAC Addresses

-Note there is typically both a primary and management (e.g., BMC/IPMI)
+Note there is typically both a primary and a management (e.g., BMC/IPMI)
 interface. One convenience feature of Netbox is to use the *Device Type* as a
-template that will set the default naming of interfaces, power connections, and
-other equipment model specific characteristics.
+template that sets the default naming of interfaces, power connections, and
+other equipment model specific attributes.

 Finally, the virtual interfaces for the Device must be specified, with its
 *Label* field set to the physical network interface that it is assigned. IP
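
For illustration, a Device and its two interfaces might be captured along the following lines when exported from NetBox as YAML. The field names here are assumptions loosely based on NetBox's data model, and all names and addresses are invented:

```yaml
# Hypothetical YAML rendering of a NetBox Device record (illustrative
# field names; not the output of any specific export tool).
device:
  name: compute-1
  device_type: supermicro-sys-6019   # template for interface naming, etc.
  site: edge-site-1
  interfaces:
    - name: eno1        # primary interface
      mac_address: "ac:1f:6b:aa:bb:01"
      label: eno1
    - name: ipmi        # management (BMC/IPMI) interface
      mac_address: "ac:1f:6b:aa:bb:02"
      label: ipmi
```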
@@ -347,11 +347,11 @@ the eNBs when it is physically installed, but those parameters will
 become settable through the Management Platform once the cluster is
 brought online.

-Manual configuration work done at this stage should be minimized, and most
-systems should be configured to use automated means of configuration. For
-example, using DHCP pervasively with MAC reservations for IP address assignment
-instead of manual configuration of each interface allows for management to be
-Zero Touch and simplifies future reconfiguration.
+Manual configuration work done at this stage should be minimized, and
+most systems should use automated means of configuration. For example,
+using DHCP pervasively with MAC reservations for IP address assignment
+instead of manual configuration of each interface allows for
+management to be zero-touch and simplifies future reconfiguration.

 The automated aspects of configuration are implemented as a set of
 Ansible *roles* and *playbooks*, which in terms of the high-level
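
As a sketch of what DHCP-based zero-touch assignment can look like, the following hypothetical Ansible variables map each server's MAC address to a fixed IP, from which a playbook could template the DHCP server's per-host reservations. All hostnames, MACs, and addresses here are invented:

```yaml
# Hypothetical inventory variables consumed by a DHCP role; a playbook
# would template these into per-host reservations so each interface
# always receives the same address, with no per-machine manual setup.
dhcp_reservations:
  - hostname: mgmt-server
    mac: "ac:1f:6b:aa:bb:01"
    ip: 10.0.0.2
  - hostname: compute-1
    mac: "ac:1f:6b:aa:bb:11"
    ip: 10.0.0.11
  - hostname: compute-2
    mac: "ac:1f:6b:aa:bb:12"
    ip: 10.0.0.12
```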
@@ -375,18 +375,18 @@ the management network is online.

 The Ansible playbooks install and configure the network services on the
 Management Server. The role of DNS and DHCP are obvious. As for iPXE and Nginx,
-they are used to bootstrap the rest of the infrastructure: the compute servers
-are configured by iPXE delivered over DHCP/TFTP, then loading the scripted OS
-installation from a Nginx webserver, and the fabric switches receive their
-Stratum OS package from a webserver.
+they are used to bootstrap the rest of the infrastructure. The compute servers
+are configured by iPXE delivered over DHCP/TFTP, and then load the scripted OS
+installation from an Nginx webserver. The fabric switches load their
+Stratum OS package from Nginx.

 In many cases, the playbooks use parameters—such as VLANs, IP
 addresses, DNS names, and so on—extracted from NetBox. :numref:`Figure
 %s <fig-ansible>` illustrates the approach, and fills in a few
 details. For example, a home-grown Python program (``edgeconfig.py``)
 extracts data from NetBox using the REST API and outputs a corresponding
 set of YAML files, crafted to serve as input to Ansible, which creates yet
-more configuration on management and compute systems. One example of this
+more configuration on the management and compute systems. One example of this
 is the *Netplan* file, which is used in Ubuntu to manage network interfaces.
 More information about Ansible and Netplan can be found on their respective web
 sites.
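
As a simplified illustration of the end product, a generated Netplan file for a compute server might look like the following. The interface name, VLAN, and addresses are invented, but the structure follows Netplan's documented YAML schema:

```yaml
# /etc/netplan/01-netcfg.yaml (illustrative): one physical interface
# using DHCP (backed by a MAC reservation on the DHCP server) and a
# VLAN for fabric traffic with a statically assigned address.
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: true
  vlans:
    vlan800:
      id: 800
      link: eno1
      addresses: [10.32.4.2/24]
```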
@@ -644,18 +644,17 @@ Starting with declarative language and auto-generating the right
 sequence of API calls is a proven way to overcome that problem.


-We conclude the discussion by drawing attention to the fact that while
-we now have a declarative specification for our cloud infrastructure,
-which we refer to as the *Aether Platform*, these specification files
-are yet another software artifact that we check into the Config
-Repo. This is what we mean by Infrastructure-as-Code: infrastructure
-specifications are checked into a repo and version-controlled like
-any other code. This repo, in turn, feeds the lifecycle management
-pipeline described in the next chapter. The physical provisioning
-steps described in Section 3.1 happen "outside" the pipeline (which is
-why we don't just fold resource provisioning into Lifecycle
-Management), but it is fair to think of resource provisioning as
-"Stage 0" of lifecycle management.
+We conclude by drawing attention to the fact that while we now have a
+declarative specification for our cloud infrastructure, which we refer
+to as the *Aether Platform*, these specification files are yet another
+software artifact that we check into the Config Repo. This is what we
+mean by Infrastructure-as-Code: infrastructure specifications are
+checked into a repo and version-controlled like any other code. This
+repo, in turn, feeds the lifecycle management pipeline described in
+the next chapter. The physical provisioning steps described in Section
+3.1 happen "outside" the pipeline (which is why we don't just fold
+resource provisioning into Lifecycle Management), but it is fair to
+think of resource provisioning as "Stage 0" of lifecycle management.

 3.3 Platform Definition
 ------------------------