
Commit f606a81

fixing conflict
2 parents: 4d05b74 + 0a6c029

File tree

2 files changed: +25 -22 lines

authors.rst

Lines changed: 13 additions & 10 deletions
@@ -38,14 +38,17 @@ been a systems programmer at the University of Arizona. He received
 his BS in Computer Science from the University of Arizona in 2001.
 
 **Bruce Davie** is a computer scientist noted for his contributions to
-the field of networking. He is a former VP and CTO for the Asia
-Pacific region at VMware. He joined VMware during the acquisition of
-Software Defined Networking (SDN) startup Nicira. Prior to that, he
-was a Fellow at Cisco Systems, leading a team of architects
-responsible for Multiprotocol Label Switching (MPLS). Davie has over
-30 years of networking industry experience and has co-authored 17
-RFCs. He was recognized as an ACM Fellow in 2009 and chaired ACM
-SIGCOMM from 2009 to 2013. He was also a visiting lecturer at the
-Massachusetts Institute of Technology for five years. Davie is the
-author of multiple books and the holder of more than 40 U.S. Patents.
+the field of networking. He began his networking career at Bellcore
+where he worked on the Aurora Gigabit testbed and collaborated with
+Larry Peterson on high-speed host-network interfaces. He then went to
+Cisco where he led a team of architects responsible for Multiprotocol
+Label Switching (MPLS). He worked extensively at the IETF on
+standardizing MPLS and various quality of service technologies. He
+also spent five years as a visiting lecturer at the Massachusetts
+Institute of Technology. In 2012 he joined Software Defined Networking
+(SDN) startup Nicira and was then a principal engineer at VMware
+following the acquisition of Nicira. In 2017 he took on the role of VP
+and CTO for the Asia Pacific region at VMware. He is a Fellow of the
+ACM and chaired ACM SIGCOMM from 2009 to 2013. Davie is the author of
+multiple books and the holder of more than 40 U.S. patents.
 

monitor.rst

Lines changed: 12 additions & 12 deletions
@@ -77,7 +77,7 @@ closed-loop control where the automated tool not only detects problems
 but is also able to issue corrective control directives. For the
 purpose of this chapter, we give examples of the first two (alerts and
 dashboards), and declare the latter two (analytics and close-loop
-control) as out-of-scope (but likely running as applications that
+control) as out of scope (but likely running as applications that
 consume the telemetry data outlined in the sections that follow).
 
 Third, when viewed from the perspective of lifecycle management,
@@ -96,9 +96,9 @@ Finally, because the metrics, logs, and traces collected by the
 various subsystems are timestamped, it is possible to establish
 correlations among them, which is helpful when debugging a problem or
 deciding whether or not an alert is warranted. We give examples of how
-such telemetry-wide functions are implemented in practice today, as
-well as discuss the future future of generating and using telemetry
-data, in the final two sections of this chapter.
+such telemetry-wide functions are implemented in practice today, and
+discuss the future of generating and using telemetry data, in the
+final two sections of this chapter.
 
 6.1 Metrics and Alerts
 -------------------------------
@@ -170,7 +170,7 @@ to the central location (e.g., to be displayed by Grafana as described
 in the next subsection). This is appropriate for metrics that are both
 high-volume and seldom viewed. One exception is the end-to-end tests
 described in the previous paragraph. These results are immediately
-pushed to the central site (bypassing the local Prometheus), because
+pushed to the central site (bypassing the local Prometheus instance), because
 they are low-volume and may require immediate attention.
 
 6.1.2 Creating Dashboards
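
One way such low-volume, high-urgency results could be delivered directly to the central site is via a Prometheus Pushgateway. The excerpt does not say which mechanism Aether actually uses, so the sketch below is purely illustrative: the gateway address, job name, and metric name are all hypothetical.

    # Hypothetical sketch: pushing an end-to-end test result directly to a
    # central Prometheus Pushgateway, bypassing the local Prometheus instance.
    # The gateway address, job name, and metric names are illustrative only.
    from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

    registry = CollectorRegistry()
    result = Gauge(
        "e2e_test_success",                 # 1 = pass, 0 = fail
        "Outcome of the periodic end-to-end connectivity test",
        ["edge_site"],
        registry=registry,
    )
    result.labels(edge_site="ace-example").set(1)

    # Push straight to the central site so the result is visible immediately.
    push_to_gateway("central-monitoring.example.org:9091",
                    job="e2e-tests", registry=registry)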
@@ -179,7 +179,7 @@ they are low-volume and may require immediate attention.
 The metrics collected by Prometheus are visualized using Grafana
 dashboards. In Aether, this means the Grafana instance running as
 part of AMP in the central cloud sends queries to some combination of
-the central Prometheus and a subset of the Prometheus instances
+the central Prometheus instance and a subset of the Prometheus instances
 running on edge clusters. For example, :numref:`Figure %s
 <fig-ace_dash>` shows the summary dashboard for a collection of Aether
 edge sites.
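
The query fan-out described in this hunk is normally configured inside Grafana itself, but as a rough illustration of the idea, the sketch below issues the same PromQL query against a central instance and a few edge instances over Prometheus's standard HTTP query API. The hostnames and the example query are assumptions, not Aether's actual configuration.

    # Hypothetical sketch: running one PromQL query against the central
    # Prometheus instance plus a subset of edge instances, the way a Grafana
    # dashboard panel might. Hostnames and the query are illustrative only.
    import requests

    PROMETHEUS_INSTANCES = [
        "http://prometheus.central.example.org:9090",
        "http://prometheus.edge1.example.org:9090",
        "http://prometheus.edge2.example.org:9090",
    ]
    QUERY = 'sum(rate(http_requests_total[5m])) by (instance)'

    def query_all(promql: str) -> dict:
        """Run one PromQL query against every configured instance."""
        results = {}
        for base_url in PROMETHEUS_INSTANCES:
            resp = requests.get(f"{base_url}/api/v1/query",
                                params={"query": promql}, timeout=5)
            resp.raise_for_status()
            results[base_url] = resp.json()["data"]["result"]
        return results

    if __name__ == "__main__":
        for url, series in query_all(QUERY).items():
            print(url, len(series), "series")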
@@ -497,9 +497,9 @@ SD-Core, which augments the UPF performance data shown in
 in a Grafana dashboard.
 
 Second, the runtime control interface described in Chapter 5 provides
-a means to change various parameters of a running system, but having
-access to the data needed to know what changes (if any) need to be
-made is a prerequisite for making informed decisions. To this end, it
+a means to change various parameters of a running system, but to make
+informed decisions about what changes (if any) need to be
+made, it is necessary to have access to the right data. To this end, it
 is ideal to have access to both the "knobs" and the "dials" on an
 integrated dashboard. This can be accomplished by incorporating
 Grafana frames in the Runtime Control GUI, which, in its simplest form,
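
Grafana can render individual panels through its "solo panel" URLs, which are commonly embedded in an iframe. The sketch below shows one way a runtime-control GUI might construct such a frame; the Grafana host, dashboard UID, slug, and panel id are placeholders, and the excerpt does not specify how Aether's GUI actually does this.

    # Hypothetical sketch: building the URL for an embedded ("solo") Grafana
    # panel so a runtime-control GUI can show a dial next to its knobs.
    # The Grafana host, dashboard UID, slug, and panel id are placeholders.
    from urllib.parse import urlencode

    def grafana_panel_iframe(dashboard_uid: str, slug: str, panel_id: int,
                             base_url: str = "https://grafana.example.org") -> str:
        """Return an <iframe> tag that embeds a single Grafana panel."""
        params = urlencode({"orgId": 1, "panelId": panel_id, "refresh": "30s"})
        src = f"{base_url}/d-solo/{dashboard_uid}/{slug}?{params}"
        return (f'<iframe src="{src}" width="450" height="200" '
                f'frameborder="0"></iframe>')

    print(grafana_panel_iframe("abc123", "upf-performance", panel_id=4))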
@@ -584,9 +584,9 @@ Chapter 1. A Service Mesh framework such as Istio provides a means to
 enforce fine-grained security policies and collect telemetry data in
 cloud native applications by injecting "observation/enforcement
 points" between microservices. These injection points, called
-*sidecars*, are typically implemented by a container that "runs along
-side" the containers that implement each microservice, with all RPC
-calls from Service A to Service B passing through their associated
+*sidecars*, are typically implemented by a container that "runs
+alongside" the containers that implement each microservice, with all
+RPC calls from Service A to Service B passing through their associated
 sidecars. As shown in :numref:`Figure %s <fig-mesh>`, these sidecars
 then implement whatever policies the operator wants to impose on the
 application, sending telemetry data to a global collector and
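
In Istio the sidecars are Envoy proxies, whose real implementation is far richer than anything shown here. Purely as a toy illustration of the interception pattern this hunk describes, the sketch below fronts a single service with a tiny HTTP proxy that forwards each request and records telemetry about the call; the ports, service address, and printed "telemetry" are all made up.

    # Toy illustration of the sidecar pattern (not Istio/Envoy): a minimal
    # HTTP proxy that forwards each request to its local service and records
    # telemetry about the call. Ports and the service address are made up.
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    import requests

    SERVICE_URL = "http://127.0.0.1:8080"   # the microservice this sidecar fronts

    class SidecarProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            start = time.time()
            upstream = requests.get(SERVICE_URL + self.path, timeout=5)
            latency_ms = (time.time() - start) * 1000

            # Stand-in for exporting telemetry to a global collector.
            print(f"telemetry path={self.path} status={upstream.status_code} "
                  f"latency_ms={latency_ms:.1f}")

            self.send_response(upstream.status_code)
            self.end_headers()
            self.wfile.write(upstream.content)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 15001), SidecarProxy).serve_forever()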
