Commit 77a3636

NiuJ1ao authored and ustiugov committed

correct grammar issues
Signed-off-by: NiuJ1ao <[email protected]>
1 parent 1df18eb commit 77a3636

File tree

3 files changed: +14 −7 lines changed


CHANGELOG.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -8,6 +8,7 @@
 - Knative serving now can be tested separately from vHive. More info [here](./docs/developers_guide.md#Testing-stock-Knative-images).
 - Zipkin support added for tracing Knative function call requests. More info [here](./docs/developers_guide.md#Knative-request-tracing)
 - added support for MinIO object store. More info [here](./docs/developers_guide.md#MinIO-S3-service)
+- Added an automated tail-latency-aware profiler that collects the metrics for [TopDown](https://ieeexplore.ieee.org/document/6844459) characterization from Intel.

 ### Changed

@@ -23,7 +24,6 @@
 - Extended the developers guide on the modes of operation, performance analysis and vhive development environment inside containers.
 - Added a slide deck of Dmitrii's talk at Amazon.
 - Added a commit linter and a spell checker for `*.md` files.
-- Added an automated tail-latency-aware profiler that collects the metrics for [TopDown](https://ieeexplore.ieee.org/document/6844459) characterization from Intel.

 ### Changed

```

configs/.wordlist.txt

Lines changed: 5 additions & 0 deletions
```diff
@@ -210,6 +210,7 @@ WOOT
 WoNDP
 YY
 Zhu
+Zipkin
 al
 amna
 analytics
@@ -273,6 +274,7 @@ inproceedings
 integ
 invoker
 io
+istioctl
 jekyll
 jpg
 json
@@ -288,6 +290,7 @@ latencies
 ldquo
 linter
 loadStep
+localhost
 lsi
 lsquo
 margaritov
@@ -320,6 +323,7 @@ pre
 preconfigured
 priyank
 profileCPU
+profileCPUID
 profileTime
 profiler
 ps
@@ -369,3 +373,4 @@ xml
 xyz
 yaml
 yml
+zipkin
```

docs/profiling.md

Lines changed: 8 additions & 6 deletions
````diff
@@ -29,7 +29,7 @@ framework to run. The cool-down period is because the requests are issued in Rou
 requests that serve last on some VMs. So, the system runs more stably in the profiling period.

 During the profile period, the loader function records the average execution time of invocations and
-how many invocations return successfully. The Profiler and latency measurement goroutine also measures
+how many invocations return successfully. The Profiler and latency measurement goroutine also measure
 hardware counters and latencies at this phase.

 If tail latency violates 10x image unloaded service time at an RPS step, the function stops the iteration
@@ -38,11 +38,13 @@ the maximum RPS step. After the iteration stops, completed RPS per CPU, average
 average counters are saved in the `profile.csv`.

 For stable and accurate measurements, there are two ways of binding VMs. If all VMs are running the same image,
-only one VM needs to be measured to get rid of potential noises from global measurement. User can set profile CPU
+only one VM needs to be measured to get rid of potential noises from the global measurement. User can set profile CPU
 ID and the tool allocates only one VM to the physical core of the CPU. Then, the profiler collects counters from the core.

-While the vHive is running, Other processes may interfere the framework. Therefore, all VMs can be bind to a socket.
-If both profile CPU ID and bind socket are set, the CPU ID must be in the socket.
+While the tool is running, one may want to bind the VMs to a single socket to exclude
+the interference of the processes that are running on the same CPU, e.g.,
+the loader functionality. If both parameters, `-profileCPUID` and `-bindSocket`,
+are defined, the former should point to a core in the same socket.

 ## Runtime Arguments
 ```
@@ -101,7 +103,7 @@ INFO[] Bottleneck Backend_Bound with value 75.695000
 ...
 ```

-To study microarchitectural bottlenecks in a more detail, one needs to profile the same
+To study microarchitectural bottlenecks in more detail, one needs to profile the same
 configuration at the lower level.
 For example, if the bottleneck is in the backend at level 1, one should profile level 2
 of the backend category:
@@ -114,7 +116,7 @@ sudo env "PATH=$PATH" go test -v -timeout 99999s -run TestProfileSingleConfigura
 `TestProfileIncrementConfiguration` increments the number of VMs by a user-defined number,
 further referred to as the increment, until the number of active VMs reaches the user-defined
 maximum. At each step, the maximum RPS is projected based on the independently measured
-average unloaded service time of the deployed functions, and the number of cores that are
+average unloaded service time of the deployed functions and the number of cores that are
 available for the VMs to run on. For instance, let us assume that there are 4 VMs running
 a `helloworld` function. The unloaded service time of this function is 1 millisecond, thus
 the maximum RPS is 4000, assuming that there are more than 4 cores in the CPU.
````
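The RPS projection and the tail-latency stop criterion described in the profiling.md text can be sketched in Go. This is a hypothetical reading of the docs, not vHive's actual implementation: it assumes each VM serves at most 1000/`unloadedMs` requests per second and that no more VMs than available cores run in parallel.

```go
package main

import "fmt"

// projectMaxRPS is a hypothetical sketch of the projection described in
// docs/profiling.md (not vHive's actual code): each VM can serve at most
// 1000/unloadedMs requests per second, and only as many VMs as there are
// available cores can run in parallel.
func projectMaxRPS(numVMs, numCores int, unloadedMs float64) float64 {
	active := numVMs
	if numCores < active {
		active = numCores
	}
	return float64(active) * 1000.0 / unloadedMs
}

// latencyViolated mirrors the stop criterion from the text: the RPS
// iteration stops once tail latency exceeds 10x the unloaded service time.
func latencyViolated(tailMs, unloadedMs float64) bool {
	return tailMs > 10*unloadedMs
}

func main() {
	// The example from the text: 4 VMs running `helloworld` with a 1 ms
	// unloaded service time on a CPU with more than 4 cores.
	fmt.Println(projectMaxRPS(4, 8, 1.0)) // 4000
	// 12 ms tail latency violates 10 * 1 ms, so the iteration would stop.
	fmt.Println(latencyViolated(12.0, 1.0)) // true
}
```

With fewer cores than VMs, the projection is capped by the core count instead, matching the "cores available for the VMs to run on" clause in the text.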
