
Commit 89f2c92

Markdown linting
1 parent 45a9576 commit 89f2c92

File tree

8 files changed (+74, -12 lines)

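The edits in this commit are typical markdown-lint fixes: blank lines added around fenced code blocks, bare URLs wrapped in angle brackets, leftover merge-conflict markers and stray blank lines removed, and an apparent whitespace-only image-link change. As a hedged sketch (the repository's actual lint tooling and configuration are not shown in this commit), fixes of this kind can be applied automatically with markdownlint-cli:

```bash
# Assumed tooling, not confirmed by the commit: markdownlint-cli's --fix applies
# auto-fixable rules such as MD031/MD032 (blank lines around fences and lists),
# MD034 (bare URLs) and MD009 (trailing spaces).
npx markdownlint-cli "content/**/*.md" --fix
```

Running the same command without `--fix` in CI would flag any future regressions instead of silently correcting them.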

content/en/imt/dashboards/signalflow.md

Lines changed: 2 additions & 0 deletions
@@ -36,11 +36,13 @@ Also, you can copy the SignalFlow and use it when interacting with the API or wi
 
 {{< tabs >}}
 {{% tab name="SignalFlow" %}}
+
 ```python
 A = data('demo.trans.latency', filter=filter('demo_datacenter', 'Paris')).percentile(pct=95).publish(label='A', enable=False)
 B = data('demo.trans.latency', filter=filter('demo_datacenter', 'Paris')).percentile(pct=95).timeshift('1w').publish(label='B', enable=False)
 C = (A-B).publish(label='C')
 ```
+
 {{% /tab %}}
 {{< /tabs >}}

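The SignalFlow program in the hunk above charts week-over-week latency: A is the 95th-percentile `demo.trans.latency` for the Paris datacenter, B is the same series shifted back one week, and C publishes their difference. The page this diff touches notes the program can also be used with the API; a minimal sketch, assuming the SignalFlow execute endpoint and that `REALM` and `ACCESS_TOKEN` are exported as elsewhere in the workshop (none of this is part of the commit):

```bash
# Hypothetical invocation: stream results for the SignalFlow program saved in program.txt.
curl -X POST "https://stream.$REALM.signalfx.com/v2/signalflow/execute" \
  -H "X-SF-Token: $ACCESS_TOKEN" \
  -H "Content-Type: text/plain" \
  --data-binary @program.txt
```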
content/en/tko/session-5/docs/edit-hpa.md

Lines changed: 0 additions & 5 deletions
@@ -12,11 +12,6 @@ Increase the `maxReplicas` to 8
 ``` bash
 kubectl edit hpa php-apache -n apache
 ```
-<<<<<<< HEAD
-=======
-
-Save the changes youhave made. (Hint: Use `Esc` followed by `:wq!` to save your changes).
->>>>>>> main
 
 Save the changes youhave made. (Hint: Use `Esc` followed by `:wq!` to save your changes).

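The five removed lines above are leftover merge-conflict markers, plus the duplicated sentence they surrounded. A quick way to catch leftovers like these before they land (illustrative only; not tooling used by this commit):

```bash
# Flag any conflict markers that survived a merge in markdown files.
git grep -nE '^(<{7}|={7}|>{7})' -- '*.md'
```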
content/en/tko/session-6/docs/lambda.md

Lines changed: 4 additions & 4 deletions
@@ -295,8 +295,8 @@ Look at all these differences!
 
 Notice how we are now importing some OpenTelemetry libraries directly into our function to handle some of the manual instrumenation tasks we require.
 
-We are using https://www.npmjs.com/package/@opentelemetry/api to manipulate the tracing logic in our functions.
-We are using https://www.npmjs.com/package/@opentelemetry/core to access the **Propagator** objects that we will use to manually propagate our context with.
+We are using <https://www.npmjs.com/package/@opentelemetry/api> to manipulate the tracing logic in our functions.
+We are using <https://www.npmjs.com/package/@opentelemetry/core> to access the **Propagator** objects that we will use to manually propagate our context with.
 
 The bellow code executes the following steps inside the Producer function:
 

@@ -425,8 +425,8 @@ Note how the *Trace ID* is something that makes up the trace *context* that we p
 
 You can read up on the two common propagation standards:
 
-1. W3C: https://www.w3.org/TR/trace-context/#traceparent-header
-2. B3: https://github.com/openzipkin/b3-propagation#overall-process
+1. W3C: <https://www.w3.org/TR/trace-context/#traceparent-header>
+2. B3: <https://github.com/openzipkin/b3-propagation#overall-process>
 
 Which one are we using? *It should be self-explanatory from the Propagator we are creating in the Functions*

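For context on the first linked standard: W3C trace context travels in a `traceparent` HTTP header of the form `version-traceid-parentid-traceflags`. A hedged illustration using the example IDs from the W3C spec and a placeholder URL (none of this comes from the workshop code itself):

```bash
# Placeholder request showing how a W3C traceparent header is carried between services.
curl -H "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01" \
  "https://example.com/producer"
```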
content/ja/imt/dashboards/adding-charts.md

Lines changed: 0 additions & 1 deletion
@@ -36,7 +36,6 @@ isCJKLanguage: true
 
 ![Three Dashboard](../../images/M-MoreCharts-6.png)
 
-
 ## 3. 貼り付けたチャートを編集する
 
 ダッシュボードの **Latency History** チャートの3つの点 **`...`** をクリックし、**Open** をクリックします(または、チャートの名前(ここでは **Latency History**)をクリックすることもできます)。

content/ja/imt/dashboards/signalflow.md

Lines changed: 2 additions & 0 deletions
@@ -37,11 +37,13 @@ SignalFlow の詳細については、 [Analyze incoming data using SignalFlow](
 
 {{< tabs >}}
 {{% tab name="SignalFlow" %}}
+
 ```python
 A = data('demo.trans.latency', filter=filter('demo_datacenter', 'Paris')).percentile(pct=95).publish(label='A', enable=False)
 B = data('demo.trans.latency', filter=filter('demo_datacenter', 'Paris')).percentile(pct=95).timeshift('1w').publish(label='B', enable=False)
 C = (A-B).publish(label='C')
 ```
+
 {{% /tab %}}
 {{< /tabs >}}

content/ja/imt/gdi/_index.md

Lines changed: 24 additions & 1 deletion
@@ -35,17 +35,21 @@ Kubernetes が起動したら、Splunk の UI から Access Token[^1] を取得
 
 {{< tabs >}}
 {{% tab name="Export ACCESS TOKEN" %}}
+
 ```bash
 export ACCESS_TOKEN="<replace_with_O11y-Workshop-ACCESS_TOKEN>"
 ```
+
 {{% /tab %}}
 {{< /tabs >}}
 
 {{< tabs >}}
 {{% tab name="Export REALM" %}}
+
 ```bash
 export REALM="<replace_with_REALM>"
 ```
+
 {{% /tab %}}
 {{< /tabs >}}
 

@@ -75,6 +79,7 @@ Update Complete. ⎈Happy Helming!⎈
 
 {{< tabs >}}
 {{% tab name="Helm Install" %}}
+
 ```bash
 helm install splunk-otel-collector \
 --set="splunkObservability.realm=$REALM" \

@@ -87,6 +92,7 @@ helm install splunk-otel-collector \
 splunk-otel-collector-chart/splunk-otel-collector \
 -f ~/workshop/k3s/otel-collector.yaml
 ```
+
 {{% /tab %}}
 {{% tab name="Helm Install Output" %}}
 Using ACCESS_TOKEN={REDACTED}

@@ -99,6 +105,7 @@ REVISION: 1
 TEST SUITE: None
 {{% /tab %}}
 {{% tab name="Install Network Explorer" %}}
+
 ```bash
 helm install splunk-otel-collector \
 --set="splunkObservability.realm=$REALM" \

@@ -117,6 +124,7 @@ helm install splunk-otel-collector \
 splunk-otel-collector-chart/splunk-otel-collector \
 -f ~/workshop/k3s/otel-collector.yaml
 ```
+
 {{% /tab %}}
 {{< /tabs >}}
 

@@ -126,14 +134,20 @@ splunk-otel-collector-chart/splunk-otel-collector \
 
 {{< tabs >}}
 {{% tab name="Kubectl Get Pods" %}}
+
 ```bash
 kubectl get pods
 ```
+
 {{% /tab %}}
 {{% tab name="Kubectl Get Pods Output" %}}
+
+``` text
 NAME READY STATUS RESTARTS AGE
 splunk-otel-collector-agent-2sk6k 0/1 Running 0 10s
 splunk-otel-collector-k8s-cluster-receiver-6956d4446f-gwnd7 0/1 Running 0 10s
+```
+
 {{% /tab %}}
 {{< /tabs >}}
 

@@ -143,11 +157,15 @@ OpenTelemetry Collector podのログを確認して、エラーがないこと
 
 {{< tabs >}}
 {{% tab name="Kubectl Logs" %}}
+
 ```bash
 kubectl logs -l app=splunk-otel-collector -f --container otel-collector
 ```
+
 {{% /tab %}}
 {{% tab name="Kubectl Logs Output" %}}
+
+``` text
 2021-03-21T16:11:10.900Z INFO service/service.go:364 Starting receivers...
 2021-03-21T16:11:10.900Z INFO builder/receivers_builder.go:70 Receiver is starting... {"component_kind": "receiver", "component_type": "prometheus", "component_name": "prometheus"}
 2021-03-21T16:11:11.009Z INFO builder/receivers_builder.go:75 Receiver started. {"component_kind": "receiver", "component_type": "prometheus", "component_name": "prometheus"}

@@ -158,6 +176,8 @@ kubectl logs -l app=splunk-otel-collector -f --container otel-collector
 2021-03-21T16:11:11.009Z INFO service/service.go:267 Everything is ready. Begin running and processing data.
 2021-03-21T16:11:11.009Z INFO [email protected]/receiver.go:59 Starting shared informers and wait for initial cache sync. {"component_kind": "receiver", "component_type": "k8s_cluster", "component_name": "k8s_cluster"}
 2021-03-21T16:11:11.281Z INFO [email protected]/receiver.go:75 Completed syncing shared informer caches. {"component_kind": "receiver", "component_type": "k8s_cluster", "component_name": "k8s_cluster"}
+```
+
 {{% /tab %}}
 {{< /tabs >}}
 

@@ -167,6 +187,7 @@ OpenTelemetry Collectorのインストールに失敗した場合は、次のよ
 ``` sh
 helm delete splunk-otel-collector
 ```
+
 {{% /notice %}}
 
 ---

@@ -183,9 +204,11 @@ Splunk の UI で左下の **>>** を開いて **Infrastructure** をクリッ
 
 {{< tabs >}}
 {{% tab name="Echo Cluster Name" %}}
+
 ```bash
 echo $(hostname)-k3s-cluster
 ```
+
 {{% /tab %}}
 {{< /tabs >}}
 

@@ -198,7 +221,7 @@ echo $(hostname)-k3s-cluster
 ![Filtered K8S Cluster](../images/filtered-k3s-cluster.png)
 
 ノードの状態を確認するには、クラスターの淡いブルーの背景にカーソルを置き、左上に表示される青い虫眼鏡をクリックしてください 。
-![Magnifying Glass](../images/blue-cross.png)
+![Magnifying Glass](../images/blue-cross.png)
 
 これで、ノードレベルまでドリルダウンできます。 次に、サイドバーボタンをクリックしてサイドバーを開き、Metricsサイドバーを開きます。

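The hunks above only add the lint-required blank lines and `text` fences around the workshop's existing `helm install` and `kubectl` snippets; the commands themselves are unchanged. As an optional verification sketch (not part of the workshop content; the pod label is taken from the `kubectl logs` command shown above):

```bash
# Confirm the Helm release exists, then wait for the collector pods to become Ready.
helm status splunk-otel-collector
kubectl wait --for=condition=Ready pod -l app=splunk-otel-collector --timeout=120s
```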
content/ja/imt/gdi/nginx.md

Lines changed: 20 additions & 0 deletions
@@ -24,9 +24,11 @@ Multipass または AWS/EC2 のシェルセッションで、`nginx` ディレ
 
 {{< tabs >}}
 {{% tab name="Change Directory" %}}
+
 ```bash
 cd ~/workshop/k3s/nginx
 ```
+
 {{% /tab %}}
 {{< /tabs >}}
 

@@ -38,22 +40,30 @@ NGINX の ConfigMap[^1] を `nginx.conf` ファイルを使って作成します
 
 {{< tabs >}}
 {{% tab name="Kubectl Configmap Create" %}}
+
 ```bash
 kubectl create configmap nginxconfig --from-file=nginx.conf
 ```
+
 {{% /tab %}}
 {{% tab name="Kubectl Create Configmap Output" %}}
+
+``` text
 configmap/nginxconfig created
+```
+
 {{% /tab %}}
 {{< /tabs >}}
 
 続いて、デプロイメントを作成します。
 
 {{< tabs >}}
 {{% tab name="Kubectl Create Deployment" %}}
+
 ```bash
 kubectl create -f nginx-deployment.yaml
 ```
+
 {{% /tab %}}
 {{% tab name="Kubectl Create Deployment Output" %}}
 deployment.apps/nginx created

@@ -65,15 +75,19 @@ service/nginx created
 
 {{< tabs >}}
 {{% tab name="Kubectl Create Deployment" %}}
+
 ```bash
 kubectl create -f locust-deployment.yaml
 ```
+
 {{% /tab %}}
 {{% tab name="Kubectl Create Deployment Output" %}}
+
 ```bash
 deployment.apps/nginx-loadgenerator created
 service/nginx-loadgenerator created
 ```
+
 {{% /tab %}}
 {{< /tabs >}}
 

@@ -95,11 +109,15 @@ Pod が実行状態に移行するまでには 20 秒程度しかかかりませ
 
 {{< tabs >}}
 {{% tab name="Kubectl Get Pods" %}}
+
 ```bash
 kubectl get pods
 ```
+
 {{% /tab %}}
 {{% tab name="Kubectl Get Pods Output" %}}
+
+``` text
 NAME READY STATUS RESTARTS AGE
 splunk-otel-collector-k8s-cluster-receiver-77784c659c-ttmpk 1/1 Running 0 9m19s
 splunk-otel-collector-agent-249rd 1/1 Running 0 9m19s

@@ -110,6 +128,8 @@ nginx-7b95fb6b6b-hlx27 1/1 Running
 nginx-7b95fb6b6b-zwns9 1/1 Running 0 5m57s
 svclb-nginx-loadgenerator-nscx4 1/1 Running 0 2m20s
 nginx-loadgenerator-755c8f7ff6-x957q 1/1 Running 0 2m20s
+```
+
 {{% /tab %}}
 {{< /tabs >}}

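Again, only blank lines and a `text` fence are added around the existing NGINX workshop commands. To double-check the objects those commands create, a small sketch using names that appear in the output tabs above (not a step in the workshop itself):

```bash
# Verify the ConfigMap contents and wait for the NGINX deployment to finish rolling out.
kubectl get configmap nginxconfig -o yaml
kubectl rollout status deployment/nginx
```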
content/ja/imt/monitoring-as-code/_index.md

Lines changed: 22 additions & 1 deletion
@@ -26,15 +26,16 @@ AWS/EC2 インスタンスにログインして、`o11y-cloud-jumpstart` ディ
 
 {{< tabs >}}
 {{% tab name="Change directory" %}}
+
 ```bash
 cd observability-content-contrib/integration-examples/terraform-jumpstart
 ```
+
 {{</tab >}}
 {{< /tabs >}}
 
 必要な環境変数は、[Helmによるインストール](../gdi/#2-helmによるインストール) ですでに設定されているはずです。そうでない場合は、以下の Terraform のステップで使用するために、以下の環境変数を作成してください。
 
-
 {{< tabs >}}
 {{% tab name="Export ACCESS TOKEN" %}}
 

@@ -63,11 +64,15 @@ Splunk Terraform Provider の新バージョンがリリースされるたびに
 
 {{< tabs >}}
 {{% tab name="Initialise Terraform" %}}
+
 ```bash
 terraform init -upgrade
 ```
+
 {{</tab >}}
 {{% tab name="Initialise Output" %}}
+
+``` text
 Upgrading modules...
 - aws in modules/aws
 - azure in modules/azure

@@ -106,6 +111,8 @@ terraform init -upgrade
 If you ever set or change modules or backend configuration for Terraform,
 rerun this command to reinitialize your working directory. If you forget, other
 commands will detect it and remind you to do so if necessary.
+```
+
 {{</tab >}}
 {{< /tabs >}}
 

@@ -150,22 +157,30 @@ Plan: 146 to add, 0 to change, 0 to destroy.
 
 {{< tabs >}}
 {{% tab name="Apply Plan" %}}
+
 ```bash
 terraform apply -var="access_token=$ACCESS_TOKEN" -var="realm=$REALM" -var="o11y_prefix=[$(hostname)]"
 ```
+
 {{</tab >}}
 {{% tab name="Apply Plan Output" %}}
+
+``` text
 Apply complete! Resources: 146 added, 0 changed, 0 destroyed.
+```
+
 {{</tab >}}
 {{< /tabs >}}
 
 適用が完了したら、 **Alerts → Detectors** でディテクターが作成されたことを確認してください。ディテクターのプレフィックスには、インスタンスのホスト名が入ります。プレフィックスの値を確認するには以下を実行してください。
 
 {{< tabs >}}
 {{% tab name="Echo Hostname" %}}
+
 ```bash
 echo $(hostname)
 ```
+
 {{</tab >}}
 {{< /tabs >}}
 

@@ -183,12 +198,18 @@ echo $(hostname)
 
 {{< tabs >}}
 {{% tab name="Destroy" %}}
+
 ```bash
 terraform destroy -var="access_token=$ACCESS_TOKEN" -var="realm=$REALM"
 ```
+
 {{</tab >}}
 {{% tab name="Destroy Output" %}}
+
+``` text
 Destroy complete! Resources: 146 destroyed.
+```
+
 {{</tab >}}
 {{< /tabs >}}

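The Terraform hunks follow the same pattern: blank lines and `text` fences around existing commands. Since the hunk context above references "Plan: 146 to add, 0 to change, 0 to destroy.", a preview can be produced with the same variables before applying; a sketch (the workshop goes straight to `terraform apply`, so this step is optional):

```bash
# Preview the resources Terraform would create before applying.
terraform plan -var="access_token=$ACCESS_TOKEN" -var="realm=$REALM" -var="o11y_prefix=[$(hostname)]"
```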