Commit 48d7372

added solving-problems-with-o11y-cloud workshop and updated docker details in docker-k8s-otel workshop
1 parent e21d526 commit 48d7372

6 files changed: +142 additions, -35 deletions


content/en/ninja-workshops/8-docker-k8s-otel/2-deploy-collector.md

Lines changed: 1 addition & 1 deletion
@@ -114,5 +114,5 @@ Where do we find the configuration that is used by this collector?
 
 It's available in the `/etc/otel/collector` directory. Since we installed the
 collector in `agent` mode, the collector configuration can be found in the
-`agent_config.yaml file`.
+`agent_config.yaml` file.
 
content/en/ninja-workshops/8-docker-k8s-otel/4-instrument-app-with-otel.md

Lines changed: 14 additions & 28 deletions
@@ -13,25 +13,12 @@ using the NuGet packages.
 We'll start by downloading the latest `splunk-otel-dotnet-install.sh` file,
 which we'll use to instrument our .NET application:
 
-{{< tabs >}}
-{{% tab title="Script" %}}
-
 ``` bash
 cd ~/workshop/docker-k8s-otel/helloworld
 
 curl -sSfL https://github.com/signalfx/splunk-otel-dotnet/releases/latest/download/splunk-otel-dotnet-install.sh -O
 ```
 
-{{% /tab %}}
-{{% tab title="Example Output" %}}
-
-``` bash
-TBD
-```
-
-{{% /tab %}}
-{{< /tabs >}}
-
 Refer to [Install the Splunk Distribution of OpenTelemetry .NET manually](https://docs.splunk.com/observability/en/gdi/get-data-in/application/otel-dotnet/instrumentation/instrument-dotnet-application.html#install-the-splunk-distribution-of-opentelemetry-net-manually)
 for further details on the installation process.
 
@@ -78,23 +65,29 @@ environment within Splunk Observability Cloud:
 export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=otel-$INSTANCE
 ```
 
-## A Challenge For You
 
-Before starting our .NET application with the instrumentation, there's a challenge for you.
+## Run the Application with Instrumentation
+
+We can run the application as follows:
+
+``` bash
+dotnet run
+```
+
+## A Challenge For You
 
-How can we see what traces are being exported by the .NET application? (i.e. on our Linux instance
-rather than within Observability Cloud)?
+How can we see what traces are being exported by the .NET application from our Linux instance?
 
 <details>
 <summary><b>Click here to see the answer</b></summary>
 
-There are two ways we can do this:
+There are two ways we can do this:
 
 1. We could add `OTEL_TRACES_EXPORTER=otlp,console` at the start of the `dotnet run` command, which ensures that traces are written both to the collector via OTLP and to the console.
 ``` bash
 OTEL_TRACES_EXPORTER=otlp,console dotnet run
 ```
-2. Alternatively, we could add the debug exporter to the collector configuration, and add it to the traces pipeline, which ensures the traces are written to the collector logs.
+2. Alternatively, we could add the debug exporter to the collector configuration, and add it to the traces pipeline, which ensures the traces are written to the collector logs.
 
 ``` yaml
 exporters:
@@ -112,16 +105,9 @@ service:
 ```
 </details>
 
+## Access the Application
 
-## Run the Application with Instrumentation
-
-We can run the application as follows:
-
-```
-dotnet run
-```
-
-Once it's running, use a second SSH terminal and access the application using curl:
+Once the application is running, use a second SSH terminal and access it using curl:
 
 ``` bash
 curl http://localhost:8080/hello
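The debug-exporter snippet in the answer above is truncated by this hunk. Purely for reference (not part of this commit), a minimal sketch of what the addition might look like is shown below; the receiver, processor, and existing exporter names are assumptions and should match whatever the collector configuration already uses:

``` yaml
exporters:
  # The debug exporter writes received telemetry to the collector's own logs
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]        # assumption: keep the existing receivers
      processors: [batch]      # assumption: keep the existing processors
      exporters: [otlp, debug] # append debug to the existing exporters
```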

content/en/ninja-workshops/8-docker-k8s-otel/5-dockerize-app.md

Lines changed: 75 additions & 1 deletion
@@ -48,7 +48,81 @@ What does all this mean? Let's break it down.
 
 ## Walking through the Dockerfile
 
-TODO
+We've used a multi-stage Dockerfile for this example, which separates the Docker image creation process into the following stages:
+
+* Base
+* Build
+* Publish
+* Final
+
+While a multi-stage approach is more complex, it allows us to create a
+lighter-weight runtime image for deployment. We'll explain the purpose of
+each of these stages below.
+
+### The Base Stage
+
+The base stage defines the user that will
+run the app, sets the working directory, and exposes
+the port that will be used to access the app.
+It's based on Microsoft's `mcr.microsoft.com/dotnet/aspnet:8.0` image:
+
+``` dockerfile
+FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
+USER app
+WORKDIR /app
+EXPOSE 8080
+```
+
+Note that the `mcr.microsoft.com/dotnet/aspnet:8.0` image includes the .NET runtime only,
+rather than the SDK, so it is relatively lightweight. It's based on the Debian 12 Linux
+distribution. You can find more information about the ASP.NET Core Runtime Docker images
+in [GitHub](https://github.com/dotnet/dotnet-docker/blob/main/README.aspnet.md).
+
+### The Build Stage
+
+The next stage of the Dockerfile is the build stage. For this stage, the
+`mcr.microsoft.com/dotnet/sdk:8.0` image is used, which is also based on
+Debian 12 but includes the full [.NET SDK](https://github.com/dotnet/dotnet-docker/blob/main/README.sdk.md) rather than just the runtime.
+
+This stage copies the application code to the build image and then
+uses `dotnet build` to build the project and its dependencies into a
+set of `.dll` binaries:
+
+``` dockerfile
+FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
+ARG BUILD_CONFIGURATION=Release
+WORKDIR /src
+COPY ["helloworld.csproj", "helloworld/"]
+RUN dotnet restore "./helloworld/./helloworld.csproj"
+WORKDIR "/src/helloworld"
+COPY . .
+RUN dotnet build "./helloworld.csproj" -c $BUILD_CONFIGURATION -o /app/build
+```
+
+### The Publish Stage
+
+The third stage is the publish stage, which is based on the build stage image rather than a Microsoft image. In this stage, `dotnet publish` is used to
+package the application and its dependencies for deployment:
+
+``` dockerfile
+FROM build AS publish
+ARG BUILD_CONFIGURATION=Release
+RUN dotnet publish "./helloworld.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false
+```
+
+### The Final Stage
+
+The fourth stage is our final stage, which is based on the base
+stage image (lighter-weight than the build and publish stages). It copies the output from the publish stage image and
+defines the entry point for our application:
+
+``` dockerfile
+FROM base AS final
+WORKDIR /app
+COPY --from=publish /app/publish .
+
+ENTRYPOINT ["dotnet", "helloworld.dll"]
+```
 
 ## Build a Docker Image
 
content/en/ninja-workshops/8-docker-k8s-otel/6-add-instrumentation-to-dockerfile.md

Lines changed: 25 additions & 4 deletions
@@ -20,10 +20,17 @@ After the .NET application is built in the Dockerfile, we want to:
 * Download the Splunk OTel .NET installer
 * Install the distribution
 
-We can add the following to the Dockerfile to do so:
+We can add the following to the build stage of the Dockerfile to do so:
 
 ``` dockerfile
-RUN dotnet build "./diceroll-app.csproj" -c $BUILD_CONFIGURATION -o /app/build
+FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
+ARG BUILD_CONFIGURATION=Release
+WORKDIR /src
+COPY ["helloworld.csproj", "helloworld/"]
+RUN dotnet restore "./helloworld/./helloworld.csproj"
+WORKDIR "/src/helloworld"
+COPY . .
+RUN dotnet build "./helloworld.csproj" -c $BUILD_CONFIGURATION -o /app/build
 
 # Add dependencies for splunk-otel-dotnet-install.sh
 RUN apt-get update && \
@@ -36,14 +43,13 @@ RUN curl -sSfL https://github.com/signalfx/splunk-otel-dotnet/releases/latest/do
 RUN sh ./splunk-otel-dotnet-install.sh
 ```
 
-Next, we'll update the Dockerfile to make the following changes to the final image:
+Next, we'll update the final stage of the Dockerfile with the following changes:
 
 * Copy the `/root/.splunk-otel-dotnet/` directory from the build image to the final image
 * Copy the `entrypoint.sh` file as well
 * Set the `OTEL_SERVICE_NAME` and `OTEL_RESOURCE_ATTRIBUTES` environment variables
 * Set the `ENTRYPOINT` to `entrypoint.sh`
 
-
 > Note: replace `$INSTANCE` in your Dockerfile with your instance name,
 > which can be determined by running `echo $INSTANCE`.
 
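The updated final stage itself falls outside the hunks shown in this diff. Purely as an illustration of the bulleted changes above (the service name, the destination path, the deployment environment value, and the `CMD` are assumptions, not taken from the commit), the result might look something like this:

``` dockerfile
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .

# Copy the OpenTelemetry .NET instrumentation and the entrypoint script from the build stage
# (the destination path is an assumption; it should match what entrypoint.sh expects)
COPY --from=build /root/.splunk-otel-dotnet/ /root/.splunk-otel-dotnet/
COPY entrypoint.sh .

# Replace the values below with your own service name and instance name (see: echo $INSTANCE)
ENV OTEL_SERVICE_NAME=helloworld
ENV OTEL_RESOURCE_ATTRIBUTES=deployment.environment=otel-instance123

# entrypoint.sh (which must be executable) sources the instrumentation
# environment, then runs the command passed to it
ENTRYPOINT ["./entrypoint.sh"]
CMD ["dotnet", "helloworld.dll"]
```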
@@ -83,6 +89,21 @@ with the following content:
 exec "$@"
 ```
 
+The `entrypoint.sh` script is required for sourcing environment variables from the `instrument.sh` script,
+which is included with the instrumentation. This ensures the correct setup of environment variables
+for each platform.
+
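Since this diff only shows the tail of `entrypoint.sh`, here is a minimal sketch of such a script for context (the exact path to `instrument.sh` is an assumption; adjust it to wherever the distribution was copied in the image):

``` sh
#!/bin/sh
# Source the environment variables set up by the Splunk OpenTelemetry .NET installer
. "$HOME/.splunk-otel-dotnet/instrument.sh"

# Run whatever command was passed to the container (for example, dotnet helloworld.dll)
exec "$@"
```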
+> You may be wondering, why can't we just include the following command in the Dockerfile to do this,
+> like we did when activating OpenTelemetry .NET instrumentation on our Linux host?
+> ``` dockerfile
+> RUN . $HOME/.splunk-otel-dotnet/instrument.sh
+> ```
+> The problem with this approach is that each Dockerfile RUN step runs a new container and a new shell.
+> If you try to set an environment variable in one shell, it will not be visible later on.
+> This problem is resolved by using an entrypoint script, as we've done here.
+> Refer to this [Stack Overflow post](https://stackoverflow.com/questions/55921914/how-to-source-a-script-with-environment-variables-in-a-docker-build-process)
+> for further details on this issue.
+
 ## Build the Docker Image
 
 Let's build a new Docker image that includes the OpenTelemetry .NET instrumentation:
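The actual build command falls outside this hunk. For illustration only (the image name and tag are assumptions), it would look something like:

``` bash
cd ~/workshop/docker-k8s-otel/helloworld

# Rebuild the image so that it now includes the OpenTelemetry .NET instrumentation
docker build -t helloworld:1.1 .
```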

content/en/ninja-workshops/8-docker-k8s-otel/_index.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ archetype: chapter
 time: 2 minutes
 authors: ["Derek Mitchell"]
 description: By the end of this workshop you'll have gotten hands-on experience instrumenting a .NET application with OpenTelemetry, then Dockerizing the application and deploying it to Kubernetes. You’ll also gain experience deploying the OpenTelemetry collector using Helm, customizing the collector configuration, and troubleshooting collector configuration issues.
-
+draft: true
 ---
 
 In this workshop, you'll get hands-on experience with the following:
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
+---
+title: Solving Problems with O11y Cloud
+linkTitle: Solving Problems with O11y Cloud
+weight: 9
+archetype: chapter
+time: 2 minutes
+authors: ["Derek Mitchell"]
+description: By the end of this workshop you'll have gotten hands-on experience deploying the OpenTelemetry Collector, instrumenting an application with OpenTelemetry, and using Troubleshooting MetricSets and Tag Spotlight to determine the root cause of an issue.
+draft: true
+---
+
+In this workshop, you'll get hands-on experience with the following:
+
+* Deploying the **OpenTelemetry Collector** and customizing the collector config
+* Deploying an application and instrumenting it with **OpenTelemetry**
+* Creating a Troubleshooting MetricSet
+* Troubleshooting a problem and determining its root cause using **Tag Spotlight**
+
+Let's get started!
+
+{{% notice title="Tip" style="primary" icon="lightbulb" %}}
+The easiest way to navigate through this workshop is by using:
+
+* the left/right arrows (**<** | **>**) on the top right of this page
+* the left (◀️) and right (▶️) cursor keys on your keyboard
+{{% /notice %}}
