
Commit deb2b80: "Wrote 05 and a few minor tweaks to 04" (parent: 3c27c2e)

File tree: 2 files changed (+123, -2 lines)

2 files changed

+123
-2
lines changed

docs/04-running-tests.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -2,14 +2,14 @@
 
 More to come here soon!
 
-## 🐳 Docker recap
+## 🐳 Docker Recap
 
 Before moving on, let's take a step back and focus on what we learned.
 
 - **Ephemeral test environments.** We no longer need long-running test environments with databases and other services just for testing. We can spin them up when needed and then tear them down, saving on costs and maintenance.
 - **No more resource contention.** Building on the previous notes, tests can now run in parallel as they spin up their own resources. No more waiting for another test suite to finish before the database can be used for the next test run.
 - **Test consistency.** By using containers in testing, the tests run by developers on their local machines will run the same way as they do in their CI environments. No more "it worked on my machine" for testing!
 
-## What's next?
+## Next steps
 
 Now that we've learned about development and testing, let's prepare our application for deployment by containerizing it!
```
Lines changed: 121 additions & 0 deletions
# 🔨 Building and pushing an image

So far, we've experienced the "it just works" capabilities of containers. The same can be true for our own apps too!

By running our apps in containers, we know they're going to work the same way anywhere there is a container engine. It'll just work!

To do this, we need to write a `Dockerfile`.

## Writing a Dockerfile

**NOTE:** With the Docker AI Agent, you can run the following command to create a Dockerfile. The AI features are not yet supported in Labspaces, but stay tuned!

```bash
docker ai "Containerize this project for me"
```

But since we can't run that in the Labspace, we'll create a Dockerfile manually.

1. Create a file at the root of the project called `Dockerfile`.

2. The first step when building an image is to determine what we're basing it on - what are we extending and building on top of?

   Since this is a Node-based project, we can use the [Docker Official Node image](https://hub.docker.com/_/node).

   ```dockerfile
   FROM node:lts-slim
   ```

   The `:lts-slim` portion is called a "tag". Think of it as the "version" we want to use. In this case, we're indicating we want the "LTS" (Long-Term Support) release, in its slimmed-down variant.
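
   If you want tighter control over which release you get, one option is to pin a specific major version instead of the floating `lts` alias. A minimal sketch (the exact `22-slim` tag is an assumption - verify the available tags on the image's Docker Hub page):

   ```dockerfile
   # Pinning a major version keeps a new LTS release from silently
   # changing the base image underneath you. (Illustrative tag.)
   FROM node:22-slim
   ```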
3. The next step is often to specify our "working directory" - where we want to add files and run commands inside this new image:

   ```dockerfile
   WORKDIR /usr/local/app
   ```

   The path can vary depending on teams, orgs, and companies. There's no one universal "right" path.

4. Next, let's install our app's dependencies. We'll do so by copying in the files that define our dependencies and then running the command to install them:

   ```dockerfile
   COPY package*.json ./
   RUN npm ci --production
   ```

   Copying just the dependency manifests first lets Docker cache this layer - the `npm ci` step only reruns when `package*.json` changes, not on every source edit.

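   As a side note: on npm 8 and newer, the `--production` flag is deprecated in favor of `--omit=dev`. A sketch of the equivalent instruction (assuming the base image ships npm 8+):

   ```dockerfile
   # Equivalent to --production on npm 8+; installs only runtime dependencies.
   RUN npm ci --omit=dev
   ```
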
5. Next, we'll set an environment variable to have the app run in a "production" mode (these variables vary depending on the languages and frameworks being used) and then copy in our app's source code:

   ```dockerfile
   ENV NODE_ENV=production
   COPY src/ ./src/
   ```

6. Finally, we'll add some configuration to specify how a container using this image should run by default - what's the default command and what port does it want to use?

   ```dockerfile
   EXPOSE 3000
   CMD ["node", "src/index.js"]
   ```

Your Dockerfile should now look like this:

```dockerfile
FROM node:lts-slim
WORKDIR /usr/local/app
COPY package*.json ./
RUN npm ci --production
ENV NODE_ENV=production
COPY src/ ./src/
EXPOSE 3000
CMD ["node", "src/index.js"]
```

## Building the image

Now that we have a Dockerfile, let's build the image and push it to Docker Hub!

1. Before moving forward, log in with your Docker account by running the following command and following the prompts:

   ```console
   docker login
   ```

2. Run the following to set an environment variable with your username:
87+
88+
```bash
89+
DOCKER_USERNAME=$(jq -r '.auths["https://index.docker.io/v1/"].auth' ~/.docker/config.json | base64 -d | cut -d: -f1); echo "Logged in as $DOCKER_USERNAME"
90+
```
91+
92+
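
   To see what that one-liner is doing, here's a minimal sketch of the same decode-and-split logic using a made-up credential (`alice:s3cret` is purely illustrative, not a real auth value):

   ```shell
   # docker login stores base64("username:password") under .auths in config.json.
   auth=$(printf 'alice:s3cret' | base64)

   # Decode it and keep only the part before the colon - the username.
   username=$(printf '%s' "$auth" | base64 -d | cut -d: -f1)
   echo "$username"   # prints: alice
   ```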
3. You're ready to build your image now. Build the container image using the following `docker build` command:

   ```bash
   docker build -t $DOCKER_USERNAME/memes-r-us --load .
   ```

   **NOTE:** The `--load` flag "loads" the image into the local container image store. It's only required because we're building in a Labspace environment; when running directly on your machine, the image loads automatically.

4. As of right now, the image is only available locally on your machine. To push it to Docker Hub, we can use the `docker push` command:

   ```bash
   docker push $DOCKER_USERNAME/memes-r-us
   ```

   Since we didn't specify a tag, the image is pushed with the default `latest` tag.

That's it! Or is it? Well, our image is built and pushed, but did we build a _good_ image? We'll talk about that in the next step!

## 🐳 Docker Recap

Before moving on, let's take a step back and focus on what we learned.

- **We build images using a Dockerfile.** The Dockerfile provides the instruction set for building container images.
- **The Docker AI tooling can help write that Dockerfile.** Since writing a Dockerfile can be tricky, the Docker AI Agent can make the process easier by analyzing the project and creating a Dockerfile that follows current best practices.

While we didn't cover it here, Docker Offload and Docker Build Cloud can delegate our builds to cloud-based infrastructure, letting them run faster on more powerful machines with a consistent build cache.

## Next steps

Now that we have an image, we want to answer the question: did we build a _good_ image? To do this, we'll leverage Docker Scout and explore Docker Hardened Images!
