This demonstration will walk through this project to showcase Docker's build, te
- Select the stars on the top left to "Ask Gordon"
- Select Explain my Dockerfile -> Give access to CatBot directory
- See the various descriptions of lines in the Dockerfile
- Let's run this and see it in action.

## Running in my dev environment

### Topics: Docker Model Runner, Containers, Docker Compose 🐳

- Navigate back to the project in VS Code
- Split view between VS Code and Chrome
- Run `docker compose up --build`
- Build the images and run them
- Navigate to localhost:3000 in Chrome
- Test it out!
- *How did this work?*
- Move into Docker Compose `compose.yaml`
- See we automatically spun up a frontend and a backend service
- *How did the cat talk to us?*
- *Easy: We are using Docker Model Runner to run a model locally.*
- Review logs where we connect to `http://model-runner.docker.internal/engines/llama.cpp/v1/chat/completions`
- Navigate to `server.js`
- *Note that we are interacting with the model through an OpenAI endpoint (chat/completions) from within the backend container*
- Take down services with `docker compose down`
- *How can we learn more about models?*
- In a separate terminal, run `docker model ls`
- See you can run a model using `docker model run ai/llama3.2`
- Exit with `/bye`
- :red_circle: NAVIGATE BACK TO SLIDES
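For reference, the `compose.yaml` described above might look roughly like this. This is a hypothetical sketch, not the repository's actual file: the service names, build contexts, and `MODEL_RUNNER_URL` variable are assumptions; only the published port 3000 and the Model Runner endpoint come from the demo steps.

```yaml
# Hypothetical sketch of compose.yaml -- names and paths are assumptions.
services:
  frontend:
    build: ./frontend          # assumed build context for the UI
    ports:
      - "3000:3000"            # app entry point from the demo (localhost:3000)
  backend:
    build: ./backend           # assumed build context for the API
    environment:
      # Endpoint the demo's backend logs show it connecting to
      MODEL_RUNNER_URL: http://model-runner.docker.internal/engines/llama.cpp/v1/chat/completions
```

Because both services live in one Compose file, `docker compose up --build` starts (and `docker compose down` removes) them together.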
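The `server.js` interaction with the OpenAI-style chat/completions endpoint can be sketched as follows. A minimal sketch, assuming Node 18+ (global `fetch`): the URL comes from the demo logs, but the function names, the `ai/llama3.2` model choice, and the payload wording are illustrative assumptions, not the project's actual code.

```javascript
// Endpoint shown in the demo's backend logs (Docker Model Runner,
// OpenAI-compatible chat/completions route).
const MODEL_RUNNER_URL =
  "http://model-runner.docker.internal/engines/llama.cpp/v1/chat/completions";

// Build an OpenAI-style chat request (pure function, easy to inspect).
function buildChatRequest(model, userMessage) {
  return {
    url: MODEL_RUNNER_URL,
    body: {
      model,
      messages: [{ role: "user", content: userMessage }],
    },
  };
}

// From inside the backend container, the request would be POSTed like this.
async function askModel(model, userMessage) {
  const { url, body } = buildChatRequest(model, userMessage);
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

const example = buildChatRequest("ai/llama3.2", "What should I name my cat?");
console.log(JSON.stringify(example.body));
```

Any OpenAI-compatible client can target the same URL, which is why the backend needs no model-specific SDK.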


## Let's scan the image we built and test our code!

### Topics: Scout, TCC 🐳

- Navigate to Docker Desktop and search for the image built from Compose
- Run analysis for vulnerabilities with Docker Scout
- Navigate back to VS Code
- Split VS Code and Docker Desktop
- Navigate to tests/server.test.js and show TestContainers logic
- Run `npm test` and watch the tests run; containers appear in DD
- Switch to TestContainers Cloud and re-run `npm test`; notice the containers do not appear in DD
- View results in the [TCC dashboard](https://app.testcontainers.cloud/accounts/9926/dashboard)
- :red_circle: NAVIGATE BACK TO SLIDES

## Bonus: How can we automate this?

- You can use a pipeline to automate this process; in this case, we use GitHub Actions
- Let's make a quick PR.
- Edit line 213 of App.js to a different cat name
- Quick preview of the change by running `docker compose up --build`
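A pipeline like the one referenced above might look roughly like this. This is a hypothetical sketch, not the repository's actual workflow: the workflow name, job name, and step layout are assumptions; only the `npm test` and Compose build commands come from the demo.

```yaml
# Hypothetical GitHub Actions sketch -- names and steps are assumptions.
name: ci
on:
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install and test
        run: |
          npm ci
          npm test
      - name: Build images
        run: docker compose build
```

Opening the PR described above would then trigger this workflow automatically.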