Commit 2dcdf92

more aws services done
1 parent d85c568 commit 2dcdf92

20 files changed, +39 -39 lines changed
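Every hunk in this commit makes the same one-line change: the `showLineNumbers` attribute is appended to the meta string of an existing MDX code fence so the rendered snippet displays line numbers. As a minimal sketch of the pattern (assuming the docs site's code-block renderer recognizes a `showLineNumbers` fence attribute), the edit to a fence looks like this:

```diff
-```java
+```java showLineNumbers
```

Fences that already carry options, such as `{hl_lines=[5,9,16]}`, keep them and gain `showLineNumbers` after the existing meta string.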

src/content/docs/aws/services/codebuild.mdx

Lines changed: 7 additions & 7 deletions
@@ -47,7 +47,7 @@ Let us walk through these files.
 It does nothing more than print a salutation message.
 Create a `MessageUtil.java` file and save it into the `src/main/java` directory.

-```java
+```java showLineNumbers
 public class MessageUtil {
   private String message;

@@ -71,7 +71,7 @@ public class MessageUtil {
 Every build needs to be tested.
 Therefore, create the `TestMessageUtil.java` file in the `src/test/java` directory.

-```java
+```java showLineNumbers
 import org.junit.Test;
 import org.junit.Ignore;
 import static org.junit.Assert.assertEquals;
@@ -101,7 +101,7 @@ This small suite simply verifies that the greeting message is built correctly.
 Finally, we need a `pom.xml` file to instruct Maven about what to build and which artifact needs to be produced.
 Create this file at the root of your directory.

-```xml
+```xml showLineNumbers
 <project xmlns="http://maven.apache.org/POM/4.0.0"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
@@ -140,7 +140,7 @@ A `buildspec` file is a collection of settings and commands, specified in YAML f

 Create this `buildspec.yml` file in the root directory.

-```yaml
+```yaml showLineNumbers
 version: 0.2

 phases:
@@ -200,7 +200,7 @@ awslocal s3 cp MessageUtil.zip s3://codebuild-demo-input
 To properly work, AWS CodeBuild needs access to other AWS services, e.g., to retrieve the source code from a S3 bucket.
 Create a `create-role.json` file with following content:

-```json
+```json showLineNumbers
 {
   "Version": "2012-10-17",
   "Statement": [
@@ -227,7 +227,7 @@ it will be needed to create the CodeBuild project later on.
 Let us now define a policy for the created role.
 Create a `put-role-policy.json` file with the following content:

-```json
+```json showLineNumbers
 {
   "Version": "2012-10-17",
   "Statement": [
@@ -302,7 +302,7 @@ awslocal codebuild create-project --generate-cli-skeleton
 From the generated file, change the source and the artifact location to match the S3 bucket names you just created.
 Similarly, fill in the ARN of the CodeBuild service role.

-```json {hl_lines=[5,9,16]}
+```json {hl_lines=[5,9,16]} showLineNumbers
 {
   "name": "codebuild-demo-project",
   "source": {

src/content/docs/aws/services/codepipeline.mdx

Lines changed: 3 additions & 3 deletions
@@ -61,7 +61,7 @@ This requires a properly configured IAM role that our pipeline can assume.

 Create the role and make note of the role ARN:

-```json
+```json showLineNumbers
 # role.json
 {
   "Version": "2012-10-17",
@@ -85,7 +85,7 @@ awslocal iam create-role --role-name role --assume-role-policy-document file://r

 Now add a permissions policy to this role that permits read and write access to S3.

-```json
+```json showLineNumbers
 # policy.json
 {
   "Version": "2012-10-17",
@@ -121,7 +121,7 @@ This is a deploy action which uploads the file to the target bucket.
 Pay special attention to `roleArn`, `artifactStore.location` as well as `S3Bucket`, `S3ObjectKey`, and `BucketName`.
 These correspond to the resources we created earlier.

-```json {hl_lines=[6,9,26,27,52]}
+```json {hl_lines=[6,9,26,27,52]} showLineNumbers
 # declaration.json
 {
   "name": "pipeline",

src/content/docs/aws/services/cognito.mdx

Lines changed: 3 additions & 3 deletions
@@ -237,7 +237,7 @@ Cognito offers a variety of lifecycle hooks called Cognito Lambda triggers, whic
 To illustrate, suppose you wish to define a _user migration_ Lambda trigger in order to migrate users from your existing user directory into Amazon Cognito user pools at sign-in.
 In this case, you can start by creating a Lambda function, let's say named `"migrate_users"`, responsible for performing the migration by creating a new file `index.js` with the following code:

-```javascript
+```javascript showLineNumbers
 const validUsers = {
   belladonna: { password: "12345678Aa!", emailAddress: "[email protected]" },
 };
@@ -379,7 +379,7 @@ awslocal cognito-idp create-resource-server \

 You can retrieve the token from your application using the specified endpoint: `http://cognito-idp.localhost.localstack.cloud:4566/_aws/cognito-idp/oauth2/token`.

-```javascript
+```javascript showLineNumbers
 require('dotenv').config();
 const axios = require('axios');

@@ -419,7 +419,7 @@ Furthermore, you have the option to combine Cognito and LocalStack seamlessly wi

 For instance, consider this snippet from a `serverless.yml` configuration:

-```yaml
+```yaml showLineNumbers
 service: test

 plugins:

src/content/docs/aws/services/docdb.mdx

Lines changed: 2 additions & 2 deletions
@@ -249,7 +249,7 @@ npm install [email protected]

 Next, copy the following code into a new file named `index.js` in the `resources` folder:

-```javascript
+```javascript showLineNumbers
 const AWS = require('aws-sdk');
 const RDS = AWS.RDS;
 const { MongoClient } = require('mongodb');
@@ -340,7 +340,7 @@ Secrets follow a [well-defined pattern](https://docs.aws.amazon.com/secretsmanag
 For the lambda function, you can pass the secret arn as `SECRET_NAME`.
 In the lambda, you can then retrieve the secret details like this:

-```javascript
+```javascript showLineNumbers
 const AWS = require('aws-sdk');
 const { MongoClient } = require('mongodb');

src/content/docs/aws/services/dynamodbstreams.mdx

Lines changed: 2 additions & 2 deletions
@@ -55,7 +55,7 @@ You can notice that in the `LatestStreamArn` field of the response:
 You can now create a Lambda function (`publishNewBark`) to process stream records from `BarkTable`.
 Create a new file named `index.js` with the following code:

-```javascript
+```javascript showLineNumbers
 'use strict';
 var AWS = require("aws-sdk");

@@ -98,7 +98,7 @@ awslocal lambda create-function \
 To test the Lambda function, you can invoke it using the [`Invoke`](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html) API.
 Create a new file named `payload.json` with the following content:

-```json
+```json showLineNumbers
 {
   "Records": [
     {

src/content/docs/aws/services/ecs.mdx

Lines changed: 3 additions & 3 deletions
@@ -61,7 +61,7 @@ awslocal ecs create-cluster --cluster-name mycluster
 Containers within tasks are defined by a task definition that is managed outside of the context of a cluster.
 To create a task definition that runs an `ubuntu` container forever (by running an infinite loop printing "Running" on startup), create the following file as `task_definition.json`:

-```json
+```json showLineNumbers
 {
   "containerDefinitions": [
     {
@@ -296,7 +296,7 @@ ecs_client.register_task_definition(

 The same functionality can be achieved with the AWS CDK following this (Python) example:

-```python
+```python showLineNumbers
 task_definition = ecs.TaskDefinition(
     ...
     volumes=[
@@ -322,7 +322,7 @@ Your file paths might differ, so check Docker's documentation on [Environment Va

 Here is a Docker Compose example:

-```yaml
+```yaml showLineNumbers
 services:
   localstack:
     container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"

src/content/docs/aws/services/eks.mdx

Lines changed: 3 additions & 3 deletions
@@ -307,7 +307,7 @@ To enable HTTPS for your endpoints, you can configure Kubernetes to use SSL/TLS

 The local EKS cluster comes pre-configured with a secret named `ls-secret-tls`, which can be conveniently utilized to define the `tls` section in the ingress configuration:

-```yaml
+```yaml showLineNumbers
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
@@ -425,7 +425,7 @@ In such cases, path-based routing may not be ideal if you need the services to b

 To address this requirement, we recommend utilizing host-based routing rules, as demonstrated in the example below:

-```bash
+```bash showLineNumbers
 cat <<EOF | kubectl apply -f -
 apiVersion: networking.k8s.io/v1
 kind: Ingress
@@ -532,7 +532,7 @@ As a result, the tag name `__k3d_volume_mount__` is considered deprecated and wi
 After creating your cluster with the `_volume_mount_` tag, you can create your path with volume mounts as usual.
 The configuration for the volume mounts can be set up similar to this:

-```yaml
+```yaml showLineNumbers
 apiVersion: v1
 kind: Pod
 metadata:

src/content/docs/aws/services/elementalmediaconvert.mdx

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ We will demonstrate how to create a MediaConvert job, list jobs, create a queue,

 Create a new file named `job.json` on your local directory:

-```json
+```json showLineNumbers
 {
   "Role": "arn:aws:iam::000000000000:role/MediaConvert_Default_Role",
   "Settings": {

src/content/docs/aws/services/es.mdx

Lines changed: 1 addition & 1 deletion
@@ -219,7 +219,7 @@ Note that only a single backend can be configured, meaning that you will get a s

 The following shows a sample docker-compose file that contains a single-noded elasticsearch cluster and a basic localstack setp.

-```yaml
+```yaml showLineNumbers
 services:
   elasticsearch:
     container_name: elasticsearch

src/content/docs/aws/services/events.mdx

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ We will demonstrate creating an EventBridge rule to run a Lambda function on a s

 To create a new Lambda function, create a new file called `index.js` with the following code:

-```js
+```js showLineNumbers
 'use strict';

 exports.handler = (event, context, callback) => {
