Commit 57c0863

Merge branch 'main' into rm-watsonx-api
2 parents: a2943f8 + 64e2a2a

File tree

2 files changed (+5, −3 lines)

solutions/observability/apps/upstream-opentelemetry-collectors-language-sdks.md

Lines changed: 2 additions & 0 deletions

@@ -262,6 +262,8 @@ Many L7 load balancers handle HTTP and gRPC traffic separately and rely on expli
 * Use the `otlp` exporter in the OTel collector. Set annotation `nginx.ingress.kubernetes.io/backend-protocol: "GRPC"` on the K8s Ingress object proxying to APM Server.
 * Use the `otlphttp` exporter in the OTel collector. Set annotation `nginx.ingress.kubernetes.io/backend-protocol: "HTTP"` (or `"HTTPS"` if APM Server expects TLS) on the K8s Ingress object proxying to APM Server.
 
+The preferred approach is to deploy an L4 (TCP) load balancer (e.g. [NLB](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) on AWS) in front of APM Server, which forwards raw TCP traffic transparently without protocol inspection.
+
 For more information on how to configure an AWS ALB to support gRPC, see this AWS blog post: [Application Load Balancer Support for End-to-End HTTP/2 and gRPC](https://aws.amazon.com/blogs/aws/new-application-load-balancer-support-for-end-to-end-http-2-and-grpc/).
 
 For more information on how APM Server services gRPC requests, see [Muxing gRPC and HTTP/1.1](https://github.com/elastic/apm-server/blob/main/dev_docs/otel.md#muxing-grpc-and-http11).
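As an illustrative sketch of the two exporter/annotation pairings described in this diff (the hostname `apm.example.com` and the secret-token environment variable are placeholders, not part of this commit), the matching OTel Collector and Ingress configuration might look like:

```yaml
# OTel Collector exporters section (sketch; endpoint values are placeholders).
exporters:
  # Option 1: gRPC — pair with backend-protocol: "GRPC" on the Ingress.
  otlp:
    endpoint: "apm.example.com:443"
    headers:
      Authorization: "Bearer ${ELASTIC_APM_SECRET_TOKEN}"
  # Option 2: HTTP — pair with backend-protocol: "HTTP" (or "HTTPS" for TLS).
  otlphttp:
    endpoint: "https://apm.example.com:443"
---
# Matching K8s Ingress for the gRPC case (sketch).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apm-server
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
```

Note that with the L4 (TCP) load balancer approach the commit recommends, no such protocol annotation is needed, since the load balancer forwards bytes without inspecting whether they are gRPC or HTTP/1.1.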

solutions/security/ai/connect-to-google-vertex.md

Lines changed: 3 additions & 3 deletions

@@ -13,7 +13,7 @@ mapped_urls:
 % - [x] ./raw-migrated-files/security-docs/security/connect-to-vertex.md
 % - [ ] ./raw-migrated-files/docs-content/serverless/security-connect-to-google-vertex.md
 
-This page provides step-by-step instructions for setting up a Google Vertex AI connector for the first time. This connector type enables you to leverage Vertex AI’s large language models (LLMs) within {{elastic-sec}}. You’ll first need to enable Vertex AI, then generate an API key, and finally configure the connector in your {{elastic-sec}} project.
+This page provides step-by-step instructions for setting up a Google Vertex AI connector for the first time. This connector type enables you to leverage Vertex AI’s large language models (LLMs) within {{elastic-sec}}. You’ll first need to enable Vertex AI, then generate a key, and finally configure the connector in your {{elastic-sec}} project.
 
 ::::{important}
 Before continuing, you should have an active project in one of Google Vertex AI’s [supported regions](https://cloud.google.com/vertex-ai/docs/general/locations#feature-availability).
@@ -74,7 +74,7 @@ The following video demonstrates these steps.
 
 
 
-## Generate an API key [_generate_an_api_key]
+## Generate a key [_generate_an_api_key]
 
 1. Return to Vertex AI’s **Credentials** menu and click **Manage service accounts**.
 2. Search for the service account you just created, select it, then click the link that appears under **Email**.
@@ -108,7 +108,7 @@ Finally, configure the connector in your Elastic deployment:
 4. Under **URL**, enter the URL for your region.
 5. Enter your **GCP Region** and **GCP Project ID**.
 6. Under **Default model**, specify either `gemini-1.5.pro` or `gemini-1.5-flash`. [Learn more about the models](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models).
-7. Under **Authentication**, enter your API key.
+7. Under **Authentication**, enter your credentials JSON.
 8. Click **Save**.
 
 The following video demonstrates these steps.
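The "credentials JSON" this diff now refers to is, in the usual Google Cloud workflow, a service account key file downloaded from the **Manage service accounts** page. As a sketch of its general shape (all values below are placeholders; the exact fields come from Google Cloud, not from this commit):

```json
{
  "type": "service_account",
  "project_id": "my-gcp-project",
  "private_key_id": "…",
  "private_key": "-----BEGIN PRIVATE KEY-----\n…\n-----END PRIVATE KEY-----\n",
  "client_email": "my-service-account@my-gcp-project.iam.gserviceaccount.com",
  "client_id": "…",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```

This file-based credential is distinct from a simple API key string, which is why the wording change from "API key" to "credentials JSON" in step 7 matters.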
