Commit 8706429
Update from SAP DITA CMS (squashed):

- commit 2614fd13d9d38cb5d67b5b6e739368de1c114801 (Author: REDACTED, Date: Mon Nov 17 15:39:34 2025 +0000): Update from SAP DITA CMS 2025-11-17 15:39:34; Project: dita-all/bex1621329160251; Project map: d3e749bbac3d4f728c12228db6629c45.ditamap; Output: loiodaa66b2ef49f48539fa2882d82d5b619; Language: en-US; Builddable map: f17fa8568d0448c685f2a0301061a6ee.ditamap
- commit 08cd793f69d540971a48928573343475309d2537 (Author: REDACTED, Date: Mon Nov 17 15:28:21 2025 +0000): Update from SAP DITA CMS 2025-11-17 15:28:21; Project: dita-all/ayn1620809929290; Project map: 8a3894192639433b9ba1f87efe78bfd1.ditamap; Output: loio62559d01add8414eb0c4d76d2d9f48bd; Language: en-US; Builddable map: 038a6194f65c4ef68885f6f16360dbc4.ditamap
- commit b5cb1aca295877b0168063e380ffe7a6d97e2118 (Author: REDACTED, Date: Mon Nov 17 15:14:32 2025 +0000): Update from SAP DITA CMS 2025-11-17 15:14:32; Project: dita-all/bex1621329160251; Project map: d3e749bbac3d4f728c12228db6629c45.ditamap [Remaining squash message was removed before commit...]
1 parent 8fc10ec commit 8706429

64 files changed: +1525 additions, -521 deletions

docs/sap-ai-core/apis-and-api-extensions-0cb7275.md

Lines changed: 1 addition & 1 deletion

@@ -89,7 +89,7 @@ Enhance content generation with additional capabilities for business AI scenario
 </td>
 <td valign="top">

-[Orchestration API](https://api.sap.com/api/ORCHESTRATION_API/overview)
+[Orchestration API](https://api.sap.com/api/ORCHESTRATION_API_v2/overview)

 </td>
 </tr>

docs/sap-ai-core/choose-an-instance-57f4f19.md

Lines changed: 3 additions & 1 deletion

@@ -18,12 +18,14 @@ Within SAP AI Core, the instances are selected via the `ai.sap.com/resourcePlan`

 There are limits to the default disk storage size for all these nodes. Datasets that are loaded to the nodes consume disk space. If you have large data sets \(larger than 30 GB\), or have large models, you may have to increase the disk size. To do so, use the persistent volume claim in Argo Workflows to specify the required disk size \(see [Volumes](https://argoproj.github.io/argo-workflows/walk-through/volumes/)\).

+Instance types are recommended because they provide access to newer hardware and higher performance specifications compared to resource plans.
+

 ## Instance Types

 > ### Restriction:
-> Instance types are only available as part of the “extended” service plan. For more information on instances that can be used with the “standard” service plan.
+> Instance types are only available as part of the “extended” service plan. For more information on instances that can be used with the “standard” service plan, see [Resource Plans](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/choose-resource-plan-train?version=CLOUD&q=instance+types#resource-plans).

 Within SAP AI Core, the instances are selected via the `ai.sap.com/instanceType` label in your workflow templates. It maps the selected resource and takes a string value, which is the ID of the instance
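The context in this hunk points to Argo Workflows persistent volume claims for raising the default disk size. As a rough sketch, a workflow manifest can request a larger claim through `volumeClaimTemplates`, shown here as a Python dict mirroring the YAML manifest; the claim name `workdir`, the mount path `/data`, and the 50Gi size are illustrative assumptions, not values from the source:

```python
# Sketch of an Argo Workflow spec requesting a 50 Gi persistent volume claim
# so large datasets fit on the node's disk. Field names follow the Argo
# Workflows "Volumes" walk-through; names and sizes are illustrative only.
workflow = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Workflow",
    "metadata": {"generateName": "train-large-dataset-"},
    "spec": {
        "entrypoint": "train",
        # This claim replaces the default disk storage available to the step.
        "volumeClaimTemplates": [{
            "metadata": {"name": "workdir"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "50Gi"}},
            },
        }],
        "templates": [{
            "name": "train",
            "container": {
                "image": "docker.io/library/python:3.11",
                "command": ["python", "train.py"],
                # Mount the claimed volume where the dataset is staged.
                "volumeMounts": [{"name": "workdir", "mountPath": "/data"}],
            },
        }],
    },
}

claim = workflow["spec"]["volumeClaimTemplates"][0]
print(claim["spec"]["resources"]["requests"]["storage"])  # → 50Gi
```

Serialized to YAML, this is the structure the walk-through linked above describes; only the storage request changes when you need more disk.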

docs/sap-ai-core/choose-an-instance-abd672f.md

Lines changed: 3 additions & 1 deletion

@@ -18,12 +18,14 @@ Within SAP AI Core, the instances are selected via the `ai.sap.com/resourcePlan`

 There are limits to the default disk storage size for all these nodes. Datasets that are loaded to the nodes consume disk space. If you have large data sets \(larger than 30 GB\), or have large models, you may have to increase the disk size. To do so, use the persistent volume claim in Argo Workflows to specify the required disk size \(see [Volumes](https://argoproj.github.io/argo-workflows/walk-through/volumes/)\).

+Instance types are recommended because they provide access to newer hardware and higher performance specifications compared to resource plans.
+

 ## Instance Types

 > ### Restriction:
-> Instance types are only available as part of the “extended” service plan. For more information on instances that can be used with the “standard” service plan.
+> Instance types are only available as part of the “extended” service plan. For more information on instances that can be used with the “standard” service plan, see [Resource Plans](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/choose-resource-plan-train?version=CLOUD&q=instance+types#resource-plans).

 Within SAP AI Core, the instances are selected via the `ai.sap.com/instanceType` label in your workflow templates. It maps the selected resource and takes a string value, which is the ID of the instance

docs/sap-ai-core/choose-an-instance-c58d4e5.md

Lines changed: 0 additions & 283 deletions
This file was deleted.

docs/sap-ai-core/connect-your-data-9508bdb.md

Lines changed: 3 additions & 7 deletions

@@ -4,13 +4,9 @@

 Use cloud storage with SAP AI Core to store AI assets such as datasets and model files. You use Artifacts in SAP AI Core to reference to your AI Assets.

-- **[Manage Files](manage-files-386ba71.md "An artifact
-refers
-to data or a file that is produced or consumed by
-executions or
-deployments.
-They are managed through SAP AI Core and your connected object
-store.")**
+- **[Manage Files](manage-files-386ba71.md "An artifact refers to data or a file that is produced or consumed by executions or
+deployments. They are managed through SAP AI Core and your
+connected object store.")**
 An artifact refers to data or a file that is produced or consumed by executions or deployments. They are managed through SAP AI Core and your connected object store.
 - **[Manage Files Using the Dataset API](manage-files-using-the-dataset-api-ba8ac5c.md "Where direct access to files in the object store is not possible or desirable (for example, in Content as a Service Scenarios, where the
 Service Consumers might not be the owners of the object store) you can upload, download, and delete files from the pre-registered object store

docs/sap-ai-core/create-a-document-grounding-pipeline-using-the-pipelines-api-0a13e1c.md

Lines changed: 4 additions & 0 deletions

@@ -23,6 +23,8 @@ This API call creates a pipeline for indexing documents for a resource group.

 You can schedule automatic content updates by using a `cronExpression` when you create your pipeline. For more information, see [Cron Expressions](cron-expressions-6175008.md).

+For repositories that support it, you can include metadata in your pipeline. Metadata is additional information associated with data that can help in identifying, categorizing, and retrieving relevant content.
+
 > ### Tip:
 > If you use the pipelines API, you do not need to call the Vector API separately. After the data is embedded, you can directly use the Retrieval API to query the vector store for relevant sections.

@@ -62,4 +64,6 @@ You can manually restart a pipeline. For more information, see [Manually Restart

 - **[Cron Expressions](cron-expressions-515e839.md "")**

+- **[Update Metadata](update-metadata-3df4881.md "")**
+
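The added paragraph says a pipeline can carry a `cronExpression` for scheduled re-indexing and, for supported repositories, optional metadata. A hedged sketch of how such a request body might be assembled; aside from `cronExpression` and `metadata`, the field names (`type`, `configuration`, the destination names) are assumptions for illustration, not the documented schema:

```python
import json

# Hypothetical pipeline-creation body. Only `cronExpression` and `metadata`
# come from the text above; `type`, `configuration`, and the destination
# names are illustrative placeholders.
payload = {
    "type": "MSSharePoint",                    # assumed repository type
    "configuration": {
        "destination": "my-docs-destination",  # assumed destination name
        "cronExpression": "0 2 * * *",         # re-index daily at 02:00
    },
    # Optional: only for repositories that support metadata.
    "metadata": {"destination": "my-metadata-destination"},
}

body = json.dumps(payload)  # serialized request body for the POST call
print(json.loads(body)["configuration"]["cronExpression"])  # → 0 2 * * *
```

Omitting the `metadata` key yields a plain grounding pipeline; the cron syntax itself is covered in the linked Cron Expressions topic.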

docs/sap-ai-core/create-a-document-grounding-pipeline-using-the-pipelines-api-d32b146.md

Lines changed: 4 additions & 0 deletions

@@ -23,6 +23,8 @@ This API call creates a pipeline for indexing documents for a resource group.

 You can schedule automatic content updates by using a `cronExpression` when you create your pipeline. For more information, see [Cron Expressions](cron-expressions-6175008.md).

+For repositories that support it, you can include metadata in your pipeline. Metadata is additional information associated with data that can help in identifying, categorizing, and retrieving relevant content.
+
 > ### Tip:
 > If you use the pipelines API, you do not need to call the Vector API separately. After the data is embedded, you can directly use the Retrieval API to query the vector store for relevant sections.

@@ -62,4 +64,6 @@ You can manually restart a pipeline. For more information, see [Manually Restart

 - **[Cron Expressions](cron-expressions-6175008.md "")**

+- **[Update Metadata](update-metadata-fb69180.md "")**
+

docs/sap-ai-core/create-a-pipeline-with-aws-s3-7f97adf.md

Lines changed: 5 additions & 1 deletion

@@ -120,6 +120,8 @@ Name of the generic secret created for AWS S3

 ## With Metadata

+Metadata is additional information associated with data that can help in identifying, categorizing, and retrieving relevant content.
+
 The metadata attribute is optional. It accepts the destination name, which is used to connect to the Microsoft SharePoint metadata server to retrieve metadata for document indexing.

 The metadata attribute should be used only if a metadata server is configured. To create a grounding pipeline without metadata, see [Create a Document Grounding Pipeline Using the Pipelines API](create-a-document-grounding-pipeline-using-the-pipelines-api-d32b146.md).

@@ -228,7 +230,7 @@ The name of the generic secret created for the AWS S3 Metadata Server
 > }'
 > ```

-To make your pipeline searchable, add `dataRepositoryMetadata` to the `metadata` field. For example:
+You can search a pipeline later in your workflow by using the `dataRepositoryMetadata` attribute in the `metadata` field. Metadata organizes information using categories \(keys\) paired with their related values. For example:

 ```
 ...

@@ -368,3 +370,5 @@ For more information, see [Cron Expressions](cron-expressions-6175008.md).

 After preparing your vectors, you can use the Retrieval API for chunks relevant to a query, or use the grounding module as part of an orchestration workflow for information retrieval and LLM interaction. For more information, see [Retrieval API](retrieval-api-281e8cf.md) or [Using the Grounding Module](using-the-grounding-module-e1c4dd1.md).

+You can add metadata to your pipeline, or update existing metadata by sending a patch request. For more information, see [Update Metadata](update-metadata-fb69180.md).
+
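The changed text describes `dataRepositoryMetadata` as categories (keys) paired with values, and a later patch request that updates existing metadata. A hedged sketch of that shape and of a key-wise merge; the key/value structure, the field names, and the merge semantics are assumptions for illustration, not the documented API behavior:

```python
import json

# Hypothetical metadata block: categories (keys) paired with values, attached
# under `dataRepositoryMetadata` so the repository can be found later.
# Keys, values, and the merge semantics below are illustrative only.
create_metadata = {
    "dataRepositoryMetadata": [
        {"key": "department", "value": ["finance"]},
        {"key": "region", "value": ["emea"]},
    ]
}

def apply_metadata_patch(existing: dict, patch: dict) -> dict:
    """Merge a patch into existing metadata, replacing entries that share a key."""
    merged = {e["key"]: e for e in existing["dataRepositoryMetadata"]}
    for entry in patch["dataRepositoryMetadata"]:
        merged[entry["key"]] = entry
    return {"dataRepositoryMetadata": list(merged.values())}

# A PATCH-style body that changes only the `region` category.
patch_body = {"dataRepositoryMetadata": [{"key": "region", "value": ["apac"]}]}
updated = apply_metadata_patch(create_metadata, patch_body)
print(json.dumps(updated))
```

The intent is only to show the key/value pairing: a patch touches one category while the others survive; the actual server-side merge rules are defined in the Update Metadata topic.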
