From 56f4c28798538667514b387a29d8e86a6ca48186 Mon Sep 17 00:00:00 2001
From: Damian Kopyto
Date: Fri, 7 Nov 2025 17:36:24 +0000
Subject: [PATCH 1/6] EIM NB API and CLI decomp

---
 .../eim-nbapi-cli-decomposition.md | 175 ++++++++++++++++++
 1 file changed, 175 insertions(+)
 create mode 100644 design-proposals/eim-nbapi-cli-decomposition.md

diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md
new file mode 100644
index 000000000..a9322e84a
--- /dev/null
+++ b/design-proposals/eim-nbapi-cli-decomposition.md
@@ -0,0 +1,175 @@
+# Design Proposal: Exposing only the required North Bound APIs and CLI commands for the workflow as part of EIM decomposition

Author(s): EIM-Core team

Last updated: 7/11/25

## Abstract

In the context of EIM decomposition, the North Bound API service should be treated as an independent, interchangeable module.
The [EIM proposal for modular decomposition](https://github.com/open-edge-platform/edge-manageability-framework/blob/main/design-proposals/eim-modular-decomposition.md) calls out both a need to expose the full set of EMF APIs and a need to expose only a subset of APIs, as required by individual workflows taking advantage of a modular architecture. This proposal explores how the APIs can be decomposed and how the decomposed output can be used as a version of the API service module.

## Background and Context

In EMF 2025.2 the API service is deployed by Argo CD via a Helm chart. The API service runs in a container started from the API service container image. The API is built using the OpenAPI spec. There are multiple levels of APIs currently available, with individual specs available for each domain in [orch-utils](https://github.com/open-edge-platform/orch-utils/tree/main/tenancy-api-mapping/openapispecs/generated).

The list of domain APIs includes:

- Catalog and Catalog utilities APIs
- App deployment manager and app resource manager APIs
- Cluster APIs
- EIM APIs
- Alert Monitoring APIs
- MPS and RPS APIs
- Metadata broker and Tenancy APIs

There are two levels to the API decomposition:

- Decomposition across the domains listed above
- Decomposition within a domain (i.e. separation at the EIM domain level, where the overall set of APIs includes onboarding/provisioning/day2 APIs, but another workflow may support only onboarding/provisioning without day2 support)

The following questions must be answered and investigated:

- How the API service is built currently
- How the API service container image is built currently
- How the API service Helm charts are built currently
- What level of decomposition is needed by the required workflows
- How to decompose the API at the domain level
- How to decompose the API within the domain level
- How to build the various API service versions required by the desired workflows using the modular APIs
- How to deliver the various API service versions required by the desired workflows
- How to expose the list of available APIs for client consumption (orch-cli)

Uncertainties:

- How does the potential removal of the API gateway affect exposing the APIs to the client
- How will the decomposition and availability of APIs within the API service be mapped back to the Inventory and the set of SB APIs

### Decomposing the release of API service as a module

Once the investigation into how the API service is created today is complete, decisions must be made on how the service will be built and released as a module.

- The build of the API service itself will depend on the results of the "top to bottom" and "bottom to top" decomposition investigations.
- The individual versions of the API service can be packaged as versioned container images:
  - apiv2-emf:x.x.x
  - apiv2-workflow1:x.x.x
  - apiv2-workflow2:x.x.x
- Alternatively, if the decomposition does not result in multiple versions of the API service, the service could be released as the same Docker image but managed by flags provided to the container that alter the behaviour of the API service at runtime.
- The API service itself should still be packaged for deployment as a Helm chart, regardless of whether it is deployed via Argo CD or another medium/technique. A decision should be made whether a common Helm chart is used with override values for the container image and other related values (preferred) or whether individual Helm charts need to be released.

### Decomposing the API service

An investigation needs to be conducted into how the API service can be decomposed so that it can be rebuilt as various flavours of the same API service, each providing a different set of APIs.

- Preferably, the total set of APIs serves as the main source of the API service, and other flavours/subsets are automatically derived from it based on the required functionality, keeping the maintenance of the API simple and in one place.
- The API service should be decomposed at the domain level, meaning that all domains or a subset of domains should be available as part of the API service flavour. This should allow us to provide, for example, only EIM-related APIs as needed by a workflow. We know that the domains currently have separate generated OpenAPI specs available, as consumed by orch-cli.
- The API service should be decomposed within the domain level, meaning that only a subset of the available APIs may need to be released and/or exposed at the API service level. As an example, within the EIM domain we may not want to expose for some workflows the Day 2 functionality that is currently part of the EIM OpenAPI spec.

The following are the usual options for decomposing or exposing subsets of APIs.

- ~~API Gateway that would only expose certain endpoints to the user~~ - this is a no-go for us, as we plan to remove the existing API Gateway and it does not actually solve the problem of releasing only specific flavours of EMF.
- Maintain multiple OpenAPI specifications - while it is possible to create multiple OpenAPI specs, maintaining the same APIs across specs would be a large burden - still, let's keep this option in consideration in terms of auto-generating multiple specs from a top-level spec.
- ~~Authentication & Authorization Based Filtering~~ - this is a no-go for us, as we do not control the end users of the EMF, and we want to provide a tailored modular product for each workflow.
- ~~API Versioning strategy~~ - creating different API versions for each use case - too much overhead, with drawbacks similar to maintaining multiple OpenAPI specs.
- ~~Proxy/Middleware Layer~~ - similar to the API Gateway - does not fit our use cases.
- OpenAPI Spec Manipulation - this approach uses OpenAPI's extension mechanism (properties starting with x-) to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints, operations, or schemas. This approach is worth investigating to see if it can give us an automated approach for creating individual OpenAPI specs for workflows based on labels.
- Other approaches to manipulating how a flavour of the OpenAPI spec can be generated from the main spec, or how the API service can be built conditionally using the same spec.

### Consuming the APIs from the CLI

The best approach would be for the EMF to provide a service/endpoint that communicates which endpoints/APIs are currently supported by the deployed API service. The CLI would then request that information on login, save the configuration, and prevent the use of non-supported APIs/commands.

# Appendix: OpenAPI Spec Manipulation with Extensions

This approach uses OpenAPI's extension mechanism (properties starting with `x-`) to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints, operations, or schemas.

## How It Works

### 1. Adding Custom Extensions to Your OpenAPI Spec

```yaml
openapi: 3.0.0
info:
  title: My API
  version: 1.0.0

paths:
  /users:
    get:
      summary: Get all users
      x-audience: ["public", "partner"]
      x-use-case: ["user-management", "reporting"]
      x-access-level: "read"
      responses:
        '200':
          description: Success

  /users/{id}:
    get:
      summary: Get user by ID
      x-audience: ["public", "partner", "internal"]
      x-use-case: ["user-management"]
      responses:
        '200':
          description: Success
    delete:
      summary: Delete user
      x-audience: ["internal"]
      x-use-case: ["admin"]
      x-access-level: "write"
      responses:
        '204':
          description: Deleted

  /admin/analytics:
    get:
      summary: Get analytics data
      x-audience: ["internal"]
      x-use-case: ["analytics", "reporting"]
      x-sensitive: true
      responses:
        '200':
          description: Analytics data

components:
  schemas:
    User:
      type: object
      x-audience: ["public", "partner", "internal"]
      properties:
        id:
          type: string
        name:
          type: string
        email:
          type: string
          x-audience: ["internal"] # Email only for internal use
        ssn:
          type: string
          x-audience: ["internal"]
          x-sensitive: true

# Audience-based filtering
x-audience: ["public", "partner", "internal", "admin"]

# Use case categorization
x-use-case: ["user-management", "reporting", "analytics", "billing"]

# Access level requirements
x-access-level: "read" | "write" | "admin"

# Sensitivity marking
x-sensitive: true

# Client-specific
x-client-type: ["mobile", "web", "api"]

# Environment restrictions
x-environment: ["production", "staging", "development"]

# Rate limiting categories
x-rate-limit-tier: "basic" | "premium" | "enterprise"

# Deprecation info
x-deprecated-for: ["internal"]
x-sunset-date: "2024-12-31" \ No newline at end of file
From cd5a0a1a2672793c880750329c0dc18f500e8db0 Mon Sep 17 00:00:00 2001
From: Damian Kopyto
Date: Fri, 7 Nov 2025 17:38:33 +0000
Subject: [PATCH 2/6] Update eim-nbapi-cli-decomposition.md

---
 design-proposals/eim-nbapi-cli-decomposition.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md
index a9322e84a..46336aca5 100644
--- a/design-proposals/eim-nbapi-cli-decomposition.md
+++ b/design-proposals/eim-nbapi-cli-decomposition.md
@@ -69,7 +69,7 @@ The following are the usual options to decomposing or exposing subsets of APIs.

- ~~API Gateway that would only expose certain endpoints to the user~~ - this is a no-go for us, as we plan to remove the existing API Gateway and it does not actually solve the problem of releasing only specific flavours of EMF.
- Maintain multiple OpenAPI specifications - while it is possible to create multiple OpenAPI specs, maintaining the same APIs across specs would be a large burden - still, let's keep this option in consideration in terms of auto-generating multiple specs from a top-level spec.
-- ~~Authentication & Authorization Based Filtering ~~ - this is a no go for us as we do not control the end users of the EMF, and we want to provide tailored modular product for each workflow.
+- ~~Authentication & Authorization Based Filtering~~ - this is a no-go for us, as we do not control the end users of the EMF, and we want to provide a tailored modular product for each workflow.
- ~~API Versioning strategy~~ - creating different API versions for each use case - too much overhead, with drawbacks similar to maintaining multiple OpenAPI specs.
- ~~Proxy/Middleware Layer~~ - similar to the API Gateway - does not fit our use cases.
- OpenAPI Spec Manipulation - this approach uses OpenAPI's extension mechanism (properties starting with x-) to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints, operations, or schemas. This approach is worth investigating to see if it can give us an automated approach for creating individual OpenAPI specs for workflows based on labels.
From 3cb93a58451b0eaf9cc1463df43bac28493e1b32 Mon Sep 17 00:00:00 2001
From: Joanna Kossakowska
Date: Wed, 12 Nov 2025 02:40:49 -0800
Subject: [PATCH 3/6] Adding possible improvements to building API spec per
 scenario

---
 .../eim-nbapi-cli-decomposition.md | 176 +++++++++++++++++-
 1 file changed, 174 insertions(+), 2 deletions(-)

diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md
index 46336aca5..cc383055c 100644
--- a/design-proposals/eim-nbapi-cli-decomposition.md
+++ b/design-proposals/eim-nbapi-cli-decomposition.md
@@ -143,7 +143,7 @@ components:
        type: string
      email:
        type: string
-        x-audience: ["internal"] # Email only for internal use
+        x-audience: ["internal"]
      ssn:
        type: string
        x-audience: ["internal"]
@@ -172,4 +172,176 @@ x-rate-limit-tier: "basic" | "premium" | "enterprise"

# Deprecation info
x-deprecated-for: ["internal"]
-x-sunset-date: "2024-12-31" \ No newline at end of file
+x-sunset-date: "2024-12-31"
+```

## How NB API is Currently Built

Currently apiv2 (infra-core repository) stores the REST API definitions of services in protocol buffer files (.proto) and uses protoc-gen-connect-openapi to autogenerate the OpenAPI spec, openapi.yaml.

Content of the api/proto directory - two folders:
services - API operations (service layer) - this is one file, services.yaml, that contains the API operations on all the available resources.
resources - data models (DTOs/entities) - a separate file per resource.

Protoc-gen-connect-openapi is the tool that is indirectly used to build the OpenAPI spec - it is configured as a plugin in buf (buf.gen.yaml). The user calls "buf generate" within the "make generate" or "make buf-gen" target. This plugin generates OpenAPI 3.0 specifications directly from the .proto files in the api/proto/ directory.
The following is the current, full buf configuration (buf.gen.yaml):

```yaml
plugins:
  # go - https://pkg.go.dev/google.golang.org/protobuf
  - name: go
    out: internal/pbapi
    opt:
      - paths=source_relative

  # go grpc - https://pkg.go.dev/google.golang.org/grpc
  - name: go-grpc
    out: internal/pbapi
    opt:
      - paths=source_relative
      - require_unimplemented_servers=false

  # go install github.com/sudorandom/protoc-gen-connect-openapi@v0.17.0
  - name: connect-openapi
    path: protoc-gen-connect-openapi
    out: api/openapi
    strategy: all
    opt:
      - format=yaml
      - short-service-tags
      - short-operation-ids
      - path=openapi.yaml

  # grpc-gateway - https://grpc-ecosystem.github.io/grpc-gateway/
  - name: grpc-gateway
    out: internal/pbapi
    opt:
      - paths=source_relative

  # docs - https://github.com/pseudomuto/protoc-gen-doc
  - plugin: doc
    out: docs
    opt: markdown,proto.md
    strategy: all

  - plugin: go-const
    out: internal/pbapi
    path: ["go", "run", "./cmd/protoc-gen-go-const"]
    opt:
      - paths=source_relative
```

The plugin takes as its input one full proto definition that includes all services (services.proto).

Key Items:
- Input: api/proto/**/*.proto
- Config: buf.gen.yaml, buf.work.yaml, buf.yaml
- Output: openapi.yaml
- Tool: protoc-gen-connect-openapi

Buf also generates:
- the Go code (Go structs, gRPC clients/services) in internal/pbapi
- the gRPC gateway: REST-to-gRPC proxy code - HTTP handlers that proxy REST calls to gRPC (in internal/pbapi/**/*.pb.gw.go)
- documentation: docs/proto.md

Next, the "oapi-patch" and "oapi-banner" targets are executed on the generated openapi.yaml file:

"make oapi-patch" - post-process: cleans up the generated OpenAPI by removing verbose proto package prefixes (e.g. resources.compute.v1.HostResource → HostResource)

## Solution 1

Split the services.yaml file into multiple files per service, then change the make buf-gen target to process only the services used by the scenario, for example:

```bash
buf generate --path api/proto/services/instance/v1 --path api/proto/services/os/v1
```

This generates the OpenAPI spec (openapi.yaml) only for the services supported by the particular scenario.

## Solution 2 - more robust

- Generate the full openapi.yaml file with "buf generate" the same way it is done now. Buf already generates the spec with the option 'short-service-tags'. This means it adds a tag to each service in the OpenAPI spec matching its service name.
- Write a small filter that will parse the spec, select only the operations of services carrying a given service tag, and generate a new spec supporting only the particular scenario.

We can add a manifest that will store the list of services per scenario.

## Solution 3

No splitting of services.yaml.
This approach uses custom annotations/options to connect services to scenarios.

- Define custom options/annotations by extending google.protobuf.ServiceOptions and google.protobuf.MethodOptions, for example:

```go
syntax = "proto3";
package annotations.common.v1;

import "google/protobuf/descriptor.proto";

// Service-level: applies to the whole service (default)
extend google.protobuf.ServiceOptions {
  repeated string scenario = 50001; // e.g., ["scenario-1", "scenario-2"]
}

// Method-level: override/add per RPC if needed
extend google.protobuf.MethodOptions {
  repeated string scenario = 50011;
}
```

Add the file to api/proto/annotations.

Use it in api/proto/services/services.proto:

Service level selection per scenario:
```go
(...)
import "annotations/scenario_annotations.proto";
(...)
service OSUpdateRun {
  option (annotations.common.v1.scenario) = "scenario-1";
  // Get a list of OS Update Runs.
  rpc ListOSUpdateRun(ListOSUpdateRunRequest) returns (ListOSUpdateRunResponse) {
    option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run"};
  }
  // Get a specific OS Update Run.
  rpc GetOSUpdateRun(GetOSUpdateRunRequest) returns (resources.compute.v1.OSUpdateRun) {
    option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"};
  }
  // Delete an OS Update Run.
  rpc DeleteOSUpdateRun(DeleteOSUpdateRunRequest) returns (DeleteOSUpdateRunResponse) {
    option (google.api.http) = {delete: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"};
  }
}
(...)
```

Or:

Method level selection per scenario:
```go
(...)
import "annotations/scenario_annotations.proto";
(...)
service OSUpdateRun {
  // Get a list of OS Update Runs.
  rpc ListOSUpdateRun(ListOSUpdateRunRequest) returns (ListOSUpdateRunResponse) {
    option (annotations.common.v1.scenario) = "scenario-1";
    option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run"};
  }
  // Get a specific OS Update Run.
  rpc GetOSUpdateRun(GetOSUpdateRunRequest) returns (resources.compute.v1.OSUpdateRun) {
    option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"};
  }
  // Delete an OS Update Run.
  rpc DeleteOSUpdateRun(DeleteOSUpdateRunRequest) returns (DeleteOSUpdateRunResponse) {
    option (google.api.http) = {delete: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"};
  }
}
(...)
```

- Use buf generate to generate the full openapi.yaml spec.

- Create and run a filter that reads the generated .pb file containing the new service annotations, takes openapi.yaml as input, and removes all services without the scenario annotation. The filter also takes the scenario name as input and returns a scenario-specific OpenAPI spec.
- OR patch the spec-generating tool (protoc-gen-connect-openapi) so that it supports the new annotations and includes them in the new full spec - so it reads the annotations directly and writes x-* fields into the OpenAPI. It will create an OpenAPI spec with fields and services annotated for a specific scenario. This requires creating a custom plugin that takes the scenario as input and generates the OpenAPI spec for that scenario only - a wrapper of protoc-gen-connect-openapi.
From 3e4e27ccb893cc232aded22f5eb324d298a734e1 Mon Sep 17 00:00:00 2001
From: Damian Kopyto
Date: Thu, 13 Nov 2025 16:03:00 +0000
Subject: [PATCH 4/6] Typos

---
 design-proposals/eim-nbapi-cli-decomposition.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md
index cc383055c..aa7e49554 100644
--- a/design-proposals/eim-nbapi-cli-decomposition.md
+++ b/design-proposals/eim-nbapi-cli-decomposition.md
@@ -47,14 +47,14 @@ Uncertainties:

### Decomposing the release of API service as a module

-Once the investigation is completed on how the API service is create today decisions must be done on a the service will be build and released as a module.
+Once the investigation into how the API service is created today is complete, decisions must be made on how the service will be built and released as a module.

-- The build of the API service itself will depend on the results of top2bottom and bottom2top decomposition investigations.
+- The build of the API service itself will depend on the results of the "top to bottom" and "bottom to top" decomposition investigations.
- The individual versions of the API service can be packaged as versioned container images:
  - apiv2-emf:x.x.x
  - apiv2-workflow1:x.x.x
  - apiv2-workflow2:x.x.x
-- Alternatively if the decomposition does not result in multiple version of the API service the service could be released as same docker image but managed by flags provided to container that alter the behavior of the API service in runtime.
+- Alternatively, if the decomposition does not result in multiple versions of the API service, the service could be released as the same Docker image but managed by flags provided to the container that alter the behaviour of the API service at runtime.
- The API service itself should still be packaged for deployment as a Helm chart, regardless of whether it is deployed via Argo CD or another medium/technique. A decision should be made whether a common Helm chart is used with override values for the container image and other related values (preferred) or whether individual Helm charts need to be released.

### Decomposing the API service
@@ -63,7 +63,7 @@ An investigation needs to be conducted into how the API service can be decompose

- Preferably, the total set of APIs serves as the main source of the API service, and other flavours/subsets are automatically derived from it based on the required functionality, keeping the maintenance of the API simple and in one place.
- The API service should be decomposed at the domain level, meaning that all domains or a subset of domains should be available as part of the API service flavour. This should allow us to provide, for example, only EIM-related APIs as needed by a workflow. We know that the domains currently have separate generated OpenAPI specs available, as consumed by orch-cli.
-- The APIs service should be decomposed within the domain level meaning that only subset of the available APIs may need to be released and/or exposed at API service level. As an example within the EIM domain we may not want to expose the Day 2 functionality for some workflows which currently part of the EIM OpenAPI spec.
+- The API service should be decomposed within the domain level, meaning that only a subset of the available APIs may need to be released and/or exposed at the API service level. As an example, within the EIM domain we may not want to expose for some workflows the Day 2 functionality that is currently part of the EIM OpenAPI spec.

The following are the usual options to decomposing or exposing subsets of APIs.
@@ -72,7 +72,7 @@ The following are the usual options to decomposing or exposing subsets of APIs.

- ~~Authentication & Authorization Based Filtering~~ - this is a no-go for us, as we do not control the end users of the EMF, and we want to provide a tailored modular product for each workflow.
- ~~API Versioning strategy~~ - creating different API versions for each use case - too much overhead, with drawbacks similar to maintaining multiple OpenAPI specs.
- ~~Proxy/Middleware Layer~~ - similar to the API Gateway - does not fit our use cases.
-- OpenAPI Spec Manipulation - This approach uses OpenAPI's extension mechanism (properties starting with x-) to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints, operations, or schemas. This approach is worth investigating to see if it can give use the automated approach for creating individual OpenAPI specs for workflows based on labels.
+- OpenAPI Spec Manipulation - this approach uses OpenAPI's extension mechanism (properties starting with x-) to add metadata that describes which audiences, use cases, or clients should have access to specific endpoints, operations, or schemas. This approach is worth investigating to see if it can give us an automated approach for creating individual OpenAPI specs for workflows based on labels.
- Other approaches to manipulating how a flavour of the OpenAPI spec can be generated from the main spec, or how the API service can be built conditionally using the same spec.

Currently apiv2 (infra-core repository) stores the REST API definitions of services in protocol buffer files (.proto) and uses protoc-gen-connect-openapi to autogenerate the OpenAPI spec, openapi.yaml.

Content of the api/proto directory - two folders:
services - API operations (service layer) - this is one file, services.yaml, that contains the API operations on all the available resources.
-resources - Data Models (DTOs/Entities) - seperate file per each resource.
+resources - data models (DTOs/entities) - a separate file per resource.

Protoc-gen-connect-openapi is the tool that is indirectly used to build the OpenAPI spec - it is configured as a plugin in buf (buf.gen.yaml). The user calls "buf generate" within the "make generate" or "make buf-gen" target. This plugin generates OpenAPI 3.0 specifications directly from the .proto files in the api/proto/ directory.

-The following it the current, full buf configuration:# (buf.gen.yaml)
+The following is the current, full buf configuration (buf.gen.yaml):
From 8b12d627a2800ac0cc513c1368d1c0cb8ed554fc Mon Sep 17 00:00:00 2001
From: Damian Kopyto
Date: Fri, 14 Nov 2025 16:00:03 +0000
Subject: [PATCH 5/6] Update

---
 .../eim-nbapi-cli-decomposition.md | 106 ++----------------
 1 file changed, 10 insertions(+), 96 deletions(-)

diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md
index aa7e49554..53a226b0c 100644
--- a/design-proposals/eim-nbapi-cli-decomposition.md
+++ b/design-proposals/eim-nbapi-cli-decomposition.md
@@ -26,15 +26,17 @@ The list of domain APIs include:
There are two levels to the API decomposition:

- Decomposition across the domains listed above
-- Decomposition within domain (ie. separation at EIM domain level, where overall set of APIS includes onboarding/provisioning/day2 APIs but another workflow may support only onboarding/provisioning without day2 support )
+- Decomposition within a domain (i.e.
separation at the EIM domain level, where the overall set of APIs includes onboarding/provisioning/day2 APIs, but another workflow may support only onboarding/provisioning without day2 support)

The following questions must be answered and investigated:

- How the API service is built currently
  - It is built from a proto definition, and the code is autogenerated by the "buf" tool - [See How NB API is Currently Built](#how-nb-api-is-currently-built)
- How the API service container image is built currently
- How the API service Helm charts are built currently
- What level of decomposition is needed by the required workflows
- How to decompose the API at the domain level
  - At the domain level, the APIs are deployed as separate services
- How to decompose the API within the domain level
- How to build the various API service versions required by the desired workflows using the modular APIs
- How to deliver the various API service versions required by the desired workflows
- How to expose the list of available APIs for client consumption (orch-cli)
@@ -62,8 +64,9 @@ Once the investigation is completed on how the API service is created today deci
An investigation needs to be conducted into how the API service can be decomposed so that it can be rebuilt as various flavours of the same API service, each providing a different set of APIs.

- Preferably, the total set of APIs serves as the main source of the API service, and other flavours/subsets are automatically derived from it based on the required functionality, keeping the maintenance of the API simple and in one place.
-- The APIs service should be decomposed at the domain level meaning that all domains or subset of domains should be available as part of the API service flavour. This should allows us to provide as an example EIM related APIs only as needed by workflow. We know that currently the domains have separate generated OpenAPI specs available as consumed by orch-cli.
+- The API service should be decomposed at the domain level, meaning that all domains or a subset of domains should be available as part of the EMF - they are already decomposed/modular at this level and deployed as separate services.
- The API service should be decomposed within the domain level, meaning that only a subset of the available APIs may need to be released and/or exposed at the API service level. As an example, within the EIM domain we may not want to expose for some workflows the Day 2 functionality that is currently part of the EIM OpenAPI spec.
+- The API service may also need to be decomposed at the individual internal service level, i.e. the host resource may need to have a different data model across use cases.
@@ -77,7 +80,14 @@ The following are the usual options to decomposing or exposing subsets of APIs.

### Consuming the APIs from the CLI

-The best approach would be for the EMF to provide a service/endpoint that will communicate which endpoints/APIs are currently supported by the deployed API service. The CLI would then request that information on login, save the configuration and prevent from using non-supported APIs/commands.
+The best approach would be for the EMF to provide a service/endpoint that communicates which endpoints/APIs are currently supported by the deployed API service. The CLI would then request that information on login, save the configuration, and prevent the use of non-supported APIs/commands. The prevention could happen at the command-call level, where the saved configuration would be checked before a command's RunE function is executed.
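
As an illustration, here is a minimal sketch of such command-level gating in Go. All names here - the capabilities file, the response shape, and the guard helper - are hypothetical and for illustration only; the actual orch-cli implementation may differ:

```go
package cli

import (
	"encoding/json"
	"fmt"
	"os"
)

// Capabilities mirrors a hypothetical response of the proposed
// "supported APIs" endpoint, saved to disk at login time.
type Capabilities struct {
	// Supported maps a service name to whether the deployed
	// API service exposes it.
	Supported map[string]bool `json:"supported"`
}

// loadCapabilities reads the configuration saved at login.
func loadCapabilities(path string) (*Capabilities, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var c Capabilities
	if err := json.Unmarshal(data, &c); err != nil {
		return nil, err
	}
	return &c, nil
}

// guard is called at the start of a command's RunE function and
// rejects commands whose backing service is not deployed.
func (c *Capabilities) guard(service string) error {
	if !c.Supported[service] {
		return fmt.Errorf("command not available: service %q is not part of this deployment", service)
	}
	return nil
}
```

A command would call guard("<service>") before issuing any REST request, so unsupported commands fail fast with a clear client-side message instead of an opaque server-side error.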

## Summary

1. Assuming that in phase 1 we will retain Traefik for all workflows, we need to check how the Traefik->EIM mapping will behave, and needs to behave, when EIM only supports a subset of APIs, and establish whether the set of API calls supported by the Traefik API Gateway maps to the supported APIs in the EIM API service subset.
2. We need to make sure that our API supports specific use cases while keeping compatibility with other workflows - to achieve that, we may need to make code changes in the data models. As an example, we need to make sure that mandatory fields are supported accordingly across use cases, i.e. instance creation will require an OS profile for the general use case, but this may not be true for self-installed OSes/Edge Nodes (see the sketch below). Collaboration with teams/ADR owners is needed to establish what changes are needed at the Resource Manager/Inventory levels to accommodate workflows and how the changes will impact the APIs.
3. We need to understand all the scenarios and the required services to be supported, and define the APIs per scenario.
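
To make point 2 concrete, a minimal sketch of scenario-dependent validation follows. The request type, field names, and the self-installed-OS scenario name are hypothetical; the real check would live in the API service or Resource Manager:

```go
package validation

import "errors"

// CreateInstanceRequest is a simplified stand-in for the real request type.
type CreateInstanceRequest struct {
	HostID    string
	OSProfile string // mandatory in the general use case only
}

// scenariosWithoutOSProfile lists scenarios where the edge node arrives
// with a self-installed OS, so no OS profile is required at creation.
var scenariosWithoutOSProfile = map[string]bool{
	"self-installed-os": true,
}

// Validate enforces per-scenario mandatory fields.
func Validate(scenario string, req CreateInstanceRequest) error {
	if req.HostID == "" {
		return errors.New("hostID is mandatory in all scenarios")
	}
	if req.OSProfile == "" && !scenariosWithoutOSProfile[scenario] {
		return errors.New("osProfile is mandatory in this scenario")
	}
	return nil
}
```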

## How NB API is Currently Built

From 947dd13a3001f305ae9f09e0e86827092f3818bb Mon Sep 17 00:00:00 2001
From: Joanna Kossakowska
Date: Fri, 14 Nov 2025 10:08:12 -0800
Subject: [PATCH 6/6] REST API per scenario

---
 .../eim-nbapi-cli-decomposition.md | 125 ++++--------------
 1 file changed, 23 insertions(+), 102 deletions(-)

diff --git a/design-proposals/eim-nbapi-cli-decomposition.md b/design-proposals/eim-nbapi-cli-decomposition.md
index 53a226b0c..e2969014a 100644
--- a/design-proposals/eim-nbapi-cli-decomposition.md
+++ b/design-proposals/eim-nbapi-cli-decomposition.md
@@ -82,7 +82,6 @@ The following are the usual options to decomposing or exposing subsets of APIs.

The best approach would be for the EMF to provide a service/endpoint that communicates which endpoints/APIs are currently supported by the deployed API service. The CLI would then request that information on login, save the configuration, and prevent the use of non-supported APIs/commands. The prevention could happen at the command-call level, where the saved configuration would be checked before a command's RunE function is executed.
-
## Summary

1. Assuming that in phase 1 we will retain Traefik for all workflows, we need to check how the Traefik->EIM mapping will behave, and needs to behave, when EIM only supports a subset of APIs, and establish whether the set of API calls supported by the Traefik API Gateway maps to the supported APIs in the EIM API service subset.

## How NB API is Currently Built

-Currently apiv2 (infra-core repository) stores REST API definitions of services in protocol buffer files (.proto) and uses protoc-gen-connect-openapi to autogenerate the openapi spec - openapi.yaml .
+Currently, apiv2 (in the infra-core repository) holds the definitions of the REST API services in protocol buffer files (.proto) and uses protoc-gen-connect-openapi to autogenerate the OpenAPI spec, openapi.yaml.
+
+The input to protoc-gen-connect-openapi comes from:
+the api/proto/services directory - one file (services.yaml) containing the API operations on all the available resources (service layer);
+the api/proto/resources directory - multiple files with data models - a separate file with the data model for each inventory resource.
+
+Protoc-gen-connect-openapi is the tool that is indirectly used to build the OpenAPI spec. It is configured as a plugin within buf (buf.gen.yaml).

-Content of api/proto Directory - two folders:
-services - API Operations (Service Layer) - this is one file services.yaml that contains API operation on all the available resources.
-resources - Data Models (DTOs/Entities) - separate file per each resource.
+### What is Buf

-Protoc-gen-connect-openapi is the tool that is indirectly used to build the openapi spec - it is configured as a plugin in buf (buf.gen.yaml). User calls "buf generate" within the "make generate" or "make buf-gen" target. This plugin generates OpenAPI 3.0 specifications directly from .proto files in api/proto/ directory.
+Buf is a replacement for protoc (the standard Protocol Buffers compiler). It makes working with .proto files easier, as it replaces messy protoc invocations with a clean config file. It is an all-in-one tool, providing compiling, linting, breaking-change detection, and dependency management.
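
For orientation, typical day-to-day buf usage looks like this (standard buf CLI commands, shown here purely for illustration):

```bash
buf generate                               # run every plugin configured in buf.gen.yaml
buf lint                                   # lint the .proto sources
buf breaking --against '.git#branch=main'  # detect breaking API changes vs. main
```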

In infra-core/apiv2, the "buf generate" command is executed within the "make generate" or "make buf-gen" target to generate the OpenAPI 3.0 spec directly from the .proto files in the api/proto/ directory.

The following is the current, full buf configuration (buf.gen.yaml):

```yaml
plugins:
  # go - https://pkg.go.dev/google.golang.org/protobuf
  - name: go
    out: internal/pbapi
    opt:
      - paths=source_relative

  # go grpc - https://pkg.go.dev/google.golang.org/grpc
  - name: go-grpc
    out: internal/pbapi
    opt:
      - paths=source_relative
      - require_unimplemented_servers=false

  # go install github.com/sudorandom/protoc-gen-connect-openapi@v0.17.0
  - name: connect-openapi
    path: protoc-gen-connect-openapi
    out: api/openapi
    strategy: all
    opt:
      - format=yaml
      - short-service-tags
      - short-operation-ids
      - path=openapi.yaml

  # grpc-gateway - https://grpc-ecosystem.github.io/grpc-gateway/
  - name: grpc-gateway
    out: internal/pbapi
    opt:
      - paths=source_relative

  # docs - https://github.com/pseudomuto/protoc-gen-doc
  - plugin: doc
    out: docs
    opt: markdown,proto.md
    strategy: all

  - plugin: go-const
    out: internal/pbapi
    path: ["go", "run", "./cmd/protoc-gen-go-const"]
    opt:
      - paths=source_relative
```

The protoc-gen-connect-openapi plugin takes as its input one full proto definition that includes all services (services.proto) and outputs the OpenAPI spec in api/openapi.

Key Items:
- Input: api/proto/**/*.proto
- Config: buf.gen.yaml, buf.work.yaml, buf.yaml
- Output: openapi.yaml
- Tool: protoc-gen-connect-openapi

Based on the content of api/proto/, buf also generates:
- the Go code (Go structs, gRPC clients/services) in internal/pbapi
- the gRPC gateway: REST-to-gRPC proxy code - HTTP handlers that proxy REST calls to gRPC (in internal/pbapi/**/*.pb.gw.go)
- documentation: docs/proto.md

## Building REST API Spec per Scenario

The following is the proposed solution (draft) for the EMF decomposition requirement, where the exposed REST API is limited to support a specific scenario while maintaining compatibility with other scenarios.

1. Split the services.yaml file into multiple folders/files per service.
2. Maintain a manifest that lists the names of the REST API services supported by each scenario (a sketch of such a manifest and the matching filter is appended at the end of this document).
3. Expose a new endpoint that lists the supported services in the current scenario.
4. Change the "buf-gen" make target to process only the services used by the scenario by passing the additional "--path" parameter; the list of services needs to come from the manifest in step 2. Example using the service1 and service2 services:

```bash
buf generate --path api/proto/services/service1/v1 --path api/proto/services/service2/v1
```

5. Step 4 generates the OpenAPI spec (openapi.yaml) only for the services supported by the particular scenario.
6. The CLI is built based on the full REST API spec (also built earlier), but gets the list of supported services from the new API endpoint (step 3) and adjusts its internal logic so that it calls only the supported REST API services/endpoints. When plain curl calls are made to unsupported endpoints, a default message about the unsupported service is returned.

## Solution 3

No splitting of service.yaml .
This approach uses custom annotations/options to connect services to scenarios.
- -- define custom option/annotations by extending google.protobuf.ServiceOptions and google.protobuf.MethodOptions, example: - -```go -syntax = "proto3"; -package annotations.common.v1; - -import "google/protobuf/descriptor.proto"; - -// Service-level: applies to the whole service (default) -extend google.protobuf.ServiceOptions { - repeated string scenario = 50001; // e.g., ["scenario-1", "scenario-2"] -} - -// Method-level: override/add per RPC if needed -extend google.protobuf.MethodOptions { - repeated string scenario = 50011; -} -``` -Add the file to api/proto/annotations. - -Use it in api/proto/services/services.proto: - -Service level selection per scenario: -```go -(...) -import "annotations/scenario_annotations.proto"; -(...) -service OSUpdateRun { - option (annotations.common.v1.scenario) = "scenario-1"; - // Get a list of OS Update Runs. - rpc ListOSUpdateRun(ListOSUpdateRunRequest) returns (ListOSUpdateRunResponse) { - option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run"}; - } - // Get a specific OS Update Run. - rpc GetOSUpdateRun(GetOSUpdateRunRequest) returns (resources.compute.v1.OSUpdateRun) { - option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"}; - } - // Delete a OS Update Run. - rpc DeleteOSUpdateRun(DeleteOSUpdateRunRequest) returns (DeleteOSUpdateRunResponse) { - option (google.api.http) = {delete: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"}; - } -} -(...) -``` - -Or: - -Method level selection per scenario: -```go -(...) -import "annotations/scenario_annotations.proto"; -(...) -service OSUpdateRun { - // Get a list of OS Update Runs. - rpc ListOSUpdateRun(ListOSUpdateRunRequest) returns (ListOSUpdateRunResponse) { - option (annotations.common.v1.scenario) = "scenario-1"; - option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run"}; - } - // Get a specific OS Update Run. - rpc GetOSUpdateRun(GetOSUpdateRunRequest) returns (resources.compute.v1.OSUpdateRun) { - option (google.api.http) = {get: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"}; - } - // Delete a OS Update Run. - rpc DeleteOSUpdateRun(DeleteOSUpdateRunRequest) returns (DeleteOSUpdateRunResponse) { - option (google.api.http) = {delete: "/edge-infra.orchestrator.apis/v2/os_update_run/{resourceId}"}; - } -} -(...) +bug generate --path api/proto/services/service1/v1 api/proto/services/service2/v1 ``` -- Use buf generate to generate the full openapi.yaml spec. +5. Step 4 generated the openapi spec openapi.yaml only for the services supported by particular scenario. +6. CLI is built based on the full REST API spec (also built earlier), but gets the list of supported services from the new API andpoint (step 3) and adjust its internal logic so it calls only supported REST API services/endpoints. When simple curl calls are used to unsupported endpoints, - default message about unsupported service is returned. -- Create and run a filter that reads generated .pb file that contains new service annotations, takes openapi.yaml as input and removes all services without the scenario annotation. The filter also takes as input the scenario name and returns a scenario specific openapi spec. -- OR patch the spec generating tool ( protoc-gen-connect-openapi) so it supports new annotations and includes them in the new full spec - so it reads your annotations directly and writes x-* fields into the OpenAPI. It will create an openapi spec with fields and services annotatted by a specific scenario. 
This requires literally creating a custom plugin that takes scenario as input and generates openapi spec per scenario only - wrapper of protoc-gen-connect-openapi.
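
# Appendix: Sketch of a Scenario Manifest and Spec Filter

To illustrate the manifest-driven approach retained above (steps 1-6 of "Building REST API Spec per Scenario"), below is a minimal sketch of a filter that reads the full openapi.yaml plus a per-scenario manifest and emits a spec containing only the operations whose service tag is listed for the requested scenario. It relies on the 'short-service-tags' option already present in buf.gen.yaml, which tags every generated operation with its service name. The file names (scenarios.yaml), the manifest layout, and the filter itself are assumptions for illustration - the actual manifest format and tooling are still to be decided:

```go
// Hypothetical spec filter: keep only operations tagged with a service
// that the requested scenario's manifest entry allows.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// manifest maps scenario name -> allowed service tags, e.g.
//   scenario-1: [Host, Instance]
type manifest map[string][]string

var httpMethods = []string{"get", "put", "post", "delete", "options", "head", "patch", "trace"}

func mustRead(path string) []byte {
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	return b
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: specfilter <scenario>")
		os.Exit(1)
	}
	scenario := os.Args[1]

	var m manifest
	if err := yaml.Unmarshal(mustRead("scenarios.yaml"), &m); err != nil {
		panic(err)
	}
	allowed := map[string]bool{}
	for _, svc := range m[scenario] {
		allowed[svc] = true
	}

	var spec map[string]any
	if err := yaml.Unmarshal(mustRead("api/openapi/openapi.yaml"), &spec); err != nil {
		panic(err)
	}

	paths, _ := spec["paths"].(map[string]any)
	for p, item := range paths {
		ops, _ := item.(map[string]any)
		for _, method := range httpMethods {
			op, ok := ops[method].(map[string]any)
			if !ok {
				continue
			}
			// Drop operations whose service tag is outside the scenario.
			if !hasAllowedTag(op["tags"], allowed) {
				delete(ops, method)
			}
		}
		// Remove paths left with no operations at all.
		if !hasAnyMethod(ops) {
			delete(paths, p)
		}
	}

	out, err := yaml.Marshal(spec)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}

func hasAllowedTag(tags any, allowed map[string]bool) bool {
	list, _ := tags.([]any)
	for _, t := range list {
		if s, ok := t.(string); ok && allowed[s] {
			return true
		}
	}
	return false
}

func hasAnyMethod(ops map[string]any) bool {
	for _, m := range httpMethods {
		if _, ok := ops[m]; ok {
			return true
		}
	}
	return false
}
```

It could be run as, for example, `go run ./cmd/specfilter scenario-1 > openapi-scenario-1.yaml` (a hypothetical path); the same manifest file could also back the "supported services" endpoint described in step 3.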