- Feature Proxies are Apigee proxies that contain just one feature: a specific endpoint and target, a set of policies, or a combination of both.
- Feature Proxies should use unique names for policies and resources to avoid conflicts when they are merged into composite proxies. Any naming conflicts are reported as warnings.
- Policies in flows of an endpoint or target named `default` are applied to all endpoints and targets when the feature is used. Default endpoint and target flows are the way to declare that policies should be added to every endpoint and target in a template (see the endpoint sketch after this list).
- Policies in flows that are not named `default` are copied 1:1 into the feature, and then copied 1:1 into templates when the feature is applied. Non-default flows model the handling of one specific endpoint or target and are not applied generally.
- Property Sets within exported Feature Proxies are written into the Parameters of the new Feature, making it easy to parameterize those properties when generating proxies.
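To make this concrete, here is a minimal sketch of a feature proxy's default endpoint, assuming the standard Apigee `ProxyEndpoint` XML format; the feature name (`llm-quota`), base path, and policy name are hypothetical:

```xml
<!-- apiproxy/proxies/default.xml of a hypothetical "llm-quota" feature proxy.
     Because this endpoint is named "default", the policies referenced in its
     flows are added to every endpoint of any template the feature is merged
     into. The policy name carries a feature-specific prefix so it stays
     unique when features are combined into a composite proxy. -->
<ProxyEndpoint name="default">
  <PreFlow name="PreFlow">
    <Request>
      <Step>
        <Name>Quota-Feature-Enforce</Name>
      </Step>
    </Request>
    <Response/>
  </PreFlow>
  <HTTPProxyConnection>
    <BasePath>/llm-quota</BasePath>
  </HTTPProxyConnection>
  <RouteRule name="default">
    <TargetEndpoint>default</TargetEndpoint>
  </RouteRule>
</ProxyEndpoint>
```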
The naming convention `topic.dataName` (camelCase) is used for flow variables, and `dc_topic_data_name` (snake_case) for the same data in data collectors. The policy sketches after the list below show how these names are typically wired together.
- `quota.limit` - The limit to be enforced in the quota.
  - Apigee variable: `quota.limit`
- `quota.interval` - The interval of the limit to be enforced in the quota.
  - Apigee variable: `quota.interval`
- `quota.timeunit` - The time unit of the interval to be enforced in the quota.
  - Apigee variable: `quota.timeunit`
- `quota.weight` - The weight of the limit to be enforced in the quota.
  - Apigee variable: `quota.weight`
- `llm.model` - The name or ID of the LLM model that is used, for example `gemini-2.5-flash` or `mistral-large-2411`.
  - Apigee variable: `llm.model`
  - Data collector: `dc_llm_model`
- `llm.promptInput` - The last user prompt that is sent in the request to an LLM model.
  - Apigee variable: `llm.promptInput`
  - Data collector: `dc_llm_prompt_input`
- `llm.promptTokenCount` - The total number of tokens in the prompt input.
  - Apigee variable: `llm.promptTokenCount`
  - Data collector: `dc_llm_prompt_token_count`
- `llm.promptEstimatedTokenCount` - The estimated number of tokens in the prompt input.
  - Apigee variable: `llm.promptEstimatedTokenCount`
  - Data collector: `dc_llm_prompt_estimated_token_count`
- `llm.responseTokenCount` - The total number of tokens in the LLM response.
  - Apigee variable: `llm.responseTokenCount`
  - Data collector: `dc_llm_response_token_count`
- `llm.totalTokenCount` - The total number of tokens in the LLM request and response.
  - Apigee variable: `llm.totalTokenCount`
  - Data collector: `dc_llm_total_token_count`
- `llm.promptRejectReason` - If an LLM prompt is rejected, the reason can be stored here.
  - Apigee variable: `llm.promptRejectReason`
  - Data collector: `dc_llm_prompt_reject_reason`
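As an illustration of how the `quota.*` variables can drive enforcement, here is a minimal sketch of a standard Apigee Quota policy that reads its limit, interval, time unit, and weight from those flow variables; the policy name and the literal fallback values are hypothetical:

```xml
<!-- Hypothetical Quota policy: the ref/countRef attributes take precedence
     over the literal values whenever the quota.* flow variables are set,
     for example from a feature's parameters or a property set. -->
<Quota continueOnError="false" enabled="true" name="Quota-Feature-Enforce">
  <Allow count="100" countRef="quota.limit"/>
  <Interval ref="quota.interval">1</Interval>
  <TimeUnit ref="quota.timeunit">hour</TimeUnit>
  <MessageWeight ref="quota.weight"/>
  <Distributed>true</Distributed>
  <Synchronous>true</Synchronous>
</Quota>
```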
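Similarly, the `llm.*` flow variables can be forwarded to their matching data collectors with a standard Apigee DataCapture policy. This is a minimal sketch assuming the `dc_llm_*` data collectors already exist in the organization; the policy name and the default values are hypothetical:

```xml
<!-- Hypothetical DataCapture policy: each Capture copies one llm.* flow
     variable into the data collector that follows the dc_topic_data_name
     naming convention described above. -->
<DataCapture continueOnError="false" enabled="true" name="DC-LLM-Analytics">
  <Capture>
    <DataCollector>dc_llm_model</DataCollector>
    <Collect ref="llm.model" default="unknown"/>
  </Capture>
  <Capture>
    <DataCollector>dc_llm_prompt_token_count</DataCollector>
    <Collect ref="llm.promptTokenCount" default="0"/>
  </Capture>
  <Capture>
    <DataCollector>dc_llm_response_token_count</DataCollector>
    <Collect ref="llm.responseTokenCount" default="0"/>
  </Capture>
  <Capture>
    <DataCollector>dc_llm_total_token_count</DataCollector>
    <Collect ref="llm.totalTokenCount" default="0"/>
  </Capture>
</DataCapture>
```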