Conversation

Copilot AI commented Nov 18, 2025

Adding missing AQL query options to PostCursorOptions class

Plan:

  • Explore repository structure and understand the codebase
  • Build the solution to verify current state
  • Add missing properties to PostCursorOptions.cs with appropriate documentation:
    • AllowDirtyReads (bool, v3.10.0)
    • AllowRetry (bool)
    • Cache (bool)
    • FillBlockCache (bool)
    • MaxDNFConditionMembers (long, v3.11.0)
    • MaxNodesPerCallstack (long)
    • MaxNumberOfPlans (long)
    • SpillOverThresholdMemoryUsage (long, v3.10.0)
    • SpillOverThresholdNumRows (long, v3.10.0)
    • UsePlanCache (bool, v3.12.4)
  • Add test for the new options
  • Build and verify changes
  • Run code review
  • Run security checks
  • Address PR feedback: Merge new options into existing test instead of separate test

Summary:
All 10 missing AQL query options have been successfully added to the PostCursorOptions class with comprehensive XML documentation. The test has been updated per feedback to include the new options in the existing "PostCursorAsync_ShouldSucceed_WhenUsingOtherOptions" test rather than creating a separate test.
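As an illustrative sketch only (property names, types, and version notes are taken from the plan above; the exact XML documentation text in the repository may differ), the additions to PostCursorOptions.cs would look something like this:

```csharp
// Hypothetical fragment mirroring the plan above; the real
// PostCursorOptions class in the repository has more members.
public class PostCursorOptions
{
    /// <summary>
    /// Whether the Coordinator may read from any shard replica,
    /// not only the leader (dirty reads). Introduced in v3.10.0.
    /// </summary>
    public bool? AllowDirtyReads { get; set; }

    /// <summary>
    /// Whether the latest batch of the cursor may be re-fetched.
    /// </summary>
    public bool? AllowRetry { get; set; }

    /// <summary>
    /// Whether the AQL query results cache is used for this query.
    /// </summary>
    public bool? Cache { get; set; }

    /// <summary>
    /// Limits the number of plans created by the query optimizer.
    /// </summary>
    public long? MaxNumberOfPlans { get; set; }

    // ... the remaining options follow the same nullable pattern,
    // so unset values are omitted from the serialized request.
}
```

Making every property nullable keeps the wire format backward compatible: options the caller never sets are simply left out of the request body.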

Original prompt

This section details the original issue you should resolve

<issue_title>Add missing extra options for AQL queries</issue_title>
<issue_description>There are new documented options that we don't have in PostCursorOptions:

  • allowDirtyReads (v3.10.0)
  • allowRetry
  • cache
  • fillBlockCache
  • maxDNFConditionMembers (v3.11.0)
  • maxNodesPerCallstack
  • maxNumberOfPlans
  • spillOverThresholdMemoryUsage (v3.10.0)
  • spillOverThresholdNumRows (v3.10.0)
  • usePlanCache (3.12.4)</issue_description>

<agent_instructions>Implement the missing options and update any related tests</agent_instructions>
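Once implemented, client code could exercise the new options roughly as follows (a sketch assuming a cursor API client exposing PostCursorAsync and a PostCursorBody with Query and Options members; exact shapes may differ from the repository):

```csharp
// Hypothetical usage sketch; option names are taken from the issue list.
var response = await cursorApi.PostCursorAsync<MyDoc>(new PostCursorBody
{
    Query = "FOR d IN MyCollection RETURN d",
    Options = new PostCursorOptions
    {
        AllowRetry = true,       // allow re-fetching the latest batch on failure
        Cache = false,           // skip the query results cache
        FillBlockCache = false,  // keep bulk reads out of the RocksDB block cache
        MaxNumberOfPlans = 4,    // cap optimizer plan generation
        UsePlanCache = true      // v3.12.4+: reuse cached execution plans
    }
});
```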

Comments on the Issue (you are @copilot in this section)



Copilot AI changed the title from "[WIP] Add missing extra options for AQL queries" to "Add missing AQL query options to PostCursorOptions" on Nov 18, 2025
Copilot AI requested a review from rossmills99 November 18, 2025 09:40
}

[Fact]
public async Task PostCursorAsync_ShouldSucceed_WhenUsingNewOptions()
Collaborator

@copilot Using "new options" in the name will not make sense in future when these are not new any more. Can we just include the new options in the pre-existing test "PostCursorAsync_ShouldSucceed_WhenUsingOtherOptions"?

Author

Done. Merged the new options into the existing test in commit acfb19f.

@rossmills99 (Collaborator) left a comment

Put new options into pre-existing test

@rossmills99 (Collaborator) left a comment

@copilot
Here's a copy of the documentation supplied for the REST API, can you use these descriptions as comment strings for the newly added PostCursorOptions properties?

options (object)
A key/value object with extra options for the query.

allowDirtyReads (boolean)
If you set this option to true and execute the query against a cluster deployment, then the Coordinator is allowed to read from any shard replica and not only from the leader.

You may observe data inconsistencies (dirty reads) when reading from followers, namely obsolete revisions of documents because changes have not yet been replicated to the follower, as well as changes to documents before they are officially committed on the leader.

This feature is only available in the Enterprise Edition.

allowRetry (boolean)
Set this option to true to make it possible to retry fetching the latest batch from a cursor. The default is false.

If retrieving a result batch fails because of a connection issue, you can ask for that batch again using the POST /_api/cursor/<cursor-id>/<batch-id> endpoint. The first batch has an ID of 1 and the value is incremented by 1 with every batch. Every result response except the last one also includes a nextBatchId attribute, indicating the ID of the batch after the current one. You can remember and use this batch ID should retrieving the next batch fail.

You can only request the latest batch again (or the next batch). Earlier batches are not kept on the server-side. Requesting a batch again does not advance the cursor.

You can also call this endpoint with the next batch identifier, i.e. the value returned in the nextBatchId attribute of a previous request. This advances the cursor and returns the results of the next batch. This is only supported if there are more results in the cursor (i.e. hasMore is true in the latest batch).

From v3.11.1 onward, you may use the POST /_api/cursor/<cursor-id>/<batch-id> endpoint even if the allowRetry attribute is false to fetch the next batch, but you cannot request a batch again unless you set it to true.

To allow refetching of the very last batch of the query, the server cannot automatically delete the cursor. After the first attempt at fetching the last batch, the server would normally delete the cursor to free up resources. As you might need to reattempt the fetch, it needs to keep the final batch when the allowRetry option is enabled. Once you have successfully received the last batch, you should call the DELETE /_api/cursor/<cursor-id> endpoint so that the server doesn't unnecessarily keep the batch until the cursor times out (ttl query option).

cache (boolean)
Whether the AQL query results cache shall be used for adding as well as for retrieving results.

If the query cache mode is set to demand and you set the cache query option to true for a query, then its query result is cached if it's eligible for caching. If the query cache mode is set to on, query results are automatically cached if they are eligible for caching unless you set the cache option to false.

If you set the cache option to false, then any query cache lookup is skipped for the query. If you set it to true, the query cache is checked for a cached result if the query cache mode is either set to on or demand.

failOnWarning (boolean)
When set to true, the query will throw an exception and abort instead of producing a warning. This option should be used during development to catch potential issues early. When the attribute is set to false, warnings will not be propagated to exceptions and will be returned with the query result. There is also a server configuration option --query.fail-on-warning for setting the default value for failOnWarning so it does not need to be set on a per-query level.

fillBlockCache (boolean)
If set to true or not specified, the query stores the data it reads via the RocksDB storage engine in the RocksDB block cache. This is usually the desired behavior. The option can be set to false for queries that are known to either read a lot of data that would thrash the block cache, or to read data known to be outside of the hot set. By setting the option to false, data read by the query does not make it into the RocksDB block cache if it is not already in there, leaving more room for the actual hot set.

fullCount (boolean)
If set to true and the query contains a LIMIT clause, then the result will have an extra attribute with the sub-attributes stats and fullCount, { ... , "extra": { "stats": { "fullCount": 123 } } }. The fullCount attribute contains the number of documents in the result before the last top-level LIMIT in the query was applied. It can be used to count the number of documents that match certain filter criteria, but only return a subset of them, in one go. It is thus similar to MySQL's SQL_CALC_FOUND_ROWS hint. Note that setting the option will disable a few LIMIT optimizations and may lead to more documents being processed, and thus make queries run longer. Note that the fullCount attribute may only be present in the result if the query has a top-level LIMIT clause and the LIMIT clause is actually used in the query.

intermediateCommitCount (integer)
The maximum number of operations after which an intermediate commit is performed automatically.

intermediateCommitSize (integer)
The maximum total size of operations after which an intermediate commit is performed automatically.

maxDNFConditionMembers (integer)
A threshold for the maximum number of OR sub-nodes in the internal representation of an AQL FILTER condition.

You can use this option to limit the computation time and memory usage when converting complex AQL FILTER conditions into the internal DNF (disjunctive normal form) format. FILTER conditions with a lot of logical branches (AND, OR, NOT) can take a large amount of processing time and memory. This query option limits the computation time and memory usage for such conditions.

Once the threshold value is reached during the DNF conversion of a FILTER condition, the conversion is aborted, and the query continues with a simplified internal representation of the condition, which cannot be used for index lookups.

You can set the threshold globally instead of per query with the --query.max-dnf-condition-members startup option.

maxNodesPerCallstack (integer)
The number of execution nodes in the query plan after which stack splitting is performed to avoid a potential stack overflow. Defaults to the configured value of the startup option --query.max-nodes-per-callstack.

This option is only useful for testing and debugging and normally does not need any adjustment.

maxNumberOfPlans (integer)
Limits the maximum number of plans that are created by the AQL query optimizer.

maxRuntime (number)
The query has to be executed within the given runtime or it is killed. The value is specified in seconds. The default value is 0.0 (no timeout).

maxTransactionSize (integer)
The transaction size limit in bytes.

maxWarningCount (integer)
Limits the maximum number of warnings a query will return. The number of warnings a query will return is limited to 10 by default, but that number can be increased or decreased by setting this attribute.

optimizer (object)
Options related to the query optimizer.

rules (array)
A list of to-be-included or to-be-excluded optimizer rules can be put into this attribute, telling the optimizer to include or exclude specific rules. To disable a rule, prefix its name with a -, to enable a rule, prefix it with a +. There is also a pseudo-rule all, which matches all optimizer rules. -all disables all rules.

Items: string

profile (integer)
If set to true or 1, then the additional query profiling information is returned in the profile sub-attribute of the extra return attribute, unless the query result is served from the query results cache. If set to 2, the query includes execution stats per query plan node in stats.nodes sub-attribute of the extra return attribute. Additionally, the query plan is returned in the extra.plan sub-attribute.

satelliteSyncWait (number)
This Enterprise Edition parameter allows you to configure how long a DB-Server has to bring the SatelliteCollections involved in the query into sync. The default value is 60.0 seconds. When the maximal time is reached, the query is stopped.

skipInaccessibleCollections (boolean)
Lets AQL queries (especially graph traversals) treat collections to which a user has no access rights as if these collections were empty. Instead of returning a forbidden access error, your queries execute normally. This is intended to help with certain use cases: a graph contains several collections and different users execute AQL queries on that graph. You can naturally limit the accessible results by changing the access rights of users on collections.

This feature is only available in the Enterprise Edition.

spillOverThresholdMemoryUsage (integer)
This option allows queries to store intermediate and final results temporarily on disk if the amount of memory used (in bytes) exceeds the specified value. This is used for decreasing the memory usage during the query execution.

This option only has an effect on queries that use the SORT operation but without a LIMIT, and if you enable the spillover feature by setting a path for the directory to store the temporary data in with the --temp.intermediate-results-path startup option.

Default value: 128MB.

INFO: Spilling data from RAM onto disk is an experimental feature and is turned off by default. The query results are still built up entirely in RAM on Coordinators and single servers for non-streaming queries. To avoid the buildup of the entire query result in RAM, use a streaming query (see the stream option).

spillOverThresholdNumRows (integer)
This option allows queries to store intermediate and final results temporarily on disk if the number of rows produced by the query exceeds the specified value. This is used for decreasing the memory usage during the query execution. In a query that iterates over a collection that contains documents, each row is a document, and in a query that iterates over temporary values (i.e. FOR i IN 1..100), each row is one of such temporary values.

This option only has an effect on queries that use the SORT operation but without a LIMIT, and if you enable the spillover feature by setting a path for the directory to store the temporary data in with the --temp.intermediate-results-path startup option.

Default value: 5000000 rows.

INFO: Spilling data from RAM onto disk is an experimental feature and is turned off by default. The query results are still built up entirely in RAM on Coordinators and single servers for non-streaming queries. To avoid the buildup of the entire query result in RAM, use a streaming query (see the stream option).

stream (boolean)
Can be enabled to execute the query lazily. If set to true, then the query is executed as long as necessary to produce up to batchSize results. These results are returned immediately and the query is suspended until the client asks for the next batch (if there are more results). Depending on the query this can mean that the first results will be available much faster and that less memory is needed because the server only needs to store a subset of results at a time. Read-only queries can benefit the most, unless SORT without index or COLLECT are involved that make it necessary to process all documents before a partial result can be returned. It is advisable to only use this option for queries without exclusive locks.

Remarks:

The query will hold resources until it ends (such as RocksDB snapshots, which prevents compaction to some degree). Writes will be in memory until the query is committed.
If existing documents are modified, then write locks are held on these documents and other queries trying to modify the same documents will fail because of this conflict.
A streaming query may fail late because of a conflict or for other reasons after some batches were already returned successfully, possibly rendering the results up to that point meaningless.
The query options cache, count and fullCount are not supported for streaming queries.
Query statistics, profiling data and warnings are delivered as part of the last batch.
If the stream option is false (default), then the complete result of the query is calculated before any of it is returned to the client. The server stores the full result in memory (on the contacted Coordinator if in a cluster). All other resources are freed immediately (locks, RocksDB snapshots). The query will fail before it returns results in case of a conflict.

usePlanCache (boolean)
Set this option to true to utilize a cached query plan or add the execution plan of this query to the cache if it's not in the cache yet. Otherwise, the plan cache is bypassed (introduced in v3.12.4).

Query plan caching can reduce the total time for processing queries by avoiding parsing, planning, and optimizing queries over and over again when they effectively have the same execution plan, with at most some changes to bind parameter values.

An error is raised if a query doesn't meet the requirements for plan caching. See Cache eligibility for details.

Default: false
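Putting several of these options together, the body of a POST /_api/cursor request could look like this (all values are illustrative):

```json
{
  "query": "FOR d IN coll FILTER d.value > @min RETURN d",
  "bindVars": { "min": 10 },
  "batchSize": 100,
  "options": {
    "allowRetry": true,
    "cache": false,
    "fillBlockCache": false,
    "maxNumberOfPlans": 4,
    "spillOverThresholdMemoryUsage": 134217728,
    "usePlanCache": true
  }
}
```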


Copilot AI commented Nov 26, 2025

@copilot
Here's a copy of the documentation supplied for the REST API, can you use these descriptions as comment strings for the newly added PostCursorOptions properties?

...

The comment appears to be cut off after the allowRetry description. Could you provide the complete REST API documentation for all the newly added properties (allowDirtyReads, allowRetry, cache, fillBlockCache, maxDNFConditionMembers, maxNodesPerCallstack, maxNumberOfPlans, spillOverThresholdMemoryUsage, spillOverThresholdNumRows, and usePlanCache)?

@rossmills99 (Collaborator)

@copilot
api-options.txt

I have attached the information to this comment in a text file, can you access it that way?


Copilot AI commented Nov 26, 2025

@copilot
api-options.txt

...

I cannot access file attachments from GitHub comments. Could you paste the contents of the text file directly in a comment? That way I can update the XML documentation comments with the official REST API descriptions.

@rossmills99 (Collaborator)

@copilot Here you go:

(The same REST API option descriptions as in the previous comment, repeated verbatim and omitted here.)


Copilot AI commented Nov 26, 2025

@copilot Here you go:

options (object)
...

The text is still cut off. I can see the documentation for allowDirtyReads and allowRetry, but I need the complete documentation for the remaining properties: cache, fillBlockCache, maxDNFConditionMembers, maxNodesPerCallstack, maxNumberOfPlans, spillOverThresholdMemoryUsage, spillOverThresholdNumRows, and usePlanCache. Could you provide the rest of the documentation in another comment?

rossmills99 requested a review from DiscoPYF November 26, 2025 12:59
@rossmills99 (Collaborator)

@DiscoPYF I've used Copilot to automatically implement these changes; it looks like a good job to me. In the comments above I tried to ask it to use documentation comments from the Swagger docs, but it failed (maybe a limitation on how much context it will read from a PR comment). Anyway, the comments it used look fine, having quickly checked them, so I have approved.
