
Merge/upstream main 2025-08-28 #15

Merged
xdecentralix merged 12 commits into main from merge/upstream-main-2025-08-28 on Aug 28, 2025
Conversation

@xdecentralix (Collaborator)

Description

Changes

  • ...
  • ...

How to test

squadgazzz and others added 12 commits August 25, 2025 07:26
# Description
This is something of a follow-up to cowprotocol#3578: I noticed that the liquidity tick net data is not used when working with concentrated UniV3 liquidity sources in the Baseline solver, while this data can be heavy and contain thousands of records for each pool, potentially leading to unreasonable memory consumption.

# Changes
Do not store the liquidity tick net received from the driver. It would be great to not even parse it, but a shared struct is used here, so the parsed auction is now consumed by the `into_domain` function and dropped as soon as possible, to avoid keeping it in memory for the whole `solve` operation.

## How to test
No tests so far for UniV3 liquidity due to annoying fetching-delay issues in e2e tests.
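The ownership trick described above can be sketched as follows; `ParsedAuction` and `DomainAuction` are illustrative stand-ins for the real types, not the actual structs in the codebase:

```rust
// A minimal sketch of consuming the parsed auction by value so the
// (potentially large) liquidity data is freed before solving starts.

struct ParsedAuction {
    // The real struct can hold thousands of tick records per pool.
    liquidity_ticks: Vec<i128>,
}

struct DomainAuction {
    pool_count: usize,
}

// Takes `ParsedAuction` by value: once this function returns, the
// original struct (including its heavy tick data) has been dropped.
fn into_domain(auction: ParsedAuction) -> DomainAuction {
    DomainAuction {
        pool_count: auction.liquidity_ticks.len(),
    }
    // `auction` is dropped here, releasing its memory.
}

fn main() {
    let parsed = ParsedAuction { liquidity_ticks: vec![0; 3] };
    let domain = into_domain(parsed);
    // `parsed` can no longer be used; only the compact domain form
    // remains alive for the rest of the `solve` operation.
    assert_eq!(domain.pool_count, 3);
}
```

Because `into_domain` takes ownership rather than a reference, the borrow checker guarantees the large struct cannot be accidentally kept alive elsewhere.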
# Description
Even after cowprotocol#3499, a memory leak [was
noticed](cowprotocol#3554) when
UniV3 liquidity fetching is enabled in the Baseline solver. Looking at
the code, I don't see an obvious reason for it other than that the
UniswapV3QuoterV2 contract instance is extensively cloned on each
`/solve` request.

# Changes

- Use `Arc`'ed UniswapV3QuoterV2 contract instance.
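The effect of the change can be sketched as follows; `QuoterContract` and `Solver` are hypothetical stand-ins for the real contract bindings and solver state, illustrating how an `Arc` turns a deep clone into a cheap reference-count bump:

```rust
use std::sync::Arc;

// Stand-in for the real bindings: the actual UniswapV3QuoterV2 type
// holds ABI data that is expensive to clone on every request.
struct QuoterContract {
    abi_blob: Vec<u8>,
}

struct Solver {
    // Cloning the solver now only increments a reference count
    // instead of copying the whole contract instance.
    quoter: Arc<QuoterContract>,
}

fn main() {
    let quoter = Arc::new(QuoterContract { abi_blob: vec![0u8; 1024] });
    let a = Solver { quoter: Arc::clone(&quoter) };
    let b = Solver { quoter: Arc::clone(&quoter) };
    // Both solvers share the same single allocation.
    assert!(Arc::ptr_eq(&a.quoter, &b.quoter));
    assert_eq!(Arc::strong_count(&quoter), 3);
}
```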

## How to test
I've tried to resurrect
[this](cowprotocol@a9ff88f)
e2e test, but the liquidity fetching from the subgraph sometimes takes a
very long time, so the test is too flaky to enable.

## Follow-ups

In any case, I need to find a way to make the Jemalloc profiler
work (cowprotocol#3533) so I can collect memory dumps later in case the
memory leak happens again.

## Related Issues

cowprotocol#3554
# Description

cowprotocol#3532 set
persist-credentials to false, which led to the tag not being pushed since
there were no GitHub credentials stored to do so.
This PR adds Trivy (a Docker vulnerability scanner) to the repository.

---------

Co-authored-by: ilya <ilya@cow.fi>
Co-authored-by: Martin Magnus <martin.beckmann@protonmail.com>
# Description

We pinned the Foundry version for driver tests in
cowprotocol#3561 because high_gas_limit
started failing. This PR fixes the test with the latest Foundry version
and adds comments that make it clear what we're testing. I also added a
test that makes sure we discard solutions that take more gas than half
of the block gas limit.
# Description
For an order to be put into the current auction, the autopilot needs to
know the native price of the sell and the buy token. Any delay in
fetching the native prices can result in orders having to sit out an
auction, which immediately adds at least 12s to the execution time. But
we also have to be mindful of the time we spend fetching native prices,
because that delays the entire auction.

Also, to reduce cost, we fetch native prices in stages (first the
cheap quoters, then the expensive ones).

# Changes
So far we divided the time evenly between stages for simplicity.
However, we are currently dealing with issues where the later stages
just need a little more time. To get the most out of the allocated
time and use it fully, this PR adjusts how the timeout for the current
stage is computed.
The timeout of the current stage now adjusts dynamically: the current
stage always gets `total_time_left / stages_left`. This changes nothing
for the very first stage, but subsequent stages get more time whenever
an earlier stage does not use its entire allocated time budget.
For the native price estimation code to know how many stages are left, we
build a new `Context` struct in the `CompetitionEstimator` that contains
that information.
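The new timeout rule can be sketched in a few lines; `stage_timeout` is an illustrative name, not the actual function in the codebase:

```rust
use std::time::Duration;

// Each remaining stage gets an equal share of whatever time is left.
fn stage_timeout(total_time_left: Duration, stages_left: u32) -> Duration {
    total_time_left / stages_left
}

fn main() {
    // 3 stages, 9s total: the first stage gets 9s / 3 = 3s,
    // exactly what the old static split would have given it.
    let first = stage_timeout(Duration::from_secs(9), 3);
    assert_eq!(first, Duration::from_secs(3));

    // If stage 1 finishes after only 1s, 8s remain for 2 stages,
    // so stage 2 now gets 4s instead of the static 3s.
    let second = stage_timeout(Duration::from_secs(8), 2);
    assert_eq!(second, Duration::from_secs(4));
}
```

The unused budget of fast stages thus automatically flows to the slower later stages without changing the overall deadline.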

I also `async`-ified the other tests, which turned out to be unnecessary.
I think the new way is cleaner, but let me know if you'd prefer to have
that in a separate PR.

## How to test
Added a unit test to assert that the timeouts are what we'd expect them
to be (code comments explain the subtle details).
# Description
Follow up PR from cowprotocol#3569.

> [!CAUTION]
> Given that the parsing logic is changed, this PR will need to be
> synchronized with an infrastructure change upon merge.

# Changes
- [x] reduces the number of choices we have for configuring our gas
estimators in the API and autopilot to reflect current reality (GasNow,
Blocknative, etc. don't really exist anymore and are not used in
production)
- [x] changes the configuration to not require two orthogonal parameters
(`gas_estimator` & `gas_estimator_driver_url`) and clumsy logic to ensure
consistency, collapsing them into a single type that can be parsed
- [x] removes some otherwise unneeded code

## How to test
Existing tests
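The "single type that can be parsed" idea can be sketched as below; the variant names and the `driver|<url>` syntax are made up for illustration and are not the crate's actual configuration format:

```rust
// A hedged sketch: carrying the driver URL inside the enum variant
// makes an inconsistent (estimator, url) combination unrepresentable.
#[derive(Debug, PartialEq)]
enum GasEstimatorType {
    Native,
    Web3,
    Driver(String), // the URL lives inside the variant
}

impl std::str::FromStr for GasEstimatorType {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "native" => Ok(Self::Native),
            "web3" => Ok(Self::Web3),
            other => match other.strip_prefix("driver|") {
                Some(url) => Ok(Self::Driver(url.to_string())),
                None => Err(format!("unknown gas estimator: {other}")),
            },
        }
    }
}

fn main() {
    let parsed: GasEstimatorType = "driver|http://localhost:8080".parse().unwrap();
    assert_eq!(
        parsed,
        GasEstimatorType::Driver("http://localhost:8080".into())
    );
    // Retired estimators simply fail to parse.
    assert!("gasnow".parse::<GasEstimatorType>().is_err());
}
```

Collapsing two orthogonal flags into one parseable value moves the consistency check from runtime validation code into the type system.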
# Description
Some of our logs have become pretty hard to read IMO.
Take this log for example:

```
2025-08-27T07:38:26.694Z INFO autopilot_startup{args.shared.node_url=Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("mainnet-proxy.rpc-nodes.svc.cluster.local")), port: None, path: "/", query: None, fragment: None }}:ethereum{chain=Mainnet}:current_block_stream: ethrpc::block_stream: noticed a new block number=23231015 hash=0xa65cfa24ed5be0f2f3d8c63639f423c4dcaaa50ebeba05f218349d5be8f30ebd trace_id=83ee7db2a5fdfac5e413ffbb4eba4652
```

AFAICS roughly half of the log is not really useful
(`args.shared.node_url=Url { scheme: "http", cannot_be_a_base: false,
username: "", password: None, host:
Some(Domain("mainnet-proxy.rpc-nodes.svc.cluster.local")), port: None,
path: "/", query: None, fragment: None }}:ethereum{chain=Mainnet`).

# Changes
1. stop logging the node_url on every log line during autopilot startup
2. stop logging the current network, as this is already part of the
Elasticsearch document
3. added startup tracing for driver and orderbook, as I think it's
overall nice to easily see all logs that happen during the startup
sequence

@fafk will this interfere with the functionality of `tempo`? I'm mostly
concerned about (2) in case it's somehow used to aggregate data at the
moment.
# Description

The PancakeSwap v2 pool init code hash on BNB is different. I found the
correct one by going to
https://bscscan.com/address/0xca143ce32fe78f1f7019d7d551a6402fc5350c73#readContract
and reading the `INIT_CODE_PAIR_HASH` constant. I was able to get a
quote with this hash.
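The pool address is derived deterministically via CREATE2 from the factory address, the token pair, and this init code hash, which is why a wrong hash makes every computed pool address wrong. A minimal sketch of the byte layout follows; `keccak256` here is a placeholder stub (assumed, so the sketch compiles without external crates), not the real hash:

```rust
// Placeholder, NOT a real keccak-256 -- only the byte layout of the
// CREATE2 derivation is illustrated here.
fn keccak256(data: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for (i, b) in data.iter().enumerate() {
        out[i % 32] ^= *b;
    }
    out
}

// Uniswap V2-style pair address: keccak256(0xff ++ factory ++
// keccak256(token0 ++ token1) ++ init_code_hash), last 20 bytes.
fn pair_address(
    factory: [u8; 20],
    token0: [u8; 20],
    token1: [u8; 20],
    init_code_hash: [u8; 32],
) -> [u8; 20] {
    let mut salt_input = Vec::new();
    salt_input.extend_from_slice(&token0);
    salt_input.extend_from_slice(&token1);
    let salt = keccak256(&salt_input);

    let mut preimage = vec![0xffu8];
    preimage.extend_from_slice(&factory);
    preimage.extend_from_slice(&salt);
    preimage.extend_from_slice(&init_code_hash);

    let digest = keccak256(&preimage);
    let mut addr = [0u8; 20];
    addr.copy_from_slice(&digest[12..32]);
    addr
}

fn main() {
    let addr = pair_address([0x11; 20], [0x22; 20], [0x33; 20], [0x44; 32]);
    // Same inputs always give the same address; a different init code
    // hash gives a different address, so a wrong hash breaks quoting.
    assert_eq!(addr, pair_address([0x11; 20], [0x22; 20], [0x33; 20], [0x44; 32]));
    assert_ne!(addr, pair_address([0x11; 20], [0x22; 20], [0x33; 20], [0x45; 32]));
}
```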
@github-actions
Thank you for your submission, we really appreciate it. Like many open-source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution. You can sign the CLA by posting a pull request comment in the same format as below.


I have read the CLA Document and I hereby sign the CLA


0 out of 7 committers have signed the CLA.
@squadgazzz
@fafk
@avivkeller
@MartinquaXD
@fleupold
@jmg-duarte
@xdecentralix
You can retrigger this bot by commenting recheck in this Pull Request. Posted by the CLA Assistant Lite bot.

@xdecentralix xdecentralix merged commit 21d9066 into main Aug 28, 2025
11 of 14 checks passed
@github-actions github-actions bot locked and limited conversation to collaborators Aug 28, 2025
@xdecentralix xdecentralix deleted the merge/upstream-main-2025-08-28 branch August 28, 2025 10:10

7 participants