Why
Currently, stored procedures executed by triggers receive a props map like so:

```erlang
-type sproc_props() :: #{on_action := [create | update | delete],
                         path := khepri_path:native_path()}.
```

But the path key alone might not be enough information for the trigger to do something useful. It would be useful to also be passed the created or updated payload, or, for deletion triggers, the payload as it was before deletion, if one exists.
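For illustration, the extended type could look roughly like this. The `payload` key name, its optionality and its type are assumptions, not an existing Khepri API:

```erlang
%% Hypothetical extension of sproc_props(); the `payload' key is an
%% assumption for illustration, and khepri:data() is just a placeholder
%% type. The key would be absent when the affected tree node carries no
%% payload.
-type sproc_props() :: #{on_action := [create | update | delete],
                         path := khepri_path:native_path(),
                         payload => khepri:data()}.
```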
Example use case...
In rabbitmq-stream-s3 we track stream names in Khepri. Imagine a tree like so:
```
rabbitmq
├── rabbitmq_stream_s3
│   └── <<"__sq_1745252105682952932">>
├── vhosts
│   └── <<"/">>
│       ├── queues
│       │   └── <<"sq">>
│       └── ...
└── ...
```
We could set a deletion trigger on:

```erlang
?RABBITMQ_KHEPRI_QUEUE_PATH(?KHEPRI_WILDCARD_STAR, ?KHEPRI_WILDCARD_STAR)
```

With the current sproc props we would get the path ([rabbitmq, vhosts, <<"/">>, queues, <<"sq">>]), but we couldn't map that to the stream name (<<"__sq_1745252105682952932">>). If the payload were included, we could get the name from the type state in #amqqueue{} and clean up the objects in S3 using that info.
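With such a payload, the deletion-trigger stored procedure could look roughly like the sketch below. The `payload` key, the `name` key in the type state and the `delete_stream_objects/1` helper are all assumptions for illustration:

```erlang
%% Hypothetical deletion-trigger sproc. It assumes the proposed `payload'
%% key carries the deleted #amqqueue{} record and that the stream's
%% storage name can be read from its type state.
fun(#{payload := Q}) ->
        #{name := StreamName} = amqqueue:get_type_state(Q),
        %% `delete_stream_objects/1' is a made-up helper that would remove
        %% the stream's objects from S3.
        rabbitmq_stream_s3:delete_stream_objects(StreamName);
   (_Props) ->
        %% No payload in the props: nothing we can clean up here.
        ok
end.
```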
As a workaround, we can give the rabbitmq_stream_s3 entry a keep-while condition on the ?RABBITMQ_KHEPRI_QUEUE_PATH and instead set a deletion trigger on [rabbitmq, rabbitmq_stream_s3, ?KHEPRI_WILDCARD_STAR]. But this seems less direct than it could be: ideally, deleting a stream queue would kick off a task to delete the objects, and that task would delete the rabbitmq_stream_s3 entry once the deletion completes successfully (tolerating retries).
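A rough sketch of that workaround, using Khepri's keep_while put option, khepri_evf:tree/2 event filters and khepri:register_trigger/4; the module name, paths, trigger id and sproc path are invented for illustration:

```erlang
-module(stream_s3_workaround).
-export([track_stream/3]).

-include_lib("khepri/include/khepri.hrl").

%% Track StreamName under rabbitmq_stream_s3. The entry only lives while
%% the queue's tree node (QueuePath) exists, so deleting the queue deletes
%% the entry, which in turn fires the cleanup trigger registered below.
track_stream(StoreId, StreamName, QueuePath) ->
    TrackerPath = [rabbitmq, rabbitmq_stream_s3, StreamName],
    KeepWhile = #{QueuePath => #if_node_exists{exists = true}},
    ok = khepri:put(StoreId, TrackerPath, undefined,
                    #{keep_while => KeepWhile}),

    %% Fire the (separately stored) cleanup sproc whenever a tracker entry
    %% is deleted.
    EventFilter = khepri_evf:tree(
                    [rabbitmq, rabbitmq_stream_s3, ?KHEPRI_WILDCARD_STAR],
                    #{on_actions => [delete]}),
    khepri:register_trigger(StoreId, stream_s3_cleanup, EventFilter,
                            [rabbitmq, sprocs, stream_s3_cleanup]).
```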
How
When listing triggered stored procedures, we can find the tree node that matched the trigger and include its payload in the sproc props if one is present in the tree. For creations and updates we would pass the payload of the new/updated tree node, and for deletions we would pass the payload as it was in the deleted tree node.
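Roughly, the idea could look like the sketch below. This is not Khepri's actual internal code: function and variable names are invented, on_action is treated as a single atom for brevity, and `node_payload/1` stands for whatever accessor returns a tree node's payload:

```erlang
%% Sketch only: build the props for a triggered sproc, adding the payload
%% of the relevant tree node when there is one.
sproc_props(OnAction, Path, OldNode, NewNode) ->
    Props = #{on_action => OnAction, path => Path},
    case OnAction of
        %% For deletions, pass the payload as it was before the delete.
        delete -> maybe_add_payload(Props, node_payload(OldNode));
        %% For creations and updates, pass the new/updated payload.
        _      -> maybe_add_payload(Props, node_payload(NewNode))
    end.

%% `none' is an invented sentinel for "the node had no payload".
maybe_add_payload(Props, none)    -> Props;
maybe_add_payload(Props, Payload) -> Props#{payload => Payload}.
```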
What do you think?