feat(storage): add configuration for external object storage provider #247

andrewazores merged 18 commits into cryostatio:main
Conversation
Would using a config map or secret make sense? Maybe allow users to specify a local file with S3 environment variables and use …
+1 Agreed! Allowing a user-provided ConfigMap/Secret sounds good to me! I guess it'd be like Andrew's suggestion. To extend that idea, I also think it would be a good idea to allow env vars defined via Helm values. How about the proposal below? Reference: I took inspiration from the s3 ack chart here, [0] and [1]. Proposed changes to the
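A hypothetical sketch of what such Helm values could look like (the key name `core.extraEnv` and the Secret wiring here are illustrative assumptions, not the merged design):

```yaml
# Hypothetical values.yaml fragment: extra environment variables for a
# container, defined directly or sourced from a user-provided Secret.
core:
  extraEnv:
    - name: STORAGE_EXT_URL            # plain value
      value: "http://minio.storage:9000"
    - name: STORAGE_ACCESS_KEY_ID      # value pulled from a Secret
      valueFrom:
        secretKeyRef:
          name: s3cred
          key: STORAGE_ACCESS_KEY_ID
```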
Really good idea @tthvo, I like that a lot. Would you like to proceed with a PR for that? I can then rebase my work on top of that.
Cool! @andrewazores I opened #248 to allow storage containers for now. If it looks good, I will circle back next week for the rest of the containers 😄
@cryostatio/reviewers ping |
@reviewers ping |
Tested Scenario 1; works well. For Scenario 2 I seem to be unable to run it: I followed the testing instructions on a fresh crc instance and ran into this:

Is it possible something is misconfigured somewhere? Running crc-linux-2.54.0-amd64
This looks like a mismatch between the Helm chart and Cryostat relating to the switch from Endpoints to EndpointSlices. This log message indicates that the Cryostat container is still trying to watch/list Endpoints objects, but the Helm version that deployed it is newer, so the RBAC is set up to grant Cryostat permissions for EndpointSlices, not Endpoints. I'm guessing it's because of this part of the original testing instructions: that image was prepared when I opened this PR back in May, which predates cryostatio/cryostat#740. Since that and cryostatio/cryostat#927 have been merged in the meantime, I think these Helm value overrides can just be left out (i.e. allow the chart to install the default).
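For illustration, the RBAC difference described above can be sketched roughly like this (names and rule shape are illustrative, not copied from the chart):

```yaml
# Sketch: the newer chart grants access to discovery.k8s.io
# EndpointSlices, while an older Cryostat image still expects
# permission on core-group Endpoints.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cryostat   # illustrative name
rules:
  - apiGroups: ["discovery.k8s.io"]
    resources: ["endpointslices"]
    verbs: ["get", "list", "watch"]
  # an older image would instead need:
  # - apiGroups: [""]
  #   resources: ["endpoints"]
  #   verbs: ["get", "list", "watch"]
```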
See #246
See cryostatio/cryostat#927
Allows the user to configure an alternate S3 object storage provider, rather than requiring the use of `cryostat-storage`.
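Equivalently to the `--set` flags used in the testing instructions below, the new options can be sketched in a values file (values taken from the SeaweedFS scenario in this PR; the exact schema is defined by the chart):

```yaml
# Point the chart at an external S3-compatible provider instead of
# the bundled cryostat-storage deployment.
storage:
  storageSecretName: s3cred
  provider:
    url: http://seaweed.objectstorage:8333
    region: us-east-1
```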
### Manual Testing with unmanaged `cryostat-storage`

This scenario uses `cryostat-storage` again, but deployed independently. It assumes an OpenShift cluster is available and `oc` is logged in.

Deploy `cryostat-storage`:

```shell
$ oc new-project objectstorage
$ # set credentials to secure the S3 endpoint
$ oc create secret generic s3cred \
    --from-literal=STORAGE_ACCESS_KEY_ID=cryostat \
    --from-literal=STORAGE_ACCESS_KEY=verySecretKey1
$ oc create -f seaweed.yaml
```

`seaweed.yaml`

```shell
$ oc new-project cryostat-helm
$ # create the same Secret again so that Cryostat can be configured to use these credentials for access
$ oc create secret generic s3cred \
    --from-literal=STORAGE_ACCESS_KEY_ID=cryostat \
    --from-literal=STORAGE_ACCESS_KEY=verySecretKey1
$ helm install \
    --set storage.storageSecretName=s3cred \
    --set storage.provider.url=http://seaweed.objectstorage:8333 \
    --set storage.provider.region=us-east-1 \
    --set core.route.enabled=true \
    cryostat ./charts/cryostat
```

Use `jfr-datasource`, or create a `localhost:0` custom target, or deploy another sample application; then archive the recording and ensure that all archiving functionality works as expected.

### TODO
- test using a different S3-compatible storage implementation
- the S3 client has a lot of configuration parameters. Some are required or very basic and should be implemented directly as Helm values (such as the ones already done here), but others are much more specific and may not always be needed. Solving [Request] Add ability to add additional env variables #203 would probably be the right way to go about this.
  - See [Request] Add ability to add additional env variables #203 and feat(envs): add configurations to include extra envs for containers #248

### Manual Testing with external commercial object storage provider
Step 0: sign up for a Backblaze B2 free tier account. At the time of writing this allows an account with no expiry date but a 10GB data limit, which is enough for basic functionality testing. Create an account and a new Application Key. Create buckets named `MYPREFIX-metadata`, `MYPREFIX-archivedrecordings`, `MYPREFIX-archivedreports`, `MYPREFIX-eventtemplates`, and `MYPREFIX-probes`. You can select your own bucket settings, but I would suggest private buckets storing only the latest copy of each file. I also set `archivedrecordings` to be encrypted and the others unencrypted. Substitute `MYPREFIX` with some unique identifier of your own; the bucket names must be globally unique across all of B2, since they become subdomain names.

- Note the `pushd` in the first two steps. These assume that you are in some parent directory which contains both repositories.
- Replace the `abcd1234`s in the `s3cred` creation with the key parts from your B2 account and a new Application Key created for this purpose. The name "key ID" is used the same way, and the "application key" is the "access key".
- Ensure that `storage.provider.url` matches the "endpoint" for your B2 buckets; adjust if needed. Ensure that `storage.provider.region` matches.
- Note `storage.provider.usePathStyleAccess`. B2 also supports DNS subdomain access, which should be more performant. Either should work.
- Note the `object-tagging-alt-4` image, which contains feat(s3): file metadata storage modes cryostat#927. The `storage.provider.metadata.storageMode=bucket` setting relies on this, and configures Cryostat to use a separate storage bucket and JSON files for metadata, rather than object Tags (which are not supported in B2).
- Try `helm upgrade --reuse-values --set storage.provider.metadata.storageMode=metadata cryostat ./charts/cryostat`. This uses the "native" S3 object metadata facility, which has fewer size constraints than Tags but which is immutable. For archived recording labels this means that the labels cannot be modified after the recording has been created, which is not so bad since this is not a major feature. Feel free to try it out and observe the nice error message that appears.
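Pulling the flags above together, the B2 configuration can be sketched as a values file (the endpoint URL and region below are placeholders to be replaced with your account's values; the option names follow the flags referenced in this PR):

```yaml
# Sketch of Helm values for an external B2 backend, assuming the
# storage.provider schema introduced by this PR.
storage:
  storageSecretName: s3cred
  provider:
    url: https://s3.REGION.backblazeb2.com  # placeholder; use your buckets' endpoint
    region: REGION                          # placeholder; must match the endpoint
    usePathStyleAccess: true                # B2 also supports DNS subdomain access
    metadata:
      storageMode: bucket                   # B2 does not support object Tags
```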