Replies: 1 comment
-
Come to think of it, Payload has a powerful REST API, and both environments have it running. I could maybe build my migration script in such a way that it takes data from one environment through that API and pushes it to the other. On top of that, I can be sure that nothing in the database will get messed up (e.g. relationships, versions), because I'm not manually fiddling with the database; I'm going through Payload itself, and it'll update documents accordingly. As an added bonus, I won't have to rely on SSH tunneling to connect to Amazon DocumentDB from outside the VPC or do any other shenanigans that involve exposing the database, so it's going to be more secure too. The only downside I see is that everything will happen over HTTP and will be slower, but I guess performance doesn't matter that much in this situation.
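To make that concrete, here's a minimal sketch of what such a one-way sync over the REST API could look like. It assumes Node 18+ (built-in fetch); the URLs, auth header values, and collection slugs are placeholders for illustration, and whether production keeps the staging document id on create depends on your Payload version and database adapter, so treat it as a starting point rather than a finished script.

```ts
// Sketch: copy documents from staging to production via Payload's REST API.
// STAGING_URL, PROD_URL, STAGING_AUTH, PROD_AUTH and COLLECTIONS are
// hypothetical placeholders -- adjust to your environments and auth strategy.

const STAGING_URL = process.env.STAGING_URL ?? "https://staging.example.com";
const PROD_URL = process.env.PROD_URL ?? "https://www.example.com";
const STAGING_AUTH = process.env.STAGING_AUTH ?? "";
const PROD_AUTH = process.env.PROD_AUTH ?? "";

// Collections to copy, in dependency order so relationship targets exist first.
const COLLECTIONS = ["media", "pages", "posts"];

interface PaginatedDocs {
  docs: Array<{ id: string; [key: string]: unknown }>;
  totalPages: number;
  page: number;
}

// Read one page of documents from staging (depth=0 keeps relationships as ids).
async function fetchPage(slug: string, page: number): Promise<PaginatedDocs> {
  const res = await fetch(
    `${STAGING_URL}/api/${slug}?limit=100&page=${page}&depth=0`,
    { headers: { Authorization: STAGING_AUTH } },
  );
  if (!res.ok) throw new Error(`GET ${slug} page ${page} failed: ${res.status}`);
  return (await res.json()) as PaginatedDocs;
}

// Try to update an existing document in production; fall back to creating it.
// Note: whether a client-supplied id is honored on create is adapter-dependent.
async function upsert(slug: string, doc: { id: string; [key: string]: unknown }) {
  const patch = await fetch(`${PROD_URL}/api/${slug}/${doc.id}`, {
    method: "PATCH",
    headers: { Authorization: PROD_AUTH, "Content-Type": "application/json" },
    body: JSON.stringify(doc),
  });
  if (patch.status === 404) {
    const post = await fetch(`${PROD_URL}/api/${slug}`, {
      method: "POST",
      headers: { Authorization: PROD_AUTH, "Content-Type": "application/json" },
      body: JSON.stringify(doc),
    });
    if (!post.ok) throw new Error(`POST ${slug}/${doc.id} failed: ${post.status}`);
  } else if (!patch.ok) {
    throw new Error(`PATCH ${slug}/${doc.id} failed: ${patch.status}`);
  }
}

// Walk every page of a collection and upsert each document into production.
async function syncCollection(slug: string) {
  let page = 1;
  let totalPages = 1;
  do {
    const result = await fetchPage(slug, page);
    totalPages = result.totalPages;
    for (const doc of result.docs) {
      await upsert(slug, doc);
    }
    page += 1;
  } while (page <= totalPages);
}

async function main() {
  for (const slug of COLLECTIONS) {
    await syncCollection(slug);
    console.log(`Synced collection: ${slug}`);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

One thing this doesn't cover is the uploads themselves: syncing a `media` document only copies the record that points at the file, so the underlying objects in the S3 bucket would still need to be copied (or shared) separately.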
-
We have a staging and a production environment for Payload CMS that runs on Amazon DocumentDB and stores files in an S3 bucket.
Up to this point, we've been using a file-based CMS with content tracked in Git. Whenever someone made changes in the staging environment, we would later push them live with a simple `git merge`. However, this no longer appears to be possible, since databases and buckets can't simply be "merged."

Now I'm wondering what the most reasonable workflow would be. Should I try to get close to what we've done in the past and create scripts that sync changes between the environments? Or is the risk of overwriting data too high, and should we just accept that we have two separate environments and sync content by hand when we have to?