Scaling IDS connector deployments #107
-
I am publishing resources from our backend to an IDS connector. This (mostly) works (pending some bugs that still have to be fixed). I am now thinking about what will happen if the connector is bombarded by so many consumers that it cannot cope with the load.

The first (naive) approach would be to deploy multiple connectors, publish our resources to each of them, and put a load balancer in front of the connectors to distribute load across the connector "farm". However, this will not work, as each connector assigns its own unique ids to the entities, so a consumer that starts talking to one connector has to keep talking to that same connector.

The next approach would be to have the connectors use an external database and point all of them to the same database, in the hope that they will then all have the same entities with the same ids. However, I do not think this is feasible unless the connector was designed and built with this use case in mind. Help?!?
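For illustration, the topology I have in mind is roughly the sketch below. This is purely a sketch: the image names, the `DATABASE_URL` variable, and the nginx config are placeholders, not actual connector configuration.

```yaml
# docker-compose.yml sketch of the envisioned "connector farm"
# (all names and variables are placeholders)
services:
  loadbalancer:
    image: nginx:alpine              # round-robins requests across both connectors
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro   # upstream config omitted here
    depends_on:
      - connector-1
      - connector-2

  connector-1:
    image: my-ids-connector:latest   # placeholder image name
    environment:
      DATABASE_URL: jdbc:postgresql://postgres:5432/connectordb

  connector-2:
    image: my-ids-connector:latest   # second identical instance
    environment:
      DATABASE_URL: jdbc:postgresql://postgres:5432/connectordb

  postgres:
    image: postgres:14
    environment:
      POSTGRES_DB: connectordb
      POSTGRES_USER: connector
      POSTGRES_PASSWORD: changeme
```

The open question is whether the connector can actually run this way, i.e. with several instances sharing one database.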
-
Hi,
the DSC was designed with the use case you described in mind.
You need to run the DSC with an external database (preferably PostgreSQL) and then run multiple instances of the DSC behind a load balancer.
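For the database part, a minimal sketch could look like the following, assuming the DSC's standard Spring Boot configuration. The keys are the usual Spring datasource properties; the host, database name, and credentials are placeholders you would replace with your own.

```yaml
# application.yml (or the equivalent environment variables) for every DSC instance.
# Minimal sketch assuming standard Spring Boot datasource properties;
# host, database name, and credentials are placeholders.
spring:
  datasource:
    url: jdbc:postgresql://postgres.internal:5432/connectordb
    username: connector
    password: changeme
    driver-class-name: org.postgresql.Driver
```

Because every instance points at the same database, they all serve the same entities with the same ids, so it no longer matters which instance the load balancer picks for a given consumer.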
Please also take a look at our Helm charts. They already have an option for enabling autoscaling.
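The autoscaling option usually follows the common Helm chart convention shown below; treat the exact keys as an assumption and check them against the chart's values.yaml.

```yaml
# values.yaml sketch: key names follow the usual Helm scaffold (autoscaling.*);
# verify them against the DSC chart before using.
replicaCount: 2                       # starting number of connector pods

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80  # scale out when average CPU exceeds 80%
```

With this enabled, Kubernetes creates a HorizontalPodAutoscaler and adds or removes connector pods as load changes, as long as all of them are configured against the same external database.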
Here are some hopefully helpful links: