Conversation
colincoleman left a comment
I did this for the original xqb terraform scripts, and it is probably an improvement for most of the cases we see. However, both methods have their benefits. Memory reservation is much better for the use case where an executable has been wrapped in a Docker container for ease of deployment and you want the service to run as well as it can.
On the other hand, if the use case is something that runs nicely in parallel (parallel build workers, etc.), then being able to specify a memory limit that is a whole fraction of a total instance gives you the best chance of maximising the use of your resources.
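As a sketch of that second use case (the values below are illustrative, not from this PR): sizing each worker's hard `memory` limit as a whole fraction of the instance lets ECS pack workers deterministically.

```json
{
  "name": "build-worker",
  "image": "example/build-worker:latest",
  "memory": 2048,
  "cpu": 512
}
```

Four such containers would fully occupy an 8 GiB instance; in practice the values need to be slightly smaller, since the ECS agent and OS reserve some memory for themselves.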
The relevant quote from the documentation:
Edit:
With the last quote in mind, I think we can skip the task-level memory.
For Java programs you can control the total memory either in the startup parameters or in the manifest file. When set, Java regulates its garbage collection to stay under the limit you choose. (Of course, if you set this too low then the program performs terribly!)
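For reference, two of the standard JVM startup flags for capping memory (illustrative values, not taken from this PR):

```shell
java -Xmx1g -jar app.jar                      # fixed 1 GiB maximum heap
java -XX:MaxRAMPercentage=75.0 -jar app.jar   # heap capped at 75% of available RAM
```

The percentage form plays well with container memory limits, since the JVM derives the cap from whatever memory the container is given.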
While moving a copy of the ECS modules to a new repository, I discovered that the container definitions used for `ecs/service` specify `memory` rather than `memoryReservation`. I believe this to have been a mistake from mixing up the documentation for a task definition and a container definition (which is part of a task definition).

Currently, `task_definition_ram` sets `memory` in the container definition, which imposes a hard limit on the amount of memory a running container can consume; if the limit is exceeded, the container is killed. I think this is not the intended behaviour for any users of this module, and this PR aims to fix that.

Having read the documentation, I believe what we want is the following:

- `memoryReservation` specified in the container definition, so that we reserve some memory but allow the container to exceed the reservation.
- `memory` and `cpu` specified in the task definition. The documentation does not give a lot of detail about what this actually does (outside Fargate), but I presume ECS uses it to work out container placement in the cluster?

By not specifying `memory` on the container definition, we run the risk of leaky applications eating up all the memory. Personally I think it's better to deal with the underlying problem (fix the memory leak) than to have a hard limit on memory which restarts the application whenever it reaches that limit. Not sure if there are any other uses for this parameter?

PS: This should be tested before it is merged, and I was hoping that perhaps @neugeeug could test it on the NEO dev/staging environment?
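To make the proposed shape concrete, here is a minimal sketch of a task definition with the memory settings arranged as described; the family, image, and values are illustrative, not taken from the module:

```json
{
  "family": "example-service",
  "memory": "1024",
  "cpu": "512",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "example/app:latest",
      "memoryReservation": 512
    }
  ]
}
```

The container reserves 512 MiB but may grow beyond it; since no container-level `memory` hard limit is set, nothing kills the container for exceeding the reservation, while the task-level `memory` still gives ECS a figure to use for placement.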