Conversation
OpenAI DDMCS ReLU GELU GoLU singlelayer default
…re actually skipped
…ping more predictable
beardyFace
requested changes
Dec 14, 2025
scripts/batch_coordinator.py
Outdated
Comment on lines +14 to +95
from cares_reinforcement_learning.util.configurations import (
    FunctionLayer,
    MLPConfig,
    TrainableLayer,
)

# MARK: ACTIVATION LAYERS

# GoLU
golu_a: MLPConfig = MLPConfig(
    layers=[
        TrainableLayer(layer_type="Linear", out_features=256),
        FunctionLayer(layer_type="GoLU"),
    ]
)
golu_c: MLPConfig = MLPConfig(
    layers=[
        TrainableLayer(layer_type="Linear", out_features=256),
        FunctionLayer(layer_type="GoLU"),
        TrainableLayer(layer_type="Linear", in_features=256, out_features=1),
    ]
)

# GELU
gelu_a: MLPConfig = MLPConfig(
    layers=[
        TrainableLayer(layer_type="Linear", out_features=256),
        FunctionLayer(layer_type="GELU"),
    ]
)
gelu_c: MLPConfig = MLPConfig(
    layers=[
        TrainableLayer(layer_type="Linear", out_features=256),
        FunctionLayer(layer_type="GELU"),
        TrainableLayer(layer_type="Linear", in_features=256, out_features=1),
    ]
)

# ReLU
relu_a: MLPConfig = MLPConfig(
    layers=[
        TrainableLayer(layer_type="Linear", out_features=256),
        FunctionLayer(layer_type="ReLU"),
    ]
)
relu_c: MLPConfig = MLPConfig(
    layers=[
        TrainableLayer(layer_type="Linear", out_features=256),
        FunctionLayer(layer_type="ReLU"),
        TrainableLayer(layer_type="Linear", in_features=256, out_features=1),
    ]
)

# Leaky ReLU
leaky_a: MLPConfig = MLPConfig(
    layers=[
        TrainableLayer(layer_type="Linear", out_features=256),
        FunctionLayer(layer_type="LeakyReLU"),
    ]
)
leaky_c: MLPConfig = MLPConfig(
    layers=[
        TrainableLayer(layer_type="Linear", out_features=256),
        FunctionLayer(layer_type="LeakyReLU"),
        TrainableLayer(layer_type="Linear", in_features=256, out_features=1),
    ]
)

# PReLU
prelu_a: MLPConfig = MLPConfig(
    layers=[
        TrainableLayer(layer_type="Linear", out_features=256),
        FunctionLayer(layer_type="PReLU"),
    ]
)
prelu_c: MLPConfig = MLPConfig(
    layers=[
        TrainableLayer(layer_type="Linear", out_features=256),
        FunctionLayer(layer_type="PReLU"),
        TrainableLayer(layer_type="Linear", in_features=256, out_features=1),
    ]
)
Member
Why are these activation functions here?
Contributor
Author
Ah, so that's how you configure the batches: you write them in code (I considered JSON, but I don't think it brings any benefit). I've been using this for Hoda's activation functions, which is why these are here, but thanks for the suggestion. I'll strip them down to a more generic example for this PR.
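A stripped-down generic example of "configuring batches in code" might look like the sketch below. Note this is only an illustration: the stand-in dataclasses mimic the shape of the real `cares_reinforcement_learning` config objects rather than importing them, and the `BATCH` mapping name is an assumption, not something from the actual `batch_coordinator.py`.

```python
# Hypothetical sketch: stand-in config classes shaped like the real
# TrainableLayer/FunctionLayer/MLPConfig, plus one actor/critic pair
# grouped into a batch mapping a coordinator could iterate over.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TrainableLayer:
    layer_type: str
    out_features: int
    in_features: Optional[int] = None


@dataclass
class FunctionLayer:
    layer_type: str


@dataclass
class MLPConfig:
    layers: list


# One actor ("_a") and one critic ("_c") config for a single activation.
relu_a = MLPConfig(layers=[
    TrainableLayer(layer_type="Linear", out_features=256),
    FunctionLayer(layer_type="ReLU"),
])
relu_c = MLPConfig(layers=[
    TrainableLayer(layer_type="Linear", out_features=256),
    FunctionLayer(layer_type="ReLU"),
    TrainableLayer(layer_type="Linear", in_features=256, out_features=1),
])

# A batch is then just a name -> (actor config, critic config) mapping.
BATCH = {"relu": (relu_a, relu_c)}
```

Keeping the batch as a plain Python mapping (rather than JSON) means configs can be built programmatically, e.g. in a loop over activation names.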
Also adds some dependencies needed to run with GPUs on our machines.
Contributor
Author
Done! Edited README as well.
This prevents it from running batches when resuming
Closed
You can now run a batch of trainings with different configurations by specifying `--batch`. This works with parallel executions; regular execution is unchanged if `--batch` is not specified. You can set up the configurations in `gymnasium_envrionments/scripts/batch_coordinator.py`. E.g. … will set up 4 runs.
You can edit the `_skip()` function for more fine-grained control. `--b_start` and `--b_end` allow you to run only a range.
Edited the README with this information.