Replies: 2 comments
-
Hey @mwengren, the contents for each stage are written out to a temporary file (see `nebari/src/_nebari/provider/opentofu.py`, lines 55 to 59 at d680ca8). If you need this to persist for debugging, you can replace it with `tempfile.NamedTemporaryFile` and add `delete=False` to the arguments; this should prevent the file from being removed (keep in mind that this should only be done for debugging purposes).
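As a hedged sketch of that debugging tweak (the function name and surrounding structure here are illustrative, not Nebari's actual code in `opentofu.py`):

```python
import json
import tempfile

def write_tfvars_for_debug(variables: dict) -> str:
    """Write stage variables to a .tfvars.json file that survives cleanup.

    Hypothetical stand-in for the temp-file write in
    src/_nebari/provider/opentofu.py: delete=False keeps the file on
    disk after the handle is closed so it can be inspected.
    """
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".tfvars.json", delete=False
    ) as f:
        json.dump(variables, f, indent=2)
        return f.name
```

With `delete=False` the file's path (returned above) can be opened and inspected after the deploy step finishes; remember to remove it manually afterwards.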
Now, regarding your question per se: those files should only contain the variable inputs that exist within each stage, as part of its input schema. Keep in mind that outputs from previous stages are sometimes also passed on as inputs to later stages, and depending on what you are changing, those take precedence (see `nebari/src/_nebari/stages/base.py`, lines 281 to 300 at d680ca8).

Usually, Nebari will only consider data from the stage's schema (see `nebari/src/_nebari/stages/infrastructure/__init__.py`, lines 237 to 255 at d680ca8), while the data forwarded to Terraform is consumed here: `nebari/src/_nebari/stages/infrastructure/__init__.py`, lines 937 to 973 at d680ca8.

This means that if the associated template receives a value it was not expecting, and the value is not caught adequately by the validators, it will be forwarded as a variable to Terraform, which can lead to issues. If the variable/field/value structure is not correct (sometimes a stray open bracket, or bad indentation), Terraform may improperly assume variables that don't exist.

If this helps, I suggest you modify or expand the schema code of each stage whose values you want to override. Additionally, and this is important: redeploying multiple times with different versions of the schemas and resources can leave you with a scrambled "stages" folder, in the sense that it may refer to variables, keys, and resources that no longer exist in the main code. However, locally they are still mapped -- just check your local "stages" directory.
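The suggestion to extend a stage's schema so bad values are caught before they reach Terraform can be sketched with a minimal, hypothetical model (a stdlib `dataclass` stand-in for Nebari's Pydantic classes; the class name and validator are illustrative only, though the field mirrors the AWS provider's CIDR setting):

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class MyInfrastructureInputs:
    # Illustrative stand-in for a Nebari stage schema; the real ones
    # are Pydantic models in each stage's __init__.py.
    vpc_cidr_block: str = "10.10.0.0/16"

    def __post_init__(self):
        # Reject malformed CIDRs here, before the value would be
        # serialized into a .tfvars.json file and handed to Terraform.
        ipaddress.ip_network(self.vpc_cidr_block)
```

The point of validating at the schema layer is that a typo fails fast with a Python error, instead of producing a tfvars file that Terraform misinterprets.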
-
@viniciusdc Thanks, this is helpful! I'd previously found the `.tfvars.json` files, and I think I understand your explanation of how things work with the stage schemas.

It is a bit confusing to develop against (or, for a general user, to understand unexpected behavior from), because Nebari includes default values for some variables and not others, and the defaults that do exist seem to live in the Nebari Pydantic classes, such as AmazonWebServicesProvider, rather than in the Terraform code. And I'm not sure those are documented anywhere in the user docs, which would be a nice addition. IMO there should be a page in the docs that lists all Nebari TF/config variables and their default values, if any.

In my case I was running into the following issue. Previously, I'd been testing deploying Nebari with a SG I'd manually created (comment) and passing its ID in my config. I modified the Nebari TF code to have it obtain the VPC subnet CIDRs rather than reference the default CIDR. Then I found: https://github.com/nebari-dev/nebari/blob/main/src/_nebari/stages/infrastructure/__init__.py#L573. I found a better way that gets around the issue in the end, but at first, in the commented-out line, I was confused why 10.10.0.0/16 was being used.

Why are the variable defaults set in Pydantic vs Terraform? Thx!
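One way to picture why a Pydantic-side default wins over a Terraform-side one (a hedged sketch of assumed behavior; `AwsProviderSketch` and `render_tfvars` are invented names, not Nebari code): if the Python schema always carries a value, the generated `.tfvars.json` always sets the variable explicitly, so a `default` on the Terraform variable would never be consulted:

```python
import json
from dataclasses import dataclass

@dataclass
class AwsProviderSketch:
    # Illustrative stand-in for AmazonWebServicesProvider: the CIDR
    # default lives in Python, not in the Terraform variable block.
    vpc_cidr_block: str = "10.10.0.0/16"

def render_tfvars(provider: AwsProviderSketch) -> str:
    # Because the schema field always has a value, the rendered
    # tfvars sets vpc_cidr_block explicitly every time, even when
    # the user's config file omits it.
    return json.dumps({"vpc_cidr_block": provider.vpc_cidr_block})
```

Under this assumption, a Terraform `default` for the same variable is effectively dead code, which would explain why the defaults live only in the Pydantic classes.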
-
I've been editing the Nebari TF code somewhat to try to adapt Nebari to work within my existing VPC and subnets on AWS. Xref to discussions I've created about that:
Part of my testing was to modify the TF variables Nebari accepts. At different times when deploying Nebari from an AWS instance where I have the `nebari` command/conda environment configured, I get the following warning message in my TF output. It looks like Nebari has created a `.tfvars.json` file somewhere, and Nebari is still reading variables from this file, rather than my `nebari-config.yaml` file that I pass to `nebari deploy -c`.

The following variables aren't an issue (because they've both been removed from my working version of the Nebari TF code), but I'm fairly certain it's getting values for other variables from the same temp file, rather than `nebari-config.yaml`, which is causing issues for my deployment, because I want those values disregarded (and I've commented them out in `nebari-config.yaml`).

The strange part is I can't find this file on the system where my `nebari` command and conda environment are set up. Any advice? How do I flush this file from my environment?

Is this the relevant part of the code: https://github.com/nebari-dev/nebari/blob/main/src/_nebari/provider/opentofu.py#L55-L57?
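If you want to hunt for a leftover file, here is a speculative starting point (this assumes Nebari's temp file lands in the default temp directory via Python's `tempfile` module; since such files are normally deleted after use, the search may well come up empty):

```shell
# Look for generated tfvars files under the temp directory.
# $TMPDIR is where Python's tempfile writes by default (often /tmp);
# -mmin -120 limits the search to files modified in the last 2 hours.
search_dir="${TMPDIR:-/tmp}"
find "$search_dir" -maxdepth 2 -name '*.tfvars.json' -mmin -120 2>/dev/null
```

If nothing turns up, that is consistent with the file being created and deleted within a single `nebari deploy` run, in which case there is nothing on disk to flush between runs.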