Importing sites numIds #1186
base: master
Conversation
* Few fixings
* Fix sequencer out order
* Exit prematurely
* Cached change-of-value
* Fix JLS order
* Remove fastWrite
* Increase timeout
fi

# Step A: Normalize metadata.json files in the devices directory.
echo "Normalizing metadata for $site_model..."
Can we have a separate script for normalizing metadata? Normalizing might introduce a lot of changes, and the diff could become difficult to review. Doing the normalization as a separate step will increase confidence in the review when the actual num ids are added.
The normalization stuff is already captured in a separate commit, so at least the churn will be compartmentalized. I'm not sure what the overall intended flow here is, because I'm not sure what actual "review" we'd do on ~100 site models (I'm actually not even sure what they are).
Do we have a clearly identified "recipient" for this (is it Vik?), and maybe get them to sign off on what the review flow needs to be?
From a technical perspective I think having them split as separate commits is fine, because then they can be reviewed separately.
OTOH, another reason to split out the script is that there might be other times when we want to normalize a site model. I just hate the thought of replicating all the common code (extracting site models)... and wouldn't want this "simple thing" to balloon into a bigger mess!
Yes, Vik is the recipient for this; I'll check with him and get sign-off on the review flow.
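One way to avoid replicating the common code would be to move the clone/refresh logic into a small sourced library that both bin/import_site_num_ids and a standalone normalize script could share. This is only a sketch; the helper path, function name, and remote URL below are hypothetical, not taken from this PR:

```bash
#!/usr/bin/env bash
# bin/lib/site_common.sh -- hypothetical shared helper; the file name, function
# name, and clone URL are assumptions, not from the actual script.

# Clone (or refresh) a site model repo so the import script and a separate
# normalize script can reuse the same extraction logic.
fetch_site_model() {
  local site_model="$1"
  local site_path="$2"
  if [ -d "${site_path}/.git" ]; then
    git -C "${site_path}" pull --ff-only
  else
    rm -rf "${site_path}"   # clear any partial leftovers from a failed run
    git clone "git@example.com:${site_model}.git" "${site_path}"  # placeholder remote
  fi
}
```

A separate normalize script could then just `source` this helper, call `fetch_site_model`, and run only the metadata normalization step, while the num-ids import script sources the same helper before making its changes.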
)
else
echo "Repository not found. Cloning a new one."
rm -rf "$SITE_PATH"
if it's not found then there's no reason to remove it!
You're right that the rm is redundant if the directory doesn't exist at all.
But I've kept it as a safeguard for the case where the path exists but isn't a git repo (e.g., from a previous failed run). This ensures git clone always has a clean target.
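A minimal sketch of that safeguard, assuming the check is on the .git directory and that $SITE_PATH and $REPO_URL are set elsewhere in the script (the condition is an assumption; the hunk above only shows the else branch):

```bash
if [ -d "${SITE_PATH}/.git" ]; then
  # Existing clone: just refresh it in place.
  git -C "${SITE_PATH}" pull --ff-only
else
  echo "Repository not found. Cloning a new one."
  # The path may exist from a previous failed run without being a git repo,
  # so remove it to guarantee git clone gets a clean target.
  rm -rf "${SITE_PATH}"
  git clone "${REPO_URL}" "${SITE_PATH}"
fi
```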
bin/import_site_num_ids (Outdated)
# Create a new branch for this site model's changes
BRANCH_NAME="feat/import-numids-${site_model}-$(date +%s)"
(
cd "$SITE_PATH" || exit
Rather than doing all of this in a subshell, I think it would be a lot cleaner to just switch the main PWD to $SITE_PATH and be done with it. All the DEVICES_PATH stuff should be easy enough to do from that working directory...?
Just something to ponder for future scripts. Since this script is likely something we'd run only once or very infrequently, I'm not sure it's worth fixing.
Thanks for the suggestion! That's a good point on the alternative style. For this script, I intentionally used a subshell to keep the directory changes isolated and ensure the main loop's state is predictable, especially since it performs operations like git clean -fdx.
I agree it's not worth changing for this particular script, but I'll definitely keep that cleaner style in mind for future work.
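To illustrate the two styles being compared (a sketch only; the loop context and variable values are assumed from the hunk above):

```bash
# Style used in the PR: a subshell scopes the cd, so each loop iteration
# automatically starts back in the original working directory.
(
  cd "$SITE_PATH" || exit 1
  git checkout -b "feat/import-numids-${site_model}-$(date +%s)"
  git clean -fdx
)

# Alternative suggested in review: switch the main PWD once and work from there;
# simpler, but later steps must remember they are now inside $SITE_PATH.
cd "$SITE_PATH" || exit 1
git checkout -b "feat/import-numids-${site_model}-$(date +%s)"
git clean -fdx
```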