diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
index f95f739dc4..d581feff06 100644
--- a/.github/pull_request_template.md
+++ b/.github/pull_request_template.md
@@ -1,27 +1,27 @@
-**Description:**
-High level description of what the PR addresses should be put here. Should be detailed enough to communicate to a PO what this PR addresses without diving into the technical nuances
+## Description:
+
-**Technical details:**
-The technical details for the knowledge of other developers. Any detailed caveats or specific deployment steps should be outlined here.
-**Requirements for PR merge:**
+
+## Technical Details:
+
+
+
+## Requirements for PR Merge:
+
 1. [ ] Unit & integration tests updated
-2. [ ] API documentation updated
-3. [ ] Necessary PR reviewers:
-    - [ ] Backend
-    - [ ] Frontend
-    - [ ] Operations
-    - [ ] Domain Expert
-4. [ ] Matview impact assessment completed
-5. [ ] Frontend impact assessment completed
-6. [ ] Data validation completed
-7. [ ] Appropriate Operations ticket(s) created
-8. [ ] Jira Ticket [DEV-123](https://federal-spending-transparency.atlassian.net/browse/DEV-123):
-    - [ ] Link to this Pull-Request
-    - [ ] Performance evaluation of affected (API | Script | Download)
-    - [ ] Before / After data comparison
-
-**Area for explaining above N/A when needed:**
-```
-```
+2. [ ] API documentation updated (examples listed below)
+    1. API Contracts
+    2. API UI
+    3. Comments
+3. [ ] Data validation completed (examples listed below)
+    1. Does this work well with the current frontend? Or is the frontend aware of a needed change?
+    2. Is performance impacted in the changes (e.g., API, pipeline, downloads, etc.)?
+    3. Is the expected data returned with the expected format?
+4. [ ] Appropriate Operations ticket(s) created
+5. [ ] Jira Ticket(s)
+    1. [DEV-0](https://federal-spending-transparency.atlassian.net/browse/DEV-0)
+
+### Explain N/A in above checklist:
diff --git a/.github/pull_request_template_future.md b/.github/pull_request_template_future.md
deleted file mode 100644
index bc25a8a5f5..0000000000
--- a/.github/pull_request_template_future.md
+++ /dev/null
@@ -1,18 +0,0 @@
-**Description:**
-High level description of what the PR addresses should be put here. Should be detailed enough to communicate to a PO what this PR addresses without diving into the technical nuances
-
-**Technical details:**
-The technical details for the knowledge of other developers. Any detailed caveats or specific deployment steps should be outlined here.
-
-**Requirements for PR merge:**
-
-1. [ ] Definition of Done - Development section appropriately satisfied
-2. [ ] Necessary PR reviewers:
-    - [ ] Backend
-    - [ ] Frontend
-    - [ ] Operations
-3. [ ] Jira Ticket(s)
-    - [DEV-0](https://federal-spending-transparency.atlassian.net/browse/DEV-0):
-
-
-Click [here](https://github.com/fedspendingtransparency/data-act-documentation/blob/master/agile_practices/story_definition_of_done.md) for Definition of Done
diff --git a/.github/workflows/pull-request-and-review-updates.yaml b/.github/workflows/pull-request-and-review-updates.yaml
new file mode 100644
index 0000000000..94465c9f6f
--- /dev/null
+++ b/.github/workflows/pull-request-and-review-updates.yaml
@@ -0,0 +1,27 @@
+name: Pull Request and Review Updates
+
+on:
+  pull_request:
+    types: [opened]
+  pull_request_review:
+    types: [submitted]
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number }}-${{ github.actor_id }}
+  cancel-in-progress: true
+
+jobs:
+  Update-Pull-Request-Assignees:
+    name: Update Pull Request Assignees
+    runs-on: ${{ vars.RUNNER_VERSION }}
+    steps:
+      - name: Update Assignee
+        uses: actions/github-script@v7
+        with:
+          script: |
+            github.rest.issues.addAssignees({
+              owner: context.repo.owner,
+              repo: context.repo.repo,
+              issue_number: context.issue.number,
+              assignees: [context.actor]
+            });
diff --git a/README.md b/README.md
index 6e084f8dc4..ad9534d54a 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # USAspending API
 
-[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/python/black) [![Pull Request Checks](https://github.com/fedspendingtransparency/usaspending-api/actions/workflows/pull-request-checks.yaml/badge.svg)](https://github.com/fedspendingtransparency/usaspending-api/actions/workflows/pull-request-checks.yaml) [![Test Coverage](https://codeclimate.com/github/fedspendingtransparency/usaspending-api/badges/coverage.svg)](https://codeclimate.com/github/fedspendingtransparency/usaspending-api/coverage) [![Code Climate](https://codeclimate.com/github/fedspendingtransparency/usaspending-api/badges/gpa.svg)](https://codeclimate.com/github/fedspendingtransparency/usaspending-api)
+[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/python/black) [![Pull Request Checks](https://github.com/fedspendingtransparency/usaspending-api/actions/workflows/pull-request-checks.yaml/badge.svg?branch=staging)](https://github.com/fedspendingtransparency/usaspending-api/actions/workflows/pull-request-checks.yaml) [![Test Coverage](https://codeclimate.com/github/fedspendingtransparency/usaspending-api/badges/coverage.svg)](https://codeclimate.com/github/fedspendingtransparency/usaspending-api/coverage) [![Code Climate](https://codeclimate.com/github/fedspendingtransparency/usaspending-api/badges/gpa.svg)](https://codeclimate.com/github/fedspendingtransparency/usaspending-api)
 
 _This API is utilized by USAspending.gov to obtain all federal spending data which is open source and provided to the public as part of the DATA Act._
diff --git a/usaspending_api/awards/management/commands/generate_unlinked_awards_download.py b/usaspending_api/awards/management/commands/generate_unlinked_awards_download.py
index 1288f3092b..ac490d67fb 100644
--- a/usaspending_api/awards/management/commands/generate_unlinked_awards_download.py
+++ b/usaspending_api/awards/management/commands/generate_unlinked_awards_download.py
@@ -144,10 +144,10 @@ def process_data_copy_jobs(self, zip_file_path):
             sql_file = None
         final_path = self._create_data_csv_dest_path(final_name)
         intermediate_data_file_path = final_path.parent / (final_path.name + "_temp")
-        data_file_names, count = self.download_to_csv(
+        download_metadata = self.download_to_csv(
            sql_file, final_path, final_name, str(intermediate_data_file_path), zip_file_path, df
        )
-        if count <= 0:
+        if download_metadata.number_of_rows <= 0:
            logger.warning(f"Empty data file generated: {final_path}!")
            self.filepaths_to_delete.extend(self.working_dir_path.glob(f"{final_path.stem}*"))
diff --git a/usaspending_api/broker/management/commands/update_table_value_from_broker.py b/usaspending_api/broker/management/commands/update_table_value_from_broker.py
index 92ab6e9ec8..b8dc25229e 100644
--- a/usaspending_api/broker/management/commands/update_table_value_from_broker.py
+++ b/usaspending_api/broker/management/commands/update_table_value_from_broker.py
@@ -30,7 +30,7 @@ def add_arguments(self, parser):
         parser.add_argument(
             "--load-field-type",
             type=str,
-            required=True,
+            required=False,
             default="text",
             help="Postgres data type of the field that will be copied from Broker",
         )
@@ -122,12 +122,16 @@ def run_update(self, min_id: int, max_id: int) -> None:
                    'broker_server','(
                        SELECT {self.broker_match_field}, {self.broker_load_field}
                        FROM {self.broker_table_name}
+                        WHERE
+                            {self.broker_match_field} >= {chunk_min_id}
+                            AND {self.broker_match_field} <= {chunk_max_id}
                    )') AS broker_table
                    (
                        lookup_id bigint,
                        load_field {self.load_field_type}
                    )
-                WHERE usas_table.{self.usas_match_field} = broker_table.lookup_id
+                WHERE
+                    usas_table.{self.usas_match_field} = broker_table.lookup_id
                ;
                """
            )
@@ -135,11 +139,11 @@ def run_update(self, min_id: int, max_id: int) -> None:
            row_count = cursor.rowcount
            total_row_count += row_count
            ratio = (chunk_max_id - min_id + 1) / estimated_id_count
-            logging.info(
+            logger.info(
                f'Updated {row_count:,d} rows with "{self.usas_match_field}" between {chunk_min_id:,d} and {chunk_max_id:,d}.'
                f" Estimated time remaining: {timer.estimated_remaining_runtime(ratio)}"
            )
-        logging.info(
+        logger.info(
            f'Finished updating {total_row_count:,d} rows for "{self.usas_table_name}"."{self.usas_load_field}" '
            f"in {timer}"
        )
diff --git a/usaspending_api/disaster/management/commands/generate_covid19_download.py b/usaspending_api/disaster/management/commands/generate_covid19_download.py
index a6dc01542d..13b793f6e3 100644
--- a/usaspending_api/disaster/management/commands/generate_covid19_download.py
+++ b/usaspending_api/disaster/management/commands/generate_covid19_download.py
@@ -143,10 +143,11 @@ def process_data_copy_jobs(self):
        logger.info(f"Creating new COVID-19 download zip file: {self.zip_file_path}")
        self.filepaths_to_delete.append(self.zip_file_path)

-        for sql_file, final_name in self.download_file_list:
+        for source_sql, final_name in self.download_file_list:
            final_path = self._create_data_csv_dest_path(final_name)
            intermediate_data_file_path = final_path.parent / (final_path.name + "_temp")
-            source_sql = read_sql_file_to_text(Path(sql_file))
+            if self.compute_type_arg == ComputeTypeEnum.POSTGRES.value:
+                source_sql = read_sql_file_to_text(Path(source_sql))
            download_metadata = self.download_to_csv(
                source_sql, final_path, final_name, str(intermediate_data_file_path)
            )
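
---

Reviewer note: the subtlest change above is in `update_table_value_from_broker.py`, where the chunk's id bounds are pushed into the remote Broker subquery so each `dblink`-style round trip scans only its own id slice instead of the whole remote table. A minimal sketch of that chunking pattern, with hypothetical helper names (`chunk_ranges`, `build_remote_subquery`) that are not part of the repo's code:

```python
# Illustration of the chunked remote-update pattern added in this PR.
# Helper names here are invented for the sketch, not taken from the repo.

def chunk_ranges(min_id: int, max_id: int, chunk_size: int):
    """Yield inclusive (lo, hi) id ranges covering [min_id, max_id]."""
    lo = min_id
    while lo <= max_id:
        hi = min(lo + chunk_size - 1, max_id)
        yield lo, hi
        lo = hi + 1


def build_remote_subquery(lo: int, hi: int, match_field: str, load_field: str, table: str) -> str:
    # Mirrors the added WHERE clause: bound the remote scan to the current
    # chunk rather than pulling the entire Broker table on every iteration.
    return (
        f"SELECT {match_field}, {load_field} FROM {table} "
        f"WHERE {match_field} >= {lo} AND {match_field} <= {hi}"
    )
```

Each `(lo, hi)` pair would drive one `UPDATE ... FROM dblink(...)` statement, which is why the per-chunk log line reports the `chunk_min_id`/`chunk_max_id` bounds it just processed.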