
Commit d2b9f73

Merge branch '1182-ml-classification-queue' into integrate_classification_queue
2 parents: ab9bd9d + 5ff3e26

35 files changed: +2502 −873 lines

.github/workflows/run_full_test_suite.yml

Lines changed: 7 additions & 0 deletions
@@ -4,6 +4,8 @@ on:
   pull_request:
     branches:
       - dev
+    paths-ignore:
+      - '**/*.md'

 jobs:
   run-tests:
@@ -33,5 +35,10 @@ jobs:
         DJANGO_ENV: test
       run: docker-compose -f local.yml run --rm django bash ./init.sh

+      - name: Generate Coverage Report
+        env:
+          DJANGO_ENV: test
+        run: docker-compose -f local.yml run --rm django bash -c "coverage report"
+
       - name: Cleanup
         run: docker-compose -f local.yml down --volumes

CHANGELOG.md

Lines changed: 102 additions & 5 deletions
@@ -1,9 +1,106 @@
 ## Overview
 These are not the release notes, which can be found at https://github.com/NASA-IMPACT/COSMOS/releases. Instead, this is a changelog that developers use to log key changes to the codebase with each pull request.

+## What to Include
+For each PR made, an entry should be added to this changelog. It should contain:
+- a brief description of the deliverable of the feature or bugfix
+- an exact listing of key changes, such as:
+  - API endpoints modified
+  - frontend components added
+  - model updates
+  - deployment changes needed on the servers
+  - etc.
+
 ## Changelog
-### 1182-ml-classification-queue
-#### Features
--
-#### Deployment Changes
-- a new env value has been created called `INFERENCE_API_URL`
+- 1182-ml-classification-queue
+  - Changes:
+    - a new env value has been created called `INFERENCE_API_URL`
+
+- 1052-update-cosmos-to-create-jobs-for-scrapers-and-indexers
+  - Description: The original automation, which generated the scrapers and indexers automatically on a collection workflow status change, needed to be updated to reflect the curation workflow more accurately. Generating the jobs during this process also streamlines it.
+  - Changes:
+    - Updated function nomenclature. Scrapers are Sinequa connector configurations used to scrape all the URLs prior to curation. Indexers are Sinequa connector configurations used to scrape the URLs after curation, and they are used to index content on production. Jobs trigger the connectors and are included as parts of joblists.
+    - Parameterized the convert_template_to_job method to include the job_source, streamlining the value added to the `<Collection>` tag in the job XML.
+    - Updated the fields that are pertinent to transfer from a scraper to an indexer. Also added a third level of XML processing to facilitate this.
+    - scraper_template.xml and indexer_template.xml now contain the templates used to generate the respective configurations.
+    - Deleted the redundant webcrawler_initial_crawl.xml file.
+    - Added and updated tests on workflow status triggers.
+
+- 2889-serialize-the-tdamm-tags
+  - Description: Serialize the TDAMM tags in a specific way and expose them via the Curated URLs API to be consumed into SDE Test/Prod.
+  - Changes:
+    - Changed the `get_tdamm_tag` method in the `CuratedURLAPISerializer` to process the TDAMM tags and pass them to the API endpoint
+
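To illustrate the entry above, here is a minimal, hypothetical sketch of a `SerializerMethodField`-based `get_tdamm_tag`; the attribute name and the flattening rule are assumptions, not the actual COSMOS implementation.

```python
# Hypothetical sketch only: the tag storage and flattening rule are assumptions.
from rest_framework import serializers


class CuratedURLAPISerializer(serializers.Serializer):
    tdamm_tag = serializers.SerializerMethodField()

    def get_tdamm_tag(self, obj):
        # Return the stored tags as a flat list of strings, or None when absent,
        # so SDE Test/Prod can consume them directly.
        tags = getattr(obj, "tdamm_tag", None)
        return [str(tag) for tag in tags] if tags else None
```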
+- 960-notifications-add-a-dropdown-with-options-on-the-feedback-form
+  - Description: Generate an API endpoint that publishes all the necessary dropdown options as a list for LRM to consume.
+  - Changes:
+    - Created a new model `FeedbackFormDropdown`
+    - Added the migration file
+    - Added the `dropdown_option` field to the `Feedback` model
+    - Updated the Slack notification structure by adding the dropdown option text
+    - Created a new serializer called `FeedbackFormDropdownSerializer`
+    - Added a new API endpoint `feedback-form-dropdown-options-api/` where the list is accessible
+    - Added a list view called `FeedbackFormDropdownListView`
+    - Added tests
+
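A sketch of how the pieces named in this entry could fit together; only the class and endpoint names come from the entry, while the field name and app label are assumptions.

```python
# Hypothetical sketch under assumed field names; not the actual COSMOS code.
from django.db import models
from django.urls import path
from rest_framework import serializers
from rest_framework.generics import ListAPIView


class FeedbackFormDropdown(models.Model):
    option_text = models.CharField(max_length=255)  # assumed field name

    class Meta:
        app_label = "feedback"  # assumed app label


class FeedbackFormDropdownSerializer(serializers.ModelSerializer):
    class Meta:
        model = FeedbackFormDropdown
        fields = ["id", "option_text"]


class FeedbackFormDropdownListView(ListAPIView):
    # Read-only list endpoint for LRM to consume.
    queryset = FeedbackFormDropdown.objects.all()
    serializer_class = FeedbackFormDropdownSerializer


urlpatterns = [
    path("feedback-form-dropdown-options-api/", FeedbackFormDropdownListView.as_view()),
]
```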
+- 1217-add-data-validation-to-the-feedback-form-api-to-restrict-html-content
+  - Description: The feedback form API did not have any backend data validation, which made it easy for anyone with the endpoint to send in data containing HTML tags. A backend validation scheme now guards against this.
+  - Changes:
+    - Defined a class `HTMLFreeCharField` which inherits from `serializers.CharField`
+    - Used regex to catch any HTML content coming in as input to form fields
+    - Used this class within the serializer for the necessary fields
+
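A minimal sketch of the validation approach this entry describes; the class name and base class come from the entry, while the regex, error message, and example fields are assumptions.

```python
# Hypothetical sketch: the regex and error message are assumptions.
import re

from rest_framework import serializers

HTML_TAG_RE = re.compile(r"<[^>]+>")


class HTMLFreeCharField(serializers.CharField):
    """CharField that rejects values containing HTML tags."""

    def to_internal_value(self, data):
        value = super().to_internal_value(data)
        if HTML_TAG_RE.search(value):
            raise serializers.ValidationError("HTML content is not allowed.")
        return value


class FeedbackSerializer(serializers.Serializer):
    # Placeholder fields; the validated fields in COSMOS may differ.
    name = HTMLFreeCharField(max_length=100)
    comments = HTMLFreeCharField()
```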
+- 1030-resolve-0-value-document-type-in-nasa_science
+  - Description: Around 2000 of the docs coming out of the COSMOS API for nasa_science had a document type value of 0.
+  - Changes:
+    - Added `obj.document_type != 0` as a condition in the `get_document_type` method within the `CuratedURLAPISerializer`
+
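A minimal sketch of the guard described above, assuming a DRF `SerializerMethodField`; the display helper is hypothetical.

```python
# Hypothetical sketch: only the `obj.document_type != 0` condition and the
# serializer/method names come from the entry; the display helper is assumed.
from rest_framework import serializers


class CuratedURLAPISerializer(serializers.Serializer):
    document_type = serializers.SerializerMethodField()

    def get_document_type(self, obj):
        # A document_type of 0 was leaking into the nasa_science API output;
        # treat it as unset instead of returning it.
        if obj.document_type is not None and obj.document_type != 0:
            return obj.get_document_type_display()  # assumed choices helper
        return None
```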
+- 1014-add-logs-when-importing-urls-so-we-know-how-many-were-expected-how-many-succeeded-and-how-many-failed
+  - Description: When the URLs of a given collection are imported into COSMOS, a Slack notification is sent. This notification includes the name of the imported collection, the count of existing curated URLs, the total URL count reported by the server, the URLs successfully imported from the server, the delta URLs identified, and the delta URLs marked for deletion.
+  - Changes:
+    - The get_full_texts() function in sde_collections/sinequa_api.py is updated to yield total_count along with rows.
+    - The fetch_and_replace_full_text() function in sde_collections/tasks.py captures the total_server_count and triggers send_detailed_import_notification().
+    - Added a function send_detailed_import_notification() in sde_collections/utils/slack_utils.py to structure the notification to be sent.
+    - Updated the associated tests affected by the inclusion of this functionality.
+
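A hypothetical sketch of the flow this entry describes: get_full_texts() yields the server-reported total alongside each page of rows, and the import task forwards it to the notification helper. The paging interface and response keys are stand-ins for the real Sinequa API client.

```python
# Hypothetical sketch; function names come from the entry, everything else is assumed.
def get_full_texts(fetch_page):
    """Yield (total_count, rows) for each page the server returns."""
    skip = 0
    while True:
        response = fetch_page(skip)               # assumed paging interface
        total_count = response["totalrowcount"]   # assumed response key
        rows = response.get("rows", [])
        if not rows:
            break
        yield total_count, rows
        skip += len(rows)


def fetch_and_replace_full_text(fetch_page, send_detailed_import_notification):
    imported = 0
    total_server_count = 0
    for total_count, rows in get_full_texts(fetch_page):
        total_server_count = total_count   # the server-reported total
        imported += len(rows)              # full-text replacement happens here
    send_detailed_import_notification(total_server_count, imported)
```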
+- 3228-bugfix-preserve-scroll-position--document-type-selection-behavior-on-individual-urls
+  - Description: Upon selecting a document type on any individual URL, the page refreshes and returns to the top. This is not necessarily a bug but an inconvenience, especially when working at the bottom of the page. The JS code has been fixed to preserve the scroll position.
+  - Changes:
+    - Added a constant `scrollPosition` within `postDocumentTypePatterns` to store the y-coordinate position on the page
+    - Modified the AJAX reload to navigate to this position upon posting/saving the document type changes
+
+- 3227-bugfix-title-patterns-selecting-multi-url-pattern-does-nothing
+  - Description: When selecting options from the match pattern type filter, the system does not filter the results as expected. Instead of displaying only the chosen variety of patterns, it continues to show all patterns.
+  - Changes:
+    - Corrected the column reference in the `title_patterns_table` definition
+    - Made `match_pattern_type` searchable
+    - Corrected the column references and made the code consistent across all the other tables, i.e., `exclude_patterns_table`, `include_patterns_table`, `division_patterns_table`, and `document_type_patterns_table`
+
+- 1190-add-tests-for-job-generation-pipeline
+  - Description: Tests have been added to improve coverage of the config and job creation pipeline, alongside comprehensive tests for XML processing.
+  - Changes:
+    - Added config_generation/tests/test_config_generation_pipeline.py, which tests the config and job generation pipeline, ensuring all components interact correctly
+    - Updated config_generation/tests/test_db_to_xml.py to include comprehensive tests for XML processing
+
+- 1001-tests-for-critical-functionalities
+  - Description: Critical functionalities have been identified and listed, along with the critical areas lacking tests.
+  - Changes:
+    - Integrated coverage.py as an indicative tool in the workflow for automated coverage reports on PRs, displayed separately from test results.
+    - Introduced docs/architecture-decisions/testing_strategy.md, which includes the coverage report, lists critical areas, and specifically identifies those critical areas that are untested or under-tested.
+
+- 1192-finalize-the-infrastructure-for-frontend-testing
+  - Description: Set up comprehensive frontend testing infrastructure using Selenium WebDriver with Chrome, establishing a foundation for automated UI testing.
+  - Changes:
+    - Added the Selenium testing dependency to `requirements/local.txt`
+    - Updated the Dockerfile to support Chrome and ChromeDriver
+    - Created BaseTestCase and AuthenticationMixin for reusable test components
+    - Implemented the core authentication test suite
+
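A hypothetical sketch of the reusable pieces named above, assuming the headless Chromium and ChromeDriver installed by this commit's Dockerfile change; the option flags, login URL, and form field names are illustrative.

```python
# Hypothetical sketch; only the class names come from the entry above.
from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By


class BaseTestCase(StaticLiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        options = Options()
        options.add_argument("--headless")
        options.add_argument("--no-sandbox")
        cls.driver = webdriver.Chrome(options=options)
        cls.driver.implicitly_wait(5)

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()
        super().tearDownClass()


class AuthenticationMixin:
    def login(self, username, password):
        # Drive the login form end to end; the field names are assumptions.
        self.driver.get(f"{self.live_server_url}/accounts/login/")
        self.driver.find_element(By.NAME, "login").send_keys(username)
        self.driver.find_element(By.NAME, "password").send_keys(password)
        self.driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
```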
+- 1195-implement-unit-test-for-forms-on-the-frontend
+  - Description: Implemented a comprehensive frontend test suite covering authentication, collection management, search functionality, and pattern application forms.
+  - Changes:
+    - Added tests for authentication flows
+    - Implemented collection display and data table tests
+    - Added universal search functionality tests
+    - Created search pane filter tests
+    - Added pattern application form tests with validation checks

compose/local/django/Dockerfile

Lines changed: 1 addition & 0 deletions
@@ -52,6 +52,7 @@ RUN apt-get update && apt-get install --no-install-recommends -y \
   && wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
   && apt-get update \
   && apt-get install -y postgresql-15 postgresql-client-15 \
+  && apt-get install -y chromium chromium-driver \
   # cleaning up unused files
   && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
   && rm -rf /var/lib/apt/lists/*

config_generation/db_to_xml.py

Lines changed: 43 additions & 30 deletions
@@ -148,35 +148,51 @@ def convert_template_to_scraper(self, collection) -> None:
         scraper_config = self.update_config_xml()
         return scraper_config

-    def convert_template_to_plugin_indexer(self, scraper_editor) -> None:
+    def convert_template_to_job(self, collection, job_source) -> None:
         """
-        assuming this class has been instantiated with the scraper_template.xml
+        assuming this class has been instantiated with the job_template.xml
+        """
+        self.update_or_add_element_value("Collection", f"/{job_source}/{collection.config_folder}/")
+        job_config = self.update_config_xml()
+        return job_config
+
+    def convert_template_to_indexer(self, scraper_editor) -> None:
+        """
+        assuming this class has been instantiated with the final_config_template.xml
         """

         transfer_fields = [
-            "KeepHashFragmentInUrl",
-            "CorrectDomainCookies",
-            "IgnoreSessionCookies",
-            "DownloadImages",
-            "DownloadMedia",
-            "DownloadCss",
-            "DownloadFtp",
-            "DownloadFile",
-            "IndexJs",
-            "FollowJs",
-            "CrawlFlash",
-            "NormalizeSecureSchemesWhenTestingVisited",
-            "RetryCount",
-            "RetryPause",
-            "AddBaseHref",
-            "AddMetaContentType",
-            "NormalizeUrls",
+            "Throttle",
         ]

         double_transfer_fields = [
-            ("UrlAccess", "AllowXPathCookies"),
             ("UrlAccess", "UseBrowserForWebRequests"),
-            ("UrlAccess", "UseHttpClientForWebRequests"),
+            ("UrlAccess", "BrowserForWebRequestsReadinessThreshold"),
+            ("UrlAccess", "BrowserForWebRequestsInitialDelay"),
+            ("UrlAccess", "BrowserForWebRequestsMaxTotalDelay"),
+            ("UrlAccess", "BrowserForWebRequestsMaxResourcesDelay"),
+            ("UrlAccess", "BrowserForWebRequestsLogLevel"),
+            ("UrlAccess", "BrowserForWebRequestsViewportWidth"),
+            ("UrlAccess", "BrowserForWebRequestsViewportHeight"),
+            ("UrlAccess", "BrowserForWebRequestsAdditionalJavascript"),
+            ("UrlAccess", "PostLoginUrl"),
+            ("UrlAccess", "PostLoginData"),
+            ("UrlAccess", "GetBeforePostLogin"),
+            ("UrlAccess", "PostLoginAutoRedirect"),
+            ("UrlAccess", "ReLoginCount"),
+            ("UrlAccess", "ReLoginDelay"),
+            ("UrlAccess", "DetectHtmlLoginPattern"),
+            ("IndexerClient", "RetryTimeout"),
+            ("IndexerClient", "RetrySleep"),
+        ]
+
+        triple_transfer_fields = [
+            ("UrlAccess", "BrowserLogin", "Activate"),
+            ("UrlAccess", "BrowserLogin", "RemoteDebuggingPort"),
+            ("UrlAccess", "BrowserLogin", "BrowserLogLevel"),
+            ("UrlAccess", "BrowserLogin", "ShowDevTools"),
+            ("UrlAccess", "BrowserLogin", "SuccessCondition"),
+            ("UrlAccess", "BrowserLogin", "CookieFilter"),
         ]

         for field in transfer_fields:
@@ -187,18 +203,15 @@ def convert_template_to_plugin_indexer(self, scraper_editor) -> None:
                 f"{parent}/{child}", scraper_editor.get_tag_value(f"{parent}/{child}", strict=True)
             )

+        for grandparent, parent, child in triple_transfer_fields:
+            self.update_or_add_element_value(
+                f"{grandparent}/{parent}/{child}",
+                scraper_editor.get_tag_value(f"{grandparent}/{parent}/{child}", strict=True),
+            )
+
         scraper_config = self.update_config_xml()
         return scraper_config

-    def convert_template_to_indexer(self, collection) -> None:
-        """
-        assuming this class has been instantiated with the indexer_template.xml
-        """
-        self.update_or_add_element_value("Collection", f"/SDE/{collection.config_folder}/")
-        indexer_config = self.update_config_xml()
-
-        return indexer_config
-
     def _mapping_exists(self, new_mapping: ET.Element):
         """
         Check if the mapping with given parameters already exists in the XML tree
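A usage sketch tying together the methods changed above. The XmlEditor constructor signature and template file names are assumptions based on the docstrings; in COSMOS these calls are driven by Collection workflow-status handlers, as the tests in the next file show.

```python
# Hypothetical usage sketch; not the actual COSMOS call sites.
from config_generation.db_to_xml import XmlEditor


def generate_indexer_and_job(collection, scraper_xml_path):
    # Field source: an editor over the collection's existing scraper config.
    scraper_editor = XmlEditor(scraper_xml_path)

    # Indexer: transfers Throttle plus the UrlAccess/BrowserLogin fields listed above.
    indexer_editor = XmlEditor("final_config_template.xml")
    indexer_config = indexer_editor.convert_template_to_indexer(scraper_editor)

    # Job: job_source selects the first segment of the <Collection> tag,
    # producing e.g. /SDE/<config_folder>/.
    job_editor = XmlEditor("job_template.xml")
    job_config = job_editor.convert_template_to_job(collection, job_source="SDE")

    return indexer_config, job_config
```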
config_generation/tests/test_config_generation_pipeline.py

Lines changed: 90 additions & 0 deletions
@@ -0,0 +1,90 @@
from unittest.mock import MagicMock, call, patch

from django.test import TestCase

from sde_collections.models.collection import Collection
from sde_collections.models.collection_choice_fields import WorkflowStatusChoices

"""
Workflow status change → Opens template → Applies XML transformation → Writes to GitHub.

- When the `workflow_status` changes, it triggers the relevant config creation method.
- The method reads a template and processes it using `XmlEditor`.
- `XmlEditor` modifies the template by injecting collection-specific values and transformations.
- The generated XML is passed to `_write_to_github()`, which commits it directly to GitHub.

Note: This test verifies that the correct methods are triggered and XML content is passed to GitHub.
The actual XML structure and correctness are tested separately in `test_db_to_xml.py`.
"""


class TestConfigCreation(TestCase):
    def setUp(self):
        self.collection = Collection.objects.create(
            name="Test Collection", division="1", workflow_status=WorkflowStatusChoices.RESEARCH_IN_PROGRESS
        )

    @patch("sde_collections.utils.github_helper.GitHubHandler")  # Mock GitHubHandler
    @patch("sde_collections.models.collection.Collection._write_to_github")
    @patch("sde_collections.models.collection.XmlEditor")
    def test_ready_for_engineering_triggers_config_and_job_creation(
        self, MockXmlEditor, mock_write_to_github, MockGitHubHandler
    ):
        """
        When the collection's workflow status is updated to READY_FOR_ENGINEERING,
        it should trigger the creation of scraper configuration and job files.
        """
        # Mock GitHubHandler to avoid actual API calls
        mock_github_instance = MockGitHubHandler.return_value
        mock_github_instance.create_file.return_value = None
        mock_github_instance.create_or_update_file.return_value = None

        # Set up the XmlEditor mock for both config and job
        mock_editor_instance = MockXmlEditor.return_value
        mock_editor_instance.convert_template_to_scraper.return_value = "<scraper_config>config_data</scraper_config>"
        mock_editor_instance.convert_template_to_job.return_value = "<scraper_job>job_data</scraper_job>"

        # Simulate the status change to READY_FOR_ENGINEERING
        self.collection.workflow_status = WorkflowStatusChoices.READY_FOR_ENGINEERING
        self.collection.save()

        # Verify that the XML for both config and job are generated and written to GitHub
        expected_calls = [
            call(self.collection._scraper_config_path, "<scraper_config>config_data</scraper_config>", False),
            call(self.collection._scraper_job_path, "<scraper_job>job_data</scraper_job>", False),
        ]
        mock_write_to_github.assert_has_calls(expected_calls, any_order=True)

    @patch("sde_collections.models.collection.GitHubHandler")  # Mock GitHubHandler in the correct module path
    @patch("sde_collections.models.collection.Collection._write_to_github")
    @patch("sde_collections.models.collection.XmlEditor")
    def test_ready_for_curation_triggers_indexer_config_and_job_creation(
        self, MockXmlEditor, mock_write_to_github, MockGitHubHandler
    ):
        """
        When the collection's workflow status is updated to READY_FOR_CURATION,
        it should trigger the indexer config and job creation methods.
        """
        # Mock GitHubHandler to avoid actual API calls
        mock_github_instance = MockGitHubHandler.return_value
        mock_github_instance.check_file_exists.return_value = True  # Assume the scraper exists
        mock_github_instance._get_file_contents.return_value = MagicMock()
        mock_github_instance._get_file_contents.return_value.decoded_content = (
            b"<scraper_config>Mock Data</scraper_config>"
        )

        # Set up the XmlEditor mock for both config and job
        mock_editor_instance = MockXmlEditor.return_value
        mock_editor_instance.convert_template_to_indexer.return_value = "<indexer_config>config_data</indexer_config>"
        mock_editor_instance.convert_template_to_job.return_value = "<indexer_job>job_data</indexer_job>"

        # Simulate the status change to READY_FOR_CURATION
        self.collection.workflow_status = WorkflowStatusChoices.READY_FOR_CURATION
        self.collection.save()

        # Verify that the XML for both indexer config and job are generated and written to GitHub
        expected_calls = [
            call(self.collection._indexer_config_path, "<indexer_config>config_data</indexer_config>", True),
            call(self.collection._indexer_job_path, "<indexer_job>job_data</indexer_job>", False),
        ]
        mock_write_to_github.assert_has_calls(expected_calls, any_order=True)
