- Description: The original automation that generates the scrapers and indexers when a collection's workflow status changes needed to be updated to more accurately reflect the curation workflow. It would also be useful to generate the jobs during this process to streamline it.
- Changes:
- Updated function nomenclature. Scrapers are Sinequa connector configurations used to scrape all of a collection's URLs prior to curation. Indexers are Sinequa connector configurations used to scrape the URLs after curation and to index content on production. Jobs are used to trigger the connectors and are included as parts of joblists.
- Parameterized the `convert_template_to_job` method with a `job_source` argument, which determines the value written to the `<Collection>` tag in the job XML.
- Updated the fields that are pertinent to transfer from a scraper to an indexer. Also added a third level of XML processing to facilitate this.
- scraper_template.xml and indexer_template.xml now contain the templates used to generate the respective configurations.
- Deleted the redundant webcrawler_initial_crawl.xml file.
- Added and updated tests on workflow status triggers.
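The `<Collection>` substitution described above can be sketched with the standard library. The template contents and the exact `convert_template_to_job` signature below are assumptions for illustration, not the project's actual code:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal job template; the real templates live in the repo's
# XML template files and carry many more fields.
JOB_TEMPLATE = """<JobItem>
  <ConnectionType>collection</ConnectionType>
  <Collection></Collection>
</JobItem>"""

def convert_template_to_job(job_source: str, collection_name: str) -> str:
    """Fill the <Collection> tag so the job points at the connector
    configuration under the given source folder (e.g. scrapers vs. indexers)."""
    root = ET.fromstring(JOB_TEMPLATE)
    root.find("Collection").text = f"/{job_source}/{collection_name}/"
    return ET.tostring(root, encoding="unicode")

print(convert_template_to_job("scrapers", "example_collection"))
```

Parameterizing `job_source` this way lets one template serve both scraper and indexer jobs.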
- 2889-serialize-the-tdamm-tags
- Description: Serialize TDAMM tags in a specific format and expose them via the Curated URLs API to be consumed into SDE Test/Prod.
- Changes:
- Changed `get_tdamm_tag` method in the `CuratedURLAPISerializer` to process the TDAMM tags and pass them to the API endpoint
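The serializer method above might follow a pattern like this plain-Python sketch. The tag codes, labels, and return shape are invented for illustration, and the DRF plumbing around `SerializerMethodField` is omitted:

```python
# Hypothetical TDAMM code-to-label mapping; the real vocabulary and the
# exact serialized shape live in the COSMOS models.
TDAMM_LABELS = {
    "MMA_M_EM": "Messenger - Electromagnetic",
    "MMA_M_GW": "Messenger - Gravitational Waves",
}

def get_tdamm_tag(stored_tags):
    """Mimics a SerializerMethodField: map stored codes to readable labels,
    drop unknown codes, and return None when nothing remains so downstream
    consumers in SDE Test/Prod can skip the field cleanly."""
    labels = [TDAMM_LABELS[t] for t in (stored_tags or []) if t in TDAMM_LABELS]
    return labels or None

print(get_tdamm_tag(["MMA_M_EM", "unknown_code"]))
```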
- Description: The feedback form API currently has no backend data validation, so anyone with access to the endpoint can send in data containing HTML tags. We need a validation scheme on the backend to prevent this.
- Changes:
- Defined a class `HTMLFreeCharField` which inherits from `serializers.CharField`
- Used a regex to catch any HTML content coming in as input to form fields
- Called this class within the serializer for necessary fields
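A minimal sketch of the field described above, with stand-ins for DRF's `CharField` and `ValidationError` so it runs without `rest_framework` installed; the regex and error message are illustrative, not the project's actual implementation:

```python
import re

# Stand-ins for rest_framework's ValidationError and serializers.CharField.
class ValidationError(Exception):
    pass

class CharField:
    def to_internal_value(self, data):
        return str(data)

HTML_TAG_RE = re.compile(r"<[^>]+>")  # anything that looks like a markup tag

class HTMLFreeCharField(CharField):
    """CharField variant that rejects input containing HTML tags."""

    def to_internal_value(self, data):
        value = super().to_internal_value(data)
        if HTML_TAG_RE.search(value):
            raise ValidationError("HTML tags are not allowed in this field.")
        return value

field = HTMLFreeCharField()
print(field.to_internal_value("plain feedback text"))  # passes through unchanged
```

In a real DRF serializer the class would override the same `to_internal_value` hook, raise `serializers.ValidationError`, and be declared on the feedback serializer in place of a plain `CharField`.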
- Description: When URLs of a given collection are imported into COSMOS, a Slack notification is sent. This notification includes the name of the imported collection, the count of existing curated URLs, the total URL count reported by the server, the URLs successfully imported from the server, the delta URLs identified, and the delta URLs marked for deletion.
- Changes:
- The get_full_texts() function in sde_collections/sinequa_api.py is updated to yield total_count along with rows.
- fetch_and_replace_full_text() function in sde_collections/tasks.py captures the total_server_count and triggers send_detailed_import_notification().
- Added a function send_detailed_import_notification() in sde_collections/utils/slack_utils.py to structure the notification to be sent.
- Updated the associated tests affected by the inclusion of this functionality.
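The yield change described above pairs each batch of rows with the server's total count, so the caller need not issue a separate count query. A runnable sketch of the pattern, with the page shape and function bodies invented for illustration:

```python
def get_full_texts(pages):
    """Yield (total_count, rows) per page. `pages` stands in for the
    paginated Sinequa API responses the real function iterates over."""
    for page in pages:
        yield page["totalCount"], page["rows"]

def fetch_and_replace_full_text(pages):
    """Capture total_server_count while importing, then hand the numbers
    to the notification helper (call shown commented out here)."""
    total_server_count = 0
    imported = 0
    for total_count, rows in get_full_texts(pages):
        total_server_count = total_count  # identical on every page
        imported += len(rows)
    # send_detailed_import_notification(collection, total_server_count, imported)
    return total_server_count, imported

print(fetch_and_replace_full_text(
    [{"totalCount": 5, "rows": ["a", "b", "c"]},
     {"totalCount": 5, "rows": ["d", "e"]}]
))
```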
- Description: Upon selecting a document type on any individual URL, the page refreshes and returns to the top. This is not necessarily a bug but an inconvenience, especially when working at the bottom of the page. Fix the JS code.
- Changes:
- Added a constant `scrollPosition` within `postDocumentTypePatterns` to store the y-coordinate position on the page
- Modified the ajax reload to navigate to this position upon posting/saving the document type changes.
- Description: When selecting options from the match pattern type filter, the system does not filter the results as expected. Instead of displaying only the chosen variety of patterns, it continues to show all patterns.
- Changes:
- In `title_patterns_table` definition, corrected the column reference
- Made `match_pattern_type` searchable
- Corrected the column references and made code consistent on all the other tables, i.e., `exclude_patterns_table`, `include_patterns_table`, `division_patterns_table` and `document_type_patterns_table`
- 1190-add-tests-for-job-generation-pipeline
- Description: Tests have been added to enhance coverage for the config and job creation pipeline, alongside comprehensive tests for XML processing.
- Changes:
- Added config_generation/tests/test_config_generation_pipeline.py which tests the config and job generation pipeline, ensuring all components interact correctly
- config_generation/tests/test_db_to_xml.py is updated to include comprehensive tests for XML Processing
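The kind of XML-processing test added here can be illustrated with the standard library. The helper name, its behavior, and the tag names below are hypothetical stand-ins, not the actual config_generation API:

```python
import unittest
import xml.etree.ElementTree as ET

def update_or_add_element_value(xml_string, tag, value):
    """Toy helper in the spirit of the config generator's XML editing:
    set <tag>'s text, creating the element if it is missing."""
    root = ET.fromstring(xml_string)
    node = root.find(tag)
    if node is None:
        node = ET.SubElement(root, tag)
    node.text = value
    return ET.tostring(root, encoding="unicode")

class TestXmlProcessing(unittest.TestCase):
    def test_updates_existing_element(self):
        out = update_or_add_element_value("<Sinequa><Url>old</Url></Sinequa>", "Url", "new")
        self.assertIn("<Url>new</Url>", out)

    def test_adds_missing_element(self):
        out = update_or_add_element_value("<Sinequa></Sinequa>", "Url", "new")
        self.assertIn("<Url>new</Url>", out)
```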
- 1001-tests-for-critical-functionalities
- Description: Critical functionalities have been identified and listed, and the critical areas lacking tests have been flagged
- Changes:
- Integrated coverage.py as an indicative tool in the workflow for automated coverage reports on PRs, with separate display from test results.
- Introduced docs/architecture-decisions/testing_strategy.md, which includes the coverage report, lists critical areas, and specifically identifies those critical areas that are untested or under-tested.
- Description: Set up comprehensive frontend testing infrastructure using Selenium WebDriver with Chrome, establishing a foundation for automated UI testing.
- Changes:
- Added Selenium testing dependency to `requirements/local.txt`
- Updated Dockerfile to support Chrome and ChromeDriver
- Created BaseTestCase and AuthenticationMixin for reusable test components
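A structural sketch of the reusable components named above. A stub driver stands in for `selenium.webdriver.Chrome` so the pattern runs without a browser; everything except the class names `BaseTestCase` and `AuthenticationMixin` (the URL, methods, and stub) is invented for illustration:

```python
class StubDriver:
    """Stand-in for a Selenium WebDriver that records visited URLs."""
    def __init__(self):
        self.visited = []

    def get(self, url):
        self.visited.append(url)

    def quit(self):
        self.visited.clear()

class BaseTestCase:
    """Owns the driver lifecycle so individual tests stay short."""
    live_server_url = "http://localhost:8000"  # hypothetical

    def setUp(self):
        self.driver = StubDriver()  # real version: webdriver.Chrome(options=...)

    def tearDown(self):
        self.driver.quit()

class AuthenticationMixin:
    """Reusable login step that any test case can layer in."""
    def login(self, username, password):
        self.driver.get(f"{self.live_server_url}/accounts/login/")
        # real version fills the form via self.driver.find_element(...)

class ExampleTest(AuthenticationMixin, BaseTestCase):
    pass

t = ExampleTest()
t.setUp()
t.login("curator", "secret")
print(t.driver.visited)
```

Splitting setup into a base case plus mixins keeps browser and authentication concerns composable rather than baked into every test.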