
Conversation

@haiqi96 haiqi96 commented Jun 20, 2025

Description

This PR refactors the following:

  1. Added a shared method for getting the names of the tables in the MySQL metadata database.
  2. Updated the `s3_put` method to support specifying a relative path as the upload destination.

The PR also fixes a bug where the old webui fails to load before the first CLP-S compression job:

  • The current design does not create the tables for the default dataset until a compression job with dataset = default is submitted.
  • The old webui queries log statistics from the tables for the default dataset, but those tables do not exist when the package starts.

As a result, the old webui shows an empty page before the first compression job is launched. This is fixed by creating the set of tables with dataset = default when the package starts, here.
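The fix above can be sketched roughly as follows. This is a minimal, hypothetical illustration: the table names and function are placeholders, and sqlite3 stands in for the real MySQL metadata database.

```python
# Hypothetical sketch: create the default dataset's tables at package start-up,
# so the old webui has tables to query before the first compression job.
import sqlite3

DEFAULT_DATASET = "default"

def create_dataset_tables(conn: sqlite3.Connection, table_prefix: str, dataset: str) -> None:
    # One dataset-scoped archives table; the real schema has several more tables.
    archives_table = f"{table_prefix}{dataset}_archives"
    conn.execute(
        f'CREATE TABLE IF NOT EXISTS "{archives_table}" (id TEXT PRIMARY KEY, size INTEGER)'
    )

# Run once when the package starts:
conn = sqlite3.connect(":memory:")
create_dataset_tables(conn, "clp_", DEFAULT_DATASET)
```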

Checklist

  • The PR satisfies the contribution guidelines.
  • This is a breaking change and that has been indicated in the PR title, OR this isn't a
    breaking change.
  • Necessary docs have been updated, OR no docs need to be updated.

Validation performed

Manually tested:

  1. CLP & CLP-S when using local filesystem as archive storage.
  2. CLP & CLP-S when using S3 as archive storage.

Summary by CodeRabbit

  • Refactor

    • Centralized and standardized database table name construction using dynamic helper functions instead of hardcoded suffixes.
    • Updated dataset handling across multiple modules for improved consistency and modularity, including removal of storage engine parameters and related conditional logic.
    • Simplified and clarified S3 object key construction and dataset path handling without mutating configuration objects.
    • Modified function and class signatures to accept optional dataset parameters where applicable.
    • Encapsulated dataset addition and caching logic for improved dataset management.
    • Introduced conditional dataset assignment in search and query job configurations based on storage engine settings.
    • Added explicit storage engine configuration entry and related constants for unified dataset handling in the web UI.
  • Style

    • Improved type annotations for dataset attributes, allowing them to be optional.
  • Chores

    • Removed unused table suffix constants from configuration files to reduce redundancy.

coderabbitai bot commented Jun 20, 2025

## Walkthrough

This change removes hardcoded table suffix constants throughout the codebase and replaces them with dynamic table name retrieval functions that accept an optional dataset parameter. It eliminates storage engine-based conditional logic for table naming and path construction, centralizes table name generation, and updates function signatures and usages to propagate dataset information where needed.
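As a hedged sketch of what such dynamic table-name getters might look like (the function names follow the walkthrough's description, but the exact naming scheme is an assumption, not the project's actual code):

```python
from typing import Optional

def _get_table_name(table_prefix: str, suffix: str, dataset: Optional[str]) -> str:
    # With a dataset, embed it between the prefix and suffix; without one,
    # fall back to the flat, pre-dataset naming. The joining scheme here is
    # illustrative only.
    if dataset is not None:
        return f"{table_prefix}{dataset}_{suffix}"
    return f"{table_prefix}{suffix}"

def get_archives_table_name(table_prefix: str, dataset: Optional[str] = None) -> str:
    return _get_table_name(table_prefix, "archives", dataset)

def get_files_table_name(table_prefix: str, dataset: Optional[str] = None) -> str:
    return _get_table_name(table_prefix, "files", dataset)

def get_tags_table_name(table_prefix: str, dataset: Optional[str] = None) -> str:
    return _get_table_name(table_prefix, "tags", dataset)
```

Callers then pass the dataset through instead of branching on the storage engine, e.g. `get_tags_table_name("clp_", "default")`.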

## Changes

| Files/Areas                                                                                                   | Change Summary                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
|---------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `clp_py_utils/clp_config.py`                                                                                  | Removed six string constants representing table suffix names for archives, datasets, files, tags, and column metadata.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
| `clp_py_utils/clp_metadata_db_utils.py`                                                                       | Introduced a helper for dynamic table name construction; replaced all table name concatenations with new getter functions; updated function signatures to accept optional dataset parameters; updated `add_dataset` to use `ArchiveOutput`; centralized table name logic.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
| `clp_py_utils/s3_utils.py`                                                                                    | Renamed `s3_put` parameter from `dest_file_name` to `dest_path` and clarified S3 key construction in the docstring.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
| `clp_package_utils/scripts/native/archive_manager.py`<br>`clp_package_utils/scripts/native/decompress.py`<br>`clp_package_utils/scripts/start_clp.py` | Replaced hardcoded table suffixes with dynamic table name getters; removed storage engine-based table prefix and dataset logic; updated function signatures to accept optional dataset; improved archive directory path handling.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
| `job_orchestration/executor/compress/compression_task.py`                                                     | Replaced hardcoded suffixes with table name getter functions; added optional dataset parameter to metadata update functions; centralized S3 upload logic with dataset-aware pathing.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
| `job_orchestration/executor/query/extract_stream_task.py`<br>`job_orchestration/executor/query/fs_search_task.py` | Changed S3 object key construction to avoid mutating config; now builds key with dataset as a variable.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
| `job_orchestration/scheduler/compress/compression_scheduler.py`                                               | Removed storage engine parameter; simplified dataset registration and tags table naming; added helper for dataset caching; used dynamic table name getters.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
| `job_orchestration/scheduler/query/query_scheduler.py`                                                        | Replaced fixed table suffixes with dynamic table name getters; removed storage engine parameter from functions; propagated dataset parameter throughout; updated job handle and query logic to use dataset-aware table names.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
| `job_orchestration/scheduler/job_config.py`                                                                   | Changed `dataset` attributes in input config classes from non-optional to optional string, removing default dataset values.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
| `clp_package_utils/scripts/native/compress.py`                                                                | Added optional `dataset` parameter to `_generate_clp_io_config` and passed it to input config constructors; set `dataset` based on storage engine condition in `main`.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
| `clp_package_utils/scripts/native/search.py`                                                                  | Added optional `dataset` parameter to search-related functions; propagated dataset through job creation and monitoring; assigned dataset based on storage engine condition in `main`.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
| `webui/server/settings.json`                                                                                   | Added `"ClpStorageEngine": "clp"` configuration entry.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
| `webui/server/src/configConstants.ts`                                                                          | Added constants `CLP_STORAGE_ENGINE_CLP_S` and `CLP_DEFAULT_DATASET_NAME` for storage engine and default dataset identification.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
| `webui/server/src/fastify-v2/routes/api/search/index.ts`                                                       | Added conditional dataset assignment in search query submission based on storage engine setting.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| `webui/server/src/plugins/DbManager.ts`                                                                        | Added conditional dataset property to job config based on storage engine setting in JSON extraction job submission.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
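The S3 object-key construction described for `s3_utils.py` and the query tasks can be sketched as a pure helper. The function name and key layout below are assumptions for illustration, not the project's actual code; the point is building the key with the dataset as a local variable rather than mutating a shared config object.

```python
from typing import Optional

def build_s3_object_key(key_prefix: str, dataset: Optional[str], archive_id: str) -> str:
    # Join the configured prefix, an optional dataset segment, and the archive
    # ID into an S3 object key, dropping empty segments.
    parts = [key_prefix.strip("/")]
    if dataset is not None:
        parts.append(dataset)
    parts.append(archive_id)
    return "/".join(p for p in parts if p)
```

For example, `build_s3_object_key("archives/", "default", "abc")` yields `archives/default/abc`, while a `None` dataset yields `archives/abc`.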

## Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Scheduler
    participant MetadataDBUtils
    participant Executor
    participant S3
    participant DB

    Scheduler->>MetadataDBUtils: get_tags_table_name(table_prefix, dataset)
    Scheduler->>MetadataDBUtils: add_dataset(..., archive_output)
    MetadataDBUtils->>DB: INSERT INTO datasets_table ...

    Executor->>MetadataDBUtils: get_archives_table_name(table_prefix, dataset)
    Executor->>MetadataDBUtils: get_archive_tags_table_name(table_prefix, dataset)
    Executor->>DB: INSERT/SELECT using dynamic table names

    Executor->>S3: _upload_archive_to_s3(archive_src_path, archive_id, dataset)
    S3-->>Executor: Upload complete

```

## Possibly related PRs

  • y-scope/clp#839: Adds dataset fields to config classes; this PR further refines by making them optional and propagating usage.
  • y-scope/clp#831: Introduces table suffix constants and dataset-aware table creation; this PR builds on and refines that approach with dynamic table name functions.
  • y-scope/clp#868: Introduces dataset-specific table suffix and storage engine logic, which this PR now simplifies and centralizes with dynamic table name functions.

## Suggested reviewers

  • kirkrodrigues
  • haiqi96

0R925obm8OWVYp6qnzeHcecR31idlbV0gycr0Z3X8bj156xERuUh4w8f+efB+FvA/S0grQPINIDntBDpYed3MMuBpqiao9VIzWgGxWIp5Nc0actCEBZDcBDRoQJ1orgNziBWQ8oXsBfZVYXI7Eu4DmLzkVBoEUIViloCkWfD73n791XAL0E2jhpcQn8I3MGBIDfLbcT9xXO9aYCgJvrv1jXE1E/u+Ar78+BBwlANByBj7bAHB2CDhBhpbg32R+xg9TGrch79Oe6wTaevRC083aCe0PhHwKgQCxFOe+/elhL23+f0Ve60CVYIsMH69ylM4HXQwld7uFstgffVxH3b7p9h+47lgfz4b7J9giAo6BBKOX7IkfwO/YqBoPv7BKk6H/akcAPyAQDkB3vXAdhBIH0DovJo6q1B5EH4aH639cJhoP7HWDsNLg7kcEOMgSaEhw2w0fkPAeOhJ88CZfM0lJD9aQaxh2Gt/I4T41oFABdPMhjOR9bAe6jbciXUHH+Kg0F9ecfIOiATUJhx8sO2PYeH89rh5AHyAVO+H8+c8BvdEfb2aDEj2kYI6paH208cjtR+fcUdCPuy66bp/fd6dBOOnKAV+zo+XB6OreX94cb/ZgImPFcVAUlbQrQepjMnbqVbkDfOt5PDyGXLIIU+KfkrJ7QBae+U44fLgqnNTlewvfnxEj2n0jhpyI63viPuWBLf+4rm2CyPVHwzsh2M5Ud32z7fz6R7XjftTOG25+SYXM6McLOHnAD5Z4kThVrOQn3Vign1aid0FZIjJWJz8hGsJP8VyT3u7WzSdhi6ZG7RAGjeyfbOgUJTqe6w5nvL2IIlTxe9U8Ze8Pbnwjze2I53tvO97iz+fF866c/OgXfT/51nmFfqPFC/T0F5M4/szOoX0DeZ1/jhdwPkLTjg5646FjuP+nnjnBxaR8epg/HxD8msC/hcrJEXvK5Fzd11AbOqXdt867S9Of0vznTLzhyy+udMu6nAj0/f06edcvmnLgVpx84Fep4tcQzkV6M+kcAv5HIzqV2M5ldfS5XkLgx0q+Dec41X2DjVyg7cfyuPHqsbB945Pu+OiH17E16K+kcIvVn8roiryFCfc7wnvV18xi4ZLQm4no1j4AS4h6c6urYTz5MmXgDjXBKOcj8qJQ0BCBgNFcr2bT35LS65r7qowMfyElCC3DLsayvIB1CqV1KM+bSlwGiB1odKonIhfogbihuOtL2TeT+T/LCVPyQXDIzOcsrWgaNEUPStzRtPpIsw6yJQBUaeFt2ajUO/c/UbWWhzu7c+ivCsW14BwcUZwSaHvVZkkCACK7/OHpTAA2nq6NUF8gSEvf/kR3DQUiL6UfX2X4F78fpMNwDnRg2G3b1cUCb7cFg0og76EF8lzSE99ue3e7A5GzzyHJdM12d3XNdWLuSBKaWtEINY8C92P09UwkQq5t1vqP7wft/R+He5pTQV8hMEEayhhBgH4po7ITepNYwac/ZaACNhsCSIjUYqHlOEiNQYhwkYqRwfRuXJ7vrkiAA9zunVtX7EDt+lA04Jwx2es9OlLw7oYLqAeF0VIMlLjAqDtkswXNqlMLjuqLI6sgSWzYyCog4fasYRrOZE2fj+FXwb5BKM8DNPc3EPg4QT1Vj8NehblLQxWM1dOskI8AFOb99VKqM7nIdZHrKgeaA9tS/hoHzFhaw7tiTejKpBs6wev0cG79jGSFvSAwr6fDPxn0z+Z8s/WfBMQoMbw0BRe9vZPtHgd9MAY+IAmPABET6xQ0Anqo5HVHt/W5o+pgNvlALb0x8KBERXQKHw4O8Gz4zR3g3DC70CneBrpxIB3nXJx6RPceUT81tE26t2mwfEEjpB2ep5sw3UHsuoFz02a1ucGDdznhbwZ6M8meDQZniz1Z4NCODysVRfL32f8/9nPDMReMueKj2mgrAI2IUNAD8B1wLimW5NoZIoCFoGJF05oXwzOtAocEgwYYBOHzjSURtvFRSm5GLieQKTqQhtUo1h
JuTAd0vtDMKdh8sGNbQ3pA4j6gibyNwDHI7A/K8XJeRbyXhyGpW4Bi2tKJWm1FGeR5K/VyNPqb+j8x9zecfC3nUH4xlNIB58cC/6F56S6bAfAINt3kFTOLEfayjvFGQHYwAJG6TipXs8IQG1z56I1kygfMJfVRAwpvPkYNGfwjnXRYLj7N/nHE11e5lzdv9815WUtSQ57X5o1soa4PuIdpHtuzljatl/p9wHjr61Uo+xzEOvyOT5t4U8MB3gPkNMHaXeAEYBA/ITAJ0EhQcecOXHzi7Nd4/bTG5yIR3IH4hDg+FSmH8Ste9EpBcD5Gnon2reV+ufmz6v70pN7R8zesf83m32b0y8E+a4RPjw9HbjLLo3MGf30bQAADq2gXAAIooCgbwNGXFvEw7U0C+JFAPO0KRiYfpTch/QCq2XQRQIsB5QbAAAE0DQL8hGwhUAVGQCrAHlA0AeUWVGgAjPVsAowsPU0Bw0i9dxC1lJfcRXtBWMfEAtYbbH8AgN5JTQFl8pfWgLQwhfecRg9w/CMUlNRiY31N9Oza0zJtnDC4HB8ogPUESZRKU0D5tgGYWz3lbfC/wx9ZvbH1x87TMg0FMvfeWCS44fTW2G8PPV31iQDbeWwV9vfXUF98vAKc1b0muDvQzMOuZ7U2wGgeiCpB07fvQHhLKMI0/clQe5nq9i/Ov1btRuQL1aku7NvwjklQAE2OQxkAZ3kV8eauWupWAdgC4BlnHj02Q+kHZBUA1AfZBGQjkAwGiCoYdQBpFj8OkAeoZLLhEOQogypEgAlgAAA4AAFgABWOoIYAlgWIAWAAAdgWBaABYEOBaAWIHaDOghgEOAaghYF2BQkBoJIAeghoLyQGAY4AEBdgQ4A5BRkKoIYBdgJQHaC6gpYC6CGghgH2BjgY4HaCSATYNiABAWICWB2ghgFxA6gwoF2A0AJYH2BSgfYDOD0IZYIgBZQToPaDdgBYBqDdgY4CaDaAW4JqDaAQ4EOBdgJoJUB2ghoOOACwLoNBDdgZoNoAGgpYAEA6gioLyCqgmoNmCFgEgAGC6gmoJqDh6Q4FUBjgBgH6CTg44BqC0AEkNoB2go4CWAhguoNoAag44GWAFgNEOiDEQ2kJUAlgekKaCGgpEIWD6gqEIaCTgloIWAhgsEIuCRgyEJCR9gN1FeCoAFkMpCBACYIYABABgEpD2g3oP2ASAQ4AaD2g/YLqCDQtUP2B8Q2ICOBRAakK1D2QqoIGCGghYDQB2gtABIB9g50NaC0ANAC6CSAXYAEBjgA4GOAngmoMOCjgIENpCUQw4A8AFQweGRDMQhgFpCKQ3EFNClgeEIaCFg44BOB1Q34NiA6go4AEBDgFQAWAjgZYBtC3gp0M2DMQ/YIWA/glkPaClgBoIEADQvYMOBsQuoIWBxQ44FiBfg1sIEB9gBoJqD6wksKgA0AOUJJDmwvJB7CHQ3UIdDmgp0LlCDQ5MNpDwQWIGOBaAHkIaCmQ1EKjDPQ04OTDpgx4OWAjgHMMZD2gpQDqDjgRoMRCSAJYGaD+QokKXCGgwcMgAWQz0MOB2gmoKWA0AesJ9DaAY4DQBFg1YLlDKUOYMWD6g00KfgiQv4KBDHwo4H5C1Q2gDPDSQy4JIAHgn4JzDdgAkPmCzgvtS1D3wsYIBCwQx8M+CGARkJqDDgI4A4IBANsI9D4I+oOxC8wmoP2A7gusJzDRQp4KaDHw+oL6CIwk4BqCQkZkOHpCw3YF+DvQvoLqCPQhgH1C8wpoIGD8Q30MfDvguYKfhdgYiN2AzgtoPVCPwwsM+CzwvMKeD3MfsPQi5g7tDQBNw3IOiD3MBsJ+C3wj8LEiDg0iJFCFga4JaCDgREKNDLg3YCWB4IzoP2A2wx8LOC2w1sKWBLQv8MpCPIq8OhCdQx0LBDfw80JqCBAJYADDpg1oPaD5IuUOQiTI8cPtD8Q0QGxCEokgBFCAwuoJBCUQusM+D4og4OLCtwuoKvDYgZkKWBQQ5CL2CcQWoO
BC0AXYE6CrwroNmCLgiSNLpWwyMLMjbQ8MKRCCQz0J/C/g0QG2B+wzsOhDKUdcJJChIn0Oqj+woMLRD0Qt4IKDd1PEGKDIsUoPXRqkCoKAA== -->


@haiqi96 haiqi96 changed the title fix(package): Refactor dataset related code. fix(package): Refactor dataset handling in the package. Jun 20, 2025
@haiqi96 haiqi96 marked this pull request as ready for review June 20, 2025 16:57
@haiqi96 haiqi96 requested a review from a team as a code owner June 20, 2025 16:57
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🔭 Outside diff range comments (2)
components/job-orchestration/job_orchestration/executor/compress/compression_task.py (2)

84-95: Add docstring and consider using a dataclass for parameters.

The function has too many parameters and lacks documentation:

 def update_job_metadata_and_tags(db_cursor, job_id, table_prefix, dataset, tag_ids, archive_stats):
+    """
+    Update job metadata and tags for an archive.
+    
+    :param db_cursor: Database cursor
+    :param job_id: Job identifier
+    :param table_prefix: Table name prefix
+    :param dataset: Optional dataset name
+    :param tag_ids: List of tag IDs
+    :param archive_stats: Archive statistics dictionary
+    """

Consider creating a dataclass to group related parameters and reduce the argument count.


97-110: Add docstring for the update_archive_metadata function.

 def update_archive_metadata(db_cursor, table_prefix, dataset, archive_stats):
+    """
+    Insert archive metadata into the database.
+    
+    :param db_cursor: Database cursor
+    :param table_prefix: Table name prefix
+    :param dataset: Optional dataset name
+    :param archive_stats: Dictionary containing archive statistics
+    """
     archive_stats_defaults = {
♻️ Duplicate comments (1)
components/clp-package-utils/clp_package_utils/scripts/native/archive_manager.py (1)

288-290: Consider extracting the repeated archive output directory logic.

The pattern archives_dir / dataset if dataset is not None else archives_dir appears in multiple places. While simple, extracting it to a helper function would improve maintainability:

+def _get_archive_output_dir(archives_dir: Path, dataset: typing.Optional[str]) -> Path:
+    """Get the archive output directory, optionally prefixed with dataset."""
+    return archives_dir / dataset if dataset is not None else archives_dir

Then use it as:

-archive_output_dir: Path = (
-    archives_dir / dataset if dataset is not None else archives_dir
-)
+archive_output_dir: Path = _get_archive_output_dir(archives_dir, dataset)

Also applies to: 392-392

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bcb7f54 and 68454c6.

📒 Files selected for processing (11)
  • components/clp-package-utils/clp_package_utils/scripts/native/archive_manager.py (10 hunks)
  • components/clp-package-utils/clp_package_utils/scripts/native/decompress.py (0 hunks)
  • components/clp-package-utils/clp_package_utils/scripts/start_clp.py (2 hunks)
  • components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py (6 hunks)
  • components/clp-py-utils/clp_py_utils/initialize-clp-metadata-db.py (2 hunks)
  • components/clp-py-utils/clp_py_utils/s3_utils.py (2 hunks)
  • components/job-orchestration/job_orchestration/executor/compress/compression_task.py (8 hunks)
  • components/job-orchestration/job_orchestration/executor/query/extract_stream_task.py (1 hunks)
  • components/job-orchestration/job_orchestration/executor/query/fs_search_task.py (1 hunks)
  • components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py (4 hunks)
  • components/job-orchestration/job_orchestration/scheduler/query/query_scheduler.py (11 hunks)
💤 Files with no reviewable changes (1)
  • components/clp-package-utils/clp_package_utils/scripts/native/decompress.py
🧰 Additional context used
🪛 Pylint (3.3.7)
components/clp-package-utils/clp_package_utils/scripts/start_clp.py

[error] 37-37: Unable to import 'clp_py_utils.clp_metadata_db_utils'

(E0401)

components/clp-package-utils/clp_package_utils/scripts/native/archive_manager.py

[error] 15-19: Unable to import 'clp_py_utils.clp_metadata_db_utils'

(E0401)

components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py

[convention] 25-25: Missing function or method docstring

(C0116)


[convention] 29-29: Missing function or method docstring

(C0116)


[convention] 33-33: Missing function or method docstring

(C0116)


[convention] 37-37: Missing function or method docstring

(C0116)


[convention] 41-41: Missing function or method docstring

(C0116)


[convention] 45-45: Missing function or method docstring

(C0116)

components/job-orchestration/job_orchestration/executor/compress/compression_task.py

[error] 22-25: Unable to import 'clp_py_utils.clp_metadata_db_utils'

(E0401)


[convention] 73-73: Missing function or method docstring

(C0116)


[convention] 84-84: Missing function or method docstring

(C0116)


[refactor] 84-84: Too many arguments (6/5)

(R0913)


[refactor] 84-84: Too many positional arguments (6/5)

(R0917)


[convention] 97-97: Missing function or method docstring

(C0116)

components/job-orchestration/job_orchestration/scheduler/query/query_scheduler.py

[error] 40-46: Unable to import 'clp_py_utils.clp_metadata_db_utils'

(E0401)

🔇 Additional comments (22)
components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py (4)

98-117: Good refactoring to use centralized table name generation.

The change to accept a dataset parameter and use the get_files_table_name helper function improves consistency and modularity.


120-130: Consistent refactoring pattern applied.

The function now uses the centralized table name helper. Note that this function requires a dataset parameter (not optional), which appears intentional for column metadata tables.


143-186: Proper use of centralized table name helpers.

The changes consistently apply the new table name generation pattern. The path construction at line 182 correctly appends the dataset name for dataset-specific storage organization.


203-224: Excellent refactoring to use centralized table name generation.

The function now consistently uses the new table name getter functions and properly passes the dataset parameter to the table creation functions. This eliminates direct string manipulation of table prefixes.
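The getter pattern these hunks adopt can be sketched as below. The function names match the imports mentioned elsewhere in this review, but the exact `{prefix}{dataset}_{suffix}` naming scheme is an assumption for illustration; the real implementation lives in `clp_py_utils/clp_metadata_db_utils.py`.

```python
from typing import Optional


def _get_table_name(table_prefix: str, dataset: Optional[str], suffix: str) -> str:
    # Assumed naming scheme: dataset-scoped tables interpose "{dataset}_"
    # between the prefix and the suffix; dataset-less tables omit it.
    if dataset is None:
        return f"{table_prefix}{suffix}"
    return f"{table_prefix}{dataset}_{suffix}"


def get_archives_table_name(table_prefix: str, dataset: Optional[str]) -> str:
    return _get_table_name(table_prefix, dataset, "archives")


def get_files_table_name(table_prefix: str, dataset: Optional[str]) -> str:
    return _get_table_name(table_prefix, dataset, "files")


def get_tags_table_name(table_prefix: str, dataset: Optional[str]) -> str:
    return _get_table_name(table_prefix, dataset, "tags")
```

Centralizing the construction this way means callers never concatenate suffix constants themselves, which is exactly the "direct string manipulation" the refactor eliminates.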

components/job-orchestration/job_orchestration/executor/query/extract_stream_task.py (1)

100-103: Good refactoring to avoid mutating shared configuration.

The change from mutating s3_config.key_prefix to creating a local s3_object_key variable improves immutability and prevents side effects. This is a more predictable approach that won't affect other uses of the s3_config object.

components/job-orchestration/job_orchestration/executor/query/fs_search_task.py (1)

65-68: Consistent S3 key construction refactoring.

This change follows the same pattern as in extract_stream_task.py, avoiding mutation of the shared s3_config.key_prefix by creating a local s3_object_key variable. Good consistency across related files.
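The non-mutating pattern being praised reduces to the sketch below. Only `key_prefix` and the local `s3_object_key` variable are named in the diff, so the `S3Config` fields and the helper name here are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class S3Config:
    # Minimal stand-in for the real config object; field names are assumptions.
    bucket: str
    key_prefix: str


def build_stream_object_key(s3_config: S3Config, stream_name: str) -> str:
    # Derive the key into a local variable instead of appending to
    # s3_config.key_prefix in place, so the shared config stays unchanged
    # for every other caller that holds a reference to it.
    return f"{s3_config.key_prefix}{stream_name}"


config = S3Config(bucket="clp-streams", key_prefix="streams/")
s3_object_key = build_stream_object_key(config, "stream-0.clps")
assert s3_object_key == "streams/stream-0.clps"
assert config.key_prefix == "streams/"  # no side effect on the shared config
```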

components/clp-py-utils/clp_py_utils/s3_utils.py (1)

282-310: Good semantic improvement with parameter renaming.

Renaming dest_file_name to dest_path better reflects the parameter's purpose, especially with the updated functionality where it can include dataset-specific path components. The documentation update clearly explains the behavior.

components/clp-py-utils/clp_py_utils/initialize-clp-metadata-db.py (1)

11-11: Good fix for web UI loading issue.

The addition of CLP_DEFAULT_DATASET_NAME import and explicit creation of metadata tables with the default dataset for CLP_S storage engine addresses the web UI loading bug mentioned in the PR objectives. The comment clearly explains the rationale.

Also applies to: 59-62
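The control flow of that startup-time fix can be sketched as follows. The helper signature and the recorded table names are stand-ins (the real code issues DDL through a database cursor); only `CLP_DEFAULT_DATASET_NAME` and the CLP_S-only branching come from the review.

```python
from typing import List, Optional

CLP_DEFAULT_DATASET_NAME = "default"  # constant name from the PR; value assumed


def create_metadata_tables(created: List[str], table_prefix: str,
                           dataset: Optional[str]) -> None:
    # Stand-in for the real table-creation helper: records the names of the
    # tables that would be created instead of executing CREATE TABLE.
    scope = f"{dataset}_" if dataset is not None else ""
    for suffix in ("archives", "files", "tags"):
        created.append(f"{table_prefix}{scope}{suffix}")


def initialize_metadata_db(created: List[str], table_prefix: str,
                           storage_engine: str) -> None:
    if storage_engine == "CLP_S":
        # Create the default-dataset tables eagerly so the webui can query
        # log statistics before the first compression job ever runs.
        create_metadata_tables(created, table_prefix, CLP_DEFAULT_DATASET_NAME)
    else:
        create_metadata_tables(created, table_prefix, None)
```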

components/clp-package-utils/clp_package_utils/scripts/start_clp.py (1)

37-37: LGTM! Clean refactoring to use centralized table name getters.

The changes properly initialize the dataset parameter based on the storage engine and use the new centralized table name getter functions, improving consistency across the codebase.

Also applies to: 868-877

components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py (2)

196-199: Verify the removal of dataset from archive storage directory path.

The archive storage directory path no longer includes the dataset subdirectory. Please confirm this is intentional and won't affect downstream components that may expect the dataset-prefixed path structure.


189-192: Clean implementation of dataset-aware table naming.

Good use of the Optional type for dataset_name and proper propagation to the table name getter functions.

Also applies to: 280-292

components/clp-package-utils/clp_package_utils/scripts/native/archive_manager.py (1)

15-19: Well-structured refactoring to support dataset-aware table naming.

The changes properly:

  • Import the centralized table name getter functions
  • Initialize dataset based on storage engine
  • Update function signatures to accept optional dataset parameter
  • Use the table name getters consistently throughout

Also applies to: 196-199, 241-241, 268-269, 340-341, 360-362

components/job-orchestration/job_orchestration/executor/compress/compression_task.py (2)

146-156: Good implementation of dataset-aware S3 upload.

The helper function properly handles the optional dataset parameter by prefixing the S3 destination path when a dataset is provided.


297-297: Consistent dataset propagation throughout the compression flow.

The changes properly initialize and propagate the dataset parameter through all relevant function calls, maintaining consistency with the refactored table naming approach.

Also applies to: 371-371, 389-391, 396-396

components/job-orchestration/job_orchestration/scheduler/query/query_scheduler.py (8)

40-46: Import looks correct, static analysis warning likely a false positive.

The import of the new table name helper functions aligns with the refactoring objectives. The static analysis warning about the import error is likely due to the tool not having access to the full package structure.


115-127: Proper propagation of dataset parameter.

The addition of the optional dataset parameter and its propagation to get_archive_and_file_split_ids_for_ir_extraction is consistent with the refactoring pattern.


162-175: Consistent implementation of dataset parameter.

The addition of the optional dataset parameter and its propagation to archive_exists follows the same pattern as other refactored functions.


391-424: Excellent refactoring to use centralized table name functions.

The replacement of hardcoded table suffixes with calls to get_archives_table_name, get_archive_tags_table_name, and get_tags_table_name improves maintainability and follows the centralized table name handling pattern.


450-481: Clean implementation of dataset-aware table name resolution.

The use of get_files_table_name with the dataset parameter properly centralizes table name construction.


484-498: Proper use of centralized table name function.

The implementation correctly uses get_archives_table_name with the dataset parameter.


668-694: Well-implemented dataset extraction and validation logic.

The code correctly extracts the dataset for CLP_S storage engine and validates its existence. The error handling for invalid datasets is appropriate.

One minor observation: Consider initializing dataset to None explicitly for clarity, though the current typing annotation is sufficient.
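The extract-and-validate logic described above might look roughly like the sketch below. The helper name and the fall-back to a `"default"` dataset are assumptions for illustration; the review only confirms that CLP_S resolves a dataset and errors out when it does not exist.

```python
from typing import Optional, Set


def resolve_dataset(storage_engine: str, requested_dataset: Optional[str],
                    existing_datasets: Set[str]) -> Optional[str]:
    # Non-CLP_S engines have no dataset concept, so the name stays None.
    if storage_engine != "CLP_S":
        return None
    dataset = requested_dataset if requested_dataset is not None else "default"
    if dataset not in existing_datasets:
        # Reject jobs that reference a dataset that was never compressed.
        raise ValueError(f"Dataset {dataset!r} does not exist")
    return dataset
```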


733-739: Consistent dataset propagation to extraction handlers.

The dataset parameter is properly passed to both IrExtractionHandle and JsonExtractionHandle constructors, maintaining consistency across the refactoring.

Comment on lines 73 to 82
 def update_tags(db_cursor, table_prefix, dataset, archive_id, tag_ids):
     tags_table_name = get_tags_table_name(table_prefix, dataset)
     db_cursor.executemany(
         f"""
-        INSERT INTO {table_prefix}{ARCHIVE_TAGS_TABLE_SUFFIX} (archive_id, tag_id)
+        INSERT INTO {tags_table_name} (archive_id, tag_id)
         VALUES (%s, %s)
         """,
         [(archive_id, tag_id) for tag_id in tag_ids],
     )

Contributor

🛠️ Refactor suggestion

Add docstring for the update_tags function.

This function lacks documentation. Please add a docstring explaining its purpose and parameters:

 def update_tags(db_cursor, table_prefix, dataset, archive_id, tag_ids):
+    """
+    Insert tags for an archive into the database.
+    
+    :param db_cursor: Database cursor
+    :param table_prefix: Table name prefix
+    :param dataset: Optional dataset name
+    :param archive_id: Archive identifier
+    :param tag_ids: List of tag IDs to associate with the archive
+    """
     tags_table_name = get_tags_table_name(table_prefix, dataset)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 def update_tags(db_cursor, table_prefix, dataset, archive_id, tag_ids):
     tags_table_name = get_tags_table_name(table_prefix, dataset)
     db_cursor.executemany(
         f"""
-        INSERT INTO {table_prefix}{ARCHIVE_TAGS_TABLE_SUFFIX} (archive_id, tag_id)
+        INSERT INTO {tags_table_name} (archive_id, tag_id)
         VALUES (%s, %s)
         """,
         [(archive_id, tag_id) for tag_id in tag_ids],
     )

def update_tags(db_cursor, table_prefix, dataset, archive_id, tag_ids):
    """
    Insert tags for an archive into the database.

    :param db_cursor: Database cursor
    :param table_prefix: Table name prefix
    :param dataset: Optional dataset name
    :param archive_id: Archive identifier
    :param tag_ids: List of tag IDs to associate with the archive
    """
    tags_table_name = get_tags_table_name(table_prefix, dataset)
    db_cursor.executemany(
        f"""
        INSERT INTO {tags_table_name} (archive_id, tag_id)
        VALUES (%s, %s)
        """,
        [(archive_id, tag_id) for tag_id in tag_ids],
    )
🧰 Tools
🪛 Pylint (3.3.7)

[convention] 73-73: Missing function or method docstring

(C0116)

🤖 Prompt for AI Agents
In
components/job-orchestration/job_orchestration/executor/compress/compression_task.py
around lines 73 to 82, the update_tags function lacks a docstring. Add a clear
docstring at the start of the function describing its purpose, which is to
insert multiple tag associations for a given archive into the database, and
explain the parameters: db_cursor (database cursor for executing queries),
table_prefix (prefix for table names), dataset (dataset identifier), archive_id
(ID of the archive), and tag_ids (list of tag IDs to associate).

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
components/job-orchestration/job_orchestration/executor/compress/compression_task.py (1)

74-74: Address the missing docstring from previous review.

The function signature has been updated with a new dataset parameter, but the docstring is still missing as noted in the previous review.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c1de746 and d797198.

📒 Files selected for processing (1)
  • components/job-orchestration/job_orchestration/executor/compress/compression_task.py (8 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: lint-check (ubuntu-latest)
  • GitHub Check: lint-check (macos-latest)
🔇 Additional comments (7)
components/job-orchestration/job_orchestration/executor/compress/compression_task.py (7)

22-26: LGTM! Clean import organization.

The imports for the new utility functions are properly organized and align with the refactoring objectives.


85-87: LGTM! Consistent parameter threading.

The dataset parameter is properly added to the function signature and correctly passed through to the update_tags function.


98-111: LGTM! Clean refactoring of table name handling.

The function properly uses the centralized utility function for table name generation and correctly threads the dataset parameter.


147-157: LGTM! Well-structured helper function.

The _upload_archive_to_s3 helper function properly encapsulates S3 upload logic with dataset-aware path construction. The conditional path prefixing logic is correct.
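That conditional prefixing reduces to a one-line decision; the helper below is illustrative only, since the real `_upload_archive_to_s3` also performs the actual `s3_put` upload:

```python
from typing import Optional


def build_archive_dest_path(archive_id: str, dataset: Optional[str]) -> str:
    # When a dataset is given, upload the archive under a dataset-named
    # "directory" in the bucket; otherwise place it at the prefix root.
    return f"{dataset}/{archive_id}" if dataset is not None else archive_id
```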


298-298: LGTM! Clear variable initialization.

The input_dataset variable is properly initialized and will be set appropriately for CLP_S storage engine.


372-372: LGTM! Consistent use of helper function.

The S3 upload call correctly uses the new helper function with the dataset parameter.


390-400: LGTM! Proper dataset parameter handling.

Both function calls correctly pass the input_dataset parameter, maintaining consistency with the refactoring approach.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
components/job-orchestration/job_orchestration/executor/compress/compression_task.py (1)

74-74: Add docstring for the update_tags function.

This function lacks documentation. Please add a docstring explaining its purpose and parameters.

 def update_tags(db_cursor, table_prefix, dataset, archive_id, tag_ids):
+    """
+    Insert tags for an archive into the database.
+    
+    :param db_cursor: Database cursor
+    :param table_prefix: Table name prefix
+    :param dataset: Optional dataset name
+    :param archive_id: Archive identifier
+    :param tag_ids: List of tag IDs to associate with the archive
+    """
     archive_tags_table_name = get_archive_tags_table_name(table_prefix, dataset)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d797198 and 8c39e77.

📒 Files selected for processing (1)
  • components/job-orchestration/job_orchestration/executor/compress/compression_task.py (8 hunks)
🧰 Additional context used
🪛 Pylint (3.3.7)
components/job-orchestration/job_orchestration/executor/compress/compression_task.py

[error] 22-26: Unable to import 'clp_py_utils.clp_metadata_db_utils'

(E0401)


[warning] 22-26: Unused get_tags_table_name imported from clp_py_utils.clp_metadata_db_utils

(W0611)


[convention] 74-74: Missing function or method docstring

(C0116)


[convention] 85-85: Missing function or method docstring

(C0116)


[refactor] 85-85: Too many arguments (6/5)

(R0913)


[refactor] 85-85: Too many positional arguments (6/5)

(R0917)


[convention] 98-98: Missing function or method docstring

(C0116)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: lint-check (ubuntu-latest)
🔇 Additional comments (6)
components/job-orchestration/job_orchestration/executor/compress/compression_task.py (6)

74-82: LGTM! Variable naming issue has been resolved.

The function correctly uses archive_tags_table_name in the SQL query, resolving the previously identified variable naming bug.


147-157: LGTM! Well-structured helper function for S3 uploads.

The new _upload_archive_to_s3 helper function properly encapsulates S3 upload logic with dataset-aware path construction. The implementation correctly handles the optional dataset parameter by prefixing the destination path when a dataset is provided.


298-298: LGTM! Proper initialization of dataset variable.

The input_dataset variable is correctly initialized as None and will be set appropriately for CLP_S storage engine later in the function.


372-372: LGTM! Consistent use of new S3 upload helper.

The call to _upload_archive_to_s3 properly passes all required parameters including the dataset, maintaining consistency with the refactored approach.


390-392: LGTM! Proper dataset parameter propagation.

The call to update_archive_metadata correctly includes the input_dataset parameter, maintaining consistency with the refactored function signature.


397-397: LGTM! Consistent dataset parameter usage.

The call to update_job_metadata_and_tags properly includes the input_dataset parameter, aligning with the updated function signature.

) as metadata_db_cursor:
if StorageEngine.CLP_S == storage_engine:
create_datasets_table(metadata_db_cursor, table_prefix)
# Note: the webui still expect tables with default dataset
Contributor Author

@haiqi96 haiqi96 Jun 20, 2025

There is still a small issue: the CLP_DEFAULT_DATASET_NAME is not added to the datasets table until the first compression job with dataset = CLP_DEFAULT_DATASET_NAME.
This behavior will not cause any functional bug, so I feel it is acceptable, but we could also explicitly add CLP_DEFAULT_DATASET_NAME to the datasets table here by calling add_dataset.

Contributor

it's added when we compress the first archive that uses the default dataset. You guys can add the set of metadata tables for the default dataset here if the webui requires it upon launching.

Contributor

On second thought, this change creates the tables for CLP_DEFAULT_DATASET_NAME but does not add CLP_DEFAULT_DATASET_NAME to the clp_datasets table.
It's more proper to use add_dataset(), but that would require bringing in clp_config.archive_output and introduce a ripple of changes in the argparsers.

@kirkrodrigues what do you think?

Member

Can we call add_datasets in start_clp.py? That way we wouldn't need to pass in the config file.

@kirkrodrigues
Member

Fixed a small bug where webui fails to load before the first CLP-S compression job.

Can you elaborate in the PR description?

Can you fill out the checklist?

@haiqi96
Contributor Author

haiqi96 commented Jun 25, 2025

Fixed a small bug where webui fails to load before the first CLP-S compression job.

Can you elaborate in the PR description?

Can you fill out the checklist?

Updated PR description and filled out the checklist.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (3)
components/job-orchestration/job_orchestration/executor/compress/compression_task.py (3)

73-82: Add docstring for the update_tags function.

This function lacks documentation. Please add a docstring explaining its purpose and parameters.


84-94: Add docstring for the update_job_metadata_and_tags function.

This function lacks documentation and should include a docstring explaining its purpose and parameters.


97-123: Add docstring for the update_archive_metadata function.

This function lacks documentation and should include a docstring explaining its purpose and parameters.
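A hedged sketch of the docstring style being requested; the parameter names are guesses from the review context, not the function's actual signature:

```python
def update_tags(db_cursor, table_prefix: str, archive_id: str, tag_ids: list) -> None:
    """
    Associates the given tag IDs with an archive in the metadata database.

    :param db_cursor: Open cursor into the CLP metadata database.
    :param table_prefix: Prefix applied to all metadata table names.
    :param archive_id: ID of the archive to tag.
    :param tag_ids: IDs of the tags to associate with the archive.
    """
    # Body elided; the real implementation issues an INSERT into the
    # archive-tags table. This stub only illustrates the docstring format.
```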

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7759a7a and e6b8cc7.

📒 Files selected for processing (1)
  • components/job-orchestration/job_orchestration/executor/compress/compression_task.py (8 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: gibber9809
PR: y-scope/clp#504
File: components/core/src/clp_s/search/kql/CMakeLists.txt:29-29
Timestamp: 2024-10-22T15:36:04.655Z
Learning: When reviewing pull requests, focus on the changes within the PR and avoid commenting on issues outside the scope of the PR.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#868
File: components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py:141-144
Timestamp: 2025-05-05T16:32:55.163Z
Learning: The column metadata table (created by `_create_column_metadata_table`) is only needed for dataset-specific workflows in `CLP_S` and is obsolete for non-dataset workflows.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1004
File: components/clp-package-utils/clp_package_utils/scripts/native/decompress.py:139-144
Timestamp: 2025-06-24T08:54:14.438Z
Learning: In the CLP codebase, the get_orig_file_id function signature was changed after a recent merge to no longer accept a dataset parameter, making previous suggestions that reference this parameter invalid.
Learnt from: gibber9809
PR: y-scope/clp#672
File: components/core/src/clp_s/indexer/MySQLIndexStorage.cpp:30-38
Timestamp: 2025-01-23T17:08:55.566Z
Learning: SQL identifiers (table names, column names) in the codebase are currently from trusted sources and directly interpolated into queries. However, the team prefers to implement sanitization for robustness, even for trusted inputs.
components/job-orchestration/job_orchestration/executor/compress/compression_task.py (6)
Learnt from: haiqi96
PR: y-scope/clp#634
File: components/job-orchestration/job_orchestration/executor/compress/fs_compression_task.py:0-0
Timestamp: 2024-12-16T21:49:06.483Z
Learning: In `fs_compression_task.py`, when refactoring the archive processing loop in Python, the `src_archive_file.unlink()` operation should remain outside of the `process_archive` function.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1004
File: components/clp-package-utils/clp_package_utils/scripts/native/decompress.py:139-144
Timestamp: 2025-06-24T08:54:14.438Z
Learning: In the CLP codebase, the get_orig_file_id function signature was changed after a recent merge to no longer accept a dataset parameter, making previous suggestions that reference this parameter invalid.
Learnt from: haiqi96
PR: y-scope/clp#594
File: components/clp-package-utils/clp_package_utils/scripts/native/del_archives.py:104-110
Timestamp: 2024-11-15T16:21:52.122Z
Learning: In `clp_package_utils/scripts/native/del_archives.py`, when deleting archives, the `archive` variable retrieved from the database is controlled and is always a single string without path components. Therefore, it's acceptable to skip additional validation checks for directory traversal in this context.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#868
File: components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py:141-144
Timestamp: 2025-05-05T16:32:55.163Z
Learning: The column metadata table (created by `_create_column_metadata_table`) is only needed for dataset-specific workflows in `CLP_S` and is obsolete for non-dataset workflows.
Learnt from: haiqi96
PR: y-scope/clp#651
File: components/clp-package-utils/clp_package_utils/scripts/compress.py:0-0
Timestamp: 2025-01-16T16:58:43.190Z
Learning: In the clp-package compression flow, path validation and error handling is performed at the scheduler level rather than in the compress.py script to maintain simplicity and avoid code duplication.
Learnt from: haiqi96
PR: y-scope/clp#662
File: components/job-orchestration/job_orchestration/executor/query/extract_stream_task.py:167-186
Timestamp: 2025-01-13T21:18:54.629Z
Learning: The s3_put API in clp_py_utils.s3_utils internally handles timeout and retry mechanisms for S3 operations.
🪛 Pylint (3.3.7)
components/job-orchestration/job_orchestration/executor/compress/compression_task.py

[error] 22-25: Unable to import 'clp_py_utils.clp_metadata_db_utils'

(E0401)


[convention] 73-73: Missing function or method docstring

(C0116)


[convention] 84-84: Missing function or method docstring

(C0116)


[refactor] 84-84: Too many arguments (6/5)

(R0913)


[refactor] 84-84: Too many positional arguments (6/5)

(R0917)


[convention] 97-97: Missing function or method docstring

(C0116)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: lint-check (macos-latest)
🔇 Additional comments (5)
components/job-orchestration/job_orchestration/executor/compress/compression_task.py (5)

22-25: LGTM! Import changes support dataset-aware table naming.

The updated imports align with the refactoring objective to replace hardcoded table name constants with centralized getter functions that support dataset parameters.


310-310: LGTM! Proper dataset variable initialization.

The input_dataset variable is correctly initialized as Optional[str] and properly set for CLP_S storage engine workflows.


384-384: LGTM! S3 upload call properly includes dataset parameter.

The call to the new helper function correctly passes the input_dataset parameter for dataset-aware S3 path construction.


402-404: LGTM! Archive metadata update properly includes dataset parameter.

The function call correctly passes the input_dataset parameter to support dataset-aware table naming.


409-409: LGTM! Job metadata update properly includes dataset parameter.

The function call correctly passes the input_dataset parameter to maintain consistency with the dataset-aware refactoring.

Contributor

@Bill-hbrhbr Bill-hbrhbr left a comment

Three major points:

  1. In the query scheduler, a lot of the dataset arg passing is not needed, because one can directly get the dataset from all classes derived from QueryJobConfig.
  2. As discussed in previous PRs, ExtractIrJobConfig should never have a non-None dataset, so all dataset handling for IR extraction jobs can be safely removed to reduce confusion.
  3. Make all dataset types in components/job-orchestration/job_orchestration/scheduler/job_config.py optional with a default value of None. That way, we can change every if CLP_S == storage_engine check to an if dataset is not None check, which is more robust down the road if we support dataset features in other scenarios. I'll follow this design when modifying my dataset interface PR.
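Point 3 can be sketched as follows; this is a minimal illustration using a dataclass, whereas the real QueryJobConfig in job_config.py is defined differently, and the table prefix shown is assumed:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class QueryJobConfig:
    # Optional with a default of None, per point 3; None means the job has
    # no dataset (e.g., for storage engines without dataset support).
    dataset: Optional[str] = None


def archives_table_for(config: QueryJobConfig) -> str:
    # Branch on the dataset itself rather than on the storage engine.
    if config.dataset is not None:
        return f"clp_{config.dataset}_archives"  # "clp_" prefix is illustrative
    return "clp_archives"
```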


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
components/clp-package-utils/clp_package_utils/scripts/start_clp.py (1)

876-882: Long line in client_settings_json_updates breaks Black / Pylint limits

Static analysis still reports >100-char lines (line 878). Re-wrap the offending entry to unblock CI:

-        "MongoDbSearchResultsMetadataCollectionName": clp_config.webui.results_metadata_collection_name,
+        "MongoDbSearchResultsMetadataCollectionName": (
+            clp_config.webui.results_metadata_collection_name
+        ),

Everything else in this block (switch to get_*_table_name) looks solid—nice use of the new helpers.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0255cbd and 71c4d82.

📒 Files selected for processing (2)
  • components/clp-package-utils/clp_package_utils/scripts/start_clp.py (2 hunks)
  • components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py (5 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: gibber9809
PR: y-scope/clp#504
File: components/core/src/clp_s/search/kql/CMakeLists.txt:29-29
Timestamp: 2024-10-22T15:36:04.655Z
Learning: When reviewing pull requests, focus on the changes within the PR and avoid commenting on issues outside the scope of the PR.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#868
File: components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py:141-144
Timestamp: 2025-05-05T16:32:55.163Z
Learning: The column metadata table (created by `_create_column_metadata_table`) is only needed for dataset-specific workflows in `CLP_S` and is obsolete for non-dataset workflows.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1004
File: components/clp-package-utils/clp_package_utils/scripts/native/decompress.py:139-144
Timestamp: 2025-06-24T08:54:14.438Z
Learning: In the CLP codebase, the get_orig_file_id function signature was changed after a recent merge to no longer accept a dataset parameter, making previous suggestions that reference this parameter invalid.
Learnt from: gibber9809
PR: y-scope/clp#672
File: components/core/src/clp_s/indexer/MySQLIndexStorage.cpp:30-38
Timestamp: 2025-01-23T17:08:55.566Z
Learning: SQL identifiers (table names, column names) in the codebase are currently from trusted sources and directly interpolated into queries. However, the team prefers to implement sanitization for robustness, even for trusted inputs.
components/clp-package-utils/clp_package_utils/scripts/start_clp.py (6)
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1004
File: components/clp-package-utils/clp_package_utils/scripts/native/decompress.py:139-144
Timestamp: 2025-06-24T08:54:14.438Z
Learning: In the CLP codebase, the get_orig_file_id function signature was changed after a recent merge to no longer accept a dataset parameter, making previous suggestions that reference this parameter invalid.
Learnt from: haiqi96
PR: y-scope/clp#594
File: components/clp-package-utils/clp_package_utils/scripts/native/del_archives.py:104-110
Timestamp: 2024-11-15T16:21:52.122Z
Learning: In `clp_package_utils/scripts/native/del_archives.py`, when deleting archives, the `archive` variable retrieved from the database is controlled and is always a single string without path components. Therefore, it's acceptable to skip additional validation checks for directory traversal in this context.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#868
File: components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py:141-144
Timestamp: 2025-05-05T16:32:55.163Z
Learning: The column metadata table (created by `_create_column_metadata_table`) is only needed for dataset-specific workflows in `CLP_S` and is obsolete for non-dataset workflows.
Learnt from: haiqi96
PR: y-scope/clp#594
File: components/clp-package-utils/clp_package_utils/scripts/del_archives.py:56-65
Timestamp: 2024-11-18T16:49:20.248Z
Learning: When reviewing wrapper scripts in `components/clp-package-utils/clp_package_utils/scripts/`, note that it's preferred to keep error handling simple without adding extra complexity.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1050
File: components/clp-package-utils/clp_package_utils/scripts/native/search.py:239-244
Timestamp: 2025-06-29T22:01:05.545Z
Learning: In the CLP codebase, dataset validation is handled at the outer layer in wrapper scripts (like clp_package_utils/scripts/search.py) that call native implementation scripts (like clp_package_utils/scripts/native/search.py). The native scripts are internal components that receive pre-validated parameters, so they don't need their own dataset validation logic.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1036
File: components/clp-package-utils/clp_package_utils/general.py:564-579
Timestamp: 2025-06-28T07:10:47.237Z
Learning: The validate_dataset function in components/clp-package-utils/clp_package_utils/general.py is designed to be called once upon function startup for dataset validation, not repeatedly during execution, making caching optimizations unnecessary.
components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py (10)
Learnt from: haiqi96
PR: y-scope/clp#651
File: components/clp-package-utils/clp_package_utils/scripts/compress.py:0-0
Timestamp: 2025-01-16T16:58:43.190Z
Learning: In the clp-package compression flow, path validation and error handling is performed at the scheduler level rather than in the compress.py script to maintain simplicity and avoid code duplication.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1004
File: components/clp-package-utils/clp_package_utils/scripts/native/decompress.py:139-144
Timestamp: 2025-06-24T08:54:14.438Z
Learning: In the CLP codebase, the get_orig_file_id function signature was changed after a recent merge to no longer accept a dataset parameter, making previous suggestions that reference this parameter invalid.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#831
File: components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py:0-0
Timestamp: 2025-04-17T16:55:06.658Z
Learning: In the compression scheduler, the team prefers initializing in-memory caches from the database at startup rather than performing repeated database queries for efficiency reasons. This approach maintains both performance and reliability across process restarts.
Learnt from: haiqi96
PR: y-scope/clp#634
File: components/job-orchestration/job_orchestration/executor/compress/fs_compression_task.py:0-0
Timestamp: 2024-12-16T21:49:06.483Z
Learning: In `fs_compression_task.py`, when refactoring the archive processing loop in Python, the `src_archive_file.unlink()` operation should remain outside of the `process_archive` function.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#868
File: components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py:141-144
Timestamp: 2025-05-05T16:32:55.163Z
Learning: The column metadata table (created by `_create_column_metadata_table`) is only needed for dataset-specific workflows in `CLP_S` and is obsolete for non-dataset workflows.
Learnt from: haiqi96
PR: y-scope/clp#594
File: components/clp-package-utils/clp_package_utils/scripts/native/del_archives.py:104-110
Timestamp: 2024-11-15T16:21:52.122Z
Learning: In `clp_package_utils/scripts/native/del_archives.py`, when deleting archives, the `archive` variable retrieved from the database is controlled and is always a single string without path components. Therefore, it's acceptable to skip additional validation checks for directory traversal in this context.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1036
File: components/clp-package-utils/clp_package_utils/general.py:564-579
Timestamp: 2025-06-28T07:10:47.237Z
Learning: The validate_dataset function in components/clp-package-utils/clp_package_utils/general.py is designed to be called once upon function startup for dataset validation, not repeatedly during execution, making caching optimizations unnecessary.
Learnt from: gibber9809
PR: y-scope/clp#955
File: components/core/src/clp_s/search/sql/CMakeLists.txt:8-26
Timestamp: 2025-06-02T18:22:24.060Z
Learning: In the clp project, ANTLR code generation at build time is being removed by another PR. When reviewing CMake files, be aware that some temporary suboptimal configurations may exist to reduce merge conflicts between concurrent PRs, especially around ANTLR_TARGET calls.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1050
File: components/clp-package-utils/clp_package_utils/scripts/native/search.py:239-244
Timestamp: 2025-06-29T22:01:05.545Z
Learning: In the CLP codebase, dataset validation is handled at the outer layer in wrapper scripts (like clp_package_utils/scripts/search.py) that call native implementation scripts (like clp_package_utils/scripts/native/search.py). The native scripts are internal components that receive pre-validated parameters, so they don't need their own dataset validation logic.
Learnt from: haiqi96
PR: y-scope/clp#594
File: components/clp-package-utils/clp_package_utils/scripts/native/del_archives.py:90-96
Timestamp: 2024-11-15T16:22:33.635Z
Learning: In the CLP codebase, `table_prefix` is a trusted configuration value and can be safely embedded into SQL queries.
🧬 Code Graph Analysis (1)
components/clp-package-utils/clp_package_utils/scripts/start_clp.py (2)
components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py (2)
  • get_archives_table_name (215-216)
  • get_files_table_name (227-228)
components/clp-py-utils/clp_py_utils/clp_config.py (1)
  • StorageEngine (47-49)
🪛 Pylint (3.3.7)
components/clp-package-utils/clp_package_utils/scripts/start_clp.py

[error] 36-39: Unable to import 'clp_py_utils.clp_metadata_db_utils'

(E0401)


[convention] 878-878: Line too long (104/100)

(C0301)

components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py

[error] 23-23: Unable to import 'clp_py_utils.clp_logging'

(E0401)


[warning] 424-424: TODO: This should be removed once the webui supports aggregating stats from multiple

(W0511)


[refactor] 456-456: Too many arguments (6/5)

(R0913)


[refactor] 456-456: Too many positional arguments (6/5)

(R0917)

🔇 Additional comments (5)
components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py (4)

17-17: LGTM: Import additions support the refactoring.

The new imports for CLP_DEFAULT_DATASET_NAME and get_tags_table_name properly support the dynamic table name generation and default dataset creation logic.

Also applies to: 27-27


187-191: LGTM: Dataset handling logic is correct.

The variable renaming from dataset_name to dataset improves consistency with the input config field, and the conditional logic properly checks for non-null dataset before adding it to the existing datasets.


264-264: LGTM: Dynamic table name generation improves maintainability.

The use of get_tags_table_name() function instead of hardcoded table names centralizes table name logic and supports the dataset-aware table naming approach mentioned in the PR objectives.

Also applies to: 266-266, 271-271
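The centralized, dataset-aware naming praised here can be sketched as below; the signatures are assumptions modeled on clp_metadata_db_utils, not copies of it:

```python
from typing import Optional


def _get_table_name(table_prefix: str, dataset: Optional[str], suffix: str) -> str:
    # With a dataset, tables are named <prefix><dataset>_<suffix>;
    # without one, the legacy <prefix><suffix> form is kept.
    infix = f"{dataset}_" if dataset is not None else ""
    return f"{table_prefix}{infix}{suffix}"


def get_tags_table_name(table_prefix: str, dataset: Optional[str] = None) -> str:
    return _get_table_name(table_prefix, dataset, "tags")
```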


418-434: LGTM: Default dataset creation fixes the web UI bug.

This logic directly addresses the bug described in the PR objectives where the web UI fails to load before the first CLP-S compression job. Creating the default dataset at startup ensures the required tables exist when the web UI queries them.

components/clp-package-utils/clp_package_utils/scripts/start_clp.py (1)

36-39: Verify clp_py_utils.clp_metadata_db_utils is packaged and importable

Please confirm that the clp_metadata_db_utils.py file makes it into your published artifact and can be imported cleanly:

• Build and inspect the wheel:

cd components/clp-py-utils
poetry build -f wheel
unzip -l dist/clp_py_utils-*.whl | grep clp_metadata_db_utils.py

Ensure the path clp_py_utils/clp_metadata_db_utils.py appears in the list.

• Test installation in a fresh venv:

pip install dist/clp_py_utils-*.whl
python - <<'PY'
import clp_py_utils.clp_metadata_db_utils
print("Import OK")
PY

• Check your packaging config and package exports:
– components/clp-py-utils/pyproject.toml
– components/clp-py-utils/clp_py_utils/__init__.py (make sure you're not restricting imports via __all__)

If the module is missing, add an explicit include in your pyproject.toml under [tool.poetry], for example:

[tool.poetry]
packages = [{ include = "clp_py_utils" }]

Comment on lines +873 to +875
dataset: Optional[str] = None
if StorageEngine.CLP_S == clp_config.package.storage_engine:
table_prefix = f"{table_prefix}{CLP_DEFAULT_DATASET_NAME}_"
dataset = CLP_DEFAULT_DATASET_NAME
Contributor

🧹 Nitpick (assertive)

Minor nit: collapse the two-step dataset assignment

The explicit type annotation is helpful, but the two-step pattern is verbose. You could inline it without losing clarity:

-    dataset: Optional[str] = None
-    if StorageEngine.CLP_S == clp_config.package.storage_engine:
-        dataset = CLP_DEFAULT_DATASET_NAME
+    dataset: Optional[str] = (
+        CLP_DEFAULT_DATASET_NAME
+        if StorageEngine.CLP_S == clp_config.package.storage_engine
+        else None
+    )

Reduces cognitive load and keeps the flow tight.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
dataset: Optional[str] = None
if StorageEngine.CLP_S == clp_config.package.storage_engine:
table_prefix = f"{table_prefix}{CLP_DEFAULT_DATASET_NAME}_"
dataset = CLP_DEFAULT_DATASET_NAME
dataset: Optional[str] = (
CLP_DEFAULT_DATASET_NAME
if StorageEngine.CLP_S == clp_config.package.storage_engine
else None
)
🤖 Prompt for AI Agents
In components/clp-package-utils/clp_package_utils/scripts/start_clp.py around
lines 873 to 875, the variable 'dataset' is assigned in two steps with an
explicit type annotation followed by a conditional assignment. Simplify this by
combining the declaration and conditional assignment into a single line using a
conditional expression, preserving the Optional[str] type clarity while reducing
verbosity and improving readability.

Bill-hbrhbr
Bill-hbrhbr previously approved these changes Jul 1, 2025
Contributor

@Bill-hbrhbr Bill-hbrhbr left a comment

lgtm

Comment on lines 8 to 9
from sql_adapter import SQL_Adapter

Contributor

just curious. did you revert this change because it's not necessary for this PR?

Member

Yes.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bdb7817 and f699496.

📒 Files selected for processing (4)
  • components/clp-package-utils/clp_package_utils/scripts/native/compress.py (4 hunks)
  • components/clp-package-utils/clp_package_utils/scripts/native/search.py (9 hunks)
  • components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py (4 hunks)
  • components/job-orchestration/job_orchestration/scheduler/query/query_scheduler.py (7 hunks)
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: gibber9809
PR: y-scope/clp#504
File: components/core/src/clp_s/search/kql/CMakeLists.txt:29-29
Timestamp: 2024-10-22T15:36:04.655Z
Learning: When reviewing pull requests, focus on the changes within the PR and avoid commenting on issues outside the scope of the PR.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#868
File: components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py:141-144
Timestamp: 2025-05-05T16:32:55.163Z
Learning: The column metadata table (created by `_create_column_metadata_table`) is only needed for dataset-specific workflows in `CLP_S` and is obsolete for non-dataset workflows.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1004
File: components/clp-package-utils/clp_package_utils/scripts/native/decompress.py:139-144
Timestamp: 2025-06-24T08:54:14.438Z
Learning: In the CLP codebase, the get_orig_file_id function signature was changed after a recent merge to no longer accept a dataset parameter, making previous suggestions that reference this parameter invalid.
Learnt from: gibber9809
PR: y-scope/clp#672
File: components/core/src/clp_s/indexer/MySQLIndexStorage.cpp:30-38
Timestamp: 2025-01-23T17:08:55.566Z
Learning: SQL identifiers (table names, column names) in the codebase are currently from trusted sources and directly interpolated into queries. However, the team prefers to implement sanitization for robustness, even for trusted inputs.
components/clp-package-utils/clp_package_utils/scripts/native/compress.py (4)
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1004
File: components/clp-package-utils/clp_package_utils/scripts/native/decompress.py:139-144
Timestamp: 2025-06-24T08:54:14.438Z
Learning: In the CLP codebase, the get_orig_file_id function signature was changed after a recent merge to no longer accept a dataset parameter, making previous suggestions that reference this parameter invalid.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1050
File: components/clp-package-utils/clp_package_utils/scripts/native/search.py:239-244
Timestamp: 2025-06-29T22:01:05.569Z
Learning: In the CLP codebase, dataset validation is handled at the outer layer in wrapper scripts (like clp_package_utils/scripts/search.py) that call native implementation scripts (like clp_package_utils/scripts/native/search.py). The native scripts are internal components that receive pre-validated parameters, so they don't need their own dataset validation logic.
Learnt from: haiqi96
PR: y-scope/clp#651
File: components/clp-package-utils/clp_package_utils/scripts/compress.py:0-0
Timestamp: 2025-01-16T16:58:43.190Z
Learning: In the clp-package compression flow, path validation and error handling is performed at the scheduler level rather than in the compress.py script to maintain simplicity and avoid code duplication.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1036
File: components/clp-package-utils/clp_package_utils/general.py:564-579
Timestamp: 2025-06-28T07:10:47.295Z
Learning: The validate_dataset function in components/clp-package-utils/clp_package_utils/general.py is designed to be called once upon function startup for dataset validation, not repeatedly during execution, making caching optimizations unnecessary.
components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py (10)
Learnt from: haiqi96
PR: y-scope/clp#651
File: components/clp-package-utils/clp_package_utils/scripts/compress.py:0-0
Timestamp: 2025-01-16T16:58:43.190Z
Learning: In the clp-package compression flow, path validation and error handling is performed at the scheduler level rather than in the compress.py script to maintain simplicity and avoid code duplication.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1004
File: components/clp-package-utils/clp_package_utils/scripts/native/decompress.py:139-144
Timestamp: 2025-06-24T08:54:14.438Z
Learning: In the CLP codebase, the get_orig_file_id function signature was changed after a recent merge to no longer accept a dataset parameter, making previous suggestions that reference this parameter invalid.
Learnt from: haiqi96
PR: y-scope/clp#634
File: components/job-orchestration/job_orchestration/executor/compress/fs_compression_task.py:0-0
Timestamp: 2024-12-16T21:49:06.483Z
Learning: In `fs_compression_task.py`, when refactoring the archive processing loop in Python, the `src_archive_file.unlink()` operation should remain outside of the `process_archive` function.
Learnt from: haiqi96
PR: y-scope/clp#594
File: components/clp-package-utils/clp_package_utils/scripts/native/del_archives.py:104-110
Timestamp: 2024-11-15T16:21:52.122Z
Learning: In `clp_package_utils/scripts/native/del_archives.py`, when deleting archives, the `archive` variable retrieved from the database is controlled and is always a single string without path components. Therefore, it's acceptable to skip additional validation checks for directory traversal in this context.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#831
File: components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py:0-0
Timestamp: 2025-04-17T16:55:06.658Z
Learning: In the compression scheduler, the team prefers initializing in-memory caches from the database at startup rather than performing repeated database queries for efficiency reasons. This approach maintains both performance and reliability across process restarts.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#868
File: components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py:141-144
Timestamp: 2025-05-05T16:32:55.163Z
Learning: The column metadata table (created by `_create_column_metadata_table`) is only needed for dataset-specific workflows in `CLP_S` and is obsolete for non-dataset workflows.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1036
File: components/clp-package-utils/clp_package_utils/general.py:564-579
Timestamp: 2025-06-28T07:10:47.295Z
Learning: The validate_dataset function in components/clp-package-utils/clp_package_utils/general.py is designed to be called once upon function startup for dataset validation, not repeatedly during execution, making caching optimizations unnecessary.
Learnt from: gibber9809
PR: y-scope/clp#955
File: components/core/src/clp_s/search/sql/CMakeLists.txt:8-26
Timestamp: 2025-06-02T18:22:24.060Z
Learning: In the clp project, ANTLR code generation at build time is being removed by another PR. When reviewing CMake files, be aware that some temporary suboptimal configurations may exist to reduce merge conflicts between concurrent PRs, especially around ANTLR_TARGET calls.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1050
File: components/clp-package-utils/clp_package_utils/scripts/native/search.py:239-244
Timestamp: 2025-06-29T22:01:05.569Z
Learning: In the CLP codebase, dataset validation is handled at the outer layer in wrapper scripts (like clp_package_utils/scripts/search.py) that call native implementation scripts (like clp_package_utils/scripts/native/search.py). The native scripts are internal components that receive pre-validated parameters, so they don't need their own dataset validation logic.
Learnt from: haiqi96
PR: y-scope/clp#594
File: components/clp-package-utils/clp_package_utils/scripts/native/del_archives.py:90-96
Timestamp: 2024-11-15T16:22:33.635Z
Learning: In the CLP codebase, `table_prefix` is a trusted configuration value and can be safely embedded into SQL queries.
components/job-orchestration/job_orchestration/scheduler/query/query_scheduler.py (12)
Learnt from: haiqi96
PR: y-scope/clp#569
File: components/job-orchestration/job_orchestration/scheduler/query/query_scheduler.py:380-392
Timestamp: 2024-11-15T20:07:22.256Z
Learning: The current implementation assumes single-threaded execution, so race conditions in functions like `is_target_extracted` in `components/job-orchestration/job_orchestration/scheduler/query/query_scheduler.py` are not currently a concern.
Learnt from: haiqi96
PR: y-scope/clp#569
File: components/job-orchestration/job_orchestration/scheduler/query/query_scheduler.py:76-77
Timestamp: 2024-11-17T23:24:08.862Z
Learning: The `query_scheduler.py` file operates with single threading, and multithreading is not used.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#868
File: components/clp-package-utils/clp_package_utils/scripts/start_clp.py:869-870
Timestamp: 2025-06-10T05:56:56.572Z
Learning: The team prefers keeping simple conditional logic inline rather than extracting it into helper functions, especially for straightforward operations like string concatenation based on storage engine type.
Learnt from: jackluo923
PR: y-scope/clp#1054
File: components/core/tools/scripts/lib_install/musllinux_1_2/install-packages-from-source.sh:6-8
Timestamp: 2025-07-01T14:51:19.130Z
Learning: In CLP installation scripts within `components/core/tools/scripts/lib_install/`, maintain consistency with existing variable declaration patterns across platforms rather than adding individual improvements like `readonly` declarations.
🧬 Code Graph Analysis (2)
components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py (2)
components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py (2)
  • get_tags_table_name (231-232)
  • add_dataset (137-170)
components/clp-py-utils/clp_py_utils/clp_config.py (1)
  • StorageEngine (47-49)
components/clp-package-utils/clp_package_utils/scripts/native/search.py (1)
components/clp-py-utils/clp_py_utils/clp_config.py (3)
  • Database (80-159)
  • ResultsCache (282-309)
  • StorageEngine (47-49)
🪛 Pylint (3.3.7)
components/clp-package-utils/clp_package_utils/scripts/native/compress.py

[error] 10-10: Unable to import 'brotli'

(E0401)


[error] 11-11: Unable to import 'msgpack'

(E0401)


[error] 12-17: Unable to import 'clp_py_utils.clp_config'

(E0401)


[warning] 148-148: Using an f-string that does not have any interpolated variables

(W1309)

components/clp-package-utils/clp_package_utils/scripts/native/search.py

[error] 13-18: Unable to import 'clp_py_utils.clp_config'

(E0401)

components/job-orchestration/job_orchestration/scheduler/query/query_scheduler.py

[error] 39-45: Unable to import 'clp_py_utils.clp_metadata_db_utils'

(E0401)

🪛 Flake8 (7.2.0)
components/clp-package-utils/clp_package_utils/scripts/native/compress.py

[error] 148-148: f-string is missing placeholders

(F541)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: lint-check (macos-latest)
🔇 Additional comments (15)
components/clp-package-utils/clp_package_utils/scripts/native/compress.py (4)

8-8: LGTM! Import additions align with dataset parameter functionality.

The new imports properly support the updated function signatures and dataset handling logic.

Also applies to: 12-17


138-143: LGTM! Function signature correctly updated for dataset parameter.

The addition of the optional dataset parameter maintains backwards compatibility while enabling dataset-aware configuration.


150-150: LGTM! Dataset parameter properly passed to config constructors.

The dataset parameter is correctly propagated to both FsInputConfig and S3InputConfig constructors.

Also applies to: 165-165


236-241: LGTM! Dataset determination logic is correct.

The conditional logic properly assigns CLP_DEFAULT_DATASET_NAME for CLP_S storage engine and None otherwise, which aligns with the dataset handling strategy across the codebase.

components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py (4)

26-26: LGTM! Import addition supports dynamic table naming.

The get_tags_table_name import enables the transition from hardcoded table suffixes to centralized table name generation.


186-197: LGTM! Simplified dataset handling improves code clarity.

Directly retrieving the dataset from the input config and conditionally adding it to the database eliminates the previous storage engine conditional logic, making the code more straightforward and maintainable.


270-270: LGTM! Dynamic table name generation replaces hardcoded suffixes.

Using get_tags_table_name(table_prefix, dataset) instead of hardcoded table suffix concatenation provides better consistency and centralized table naming logic.

Also applies to: 272-272, 277-277
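
A minimal sketch of what such a centralized naming helper might look like. The real implementation lives in `clp_py_utils/clp_metadata_db_utils.py`, and the exact naming scheme shown here is an illustrative assumption:

```python
from typing import Optional


def get_tags_table_name(table_prefix: str, dataset: Optional[str] = None) -> str:
    """Build the tags table name. Dataset-specific workflows (CLP_S)
    namespace the table per dataset; the scheme here is illustrative."""
    if dataset is not None:
        return f"{table_prefix}{dataset}_tags"
    return f"{table_prefix}tags"


print(get_tags_table_name("clp_"))             # → clp_tags
print(get_tags_table_name("clp_", "default"))  # → clp_default_tags
```

Keeping this logic in one helper means callers like the compression and query schedulers cannot drift apart in how they concatenate suffixes.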


424-424: LGTM! Direct storage engine access is cleaner.

Accessing the storage engine directly from clp_config.package.storage_engine without storing it locally simplifies the code and aligns with the removal of the clp_storage_engine parameter.

components/clp-package-utils/clp_package_utils/scripts/native/search.py (3)

13-18: LGTM! Import additions support dataset parameter functionality.

The new imports properly enable dataset handling and storage engine conditional logic throughout the search workflow.


40-40: LGTM! Dataset parameter consistently propagated through call chain.

The dataset parameter is properly threaded through all functions from main down to create_and_monitor_job_in_db, maintaining clean function signatures and enabling dataset-aware search job configuration.

Also applies to: 52-52, 123-123, 158-158, 196-196, 211-211, 225-225


296-300: LGTM! Dataset determination logic aligns with codebase patterns.

The conditional assignment of CLP_DEFAULT_DATASET_NAME for CLP_S storage engine and None otherwise follows the same pattern established in other components, ensuring consistency across the codebase.

Also applies to: 306-306

components/job-orchestration/job_orchestration/scheduler/query/query_scheduler.py (4)

39-45: LGTM! Import additions enable dynamic table name resolution.

The imported functions (fetch_existing_datasets, get_archive_tags_table_name, get_archives_table_name, get_files_table_name, get_tags_table_name) replace hardcoded table suffix logic with centralized table name generation.


171-171: LGTM! Dynamic table name functions replace hardcoded suffixes.

The transition to using get_archives_table_name, get_archive_tags_table_name, get_tags_table_name, and get_files_table_name with optional dataset parameters provides better consistency and centralized table naming logic throughout the query scheduler.

Also applies to: 393-393, 395-395, 403-408, 464-464, 480-484


630-633: LGTM! Improved dataset validation logic.

Using existing_datasets.update(new_datasets) instead of overwriting the set is more efficient and maintains the accumulated dataset knowledge across multiple validation calls.
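
The difference between updating and overwriting is plain set semantics, shown here as a standalone snippet:

```python
# Cache of datasets known at startup.
existing_datasets = {"default"}

# Datasets newly discovered from the database.
new_datasets = {"default", "web_logs"}

# Assignment (existing_datasets = new_datasets) would discard anything
# cached earlier that the latest query did not return; update() merges
# the new entries in place and keeps both.
existing_datasets.update(new_datasets)
print(sorted(existing_datasets))  # → ['default', 'web_logs']
```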


663-678: LGTM! Robust dataset validation with proper error handling.

The addition of dataset validation for query jobs with appropriate error handling ensures that jobs with invalid datasets are properly failed and logged, preventing downstream processing issues.
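
The validate-then-refresh pattern the comment describes can be sketched as follows; the function and parameter names are illustrative, not the scheduler's actual API:

```python
def dataset_exists(dataset, existing_datasets, fetch_existing_datasets):
    """Return True if `dataset` is known, refreshing the in-memory cache
    from the metadata database once before giving up."""
    if dataset in existing_datasets:
        return True
    # Another process may have added the dataset since startup, so refresh
    # the cache before failing the job.
    existing_datasets.update(fetch_existing_datasets())
    return dataset in existing_datasets


cache = {"default"}
print(dataset_exists("web_logs", cache, lambda: {"default", "web_logs"}))  # → True
print(dataset_exists("missing", cache, lambda: {"default", "web_logs"}))   # → False
```

A job referencing a dataset that is still missing after the refresh would then be marked failed and logged, as the review note describes.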

Comment on lines 236 to 240
dataset = (
CLP_DEFAULT_DATASET_NAME
if StorageEngine.CLP_S == clp_config.package.storage_engine
else None
)
Member

Since this is only used inside _generate_clp_io_config, why not move this into _generate_clp_io_config?

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
components/clp-package-utils/clp_package_utils/scripts/native/compress.py (1)

152-152: LGTM! Error message improvements are correctly applied.

The removal of unnecessary f-string formatting for static error messages is a good improvement that addresses the previous linting issues.

Also applies to: 161-161

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 90ce0a4 and a4e6f83.

📒 Files selected for processing (3)
  • components/clp-package-utils/clp_package_utils/scripts/native/compress.py (2 hunks)
  • components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py (4 hunks)
  • components/job-orchestration/job_orchestration/scheduler/query/query_scheduler.py (7 hunks)
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: gibber9809
PR: y-scope/clp#504
File: components/core/src/clp_s/search/kql/CMakeLists.txt:29-29
Timestamp: 2024-10-22T15:36:04.655Z
Learning: When reviewing pull requests, focus on the changes within the PR and avoid commenting on issues outside the scope of the PR.
Learnt from: gibber9809
PR: y-scope/clp#672
File: components/core/src/clp_s/indexer/MySQLIndexStorage.cpp:30-38
Timestamp: 2025-01-23T17:08:55.566Z
Learning: SQL identifiers (table names, column names) in the codebase are currently from trusted sources and directly interpolated into queries. However, the team prefers to implement sanitization for robustness, even for trusted inputs.
components/clp-package-utils/clp_package_utils/scripts/native/compress.py (6)
Learnt from: AVMatthews
PR: y-scope/clp#543
File: components/core/src/clp_s/clp-s.cpp:196-265
Timestamp: 2024-10-07T20:10:08.254Z
Learning: In `clp-s.cpp`, the `run_serializer` function interleaves serialization and writing of IR files, making it difficult to restructure it into separate functions.
🧬 Code Graph Analysis (1)
components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py (2)
components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py (2)
  • get_tags_table_name (231-232)
  • add_dataset (137-170)
components/clp-py-utils/clp_py_utils/clp_config.py (1)
  • StorageEngine (47-49)
🪛 Flake8 (7.2.0)
components/clp-package-utils/clp_package_utils/scripts/native/compress.py

[error] 8-8: 'typing.Optional' imported but unused

(F401)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: lint-check (ubuntu-latest)
🔇 Additional comments (14)
components/clp-package-utils/clp_package_utils/scripts/native/compress.py (2)

12-17: LGTM! Import additions support the dataset handling refactoring.

The new imports for CLP_DEFAULT_DATASET_NAME and StorageEngine are correctly added to support the internal dataset determination logic.


143-147: LGTM! Dataset determination logic is correctly implemented.

The logic properly determines the dataset based on the storage engine type: using the default dataset name for CLP_S and None for other engines. This encapsulation within the function aligns well with the PR's objective to centralize dataset handling.

components/job-orchestration/job_orchestration/scheduler/compress/compression_scheduler.py (5)

26-26: LGTM!

The addition of get_tags_table_name import aligns with the PR's objective to use dynamic table name retrieval functions.


156-162: Function signature simplified appropriately.

The removal of the clp_storage_engine parameter aligns with the broader refactoring to eliminate storage engine-based conditional logic throughout the codebase.


186-198: Dataset handling logic is properly implemented.

The code correctly retrieves the dataset from the input config and adds it if not already present. The caching mechanism with existing_datasets is efficient.


271-282: Dynamic table name generation properly implemented.

The use of get_tags_table_name with the dataset parameter ensures correct table resolution for dataset-specific workflows.


425-428: Direct config access improves code clarity.

Accessing the storage engine directly from the config eliminates unnecessary variable storage while maintaining the same functionality.

components/job-orchestration/job_orchestration/scheduler/query/query_scheduler.py (7)

39-45: LGTM!

The addition of table name getter function imports supports the transition to dynamic table name resolution.


171-171: Dataset parameter correctly propagated.

The addition of the dataset parameter from the job config ensures the correct dataset-specific archives table is queried.


393-409: Dynamic table resolution properly implemented for archive searches.

The function correctly extracts the dataset and uses it to generate appropriate table names for archives, archive tags, and tags tables.


464-464: Files table correctly uses None for dataset parameter.

This aligns with the learning that file-related functions no longer use dataset parameters, indicating files are not dataset-specific.


477-490: Function signature properly updated for dataset support.

The addition of the optional dataset parameter and dynamic table name generation ensures correct dataset-specific archive queries.


631-634: Dataset validation correctly updates the existing set.

The use of update() instead of assignment ensures that previously cached datasets are retained while adding newly discovered ones.


664-679: Robust dataset validation with proper error handling.

The code correctly validates the dataset existence and marks the job as failed with appropriate error logging when the dataset doesn't exist.

import typing
from contextlib import closing
from typing import List
from typing import List, Optional, Union
Contributor

🧹 Nitpick (assertive)

Remove unused import.

The Optional import is not used anywhere in the code and should be removed to keep imports clean.

-from typing import List, Optional, Union
+from typing import List, Union
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
from typing import List, Optional, Union
-from typing import List, Optional, Union
+from typing import List, Union
🧰 Tools
🪛 Flake8 (7.2.0)

[error] 8-8: 'typing.Optional' imported but unused

(F401)

🤖 Prompt for AI Agents
In components/clp-package-utils/clp_package_utils/scripts/native/compress.py at
line 8, the import statement includes Optional which is not used anywhere in the
code. Remove Optional from the import statement to clean up unused imports and
keep the code tidy.

ResultSetHeader,
} from "mysql2/promise";

import settings from "../../settings.json";
Contributor Author

you need `with {type: "json"};`

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a4e6f83 and 7b42568.

📒 Files selected for processing (1)
  • components/webui/server/src/plugins/DbManager.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{cpp,hpp,java,js,jsx,tpp,ts,tsx}`: Prefer `false == <expression>` rather than `!<expression>`.

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • components/webui/server/src/plugins/DbManager.ts
🧠 Learnings (2)
📓 Common learnings
Learnt from: gibber9809
PR: y-scope/clp#504
File: components/core/src/clp_s/search/kql/CMakeLists.txt:29-29
Timestamp: 2024-10-22T15:36:04.655Z
Learning: When reviewing pull requests, focus on the changes within the PR and avoid commenting on issues outside the scope of the PR.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#868
File: components/clp-py-utils/clp_py_utils/clp_metadata_db_utils.py:141-144
Timestamp: 2025-05-05T16:32:55.163Z
Learning: The column metadata table (created by `_create_column_metadata_table`) is only needed for dataset-specific workflows in `CLP_S` and is obsolete for non-dataset workflows.
Learnt from: Bill-hbrhbr
PR: y-scope/clp#1004
File: components/clp-package-utils/clp_package_utils/scripts/native/decompress.py:139-144
Timestamp: 2025-06-24T08:54:14.438Z
Learning: In the CLP codebase, the get_orig_file_id function signature was changed after a recent merge to no longer accept a dataset parameter, making previous suggestions that reference this parameter invalid.
Learnt from: gibber9809
PR: y-scope/clp#672
File: components/core/src/clp_s/indexer/MySQLIndexStorage.cpp:30-38
Timestamp: 2025-01-23T17:08:55.566Z
Learning: SQL identifiers (table names, column names) in the codebase are currently from trusted sources and directly interpolated into queries. However, the team prefers to implement sanitization for robustness, even for trusted inputs.
components/webui/server/src/plugins/DbManager.ts (1)
Learnt from: junhaoliao
PR: y-scope/clp#596
File: components/log-viewer-webui/client/src/api/query.js:16-23
Timestamp: 2024-11-21T15:51:33.203Z
Learning: In `components/log-viewer-webui/client/src/api/query.js`, the `ExtractJsonResp` type definition is accurate as-is and does not require modification. When suggesting changes to type definitions, ensure they align with the server-side definitions, referencing the source code if necessary.
🧬 Code Graph Analysis (1)
components/webui/server/src/plugins/DbManager.ts (1)
components/webui/server/src/configConstants.ts (2)
  • CLP_STORAGE_ENGINE_CLP_S (8-8)
  • CLP_DEFAULT_DATASET_NAME (7-7)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: lint-check (ubuntu-latest)
🔇 Additional comments (1)
components/webui/server/src/plugins/DbManager.ts (1)

12-16: Confirm tool-chain support for the with { type: "json" } import assertion

Node.js currently only supports the older assert { type: "json" } syntax unless run with experimental flags, and TypeScript requires resolveJsonModule plus "module": "NodeNext" to emit the correct loader code. Please ensure the runtime (and test) environment actually understands the with form, otherwise the server will fail to start.

Comment on lines 132 to 138
jobConfig = {
dataset: CLP_STORAGE_ENGINE_CLP_S === settings.ClpStorageEngine ?
CLP_DEFAULT_DATASET_NAME :
null,
archive_id: streamId,
target_chunk_size: targetUncompressedSize,
};
Contributor

🧹 Nitpick (assertive)

Avoid sending a dataset: null field when it’s not needed

Downstream scheduler components typically look for the dataset key only when the storage engine is CLP_S. Serialising the field with a null value adds noise and may confuse strict schema validators.

-            jobConfig = {
-                dataset: CLP_STORAGE_ENGINE_CLP_S === settings.ClpStorageEngine ?
-                    CLP_DEFAULT_DATASET_NAME :
-                    null,
-                archive_id: streamId,
-                target_chunk_size: targetUncompressedSize,
-            };
+            jobConfig = {
+                archive_id: streamId,
+                target_chunk_size: targetUncompressedSize,
+                ...(CLP_STORAGE_ENGINE_CLP_S === settings.ClpStorageEngine && {
+                    dataset: CLP_DEFAULT_DATASET_NAME,
+                }),
+            };

This keeps the payload minimal and makes future schema evolution easier.

📝 Committable suggestion


Suggested change
jobConfig = {
dataset: CLP_STORAGE_ENGINE_CLP_S === settings.ClpStorageEngine ?
CLP_DEFAULT_DATASET_NAME :
null,
archive_id: streamId,
target_chunk_size: targetUncompressedSize,
};
jobConfig = {
archive_id: streamId,
target_chunk_size: targetUncompressedSize,
...(CLP_STORAGE_ENGINE_CLP_S === settings.ClpStorageEngine && {
dataset: CLP_DEFAULT_DATASET_NAME,
}),
};
🤖 Prompt for AI Agents
In components/webui/server/src/plugins/DbManager.ts around lines 132 to 138, the
jobConfig object always includes the dataset field, even when its value is null.
To avoid sending dataset: null, modify the code to only add the dataset field to
jobConfig when settings.ClpStorageEngine equals CLP_STORAGE_ENGINE_CLP_S. This
can be done by conditionally adding the dataset property instead of assigning
null, ensuring the field is omitted entirely when not needed.

@@ -0,0 +1,9 @@
// NOTE: These settings are duplicated from components/webui/client/src/config/index.ts, but will be
// removed in the near future.
const CLP_STORAGE_ENGINE_CLP_S = "clp-s";
Member

is it better to use an enum like

enum CLP_STORAGE_ENGINE {
    CLP = "clp",
    CLP_S = "clp-s",
}

then compare the values against it?

Member

probably better to move stuff from this file into the typings directory

Member

We have this enum in the client code. This constant is just temporary to keep the webui working in this PR. In #1050, we're going to pass the dataset from the client code, so the logic in the last two commits can be reverted.

Member

Or are you suggesting we duplicate the entire enum in the server code for now?

Member

right i was going to suggest reusing / duplicating this enum: https://github.com/haiqi96/clp_fork/blob/7b42568fa9d335ecbe85f13edc5d3c111b4e8587/components/webui/client/src/config/index.ts

so the logic in the last two commits can be reverted.

sure. in that case i think we can temporarily leave the constant as-is then

Member

@junhaoliao junhaoliao left a comment

Member

@kirkrodrigues kirkrodrigues left a comment

For the PR title & body, how about:

refactor(clp-package): Refactor dataset-related logic:
- Add helper methods to generate SQL table names.
- Make `dataset` optional in job configs.
- Change if-guards around dataset logic from checking for a specific storage engine to checking whether the dataset is set.

@haiqi96 haiqi96 merged commit 5cb0800 into y-scope:main Jul 3, 2025
7 checks passed
